
How ChatGPT Could Worsen the Cyberwar

Chatbots have been a popular and useful tool for more than a decade, as companies rely on chatbots of varying complexity to perform routine tasks, like rudimentary customer service or tech support. Fueled by artificial intelligence, chatbots are able to decipher requests and provide information utilizing connected databases. In recent years, public-facing chatbots capable of drawing from the internet have become popular distractions for everyday web users. Easily the most famous of these is ChatGPT.

Unfortunately, ChatGPT has proven to be such a smart and capable chatbot that its power has recently been harnessed for evil. Cybercriminals are using ChatGPT to create malware, which could make it much harder for users to stay secure in the near future.

What Is ChatGPT?

ChatGPT is an AI chatbot developed by OpenAI, a San Francisco–based AI research laboratory focused on building safe and beneficial AI systems. OpenAI operates one of the world's most powerful supercomputers, on which ChatGPT and its other AI projects run. ChatGPT is built on a generative pre-trained transformer (GPT) language model, which makes the chatbot adept at understanding and producing human-like text, and it continues to improve through supervised learning and reinforcement learning techniques.

ChatGPT is currently free for public use, making it one of the most powerful and versatile AI tools anyone can access. Unlike many other AI chatbots on the web, ChatGPT can do more than maintain a basic conversation with human users; its strong command of human language allows it to perform creative tasks, like composing music, writing poetry and drafting essays. What's more, unlike earlier chatbots that were corrupted by deceitful and harmful user inputs, ChatGPT is designed to push back on counterfactual information and refuse bigoted prompts. ChatGPT also remembers previous conversations with individual users, which allows for a more personalized experience and may improve applications of the chatbot in the future.

Why Is It Making Malware?

Just as ChatGPT is designed to filter out prompts for racism and sexism, OpenAI claims that the advanced chatbot is supposed to ignore prompts for malware creation. In practice, however, users have discovered that with enough bullying, ChatGPT will comply with requests for almost any type of toxic content. By badgering the chatbot repeatedly to produce malicious code, amateur cybercriminals can obtain scripts that form the foundation of new malware programs, and these scripts tend to be far more advanced than what a typical beginner cybercriminal could produce on their own.

Already, malware developed with help from ChatGPT is hitting the internet, and information security experts are working hard to alert users to the remarkable threat posed by these new attack tools. Code generated by ChatGPT has reportedly enabled the creation of advanced polymorphic malware, which repeatedly changes its code and appearance to evade detection by security products. Traditional antivirus and antimalware solutions, which rely on recognizing known signatures, are far less effective at blocking these threats. In the past, polymorphic malware was rare because it demanded a high level of coding sophistication, but with ChatGPT, any would-be cybercriminal can deploy large numbers of complex malicious programs across the web.
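To see why polymorphism defeats signature matching, consider a minimal, entirely benign illustration: two snippets that do exactly the same thing but differ in inert details (names, comments) produce completely different file hashes, so a scanner that matches exact bytes or hashes treats them as unrelated files. The snippets below are hypothetical examples, not real malware.

```python
import hashlib

# Two functionally identical scripts that differ only in inert details
# (a variable name and a comment). Both compute and print the same sum.
variant_a = b"total = sum(range(10))\nprint(total)\n"
variant_b = b"t = sum(range(10))  # accumulate\nprint(t)\n"

# A signature scanner comparing file hashes sees two unrelated files.
sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

print(sig_a == sig_b)  # False: identical behavior, different signatures
```

Polymorphic malware automates exactly this kind of superficial rewriting at scale, which is why defenses that key on behavior rather than file contents matter.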

What Can We Do to Stay Safe?

Fortunately, protecting against malware created by ChatGPT requires many of the same skills and tools that users need to stay safe from other types of emerging threats. Cybersecurity professionals have long known of the potential of artificial intelligence to advance malware, and solutions are already available for users interested in improving their cyber defenses. Premium security solutions from top-level cybersecurity firms already utilize new systems and strategies for recognizing and thwarting a broader range of attacks. Specifically, users should invest in security tools built around indicators of attack (IOAs), which work by interrupting attacks in progress, whatever their nature, rather than matching specific known types of malware.
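The idea behind IOA-style detection can be sketched in a few lines: instead of asking "does this file's hash match a known sample?", ask "is this process performing a suspicious sequence of actions?". The event names and the sequence below are hypothetical, chosen only to illustrate the concept; real products monitor far richer telemetry.

```python
# Toy illustration of behavior-based (IOA-style) detection: flag a
# process by what it does, not by what its binary looks like.
# The event names here are hypothetical, for illustration only.
SUSPICIOUS_SEQUENCE = ["enumerate_files", "encrypt_file", "delete_backup"]

def matches_ioa(events: list[str]) -> bool:
    """Return True if the suspicious actions occur in order,
    possibly with other events in between."""
    it = iter(events)
    # Each 'in' consumes the iterator up to the match, so this
    # checks for SUSPICIOUS_SEQUENCE as an ordered subsequence.
    return all(step in it for step in SUSPICIOUS_SEQUENCE)

benign = ["open_document", "save_file", "close_document"]
ransom_like = ["enumerate_files", "encrypt_file", "encrypt_file", "delete_backup"]
print(matches_ioa(benign), matches_ioa(ransom_like))  # False True
```

Because the check keys on behavior, a polymorphic rewrite of the attacker's binary changes nothing: the same actions still trip the same rule.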

Additionally, all users should practice good cyber hygiene with their devices and data. Strong passwords, data backups, safe downloading habits and more will help users avoid most malware, regardless of its origin or complexity.

The malicious scripts generated by ChatGPT are only the first indications of a new trend in malware creation, in which cybercriminals come to rely on AI tools to develop and improve their methods of attack. By investing in more advanced security solutions now, users can remain protected even as the landscape of cyber threats grows more perilous.