ChatGPT - new favorite tool for hackers?

The AI software ChatGPT is expected to do a lot: write newspaper articles, write theses - or program malware. Is ChatGPT developing into a new tool that will make it even easier for hackers and cybercriminals to create malware? Security researchers Prof. Dr. Claudia Eckert and Dr. Nicolas Müller from the Fraunhofer Institute for Applied and Integrated Security AISEC assess the potential threat that ChatGPT poses to digital security.

Security experts have already shown that ChatGPT can be used to create malware, and the bot can also be used for social engineering. Will ChatGPT become a new favorite tool of hackers with little technical knowledge?

Since anyone can use this AI-based software to generate texts or simple programs automatically, hackers will also use it, for example to generate malicious code. At present it is impossible to assess how good these generated programs will be, but simple variants have already been demonstrated, for example for the automated generation of phishing emails, but also of code that can be used to carry out a ransomware attack. What is clear is that easy-to-use options have long existed that let even hackers without prior knowledge carry out attacks. However, these are not based on AI; they are, for example, collections of executable attack programs available online, so-called exploits, which exploit known vulnerabilities. With ChatGPT, an additional easy-to-use tool is now available with which attackers can generate their own malicious code and quickly put it into circulation. From Fraunhofer AISEC's perspective, ChatGPT is a serious threat to cybersecurity. We assume that the knowledge base of subsequent software versions will be expanded significantly and that the quality of the responses will improve as well, since such further development is foreseeable with the underlying technique of reinforcement learning from human feedback. Closing security gaps and eliminating vulnerabilities at an early stage is therefore essential to ward off these attacks.

ChatGPT has the potential to make the world of cyberattacks accessible to an even broader user base.

Prof. Dr. Claudia Eckert, Managing Director of the Fraunhofer Institute for Applied and Integrated Security AISEC

Is ChatGPT only interesting for "script kiddies" or also for more experienced cybercriminals?

Successful attacks require competencies from a variety of areas, so in my view ChatGPT can already be interesting for IT experts. The dialog-based form of communication and the chatbot's ability to provide explanations, generate code snippets, or describe commands for requested tasks, such as the correct parameterization of analysis tools, can provide very helpful support for experts as well. ChatGPT's answers can deliver the desired result faster than a classic Google query, which, for example, does not generate code tailored to the query. For experts, ChatGPT could therefore contribute to faster knowledge expansion, assuming that they are able to quickly check the chatbot's answers for plausibility and correctness.

Aren't there already many very simple ways to obtain malicious code, e.g. simply by clicking on the Darknet ("malware as a service")? Is ChatGPT just another option, or how does it differ from the options already available to hackers?

As stated above, ChatGPT is another tool in the bouquet of existing hacker tools. From my point of view, ChatGPT could in future partly take on the role of a virtual advisor that hackers can consult on a wide variety of issues when preparing attacks. In our view, however, the damage potential that such software can have in the long term is far more serious; some people are already talking about game-changing software for cybersecurity. Even though ChatGPT, according to its internal rules, refuses to generate attack code when asked for it directly, this can of course be circumvented by clever wording. ChatGPT has the potential to make the world of cyberattacks accessible to an even broader user base, to generate a variety of tailored attacks in a dedicated manner, and to advise even the most unsophisticated hackers on how to do so successfully.

From Fraunhofer AISEC's perspective, ChatGPT is a serious threat to cybersecurity.

Dr. Nicolas Müller, Research Associate in the Cognitive Security Technologies department of Fraunhofer AISEC

Do we need to be prepared for cyberattacks - from the creation of malware to its distribution - to be driven by AI in the near future? Is this already happening today?

Yes, we certainly assume that simple waves of attacks, such as phishing campaigns, can be generated and executed with AI. For instance, AI-generated phishing emails can contain a link hiding AI-generated ransomware code, and the mails can be distributed automatically to selected groups of addressees. These attacks belong to the large class of social engineering attacks, which AI will allow to be carried out even more effectively in the future than has been the case so far. The AI software generates genuine- and convincing-looking texts so that victims fall for them and reveal sensitive information, for example. Nevertheless, it should be noted that while the underlying technology (a language model) completes sentences exceptionally well, it cannot, like humans, bring together and relate complex contexts and prior knowledge from a wide variety of fields. ChatGPT's answers therefore often sound plausible, but they are ultimately based not on human understanding but on a statistical distribution over word contexts.
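The point about statistical word distributions can be made concrete with a toy bigram model, a deliberately simplified stand-in for the vastly larger models behind ChatGPT: the corpus below and the helper function are invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy illustration (not ChatGPT itself): a bigram model predicts the next
# word purely from the statistical distribution of word contexts in its
# training text, with no understanding of meaning.
corpus = (
    "please open the attached invoice . "
    "please open the attached document . "
    "please review the attached invoice ."
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent successor of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))       # "attached" always follows "the" here
print(predict_next("attached"))  # "invoice" occurs twice, "document" once
```

A large language model works on the same principle at far greater scale: it emits whatever continuation is statistically most plausible, which is why its fluent answers can still be wrong.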

Are there also positive aspects for the security industry associated with ChatGPT? Can security experts also use the bot for their work?

Security experts can also benefit from ChatGPT, e.g. to detect vulnerabilities in software. ChatGPT can also support software developers: code components could be analyzed automatically, and ChatGPT's hints for improving code quality could be taken into account during the development cycle. This leaves fewer potential opportunities to attack the software. ChatGPT could also contribute to the training of employees. With all of these application areas, however, it must always be kept in mind that ChatGPT frequently gives wrong or even entirely fabricated answers, and will continue to do so in the future. It is therefore important to keep an eye on both the risks and the opportunities of ChatGPT, while also being aware of its inherent limitations.
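Such an automated review step could be wired into a development cycle roughly as sketched below. This is a hypothetical helper, not a tool Fraunhofer AISEC describes: the prompt wording, the example snippet, and the model name are assumptions, and the human-verification caveat from the interview is built into the prompt itself.

```python
# Minimal sketch of an LLM-assisted code review step. Everything here is
# illustrative: the prompt format and model name are assumptions, and any
# chat-completion API could be substituted.

def build_review_prompt(code: str) -> str:
    """Wrap a code snippet in a review instruction for a language model."""
    return (
        "You are a security reviewer. Point out potential vulnerabilities "
        "and code-quality issues in the following snippet. Your answers "
        "may be wrong and must be verified by a human:\n\n"
        + code
    )

# Example snippet with a classic SQL injection risk (string concatenation).
snippet = 'query = "SELECT * FROM users WHERE name = \'" + user_input + "\'"'
prompt = build_review_prompt(snippet)
print(prompt)

# Sending the prompt would look roughly like this (requires an API key,
# so it is not executed here):
#
#   import openai
#   reply = openai.ChatCompletion.create(
#       model="gpt-3.5-turbo",
#       messages=[{"role": "user", "content": prompt}],
#   )
```

The key design point, matching the caveat in the answer above, is that the model's output feeds a human review step rather than gating the build on its own.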

***

Prof. Dr. Claudia Eckert is managing director of the Fraunhofer Institute for Applied and Integrated Security AISEC.

Dr. Nicolas Müller is a research associate in the Cognitive Security Technologies department of Fraunhofer AISEC.

The Fraunhofer Institute for Applied and Integrated Security AISEC is one of the leading international institutions for applied research in the field of cyber security. Its expertise ranges from embedded and hardware security, automotive and mobile security to security solutions for industry and automation.
