5 Takeaways from HackerOne Discussion on AI

0

On July 27, HackerOne hosted a virtual panel discussion and roundtable with ethical hackers from the HackerOne community, to discuss the Artificial Intelligence (AI) landscape and collaboration within the hacking community on harnessing AI for ethical hacking purposes. 

From the wide-ranging discussion, we gathered 5 takeaways on the risks introduced by the latest AI innovation popularised by ChatGPT.  Commonly referred to as “generative AI”, this technology is powered by “large language models” (LLM) that identify patterns and structures within existing data, to generate new texts, images, audios, videos, and more. 

Image, Clockwise: Michiel Prins, HackOne  Co-founder and Head of Professional Services; Dr Katie Paxton-Fear; Lupin; Rez0; Gavin Klondike.  

1. Generative AI can enhance attackers’ arsenal of tools, expanding the attack surface and increasing potential security breaches – in scale and frequency. Existing attack pathways can be more efficiently compromised using tools such as “ChatGPT” and a cyber underground version “WormGPT”. These tools empower cybercriminals with features such as:  

  • Automation and deployment of code, enabling low level cybercriminals to easily replicate existing ransomware attacks or experienced cybercriminals to quickly repackage existing malicious modules to build customized malware 
  • Crafting of highly convincing phishing emails, and in particular, enabling non-native language speakers to expand targets to foreign victims 
  • Creation of “Deep Fakes” and synthetic voices, enabling sophisticated impersonation and social engineering methods that convince unsuspecting users to give away privileged access and information, or to click on malicious links. 

     

2. Novel attacks introducing new risks are also emerging. One example is “prompt injection attacks”, where attackers “jail-break” generative AI with specialised prompts. This is where attackers, through carefully crafted inputs, manipulate generative AI chatbot interfaces (such as ChatGPT), causing it to deviate from its original non-benign instructions.   

This vulnerability was reported [1] in May 2022 to OpenAI (by Jon Cefalu) as a “responsible disclosure”, and publicly released [2] in September 2022.  

Such an attack could also be surreptitiously inserted in plugins or libraries, causing the model to generate unintended output. Risks could include:  

  • disclosing sensitive information 
  • producing inappropriate content, such as to propagate false, misleading or biased information 
  • executing malicious code 

     

3. “Data poisoning” is also an emerging risk, where attackers manipulate AI model’s training data, causing the AI tool to behave in undesirable ways.  

Information that users directly provide to the generative AI chatbot session is used to train the AI model.  Thus, users are warned, for example by ChatGPT’s OpenAI, not to share “any sensitive information in your conversations”. [3] 

The recent high-profile ban by Samsung Electronics of ChatGPT (and other generative AI tools) underscores rising concerns over the risk of leaking sensitive information. (According to various reports, one engineer allegedly entered Samsung’s source code into ChatGPT to fix a software bug, another loaded a transcription of an internal meeting into ChatGPT to create meeting notes). 

However, aside from such data leaks, another risk could arise from this reliance of AI models on training datasets.   

Known as “data poisoning”, this is where attackers tamper with the data, for example, by injecting the model with biased or false data.  One often cited example is an AI tool to detect spam email. Attackers could identify key terms that appear in “good” emails and insert them in spam emails.  By “learning” from this corrupted dataset, attackers could force the AI model to misclassify spam emails as “good”. 

4.  In the race to capitalise on the rapidly evolving and promising AI market, releasing innovative AI services may take precedent over baking security into the product from Day 1. 

While companies gear up to respond to consumers’ expectations, security considerations may become secondary.  One consequence as competition intensifies could be the exacerbation of software supply chain risks. 

For example, when building a new software product, developers may rely on ChatGPT to suggest certain external tools/ plugins/ libraries, without taking the time to systematically perform necessary security checks.   

Threat actors may exploit this weak security practice and introduce malware into these third-party codes, which if incorporated by the developers, could spread the malware throughout the entire software product. 

5. When mitigating new and novel attacks, it is important not to overlook basic security principles. 

Emerging risks such as “prompt injection attacks” or “data poisoning” are in part due to insufficient controls over data access rights. 

For instance, in the case of a “prompt injection attack” where an attacker tricks the chatbot into divulging confidential information, a countermeasure could be implementing varying levels of access (or authorisation) rights for the user, the AI model and the confidential database. 

In addition, a high-profile incident in Italy in March 2023 also served as a strong reminder for firms to ensure that use of the technology complies with the relevant data protection and privacy laws. (This was the Italian government’s temporary ban [4] of ChatGPT over concerns of privacy breaches – specifically, the “absence of any legal basis that justifies the massive collection and storage of personal data” to “train” the chatbot). 

All in all, while there is much talk about AI “replacing” humans, it appears that there would be huge demands for the “human touch” to address the risks introduced by AI – at least in the near term. 

References  

[1] https://www.preamble.com/prompt-injection-a-critical-vulnerability-in-the-gpt-3-transformer-and-how-we-can-begin-to-solve-it  

[2] https://twitter.com/goodside/status/1569128808308957185  

[3] https://help.openai.com/en/articles/6783457-what-is-chatgpt  

[4] https://www.reuters.com/technology/chatgpt-is-available-again-users-italy-spokesperson-says-2023-04-28/  

 

Share.

Comments are closed.

Visit Us On TwitterVisit Us On FacebookVisit Us On LinkedinVisit Us On Youtube