International Journal On Advances in Software, volume 17, numbers 3 and 4, 2024


Goal Hijacking Using Adversarial Vocabulary for Attacking Vulnerabilities of Large Language Model Applications

Authors:
Patrick Levi
Christoph P. Neumann

Keywords: security; artificial intelligence; large language models; jailbreaks; adversarial attack

Abstract:
The fast advancements in Large Language Models (LLMs) are driving an increasing number of applications. Especially in the context of retrieval-augmented generation techniques, LLM applications are widely distributed for public use or as proprietary applications. Together with the growing number of users, we also see an increasing number of attackers who try to outsmart these systems. They want the model to reveal confidential information, produce specific false information, or exhibit offensive behavior, compromising the information security, reliability, and trustworthiness of this otherwise revolutionary technology. To this end, they manipulate their instructions to the LLM by inserting separators or rephrasing them systematically until they reach their goal. Our approach is different: it inserts words from the model vocabulary. We find these words using an optimization procedure and embeddings from another LLM, the attacker LLM. We demonstrate our approach by goal hijacking two popular open-source LLMs from the Llama2 and Flan-T5 families, respectively. We present two main findings. First, our approach creates inconspicuous instructions and is therefore hard to detect; for many attack cases, we find that even a single word insertion is sufficient. Second, we demonstrate that the attack can be carried out using a model different from the target model. We conducted variations of our study to investigate the effect of the main attack parameter on attack success. Furthermore, we investigate the effect of selected text generation strategies of the LLM. While our attack remains successful, the softmax temperature in particular seems to influence attack success.
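The core idea described above (inserting optimizer-selected vocabulary words into an otherwise benign instruction, scored via embeddings from an attacker LLM) can be sketched as a greedy search. The sketch below is purely illustrative and is not the paper's actual method: the `embed` function is a toy bag-of-characters stand-in for real attacker-LLM embeddings, and the vocabulary, scoring, and stopping rule are assumptions for demonstration.

```python
import math

# Hypothetical toy vocabulary; the real attack searches the model's own vocabulary.
VOCAB = ["system", "ignore", "translate", "pirate", "secret", "weather"]

def embed(text):
    """Toy bag-of-characters embedding; a placeholder for attacker-LLM embeddings."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is all-zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def greedy_insert(prompt, goal, vocab=VOCAB, max_insertions=3):
    """Greedily append vocabulary words that move the prompt's embedding
    closest to the embedding of the attacker's goal text."""
    goal_vec = embed(goal)
    best_prompt = prompt
    best_score = cosine(embed(prompt), goal_vec)
    for _ in range(max_insertions):
        candidates = [best_prompt + " " + w for w in vocab]
        top = max(candidates, key=lambda p: cosine(embed(p), goal_vec))
        top_score = cosine(embed(top), goal_vec)
        if top_score <= best_score:
            break  # no single additional word improves the score
        best_prompt, best_score = top, top_score
    return best_prompt, best_score
```

Note that, as the abstract reports, a single inserted word can suffice in many cases, which in this sketch corresponds to the loop terminating after one improving iteration.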

Pages: 214 to 225

Copyright: Copyright (c) to authors, 2024. Used with permission.

Publication date: December 30, 2024

Published in: journal

ISSN: 1942-2628