Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents

1Zhejiang University, 2Rutgers University
ICLR 2025 Accepted
Agent Attack

Overview of the LLM Agent Attacking Framework, including Direct Prompt Injections (DPI), Observation Prompt Injections (OPI), Plan-of-Thought (PoT) Backdoor, and Memory Poisoning Attacks, which target the user query, observations, system prompts, and memory retrieval respectively of the agent during action planning and execution.

Abstract

Although LLM-based agents, powered by Large Language Models (LLMs), can use external tools and memory mechanisms to solve complex real-world tasks, they may also introduce critical security vulnerabilities. However, the existing literature does not comprehensively evaluate attacks and defenses against LLM-based agents.

To address this, we introduce Agent Security Bench (ASB), a comprehensive framework designed to formalize, benchmark, and evaluate the attacks and defenses of LLM-based agents, including 10 scenarios (e.g., e-commerce, autonomous driving, finance), 10 agents targeting the scenarios, over 400 tools, 27 different types of attack/defense methods, and 7 evaluation metrics.

Based on ASB, we benchmark 10 prompt injection attacks, a memory poisoning attack, a novel Plan-of-Thought backdoor attack, 4 mixed attacks, and 11 corresponding defenses across 13 LLM backbones.

Our benchmark results reveal critical vulnerabilities in different stages of agent operation, including system prompt, user prompt handling, tool usage, and memory retrieval, with the highest average attack success rate of 84.30%, but limited effectiveness shown in current defenses, unveiling important works to be done in terms of agent security for the community. We also introduce a new metric to evaluate the agents' capability to balance utility and security. Our code can be found at https://github.com/agiresearch/ASB.

💡Introduction



💫 ASB is a comprehensive benchmarking framework designed to evaluate various adversarial attacks and defenses of LLM-based agents.

💫 Compared to other benchmarks, ASB's key advantages lie in its inclusion of multiple types of attacks and defense mechanisms across diverse scenarios.

💫 This not only allows the framework to test agents under more realistic conditions but also to cover a broader spectrum of vulnerabilities and protective strategies.

⚔️Attack Methods on Agents

⚔️Attack and Defense Types🛡️

🎬Agent Scenarios

We aim to attack target agents across 10 distinct domains (IT management, investment, legal advice, medicine, academic advising, counseling, e-commerce, aerospace design, research, and autonomous vehicles), each represents a unique challenge and functionality. Figures below provide a comprehensive overview of these agents, detailing their purposes, capabilities, descriptions, normal tools and related selected external attack tools (some of their names have slightly altered). You can also look up all the tools here.
PS: The figures we used are from public domains (Flaticon & Pexels).

🧠LLMs Used

We employ both open-source and closed-source LLMs for our experiments. The open-source ones are LLaMA3 (8B, 70B), LLaMA3.1 (8B, 70B), Gemma2 (9B, 27B), Mixtral (8x7B), and Qwen2 (7B, 72B), and the closed-source ones are GPT (3.5-Turbo, 4o, 4o-mini) and Claude-3.5 Sonnet. The leaderboard of LLMs is here.
Agent Attack

We show the number of parameters and the providers of the LLMs used in our evaluation in the following figure.

📊Experiments

📏Evaluation Metrics

Agent Attack

We introduce the evaluation metrics in the figure above. Generally, a higher ASR indicates a more effective attack. After a defense, A lower ASR indicates a more effective defense. The refuse rate is measured to assess how agents recognize and reject unsafe user requests, ensuring safe and policy-compliant actions. Our benchmark includes both aggressive and non-aggressive tasks to evaluate this ability. Higher RR indicates more refusal of aggressive tasks by the agent. If BP is close to PNA, it indicates that the agent's actions for clean queries are unaffected by the attack. In addition, lower FPR and FNR indicate a more successful detection defense.



📈Results

We have conducted extensive experiments on both attack and defense of agent, and the experiments are rich in results.

Here, we could give you an overview on our experiments. The experiments we conducted are as follows:

⚔️Agent Attack: We evaluated the agent attacks with 5 attack types on 13 LLM backbones.

🛡️Agent Defense: We evaluated the agent defenses against all four types of agent attacks.

💪LLM Capability vs ASR: We evaluated the correlation between backbone LLM leaderboard quality and average ASR across various attacks.

Conclusion

📏We introduce ASB, a benchmark for evaluating the security of LLM agents under various attacks and defenses.

💥ASB reveals key vulnerabilities of LLM-based agents in every operational step.

🛡️ASB provides a crucial resource for developing stronger defenses and more resilient LLM agents.

💡In the future, we will focus on improving defenses and expanding attack scenarios.

BibTeX

@article{zhang2024agent,
        title={Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents},
        author={Zhang, Hanrong and Huang, Jingyuan and Mei, Kai and Yao, Yifei and Wang, Zhenting and Zhan, Chenlu and Wang, Hongwei and Zhang, Yongfeng},
        journal={arXiv preprint arXiv:2410.02644},
        year={2024}
  }