Ken Huang, CSA Fellow and Co-Chair of AI Working Groups, CEO of DistributedApps.ai, AI Book Author
Idan Habler, Staff AI Security Researcher at Intuit, specializing in AI Application Security
1: Introduction
Google's A2A (Agent-to-Agent) protocol holds immense promise for the future of AI, enabling seamless communication and collaboration between autonomous agents. However, this newfound power also introduces significant security challenges. As we move beyond simple AI interactions and embrace complex, agentic systems, it's crucial to proactively address potential threats to ensure responsible and secure deployments. Following our previous CSA article on threat modeling OpenAI's Responses API, this article leverages the MAESTRO framework to provide a detailed security analysis of the A2A protocol. Our goal is to demonstrate how MAESTRO, a layered and AI-centric threat modeling approach, can be systematically applied to identify and mitigate potential risks in A2A implementations.
2: The A2A Protocol: A Foundation for Agentic Collaboration
Before diving into the threat model, let's recap the A2A protocol. In essence, it provides a standardized way for independent AI agents to communicate and cooperate. Key components include:
Agent Card: A public metadata file (usually at /.well-known/agent.json) describing an agent's capabilities, skills, endpoint URL, and authentication requirements.
A2A Server: An HTTP endpoint that implements the A2A protocol methods, receiving requests and managing task execution.
A2A Client: An application or agent that utilizes A2A services. It sends requests (such as tasks/send) to an A2A Server's URL.
Tasks, Messages, and Parts: These form the core of the communication model, defining the units of work, exchanges between agents, and the structured content being shared.
Parts: The primary content units within a Message or Artifact.
Artifacts: Outputs generated by the agent during a task (e.g., generated files or finalized output in a structured format).
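To make the Agent Card concrete, here is a minimal, hypothetical example of the kind of document an agent might serve at /.well-known/agent.json. The top-level field names follow the A2A specification, but the agent name, URL, authentication scheme, and skill are purely illustrative:

```python
import json

# A minimal, hypothetical Agent Card as it might be served from
# https://example.com/.well-known/agent.json (field names per the A2A spec;
# the agent, URL, and skill below are illustrative).
agent_card = {
    "name": "InvoiceAgent",
    "description": "Extracts line items from uploaded invoices.",
    "url": "https://example.com/a2a",
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "authentication": {"schemes": ["bearer"]},
    "skills": [
        {
            "id": "extract-invoice",
            "name": "Invoice extraction",
            "description": "Parses an invoice file into structured data.",
        }
    ],
}

# Serialize exactly as it would appear at the well-known endpoint.
print(json.dumps(agent_card, indent=2))
```

Note that everything in this card is self-asserted by the server; the threats and mitigations discussed later (signatures, registries, DIDs) exist precisely because the card alone proves nothing.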
In a typical A2A flow, an A2A client initiates the process by obtaining the Agent Card from the A2A server's identified URL. The client launches a task by sending a request that includes the initial user message and a unique Task ID. The server processes the task, either streaming progress over Server-Sent Events (SSE), e.g., task status updates and artifacts, or synchronously returning the finalized Task object. Whenever supplementary input is necessary, the client sends additional messages using the same Task ID.
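The client side of this flow can be sketched by building the JSON-RPC request the client would POST to the server's URL. The `tasks/send` method name comes from the A2A protocol; the message text and helper function here are illustrative:

```python
import uuid

def build_tasks_send(task_id: str, user_text: str) -> dict:
    """Build a JSON-RPC 2.0 tasks/send request for an A2A server (sketch)."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),       # request id (distinct from the Task ID)
        "method": "tasks/send",
        "params": {
            "id": task_id,             # stable Task ID, reused for follow-ups
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": user_text}],
            },
        },
    }

# Follow-up messages keep the same Task ID so the server can associate
# them with the in-flight task.
task_id = str(uuid.uuid4())
first = build_tasks_send(task_id, "Summarize invoice INV-1234.")
follow_up = build_tasks_send(task_id, "Only include totals over $100.")
assert first["params"]["id"] == follow_up["params"]["id"]
```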
While the A2A protocol lays a solid foundation, we need to ensure it's resilient against potential attacks and vulnerabilities.
3: Threat Modeling Agentic AI: Introducing MAESTRO
Traditional threat modeling frameworks often fall short when applied to agentic AI systems. These systems can autonomously make decisions, interact with external tools, and learn over time – capabilities that introduce unique security risks. That's why we'll use the MAESTRO framework, a seven-layer threat modeling approach specifically designed for agentic AI. MAESTRO offers a more granular and proactive methodology uniquely suited for the complexities of agentic systems like those built using A2A.
MAESTRO (Multi-Agent Environment, Security, Threat, Risk, and Outcome) provides a structured, granular, and proactive methodology for identifying, assessing, and mitigating threats across the entire agentic AI lifecycle.
MAESTRO in a Nutshell:
Extends Existing Frameworks: Builds upon established security frameworks like STRIDE, PASTA, and LINDDUN, but adds AI-specific considerations.
Layered Security: Recognizes that security must be addressed at every layer of the agentic architecture.
AI-Specific Threats: Focuses on the unique threats arising from AI, such as adversarial machine learning and the risks of autonomous decision-making.
Risk-Based Approach: Prioritizes threats based on their likelihood and potential impact.
Continuous Monitoring: Emphasizes the need for ongoing monitoring and adaptation.
The Seven Layers of MAESTRO:
Foundation Models: The core AI models (e.g., LLMs) used by the agents.
Data Operations: The data used by the agents, including storage, processing, and vector embeddings.
Agent Frameworks: The software frameworks and APIs that enable agent creation and interaction (like the A2A protocol).
Deployment and Infrastructure: The underlying infrastructure (servers, networks, containers) that hosts the agents and API.
Evaluation and Observability: The systems used to monitor, evaluate, and debug agent behavior.
Security and Compliance: The security controls and compliance measures that protect the entire system.
Agent Ecosystem: The environment where multiple agents interact, including marketplaces, collaborations, and potential conflicts.
4: Why MAESTRO for the A2A Protocol?
Even with well-designed security features built into the A2A protocol, MAESTRO helps us systematically analyze potential threats across all layers. Crucially, MAESTRO is designed to address the complexities introduced by:
Non-Determinism: AI models often exhibit non-deterministic behavior, meaning that the same input can produce different outputs. This makes it harder to predict and control agent behavior.
Agent Autonomy: A2A-enabled agents are designed to operate autonomously, making decisions and taking actions without human intervention. This increases the potential for unintended consequences.
Dynamic Nature of Agent Identity: As agents learn and adapt, their capabilities and behaviors can change over time. Moreover, they can dynamically obtain and present verifiable credentials, altering their trusted status.
Considering these factors is essential in addressing the following potential problems:
Unintended Tool Use: How does the MAESTRO framework safeguard against agents using tools incorrectly or maliciously due to non-deterministic decision-making?
Message Injection: With autonomy in play, how does MAESTRO prevent attackers from manipulating agent behavior by injecting malicious content into messages?
Data Poisoning: Given the dynamic nature of agent identity, how do compromised data sources impact decision-making during task execution?
Cross-Layer Attacks: How does MAESTRO mitigate vulnerabilities in one layer being exploited to compromise another, especially considering the complexity added by non-determinism?
Multi-Agent Interactions: In a connected ecosystem, how can unintended consequences be prevented when multiple A2A-enabled agents interact, given their autonomy and changing identities?
MAESTRO forces us to consider how these factors can amplify existing risks and introduce new vulnerabilities within A2A-based systems.
5: Mapping the A2A Protocol to MAESTRO's Layers
To effectively apply the MAESTRO framework, let's map the key components of the A2A protocol to the corresponding layers. This mapping highlights that the A2A protocol primarily resides at Layer 3 (Agent Frameworks), but it directly interacts with Layers 1, 2, 5, and 6. Layer 4 is always relevant, and Layer 7 becomes critical when deploying multiple interacting agents.
6: Threat Modeling Results: Applying MAESTRO Layer by Layer
Now, let's apply the MAESTRO framework to identify potential threats, vulnerabilities, attack vectors, risks, and mitigations.
6.1. Layer 1: Foundation Models
This layer of MAESTRO focuses on vulnerabilities that can be exploited at the Foundation Models level in the A2A protocol, with particular attention to the impacts of non-determinism.
Threat T1.1: Message Generation Attacks (Evasion): An attacker crafts malicious input to cause the agent's model to generate incorrect, biased, or harmful messages, bypassing safety mechanisms during communication. The non-deterministic nature of model outputs makes it more challenging to reliably detect and prevent these attacks.
Vulnerability: Model's inherent sensitivity to input perturbations.
Attack Vector: Attacker injects malicious content into messages exchanged with the agent.
Risk: High (High likelihood, potentially high impact). Can lead to misinformation, reputational damage, or even harmful actions if the agent controls real-world systems.
Mitigation:
M1.1.1 Input Validation: Implement strict input validation before sending the content to the agent's model. Sanitize input.
M1.1.2 Output Verification: Check the generated message's content for harmful content, contradictions, or unexpected behavior before sending it to another agent. Use content filtering. Employ techniques like ensemble methods or adversarial training to improve the robustness of output verification against non-deterministic model behavior.
M1.1.3 Careful Prompt Design: Design prompts to be less susceptible to adversarial attacks. Use few-shot examples to guide the model.
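A minimal sketch of mitigation M1.1.1 is shown below. The denylist patterns and length cap are illustrative assumptions; a production system would layer classifier-based filtering on top of anything this simple:

```python
import re

# Illustrative patterns that often signal prompt-injection attempts.
# A real deployment would combine this with classifier-based filtering.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"system prompt",
    r"<\s*script",          # markup smuggling
]

def validate_inbound_text(text: str, max_len: int = 4000) -> str:
    """Reject oversized or obviously adversarial input before it
    reaches the agent's model (mitigation M1.1.1)."""
    if len(text) > max_len:
        raise ValueError("input exceeds maximum allowed length")
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            raise ValueError(f"input matched suspicious pattern: {pattern}")
    # Strip control characters that can hide payloads from reviewers.
    return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text)
```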
Threat T1.2: Model Extraction: Excessive or crafted interactions facilitated through A2A could reveal enough detail about a proprietary model's behavior or parameters to enable inference or theft of the model. Agent autonomy may intensify this issue by enabling loosely controlled interaction patterns that result in unexpected information leakage.
Vulnerability: The model's sensitivity to inference attacks through interaction patterns; insufficient monitoring or rate limiting to identify or prevent probing.
Attack Vector: The attacker sends multiple crafted queries/tasks via the A2A protocol to examine the responses of the model used by the agent and deduce its attributes.
Risk: Medium (Low likelihood, potentially high impact).
Mitigations:
M1.2.1 Strict Rate Limits: Enforce rate limits on A2A interactions per session/user/agent.
M1.2.2 Anomaly Detection: Observe query patterns for anomalies that suggest probing or data extraction attempts.
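One common way to implement M1.2.1 is a per-agent token bucket, sketched below. The capacity and refill rate are placeholder values, not recommendations:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-agent token bucket (mitigation M1.2.1): each agent may burst up
    to `capacity` requests, refilled at `rate` tokens per second."""

    def __init__(self, capacity: float = 10, rate: float = 1.0):
        self.capacity = capacity
        self.rate = rate
        self.tokens = defaultdict(lambda: capacity)   # start buckets full
        self.last = defaultdict(time.monotonic)       # last-seen timestamp

    def allow(self, agent_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[agent_id]
        self.last[agent_id] = now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens[agent_id] = min(
            self.capacity, self.tokens[agent_id] + elapsed * self.rate
        )
        if self.tokens[agent_id] >= 1:
            self.tokens[agent_id] -= 1
            return True
        return False
```

Keying the buckets by agent identity (rather than IP) matters here, since probing in an A2A setting arrives through authenticated agent channels.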
6.2. Layer 2: Data Operations
MAESTRO's Layer 2 addresses vulnerabilities within data operations that are used by the A2A Protocol. The autonomy of A2A-enabled agents adds a new layer of complexity, as they may access and process data in unpredictable ways.
Threat T2.1: Data Poisoning (Message Parts): An attacker injects malicious content into messages exchanged between agents, compromising the data used for decision-making. The dynamic nature of agent interactions means this data poisoning can spread rapidly and have cascading effects.
Vulnerability: Insufficient validation of data within message Parts.
Attack Vector: Attacker sends messages with malicious TextParts, FileParts (containing compromised files), or DataParts.
Risk: High (Medium-high likelihood, high impact). Can lead to incorrect agent behavior, misinformation, or security breaches.
Mitigation:
M2.1.1 Strong Validation: Implement strict validation on all message Parts, including file integrity checks, schema validation for DataParts, and content filtering for TextParts.
M2.1.2 Least Privilege: Limit agent access to sensitive data based on the principle of least privilege.
M2.1.3 Provenance Tracking: Track the origin and lineage of data within messages to assess its reliability. Extend provenance tracking to account for the identities of sending agents and any transformations applied to the data.
Threat T2.2: Sensitive Information Disclosure: An agent unintentionally discloses sensitive information (PII, confidential information) in A2A communications or artifacts due to excessively broad permissions, data management mistakes, or model hallucinations.
Vulnerability: Excessive agent data access; insufficient output validation/filtering; unsecured data management logic; model unpredictability.
Attack Vector: A legitimate or malicious interaction compels the agent to retrieve and disclose sensitive material to which it has access, despite the inappropriateness of such disclosure in the current context.
Risk: Elevated (Medium likelihood, Medium-High impact).
Mitigations:
M2.2.1 Automated PII Redaction: Employ automated processes for the detection and redaction of personally identifiable information (e.g., platform functionalities such as Gemini's filters).
M2.2.2 Fine-Grained Access Controls: Implement robust access controls according to agent roles and task context.
M2.2.3 Context-Aware Guardrails: Add guardrails to prevent agents from sharing sensitive and restricted information.
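To illustrate M2.2.1, here is a deliberately simple regex-based redactor. Regexes alone are not adequate PII detection; the patterns below are illustrative, and real deployments would rely on trained detectors or platform-level filters:

```python
import re

# Illustrative PII patterns; real redaction would use trained detectors
# (e.g. platform-level filters) rather than regexes alone.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace detected PII with typed placeholders before the text
    leaves the agent (mitigation M2.2.1)."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```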
6.3. Layer 3: Agent Frameworks (A2A Protocol)
This MAESTRO layer covers vulnerabilities within the A2A Protocol itself, with emphasis on the dynamic nature of agents and their identities.
Threat T3.1: Unauthorized Agent Impersonation: An attacker impersonates a legitimate agent, gaining access to sensitive information or manipulating other agents. The dynamic nature of agent identities, as evidenced by changing credentials or verifiable claims, further complicates this threat.
Vulnerability: Weak authentication mechanisms, reliance on easily spoofed Agent Cards.
Attack Vector: Attacker creates a fake Agent Card, compromises agent credentials.
Risk: High (Medium likelihood, high impact).
Mitigation:
M3.1.1 Decentralized Identifiers (DIDs): Require agents to use DIDs for identity verification. Implement mechanisms to periodically refresh and re-validate DID documents to detect identity changes.
M3.1.2 Secure Authentication: Implement strong authentication mechanisms, such as DID-based signatures or mutual TLS, to verify agent identities. Use time-stamped signatures to prevent replay attacks and ensure that credentials are valid at the time of communication.
M3.1.3 Agent Registry: Implement a trusted agent registry to validate the legitimacy of agents. The registry should be capable of handling dynamic agent attributes and be continuously updated.
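Mitigation M3.1.3 can be sketched as a registry that binds an agent's DID to its endpoint and expires stale entries, so identity changes force re-validation. The DID strings and expiry window below are illustrative:

```python
import time
from dataclasses import dataclass

@dataclass
class RegistryEntry:
    did: str
    endpoint: str
    refreshed_at: float

class AgentRegistry:
    """Trusted registry sketch (mitigation M3.1.3): entries must be
    re-validated periodically so stale or changed identities are caught."""

    def __init__(self, max_age_seconds: float = 3600):
        self.max_age = max_age_seconds
        self.entries: dict[str, RegistryEntry] = {}

    def register(self, did: str, endpoint: str) -> None:
        self.entries[did] = RegistryEntry(did, endpoint, time.monotonic())

    def is_trusted(self, did: str, endpoint: str) -> bool:
        entry = self.entries.get(did)
        if entry is None or entry.endpoint != endpoint:
            return False  # unknown agent, or endpoint changed since registration
        return time.monotonic() - entry.refreshed_at <= self.max_age
```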
Threat T3.2: Message Injection Attacks: An attacker injects malicious content into A2A messages, manipulating the behavior of receiving agents. Agent autonomy amplifies this threat, as compromised agents may propagate malicious messages without human intervention.
Vulnerability: Insufficient validation of message content, lack of integrity protection.
Attack Vector: Attacker modifies message Parts or manipulates message metadata.
Risk: High (High likelihood, potentially high impact).
Mitigation:
M3.2.1 Digital Signatures: Implement digital signatures for all A2A messages to ensure integrity and non-repudiation. Implement multi-signature schemes to require endorsement from multiple trusted agents before a message is considered valid.
M3.2.2 Input Validation: Implement strict input validation on all message content, including message Parts and metadata.
M3.2.3 Content Filtering: Use content filtering to detect and block malicious content in messages.
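The integrity half of M3.2.1 can be sketched with a MAC over a canonical serialization of the message. HMAC with a shared secret keeps this sketch stdlib-only; a real A2A deployment would use asymmetric, DID-bound signatures so that verification does not require sharing signing keys:

```python
import hashlib
import hmac
import json

def sign_message(message: dict, key: bytes) -> str:
    """Attach integrity protection to an A2A message (mitigation M3.2.1).
    Canonical JSON (sorted keys, fixed separators) makes the tag
    independent of dict ordering."""
    canonical = json.dumps(message, sort_keys=True, separators=(",", ":"))
    return hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()

def verify_message(message: dict, signature: str, key: bytes) -> bool:
    """Constant-time comparison avoids timing side channels."""
    return hmac.compare_digest(sign_message(message, key), signature)
```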
Threat T3.3: Protocol Downgrade Attacks: An attacker forces agents to use a less secure version of the A2A protocol. With non-deterministic agent interactions, this can open up additional attack vectors that are harder to anticipate.
Vulnerability: Lack of strong protocol version negotiation mechanisms.
Attack Vector: Attacker manipulates protocol negotiation messages to force the use of an older, vulnerable version.
Risk: Medium (Low likelihood, potentially high impact).
Mitigation:
M3.3.1 Secure Protocol Negotiation: Implement secure protocol negotiation mechanisms, such as Transport Layer Security (TLS) with mutual authentication, to ensure that agents use the most secure protocol version.
M3.3.2 Deprecation Policy: Clearly define and enforce a deprecation policy for older protocol versions.
Threat T3.4: Malicious A2A Server Impersonating a Trusted Company: An attacker sets up a malicious A2A server disguised to appear as if it's operated by a trusted company or organization, potentially deceiving agents into communicating with it and divulging sensitive information or executing malicious tasks.
Vulnerability: Agents rely on the Agent Card for identifying A2A servers, and Agent Cards can be spoofed or manipulated. Lack of robust server-side authentication and trust mechanisms.
Attack Vector: The attacker creates a malicious A2A server with an Agent Card that contains false or misleading information (e.g., a fake name, logo, or URL resembling a trusted company). They might also compromise a legitimate, but less-critical, server and repurpose it to run the malicious A2A implementation. The attacker then uses various techniques (e.g., DNS spoofing, social engineering, or simply relying on misconfigurations) to lure agents into communicating with their server instead of the legitimate one.
Risk: High (Medium likelihood, potentially very high impact). If successful, this attack allows the attacker to:
Steal sensitive information: Harvest data exchanged between agents and the malicious server.
Compromise agents: Inject malicious code or instructions into the agents.
Disrupt A2A operations: Cause agents to malfunction or fail.
Spread misinformation: Use the compromised server to disseminate false or misleading information to other agents in the ecosystem.
Damage the reputation of the impersonated company: By associating the malicious server's actions with the trusted company's name and brand.
Mitigation:
M3.4.1 Decentralized Identifiers (DIDs) for Server Identities: As with agents, A2A servers should be identified using DIDs, and their DID documents should be verifiable and regularly updated. The A2A protocol should mandate DID-based authentication for server-to-agent communication. This offers a stronger level of trust than relying solely on the Agent Card.
M3.4.2 Certificate Transparency (CT) for Agent Cards: Implement a mechanism akin to Certificate Transparency (CT) for SSL/TLS certificates. Agent Cards could be registered with a public log (e.g., a blockchain or distributed ledger), allowing agents to verify that the Agent Card is legitimate and hasn't been tampered with. A certificate authority system would be needed to vouch for the identity of the A2A server before it can be registered in the CT log.
M3.4.3 Mutual TLS (mTLS) Authentication: Enforce mutual TLS (mTLS) authentication between agents and A2A servers. This requires both the client (agent) and server to present certificates to verify their identities.
M3.4.4 DNSSEC for Server Domain: If the A2A server's Agent Card includes a URL with a domain name, ensure that the domain is secured with DNSSEC to prevent DNS spoofing attacks.
M3.4.5 Agent Registry Verification: Before interacting with an A2A server, agents should consult a trusted agent registry to verify that the server is legitimate. The registry should be maintained by a trusted authority and should provide information such as the server's DID, the organization that operates the server, and a list of known security vulnerabilities.
M3.4.6 Agent Card Signature Verification: Agents should cryptographically verify the Agent Card using the server's public key or DID, ensuring that the card hasn't been tampered with.
M3.4.7 Multi-Factor Authentication for Critical Operations: For sensitive operations, agents should require multi-factor authentication (MFA) before communicating with an A2A server.
M3.4.8 Behavioural Analysis and Reputation Systems: Implement behavioural analysis to detect unusual server activity patterns that might indicate impersonation. A reputation system could be used to track server trustworthiness based on past interactions and reports from other agents.
M3.4.9 Auditing and Logging: Maintain detailed audit logs of all communications between agents and A2A servers, including server identities, timestamps, and message contents. This helps with forensic analysis and incident response.
M3.4.10 Honeypot Servers: Deploy honeypot A2A servers to attract attackers and gather intelligence about their techniques.
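The Certificate Transparency analogy in M3.4.2 can be illustrated with an append-only, hash-chained log of Agent Cards. This is a toy in-memory sketch, not a distributed-ledger design; its one property is that any retroactive edit to an entry breaks every later link:

```python
import hashlib
import json

class AgentCardLog:
    """Append-only, hash-chained log of Agent Cards, loosely analogous to
    Certificate Transparency (mitigation M3.4.2)."""

    def __init__(self):
        self.entries = []  # list of (card_json, link_hash)

    def append(self, card: dict) -> str:
        prev = self.entries[-1][1] if self.entries else "0" * 64
        card_json = json.dumps(card, sort_keys=True)
        # Each link commits to the previous link and the new card.
        link = hashlib.sha256((prev + card_json).encode()).hexdigest()
        self.entries.append((card_json, link))
        return link

    def verify(self) -> bool:
        prev = "0" * 64
        for card_json, link in self.entries:
            if hashlib.sha256((prev + card_json).encode()).hexdigest() != link:
                return False
            prev = link
        return True
```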
6.4. Layer 4: Deployment and Infrastructure
This layer, according to MAESTRO, addresses the deployment and infrastructure used to run the A2A protocol, considering the dynamic demands placed by autonomous agents.
Threat T4.1: Denial of Service (DoS) Attacks: An attacker overwhelms A2A servers with requests, making agents unable to communicate. The autonomous nature of agents means that a DoS attack can rapidly cascade through the ecosystem.
Vulnerability: Insufficient capacity to handle traffic spikes.
Attack Vector: Attacker floods A2A servers with requests.
Risk: High (Medium-high likelihood, high impact).
Mitigation:
M4.1.1 Robust Infrastructure: Use redundant, geographically distributed infrastructure to minimize downtime.
M4.1.2 DDoS Protection: Implement robust DDoS mitigation measures.
M4.1.3 Rate Limiting: Implement rate limits to prevent excessive requests. Implement adaptive rate limiting that adjusts based on real-time network conditions and agent activity patterns.
6.5. Layer 5: Evaluation and Observability
This layer in MAESTRO considers the monitoring and logging infrastructure used to observe agent behavior, while acknowledging the challenges presented by non-deterministic agent actions.
Threat T5.1: Manipulation of Logging Data: An attacker modifies or deletes log entries to hide malicious activity. The complex and unpredictable nature of agent behavior makes it harder to distinguish between legitimate activity and malicious actions masked by manipulated logs.
Vulnerability: Insecure logging infrastructure, insufficient access controls on log data.
Attack Vector: Attacker gains access to the logging system.
Risk: Medium-High (Medium likelihood, potentially high impact).
Mitigation:
M5.1.1 Secure Logging Infrastructure: Use a secure logging infrastructure with strong access controls.
M5.1.2 Log Integrity Monitoring: Use checksums or digital signatures to verify the integrity of log data.
M5.1.3 Anomaly Detection: Employ advanced anomaly detection techniques that can identify unusual behavior patterns, taking into account the inherent non-determinism of agent actions.
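As a toy illustration of M5.1.3, the function below flags per-interval request counts that deviate sharply from the mean. Real deployments would use rolling, per-agent baselines robust to the non-determinism discussed above; the threshold and data are placeholders:

```python
from statistics import mean, stdev

def flag_anomalies(counts: list[int], threshold: float = 3.0) -> list[int]:
    """Return indices of per-interval request counts more than
    `threshold` standard deviations from the mean (mitigation M5.1.3)."""
    if len(counts) < 2:
        return []
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly uniform traffic: nothing to flag
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]
```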
6.6. Layer 6: Security and Compliance
MAESTRO ensures security and compliance are thoroughly reviewed with this layer, including the challenges posed by dynamic agent identities and autonomous decision-making.
Threat T6.1: Unauthorized Access to Agent Credentials: An attacker gains access to agent credentials (e.g., private keys), allowing them to impersonate the agent and perform malicious actions. With agents dynamically obtaining and presenting verifiable credentials, compromised agents can rapidly acquire new and powerful capabilities, escalating the potential for damage.
Vulnerability: Insecure key management practices.
Attack Vector: Phishing, malware, social engineering, exploiting vulnerabilities in systems where agent credentials are stored.
Risk: High (High likelihood, high impact).
Mitigation:
M6.1.1 Secure Key Storage: Never embed agent credentials directly in code. Use hardware security modules (HSMs) or secure key management services.
M6.1.2 Key Rotation: Regularly rotate agent credentials.
M6.1.3 Multi-Factor Authentication: Use MFA to verify that the legitimate user remains in control.
Threat T6.2: Lack of Compliance on Sensitive Data: Agents send, receive, and process data containing PII without proper protection. Agent autonomy intensifies this threat.
Vulnerability: Lack of safeguards and compliance processes.
Attack Vector: Systemic failure to comply with data privacy laws.
Risk: High (Medium-high likelihood, potentially high impact - legal and financial penalties).
Mitigation:
M6.2.1 Data Minimization: Reduce the collection of personal data.
M6.2.2 Pseudonymization/Anonymization: Pseudonymize or anonymize personal data wherever full identifiers are not required.
M6.2.3 Data Encryption: Ensure end-to-end encryption and protect the encryption keys.
Threat T6.3: Abuse of Delegated Authority: Vulnerabilities in the implementation can allow agents to exceed their granted permissions.
Mitigation:
M6.3.1 Explicit User Consent: Require explicit user consent before agents act under delegated authority.
M6.3.2 Detailed Auditing: Log and audit every action taken under delegated authority.
M6.3.3 Strict Token Validation: Validate delegation tokens (scope, audience, expiry) on every request.
6.7. Layer 7: Agent Ecosystem
MAESTRO calls for a secure ecosystem between all the agents, considering the complexities arising from agent autonomy, non-determinism, and dynamic identities.
Threat T7.1: Malicious Agent Interaction: A compromised agent interacts with other agents, causing harm, exploiting vulnerabilities, or leading to unintended consequences. With agent identities changing and behaviors unpredictable, it is difficult to anticipate how far the harm from a single compromised agent can spread.
Vulnerability: Lack of trust mechanisms between agents, insecure communication channels.
Attack Vector: Attacker compromises one agent and uses it to attack others.
Risk: High (Medium-low likelihood, potentially very high impact).
Mitigation:
M7.1.1 Secure Inter-Agent Communication: Use secure communication protocols and authentication mechanisms for interactions between agents.
M7.1.2 Agent Reputation Systems: Implement reputation systems to track agent behavior and identify potentially malicious agents. The reputation system must be able to handle the dynamic nature of agent identities and be resistant to manipulation.
M7.1.3 Sandboxing: Isolate agents from each other to limit the impact of a compromised agent. Implement runtime monitoring and policy enforcement to prevent agents from exceeding their defined boundaries.
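The policy-enforcement half of M7.1.3 can be sketched as a per-agent capability allowlist checked before any action executes, regardless of what a peer message requests. The agent IDs and action names here are hypothetical:

```python
# Per-agent capability allowlists (mitigation M7.1.3). In a real system
# these would come from signed policy documents, not a hardcoded dict.
POLICIES: dict[str, set[str]] = {
    "invoice-agent": {"read_invoice", "write_report"},
    "chat-agent": {"send_message"},
}

def enforce(agent_id: str, action: str) -> None:
    """Raise unless `agent_id` was explicitly granted `action`.
    Unknown agents get an empty allowlist (default deny)."""
    allowed = POLICIES.get(agent_id, set())
    if action not in allowed:
        raise PermissionError(f"{agent_id} is not permitted to {action}")
```

Default deny is the important design choice: an agent whose identity or capabilities changed, or that was never registered, is blocked until a policy explicitly grants the action.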
Cross-Layer Threats (Examples): Emphasizing MAESTRO's Perspective
MAESTRO helps us understand how vulnerabilities in different layers can combine to create more complex threats. These examples now incorporate considerations of non-determinism, agent autonomy, and dynamic agent identities.
C3.1 (Agent Frameworks -> Data Operations): MAESTRO highlights how an attacker injects malicious code into an A2A message (Layer 3), causing an autonomous agent to access and exfiltrate sensitive data (Layer 2) due to non-deterministic decision-making that circumvents safety mechanisms.
C6.1 (Security & Compliance -> Agent Frameworks): Using the layered approach from MAESTRO, we see that an attacker obtains unauthorized agent credentials (Layer 6) and uses them to send malicious messages via the A2A protocol (Layer 3).
7: Summary and Next Steps
This threat modeling exercise provides a comprehensive overview of potential threats to systems built using the A2A protocol, methodically analyzed through the lens of the MAESTRO framework. The next steps would involve:
Prioritization: Focus on the highest-risk threats based on your specific application and context, using MAESTRO's risk assessment principles.
Mitigation Implementation: Implement the suggested mitigations, prioritizing those that address the highest-risk threats, guided by MAESTRO's best practices.
Testing: Thoroughly test the system, including security testing and adversarial testing, to validate the effectiveness of the mitigations within the MAESTRO framework.
Monitoring: Continuously monitor the system for threats and vulnerabilities, incorporating MAESTRO's emphasis on continuous adaptation.
Iteration: Regularly review and update the threat model as the system evolves and new threats emerge, aligning with MAESTRO's principle of ongoing assessment.
8: About the Authors
Ken Huang is a prolific author and renowned expert in AI and Web3, with numerous published books spanning AI and Web3 business and technical guides and cutting-edge research. As Co-Chair of the AI Safety Working Groups at the Cloud Security Alliance and Co-Chair of the AI STR Working Group at World Digital Technology Academy under the UN Framework, he's at the forefront of shaping AI governance and security standards.
Huang also serves as CEO and Chief AI Officer (CAIO) of DistributedApps.ai, specializing in Generative AI-related training and consulting. His expertise is further showcased in his role as a core contributor to OWASP's Top 10 Risks for LLM Applications and his past active involvement in the NIST Generative AI Public Working Group.
Key Books:
“Agentic AI: Theories and Practices” (upcoming, Springer, August 2025)
"Beyond AI: ChatGPT, Web3, and the Business Landscape of Tomorrow" (Springer, 2023) - Strategic insights on AI and Web3's business impact.
"Generative AI Security: Theories and Practices" (Springer, 2024) - A comprehensive guide on securing generative AI systems
"Practical Guide for AI Engineers" (Volumes 1 and 2 by DistributedApps.ai, 2024) - Essential resources for AI and ML Engineers
"The Handbook for Chief AI Officers: Leading the AI Revolution in Business" (DistributedApps.ai, 2024) - A Practical guide for CAIO in small or big organizations.
"Web3: Blockchain, the New Economy, and the Self-Sovereign Internet" (Cambridge University Press, 2024) - Examining the convergence of AI, blockchain, IoT, and emerging technologies
His co-authored book on "Blockchain and Web3: Building the Cryptocurrency, Privacy, and Security Foundations of the Metaverse" (Wiley, 2023) has been recognized as a must-read by TechTarget in both 2023 and 2024.
A globally sought-after speaker, Ken has presented at prestigious events, including Davos WEF, ACM, IEEE, CSA AI Summit, Depository Trust & Clearing Corporation, and World Bank conferences.
Ken Huang is a member of OpenAI Forum to help advance its mission to foster collaboration and discussion among domain experts and students regarding the development and implications of AI.
Dr. Idan Habler is a Staff AI Security Researcher at Intuit, specializing in the security of LLMs and AI-powered applications. He holds a Ph.D. in Information Systems Engineering from Ben-Gurion University, and brings extensive expertise in threat modeling, risk assessment, and securing artificial intelligence applications. At Intuit, Idan is a member of the Adversarial AI Security reSearch team (A2RS), which is in charge of identifying risks and vulnerabilities in AI applications and developing effective mitigation strategies. Idan also contributes to OWASP work on building the foundations of LLM security, including co-leading the Agentic Security Initiative (ASI) workstream, which focuses on securing agentic applications.