OWASP AIVSS: The Kickoff Meeting
Why CVSS isn't enough for the age of agentic AI and how we're building the solution.
June 10th, 2025, marked a truly special day for the founding members of the OWASP AI Vulnerability Scoring System (AIVSS) project. At 12:30 PM, we officially held our inaugural meeting, establishing the OWASP Top 10 for Agentic AI as our first critical deliverable.
This Substack post delves into our vision for AIVSS.
What is AIVSS? The AI Vulnerability Scoring System
At its core, AIVSS stands for the AI Vulnerability Scoring System. It's an initiative born out of a critical need to standardize how we identify, assess, and communicate security vulnerabilities specific to AI systems. Think of it as an extension to the existing CVSS (Common Vulnerability Scoring System) framework, not a replacement.
While CVSS has served us well for software bugs, it falls short when it comes to the complexities of AI. AIVSS aims to fill this gap, providing an extension framework to understand and score AI-specific risks.
Why Do We Need AIVSS? The Limitations of Traditional Scoring
You might be asking, "Can't we just use CVSS for AI?" The short answer is no, and here's why the fundamental nature of AI demands a new approach:
Semantic Vulnerabilities Beyond Code Bugs: CVSS excels at identifying vulnerabilities that can be patched with a code fix. But what about "prompt injection," where a cleverly crafted input manipulates an AI's behavior without altering its underlying code? These semantic vulnerabilities require a different lens for assessment.
The Non-Deterministic Nature of AI: Traditional software operates deterministically – given the same input, it produces the same output. AI, especially large language models and agentic systems, can be non-deterministic. Their responses can vary, making it challenging to define and measure traditional "bugs." AIVSS must account for this inherent variability.
Lifecycle Vulnerabilities: AI systems have a complex lineage, from their training data and models to their deployment as agentic or generative AI. Vulnerabilities can originate at any point in this lifecycle, not just in the final compiled code. AIVSS considers this broader scope.
Ethical Implications: While not the primary focus of AIVSS's initial technical scope, the ethical implications of AI vulnerabilities (e.g., bias, misuse) are a significant underlying driver for a more comprehensive scoring system, particularly for autonomous agents.
The immediate focus for AIVSS is on Agentic AI, which is rapidly gaining traction in enterprise environments. These autonomous systems, capable of making decisions and taking actions, introduce a unique set of security challenges that existing frameworks simply don't address. We need to consider factors like:
The agent's level of autonomy.
The integrity of the agent's identity.
Security in multi-agent communication and orchestration.
Vulnerabilities related to tool misuse (e.g., tool squatting, schema poisoning).
Memory poisoning in an agent's short-term and long-term memory.
Goal manipulation.
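To make the factors above concrete, here is a minimal sketch of how an assessment might capture them as structured data. The field names, 0-to-1 scales, and simple averaging are hypothetical illustrations, not part of any published AIVSS specification:

```python
from dataclasses import dataclass

@dataclass
class AgenticRiskFactors:
    """Illustrative container for the agentic AI risk factors listed above.

    Scales are hypothetical: each factor is scored from 0.0 (no risk)
    to 1.0 (maximum risk).
    """
    autonomy_level: float        # how independently the agent can act
    identity_integrity: float    # risk of agent identity spoofing or compromise
    multi_agent_comms: float     # exposure in agent-to-agent orchestration
    tool_misuse: float           # tool squatting, schema poisoning, etc.
    memory_poisoning: float      # tainted short- or long-term memory
    goal_manipulation: float     # attacker-induced objective drift

    def aggregate(self) -> float:
        """Average the factor scores into a single 0-1 agentic risk value."""
        values = [
            self.autonomy_level, self.identity_integrity,
            self.multi_agent_comms, self.tool_misuse,
            self.memory_poisoning, self.goal_manipulation,
        ]
        return sum(values) / len(values)

# Example: a highly autonomous agent with notable memory-poisoning exposure
factors = AgenticRiskFactors(0.8, 0.2, 0.4, 0.3, 0.7, 0.5)
print(round(factors.aggregate(), 3))
```

A real scoring methodology would almost certainly weight these factors rather than average them; the structure is only meant to show that agentic risk is multi-dimensional in ways a single CVSS vector cannot express.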
How Are We Going to Do It? The AIVSS Roadmap
The development of AIVSS is a collaborative, multi-year undertaking, driven by a diverse coalition of founding members. These senior leaders and distinguished researchers in AI security represent a cross-section of critical sectors, including government agencies, academia, non-profit research organizations, and the open-source community. They are joined by industry pioneers from leading AI and security companies, major financial institutions, and top-tier consulting firms, ensuring the framework is both robust and practical.
Here's a glimpse into the roadmap:
Immediate Deliverables:
Agentic AI Top 10 and Scoring System Documentation: This critical document is undergoing review, aiming to define the top 10 most pressing vulnerabilities for agentic AI and establish a preliminary scoring methodology. Your input here is crucial for building consensus!
AIVSS Website: A dedicated website (aivss.owasp.org) is being established to house documentation, updates, and community contributions.
AIVSS Demo: A working MVP demonstration of the AIVSS concept is already available at https://vineethsai.github.io/aivss/
AIVSS GitHub: https://github.com/OWASP/www-project-artificial-intelligence-vulnerability-scoring-system
High-Level Components of AIVSS:
AIVSS will combine several metric groups to provide a holistic score:
CVSS Base Metrics: Leveraging existing standards for traditional cybersecurity risks.
AI-Specific Metrics: Focusing on the unique vulnerabilities of AI systems, starting with agentic AI.
Environmental Metrics: Considering the deployment environment and industry-specific factors (e.g., finance, healthcare).
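As a rough illustration of how these three components could combine, here is a small sketch. The equal 0.5/0.5 weights and the multiplicative environmental modifier are placeholder assumptions for illustration only; the actual AIVSS formula is still being developed by the working group:

```python
def aivss_score(cvss_base: float, ai_specific: float, env_modifier: float) -> float:
    """Blend a CVSS base score and an AI-specific score (both 0-10) into one
    overall 0-10 score, then adjust for the deployment environment.

    Weights and modifier semantics are hypothetical, not the official formula.
    """
    if not (0.0 <= cvss_base <= 10.0 and 0.0 <= ai_specific <= 10.0):
        raise ValueError("component scores must be in [0, 10]")
    blended = 0.5 * cvss_base + 0.5 * ai_specific
    # An environmental modifier > 1.0 amplifies risk (e.g., healthcare or
    # finance deployments); the result is capped at the 10.0 ceiling.
    return round(min(10.0, blended * env_modifier), 1)

# A medium traditional-severity flaw with high agentic-AI risk,
# deployed in a sensitive environment
print(aivss_score(cvss_base=6.1, ai_specific=8.9, env_modifier=1.2))
```

The design point the sketch tries to capture is that neither component alone tells the whole story: a modest CVSS base score can still yield a severe overall rating once agentic behavior and deployment context are factored in.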
The Future Roadmap (A 3+ Year Vision):
Beyond the immediate goal of delivering the Agentic AI Top 10, the AIVSS project has a comprehensive long-term vision:
Generic AIVSS Framework: Expanding the framework beyond agentic AI to encompass other forms of AI.
Mobile AI/AI on Device Vulnerabilities: Addressing the specific security challenges of AI deployed on mobile devices.
Specialized Scoring Calculators: Developing tools adapted for specific industries like finance and healthcare.
Training and Certification Programs: In the longer term, establishing educational programs and professional certifications to build expertise in AI vulnerability management.
If you have other programs or deliverables to propose, you can use this Google Doc to add your suggestions: https://docs.google.com/document/d/1qCsyOVn257o9kvTU4ho1XEEvVpcqPg14HRabH4dAt-w/edit?tab=t.0
AIVSS is a collaborative effort that requires diverse perspectives and contributions. If you're passionate about AI security, we encourage you to get involved by signing up with your name in this Google Doc: https://docs.google.com/document/d/13A42SIBrSF-xfaQchWyn5KYfGnXDEa9NXbu3eZ3E2eY/edit?tab=t.0
To learn more, here is the YouTube link:
Acknowledgement
A huge thank you to our phenomenal special guests who delivered the opening remarks:
Rob Joyce: Former Cybersecurity Director, NSA; Former Special Assistant to the White House; Advisor to OpenAI and PwC
Kathleen Fisher: Director, I2O (Information Innovation Office) at DARPA
Jason Clinton: CISO, Anthropic
Apostol Vassilev: Research Team Supervisor, National Institute of Standards and Technology (NIST)
And profound gratitude to my exceptional co-leaders on this project:
Michael Bargury: OWASP Committee, CTO of Zenity
Vineeth Sai Narajala: Security Engineer/Researcher, AWS
An equally important thank you to the esteemed founding members of this important project, listed alphabetically by last name:
Sunil Agrawal, Chief Information Security Officer, Glean
David Ames, Partner, PwC
Michael Bargury, Founder and CTO, Zenity
Anat Bremler-Barr, Professor of Computer Science, Tel Aviv University
Joshua Beck, Application Security Architect, SAS
Manish Bhatt, Security Researcher, Amazon Kuiper Security
Mark Breitenbach, Security Engineer, Dropbox
Siah Burke, HIPAA Security Officer, Siah.ai
David Campbell, AI Security, Scale AI
Ying-Jung Chen, AI safety researcher, PhD, Georgia Institute of Technology
Anton Chuvakin, Security Solution Strategy, Google
Jason Clinton, CISO, Anthropic
Adam Dawson, Staff AI Security Researcher, Dreadnode
Ron F. Del Rosario, VP-Head of AI Security, SAP
Leon Derczynski, Principal Research Scientist, NVIDIA
Walker Lee Dimon, AI Security Researcher, MITRE
Marissa Dotter, AI Security Researcher, MITRE
Dan Goldberg, ISO Market Lead, Omnicom
David Haber, CEO, Lakera
Idan Habler, Staff AI/ML Security Researcher, Intuit
Jason Haddix, Founder, Arcanum Information Security
Keith Hoodlet, Director of AI/ML & AppSec, Trail of Bits
Ken Huang, AIVSS Project Lead, OWASP
Chris Hughes, CEO, Aquia
Charles Iheagwara, AI/ML Security Leader, AstraZeneca
Krystal Jackson, Researcher, Center for Long-Term Cybersecurity, UC Berkeley
Sushmitha Janapareddy, Director - Security Integrations, American Express
Rob Joyce, Former Cybersecurity Director of NSA, Advisor to PwC
Diana Kelley, CISO, Protect AI
Prashant Kulkarni, Lead AI Security Research Engineer, Google Cloud
Mahesh Lambe, Founder, MIT, Unify Dynamics
Edward Lee, Vice President, Lead AI Security, JP Morgan
Nate Lee, CEO, Cloudsec.ai
Vishwas Manral, CEO, Precize.ai
Daniela Muhaj, Executive-in-Residence for Research & Development, AI 2030
Vineeth Sai Narajala, Application Security, AWS
Om Narayan, AI Security Researcher, AWS
Advait Patel, Senior Site Reliability Engineer (DevSecOps + Cloud + AIOps), Broadcom, IEEE
Alex Polyakov, CEO, adversa.ai
Ramesh Raskar, Professor & Director, MIT Media Lab
Tal Shapira, Co-Founder & CTO, Reco AI
Akram Sheriff, Senior AI/ML Software Engineering Leader, Cisco
Samantha Siau, Security and Compliance, Anthropic
Kevin Simmonds, Partner on AI Offensive Security, PwC
Martin Stanley, NIST AI RMF Lead, Independent
Omar A. Turner, General Manager of Security, Microsoft
Apostol Vassilev, AI Research Team Supervisor, NIST
Matthew Versaggi, AI Fellow, White House Presidential Innovation Fellow
David Webb, Agency Cybersecurity Officer, Cybersecurity and Infrastructure Security Agency
Dennis Xu, Research VP, AI, Gartner
Xiaochen Zhang, Executive Director and Chief Responsible AI Officer, AI 2030
Here is the link to the PPT slides: