What is the DNS for Agentic AI?
At a recent Agentic AI Security Summit in New York City, I predicted that within 5 to 10 years, the majority of internet traffic would be agent-to-agent communication. But how do these agents find each other to communicate?
Unlike conventional services or applications, a GenAI-based agent must be discoverable, verifiable, secure, and interoperable across multiple domains. Existing naming and identity infrastructure, such as DNS and public key infrastructure (PKI), offers partial analogues but falls short in expressing agent behavior, intention modeling, delegation boundaries, and dynamic capabilities. A modern agent registry must fulfill the roles of directory services, trust management, semantic matchmaking, and behavioral auditing in a single, coherent framework.
The Evolution Beyond DNS
DNS abstracts IP addresses into hierarchical domain names, and its design relies on relatively static host identities and service mapping. However, agentic systems are not static: they encapsulate learning capabilities, maintain internal belief states, and often exhibit mobility, either logically (e.g., task migration across clusters) or physically (e.g., embodied agents in edge devices). The Agent Network Protocol (ANP) correctly observes that traditional methods of service resolution are insufficient for environments where agents autonomously negotiate, delegate, and reconfigure their workflows in real time. Discovery is no longer just a mapping problem; it is a dynamic alignment of intent, authority, and capacity.
Unlike DNS records, which are primarily forward mappings (A, AAAA, CNAME), a functional agent registry must support multi-dimensional queries that match on semantic descriptions, runtime capabilities, and trust scores. For example, an agent requesting support in a healthcare workflow must identify another agent not only based on availability but also on HL7/FHIR protocol support, relevant certifications (e.g., HIPAA compliance), and prior task reliability. This requires registries to function more like distributed knowledge graphs with inference capabilities, rather than flat lookup tables.
Core Components of an Agentic AI Registry
This section explores the core components of such a registry.
Agent Capabilities Registry: Defining What Agents Can Do
The registry must represent each agent with a capability ontology using formal logic (e.g., OWL or SHACL) to express its affordances and constraints. For example, agent A may be described as:
@prefix a: <https://example.org/agent-ontology#> .  # illustrative capability-ontology namespace

<AgentA> a a:Agent ;  # "a" is Turtle shorthand for rdf:type
    a:canPerform a:DocumentTranslation ;
    a:supportsLanguagePair "en-fr" ;
    a:usesTool <OpenNMT> ;
    a:hasMetric [
        a:BLEU "36.7" ;
        a:Latency "250ms"
    ] ;
    a:hasDomainExpertise "Legal" ;
    a:hasVersion "v3.1.4" .
Such descriptions enable automated matchmaking engines to perform subsumption reasoning, e.g., to infer that an agent capable of "DocumentTranslation" in the "Legal" domain also satisfies a request for "ContractTranslation". Versioning support allows dependent agents to bind to specific capability sets or apply semantic version compatibility checks.
Capability entries must support both static declaration and runtime verification through challenge-response protocols. For instance, during onboarding or task assignment, the registry may issue benchmarking tasks to validate an agent’s claimed metrics (e.g., response latency under a specified load, translation quality on held-out samples).
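A minimal sketch of such a runtime check, assuming a hypothetical invoke_agent callable and a claimed latency taken from the agent's registry entry:

import time

CLAIMED_LATENCY_MS = 250  # taken from the agent's registry entry
CHALLENGE = {"task": "translate", "pair": "en-fr",
             "text": "Le contrat est résilié."}  # held-out benchmark sample

def verify_latency_claim(invoke_agent, tolerance=1.2):
    """Issue a benchmark task and compare observed latency to the claim."""
    start = time.monotonic()
    response = invoke_agent(CHALLENGE)  # hypothetical transport call
    observed_ms = (time.monotonic() - start) * 1000
    return observed_ms <= CLAIMED_LATENCY_MS * tolerance, observed_ms, response

A quality check on the translation itself would follow the same pattern, scoring the response against held-out reference translations.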
Communication Protocol Standards: Enabling Agent Interaction
A robust communication model must extend beyond basic REST or RPC patterns. Agent-to-agent interaction requires persistent conversation state, schema negotiation, error recovery strategies, and multi-turn dialog. The preferred message representation is JSON-LD for semantic clarity, but it must be embedded in a communication substrate that supports:
Provenance Tracking: Every message must include a digital signature and a Merkle-proof hash chain of prior interaction context.
Typed Intent Dispatching: Messages should embed an @type field corresponding to the agent communication act ontology (e.g., Inform, Request, Propose, Confirm), enabling dialog act parsers and FSM-based interaction models.
Shared Context IDs: For multi-agent workflows, agents must share a Context-ID header to correlate actions across services (similar to OpenTracing spans).
Adaptive Protocol Switching: Agents should negotiate down to the minimal shared protocol (e.g., fallback from gRPC to REST) using content negotiation headers.
Protocols like AEA’s agent-to-agent envelope protocol or FIPA ACL provide blueprints, but need to be extended with authenticated envelope layers (e.g., JOSE) and transport-agnostic bindings (e.g., WebSocket, QUIC, MQTT).
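To make this concrete, here is a minimal sketch of a signed envelope combining these elements, using PyJWT with a shared secret for brevity (a production system would use per-agent asymmetric keys); field names such as contextId and prevHash are illustrative, not a published standard:

import hashlib
import jwt  # PyJWT

SECRET = "demo-shared-secret"

def make_envelope(body, context_id, prev_envelope=None):
    # Hash-chain the prior envelope so interaction provenance can be audited.
    prev_hash = (hashlib.sha256(prev_envelope.encode()).hexdigest()
                 if prev_envelope else None)
    payload = {
        "@type": "Request",       # typed intent from the communication-act ontology
        "contextId": context_id,  # shared Context-ID correlating the workflow
        "prevHash": prev_hash,    # link into the Merkle-style hash chain
        "body": body,
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")  # signed JOSE object

msg1 = make_envelope({"task": "translate", "pair": "en-fr"}, context_id="ctx-42")
msg2 = make_envelope({"status": "accepted"}, context_id="ctx-42", prev_envelope=msg1)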
Discovery Protocol: Finding the Right Agents
Agent discovery must blend semantic search with cryptographic filtering. The system must support SPARQL-like federated queries over agent registries, while restricting results to entities holding verifiable credentials (VCs). Each registry entry should include:
Capability RDF graphs
VC documents signed using DID methods
Agent provenance (e.g., creator org, training data lineage)
Reputation scores updated via decentralized feedback channels (e.g., proof-of-use attestations)
Discovery protocols must accommodate both active search, wherein a client queries for matching agents, and passive discovery, where agents advertise capabilities through decentralized channels (e.g., pub-sub over IPFS or gossip protocols).
To prevent Sybil attacks or spam registrations, discovery services should require proof-of-work or stake mechanisms tied to decentralized identifiers (DIDs) and verifiable claims issued by reputable CAs or registries (e.g., Cloud Security Alliance, W3C).
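As an illustration of active discovery, the following sketch runs a SPARQL query over a registry's capability graphs with rdflib; the a: namespace matches the illustrative ontology above, and credential verification is left as a second filtering pass:

from rdflib import Graph

g = Graph()
g.parse("registry.ttl", format="turtle")  # capability RDF graphs exported by the registry

QUERY = """
PREFIX a: <https://example.org/agent-ontology#>
SELECT ?agent WHERE {
    ?agent a a:Agent ;
           a:canPerform a:DocumentTranslation ;
           a:hasDomainExpertise "Legal" .
}
"""
candidates = [str(row.agent) for row in g.query(QUERY)]
# A second pass would drop any candidate whose verifiable credentials fail to verify.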
Security Protocol Framework
The identity model for such a registry maps naturally onto a multi-token flow:
User ID Token: A JWT or OIDC-compliant credential issued by an identity provider (IdP) to represent the end user; it includes scopes for delegation.
Agent ID Token: Signed using a DID method (e.g., did:web, did:ion), encodes operational metadata, capability claims, and endpoint identifiers.
Delegation Token: OAuth2-style assertion embedding context-specific permissions (act_as, on_behalf_of) and cryptographically binding user and agent identity.
This triple-token model allows recursive delegation where agent A may delegate a subtask to agent B by generating a constrained sub-delegation token with a reduced capability scope and TTL (time-to-live), ensuring minimal exposure in the event of compromise.
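A minimal sketch of such a constrained sub-delegation token, again using PyJWT with a demo secret (real deployments would sign with DID-anchored asymmetric keys); the claim names mirror the act_as / on_behalf_of semantics above:

import time
import jwt  # PyJWT

SECRET = "demo-secret"

def make_delegation_token(issuer, agent_did, scopes, on_behalf_of, ttl_s=300):
    now = int(time.time())
    payload = {
        "iss": issuer,              # delegating principal
        "act_as": agent_did,        # agent authorized to act
        "on_behalf_of": on_behalf_of,  # end user stays constant down the chain
        "scope": scopes,            # reduced capability scope
        "iat": now,
        "exp": now + ttl_s,         # short TTL limits exposure on compromise
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

# Agent A re-delegates a strictly narrower scope and shorter TTL to agent B:
root = make_delegation_token("user:alice", "did:web:a.example",
                             ["read", "translate"], "user:alice")
sub = make_delegation_token("did:web:a.example", "did:web:b.example",
                            ["translate"], "user:alice", ttl_s=60)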
Authorization and Access Control
Access control requires integration of Attribute-Based Access Control (ABAC) and Context-Aware Policy Evaluation. Policies may include constraints like:
IF agent.role == "data_processor" AND task.context == "research"
THEN allow read(data.type == "anonymized_medical")
These rules are evaluated using policy engines like OPA (Open Policy Agent), ideally embedded in the agent runtime to enable local enforcement and cryptographic logging of policy decisions.
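A sketch of local enforcement via OPA's REST API, assuming an OPA sidecar is running with a Rego policy loaded under agents/authz; the input document mirrors the rule above:

import requests

OPA_URL = "http://localhost:8181/v1/data/agents/authz/allow"

decision = requests.post(OPA_URL, json={
    "input": {
        "agent": {"role": "data_processor"},
        "task": {"context": "research"},
        "action": "read",
        "data": {"type": "anonymized_medical"},
    },
}).json()

allowed = decision.get("result") is True
# The decision, its input, and a signature over both would then go to the audit log.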
Delegation chains should be recorded in tamper-proof logs (e.g., append-only Merkle DAGs) to ensure auditability of access rights and enforce revocation via CRLs or OCSP-style real-time checks.
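A full Merkle DAG is beyond a short example, but the core append-only property can be sketched as a hash chain in a few lines; the entry fields here are illustrative:

import hashlib
import json
import time

log = []  # append-only; each entry commits to its predecessor

def append_delegation(event):
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"event": event, "prev": prev, "ts": time.time()}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

append_delegation({"grant": "translate",
                   "from": "did:web:a.example", "to": "did:web:b.example"})
# Verification replays the chain; any mutated entry invalidates every later hash.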
Continuous Monitoring and Validation
Agents must include embedded instrumentation that exposes a telemetry stream compliant with OpenTelemetry standards. This stream should be ingested into a SIEM or runtime monitoring system that applies:
Real-time behavioral validation: Using predefined LLM validators or zero-knowledge proofs of safety.
Heuristic anomaly detection: Applying techniques like isolation forests or time-series anomaly models to detect drift or erratic behavior.
Rollback triggers: Based on output misalignment or security events, agents should support dynamic rollback to prior known-safe versions or enter containment mode.
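A minimal telemetry sketch with the OpenTelemetry Python SDK, exporting spans to the console for brevity (a real deployment would export to a collector feeding the SIEM); the attribute names are illustrative:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("agent.runtime")

with tracer.start_as_current_span("handle_task") as span:
    span.set_attribute("agent.id", "did:web:a.example")
    span.set_attribute("task.context", "research")
    # ... perform the task; the exported span feeds anomaly detection downstream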
Implementation Approaches: Centralized vs. Distributed
Centralized
Centralized agent registries can be modeled after cloud IAM and service catalogs, allowing enterprises to enforce strict governance, audit, and budget control. However, to reduce risks of lock-in and monopoly control, they must expose open APIs for registry interoperation and support export of full agent metadata in machine-readable formats.
Distributed
Distributed registries must adopt verifiable data registry (VDR) standards and publish agent records on decentralized ledgers (e.g., Ethereum, or Hyperledger Indy with Aries-based agents). DID Documents store service endpoints, public keys, and verification methods. Capability invocations use VCs plus ZCAP-LD to allow granular, revocable authorization without central approval.
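For illustration, here is a skeletal DID Document as it might appear in such a registry, shown as a Python literal; the service type and key values are placeholders:

did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:web:a.example",
    "verificationMethod": [{
        "id": "did:web:a.example#key-1",
        "type": "JsonWebKey2020",
        "controller": "did:web:a.example",
        "publicKeyJwk": {"kty": "EC", "crv": "P-256", "x": "...", "y": "..."},
    }],
    "service": [{
        "id": "did:web:a.example#agent-endpoint",
        "type": "AgentService",  # placeholder service type
        "serviceEndpoint": "https://a.example/agents/translator",
    }],
}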
Registries should support interoperation via standard discovery protocols like ActivityPub or the Linked Data Platform, allowing agents to form dynamic trust webs and federated coordination models.
Hybrid
A federated trust framework where accredited registrars (e.g., cloud providers, standards bodies) operate registry nodes anchored to a public DID network offers a viable path forward. Interoperability is enabled via common vocabularies, discovery APIs, and trust frameworks (e.g., W3C VC Trust Registry). Federation mechanisms akin to the Gaia-X federation services can mediate access policies and credential translation.
Summary and Call to Action
The deployment and scaling of autonomous AI agents require a foundational shift in how we manage identity, capabilities, communication, and trust. The proposed Agentic AI Registry represents a synthesis of multi-agent systems theory, decentralized identity infrastructure, security policy enforcement, and dynamic service discovery. It fills a structural gap between narrow identity resolution (DNS) and rich behavioral governance. If designed with modular standards, composable protocols, and secure-by-design principles, this registry will not only enhance interoperability but also prevent the proliferation of opaque, insecure agent ecosystems.
As a call to action: coordinated efforts among industry consortia, standardization bodies, and cloud providers will be essential to implement and enforce these protocols at scale.