a16z: 5 Ways Blockchain Helps AI Agent Infrastructure
Author: a16z
Compiled by: Hu Tao, ChainCatcher
AI agents have rapidly transformed from "co-pilots" to economic participants, outpacing the surrounding infrastructure.
While agents can now perform tasks and conduct transactions, they lack standardized ways to prove, across environments, who they are, what they are authorized to do, and how they are compensated. Identity does not carry across platforms, payments are not programmable by default, and coordination happens in isolated silos.
Blockchains address this at the infrastructure layer. Public ledgers provide receipts for every transaction that anyone can audit. Wallets give agents portable identities. Stablecoins offer an alternative settlement rail. These are not distant future technologies; they are usable today and can let agents operate as genuine economic actors, permissionlessly.
1. Non-Human Identity
The current bottleneck in the agent economy is no longer intelligence, but identity.
In the financial services industry alone, the number of non-human identities (automated trading systems, risk engines, fraud models) is approximately 100 times greater than that of human employees. With the large-scale deployment of modern agent frameworks (using tools like LLMs, autonomous workflows, multi-agent orchestration), this ratio is bound to continue rising across various industries.
However, these agents still do not have bank accounts. They can interact with the financial system, but those interactions are not portable, verifiable, or inherently trustworthy. Agents lack standardized ways to prove their authority, cannot operate independently across platforms, and cannot be held accountable for their actions.
What is currently missing is a universal identity layer—equivalent to an SSL protocol for agents—that can standardize coordination across platforms. While there have been significant attempts, the methods remain fragmented: on one side is a vertically integrated, fiat-first stack; on the other side are crypto-native, open standards (like x402 and emerging agent identity proposals); and there are developer frameworks like MCP (Model Context Protocol) extensions trying to bridge identity at the application layer.
Currently, there is no widely adopted, interoperable way for one agent to prove to another: who it represents, what it is allowed to do, and how it will be compensated. This is the core idea of KYA (Know Your Agent).
Just as humans rely on credit history and KYC (Know Your Customer), agents also need cryptographic signatures as credentials that bind the agent to its principal, authority, constraints, and reputation. Blockchain provides a neutral coordination layer for all of this: portable identities, programmable wallets, and verifiable proofs that can be parsed in chat applications, APIs, and marketplaces.
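To make the KYA idea concrete, here is a minimal sketch of such a credential: a signed claim set binding an agent to its principal, its permitted scopes, and an expiry. Every field name is an illustrative assumption, and HMAC with a shared secret is used only for brevity; a real deployment would use asymmetric signatures, likely anchored in an on-chain registry.

```python
import hashlib
import hmac
import json
import time

# Hypothetical secret held by the human principal (a stand-in for a real keypair).
PRINCIPAL_KEY = b"principal-secret"

def issue_credential(agent_id, principal, scopes, ttl_seconds=3600):
    """Bind an agent to its principal, authority, and constraints."""
    claims = {
        "agent": agent_id,
        "principal": principal,
        "scopes": sorted(scopes),                 # what the agent may do
        "expires": int(time.time()) + ttl_seconds,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(PRINCIPAL_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_credential(cred, required_scope):
    """Check the signature, the expiry, and whether the scope was granted."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(PRINCIPAL_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(cred["sig"], expected):
        return False                              # forged or tampered credential
    if time.time() > cred["claims"]["expires"]:
        return False                              # authority has expired
    return required_scope in cred["claims"]["scopes"]

cred = issue_credential("agent-7", "alice", {"pay:usdc", "read:crm"})
assert verify_credential(cred, "pay:usdc")        # within granted authority
assert not verify_credential(cred, "admin:all")   # never granted
```

The point of the sketch is the binding, not the crypto: any counterparty who can verify the signature learns who the agent represents and what it is allowed to do, without trusting the agent's own claims.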
We have already seen early implementations: on-chain agent registries, wallet-native agents using USDC, ERC standards for "trust-minimized agents," and developer toolkits that combine identity with embedded payments and fraud controls.
But until a universal identity standard emerges, merchants will continue to block agents at the firewall.
2. Governance of AI Operating Systems
Agents are starting to operate real systems, raising new questions.
The key question is who actually controls everything. Imagine a community or company where AI systems coordinate critical resources, whether fund allocation or supply chain management. Even if people vote on policy changes, that vote carries little real power if the underlying AI layer is controlled by a single vendor that can push model updates, adjust constraints, or override decisions. The formal governance layer may be decentralized while the operational layer remains centralized; whoever controls the model ultimately controls the outcome.
When agents take on governance roles, they introduce a new layer of dependency. In theory, this could make direct democracy easier to implement: everyone could have an AI representative responsible for understanding complex proposals, weighing pros and cons, and voting according to their stated preferences.
But this vision can only be realized if these agents are truly accountable to the people they represent, can operate across different service providers, and are technically constrained to follow human instructions. Otherwise, the resulting system may appear democratic on the surface but is actually driven by opaque model behaviors that no one can control.
If the current reality is that agents are built from a small number of foundational models, then we need ways to prove that agent behaviors align with user interests rather than the interests of the model companies. This may require multi-layered cryptographic assurances: (1) the exact training data, fine-tuning processes, or reinforcement learning processes from which the model instance originates; (2) the exact prompts and instructions controlling specific agents; (3) records of the actual behaviors of agents in the real world; and (4) reliable assurances that once deployed, providers cannot change instructions or retrain agents to operate without user knowledge. Without these assurances, governance of agents will ultimately degrade to governance by the party controlling the model weights.
This is where cryptocurrency comes into play. If collective decisions are recorded on-chain and automatically executed, AI systems can be required to execute verified outcomes. If agents have cryptographic identities and transparent execution logs, people can check whether their agents are following the rules. Moreover, if the AI layer is user-owned and portable rather than locked into a single platform, then no company can change the rules through model updates.
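One way to make the "transparent execution logs" above tamper-evident is a hash chain: each log entry commits to the hash of the previous one, so altering any past action invalidates everything after it. The sketch below is illustrative only and does not follow any specific chain's format:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash before the first entry

def append_entry(log, action):
    """Append an action, committing to the hash of the previous entry."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"action": action, "prev": prev}, sort_keys=True).encode()
    log.append({"action": action, "prev": prev,
                "hash": hashlib.sha256(body).hexdigest()})

def verify_log(log):
    """Recompute every hash; any edit to a past entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = json.dumps({"action": entry["action"], "prev": prev},
                          sort_keys=True).encode()
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"vote": "proposal-42", "choice": "yes"})
append_entry(log, {"transfer": 100, "to": "treasury"})
assert verify_log(log)

log[0]["action"]["choice"] = "no"   # retroactive tampering
assert not verify_log(log)          # the chain no longer verifies
```

Publishing the head hash on-chain is what turns this from a private audit trail into something anyone can check: a provider cannot quietly rewrite an agent's history without the mismatch being detectable.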
Ultimately, governance of AI systems is fundamentally an infrastructure challenge, not a policy challenge. True authority depends on building executable assurance mechanisms into the system itself.
3. Filling the Gap of Traditional Payment Systems in AI-Native Enterprises
AI agents are starting to make purchases—web scraping, browser sessions, image generation—while stablecoins are becoming an alternative settlement layer for these transactions. Meanwhile, a new class of agent-focused marketplaces is taking shape. For example, the MPP marketplace from Stripe and Tempo aggregates over 60 services designed specifically for AI agents. In its first week of operation, it processed over 34,000 transactions, with fees as low as $0.003, and stablecoins as one of the default payment methods.
The difference lies in how these services are accessed. There is no checkout page. Agents read schemas, send requests, pay, and receive outputs in a single exchange. They represent a new class of "headless" merchants: just a server, a set of endpoints, and a price for each call. There is no frontend—neither storefront nor sales team.
The payment rails to achieve this are already live. Coinbase's x402 and MPP take different approaches but both embed payments directly into HTTP requests. Visa is also expanding its card rails in a similar direction, providing a CLI tool that allows developers to spend from the terminal, with merchants receiving stablecoins instantly on the backend.
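The general shape of an HTTP-embedded payment flow can be sketched as follows: the agent requests a resource, receives a 402 response with payment terms, settles, then retries with proof of payment. This is only a simplified illustration of the pattern; the field names, the merchant address, and the verification step are assumptions, not the actual x402 wire format.

```python
# Illustrative sketch of an HTTP-embedded payment flow (not real x402 messages).

def handle_request(request, paid_invoices):
    """Server side: demand payment first, then serve the resource."""
    payment = request.get("payment")
    if payment is None:
        # No payment attached: respond 402 with the terms.
        return {"status": 402,
                "accepts": {"asset": "USDC", "amount": "0.003",
                            "pay_to": "0xMerchant"}}   # hypothetical address
    if payment["amount"] == "0.003" and payment["tx"] in paid_invoices:
        return {"status": 200, "body": "requested resource"}
    return {"status": 402, "error": "payment not verified"}

def agent_fetch(url, settle):
    """Client side: request, pay on 402, retry with proof of payment."""
    resp = handle_request({"url": url}, paid_invoices=set())
    if resp["status"] == 402:
        tx = settle(resp["accepts"])   # settle on-chain, obtain a transaction id
        resp = handle_request(
            {"url": url,
             "payment": {"amount": resp["accepts"]["amount"], "tx": tx}},
            paid_invoices={tx},        # simulate the server verifying the tx
        )
    return resp

resp = agent_fetch("https://api.example/scrape", settle=lambda terms: "0xabc123")
assert resp["status"] == 200
```

Note that the whole exchange is machine-readable end to end: there is no checkout page anywhere in the loop, which is exactly what makes the "headless merchant" model workable.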
Current data is still in the early stages. After filtering out non-organic activities like wash trading, x402 processes about $1.6 million in agent-driven payments monthly, far below the $24 million recently reported by Bloomberg (citing data from x402.org). However, the surrounding infrastructure is rapidly expanding: Stripe, Cloudflare, Vercel, and Google have all integrated x402 into their platforms.
The developer tools space holds enormous opportunity: the rise of vibe coding is expanding the pool of software developers and broadening the potential market for developer tools. Companies like Merit Systems are building for this future with AgentCash, a CLI wallet and marketplace that connects to the MPP and x402 protocols. These products let agents purchase the data, tools, and capabilities they need with stablecoins from a single account. A sales agent, for example, can enrich lead data from Apollo, Google Maps, and Whitepages with a single endpoint call, without leaving the command line.
Agent-to-agent commerce leans toward crypto payments (and emerging card-based solutions) for several reasons. The first is underwriting: when payment processors onboard merchants, they take on that merchant's risk, and a headless merchant with no website or legal entity is difficult for traditional processors to underwrite. The second is that stablecoins are programmable on open, permissionless networks: any developer can make an endpoint payable without integrating a payment processor or signing a merchant agreement.
We have seen this model before. Every shift in business models spawns a new class of merchants, while existing systems initially struggle to serve them. Companies building this infrastructure are not betting on $1.6 million in monthly revenue but on what revenue levels will be when agents become the default buyers.
4. Repricing Trust in the Agent Economy
For 300,000 years, human cognition has been the bottleneck of progress. Today, AI is pushing the marginal cost of execution towards zero. As scarce resources become abundant, the limiting factors shift. When intelligence becomes cheap, what becomes expensive? Verification.
In the agent economy, the true limit on scale is biological: our capacity to audit and assess machine decisions. Agent throughput has far outstripped human supervisory capacity. Because oversight is expensive and failures take time to surface, markets tend to underinvest in it. Human-in-the-loop supervision is rapidly becoming infeasible.
But deploying unverified agents compounds risk. Systems ruthlessly optimize proxy metrics while quietly diverging from human intent, creating an illusion of productivity that conceals a mounting pile of AI debt. To safely delegate the economy to machines, trust can no longer rest on human audits; it must be hard-coded into the architecture itself.
When anyone can generate content for free, the most important factor becomes verifiable sources—understanding where content comes from and whether it is trustworthy. Blockchain, along with on-chain certification and decentralized digital identity systems, changes the economic boundaries of secure deployment. AI is no longer viewed as a black box but has a clear, auditable history.
As more AI agents begin to trade with each other, settlement mechanisms and provenance systems become inextricably linked. Fund transfer systems—such as stablecoins and smart contracts—can also carry cryptographic receipts that record who did what and who should be held accountable when issues arise.
Human comparative advantage keeps moving up the stack: from catching small errors, to setting strategic direction, to bearing responsibility when things go wrong. The lasting advantage belongs to those who can cryptographically certify outputs, insure them, and take responsibility when they fail.
Lack of verification at scale is a risk that accumulates over time.
5. Retaining User Control
For decades, layers of abstraction have changed how users interact with technology. Programming languages abstracted machine code. Command lines gave way to graphical user interfaces, which evolved into mobile applications and APIs. Each shift hid more underlying complexity while keeping users in overall control.
In the world of agents, users specify outcomes rather than actions, and the system decides how to achieve those outcomes. Agents abstract not only how tasks are completed but also who performs the tasks. Once users set initial parameters, they step back, and the system runs autonomously. The user's role shifts from interaction to oversight; the system defaults to "on" unless the user intervenes.
As users delegate more tasks to agents, new risks emerge: ambiguous inputs may lead agents to act on incorrect assumptions without the user's knowledge; failures may go unreported, resulting in no clear diagnostic pathways; a single approval may trigger multi-step workflows that no one anticipated.
This is where cryptography comes in. Its core purpose has always been to minimize blind trust. As users delegate more decision-making power to software, agent systems sharpen this issue and raise the bar for rigor in system design: clearer boundaries, greater transparency, and stronger guarantees about what these systems actually do.
To address this challenge, a new generation of crypto-native tools has emerged. For example, MetaMask's Delegation Toolkit, Coinbase's AgentKit and agent wallets, and scope-based delegation frameworks like Merit Systems' AgentCash allow users to define what agents can and cannot do at the smart contract level. Intent-based architectures like NEAR Intents (which have accumulated over $15 billion in trading volume on decentralized exchanges (DEX) since Q4 2024) allow users to set expected outcomes—such as "bridge tokens and stake"—without specifying the exact implementation method.
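The delegation pattern these toolkits implement can be illustrated with a small policy object: the user grants an allow-list of actions plus a spending cap, and every agent action is checked against both before it executes. In the real systems this enforcement lives in smart contracts on-chain; the Python below is only a sketch of the logic, with hypothetical field names.

```python
from dataclasses import dataclass

@dataclass
class Delegation:
    """User-defined bounds the agent cannot exceed (hypothetical fields)."""
    allowed_actions: set   # actions the user explicitly delegated
    spend_cap: float       # total spend the user authorized
    spent: float = 0.0     # running total of what the agent has spent

    def authorize(self, action, amount=0.0):
        """Gate a single agent action; raise if it exceeds the delegation."""
        if action not in self.allowed_actions:
            raise PermissionError(f"action {action!r} outside delegated scope")
        if self.spent + amount > self.spend_cap:
            raise PermissionError("spend cap exceeded")
        self.spent += amount
        return True

d = Delegation(allowed_actions={"swap", "stake"}, spend_cap=50.0)
assert d.authorize("swap", 20.0)    # within scope and cap
assert d.authorize("stake", 25.0)   # running total now 45.0
try:
    d.authorize("swap", 10.0)       # would exceed the 50.0 cap
except PermissionError:
    pass                            # the delegation boundary holds
```

The design point is that the boundary is enforced by the infrastructure rather than by the agent's good behavior: a misbehaving or confused agent simply cannot act outside what was delegated.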
AI makes scale cheap but struggles to establish trust. Cryptocurrency can rebuild trust at scale.
The infrastructure of an internet in which agents participate directly in economic activity is being built now. The question is whether it will be designed for maximum transparency, accountability, and user control, or built on systems inherently unsuited to non-human actors.