AI agents now account for over half of internet traffic, but trust has plummeted: discovery, identity, and reputation have become the three major hurdles to scaling

動區BlockTempo

AI agent traffic has surpassed human traffic, accounting for 51% of online activity. However, trust in fully autonomous agents has dropped from 43% to 22%. To enable a true agent economy, three foundational layers—discoverability, identity verification, and reputation systems—are indispensable. This article is based on Vaidik Mandloi’s piece “Know your Agent,” edited and translated by Dongqu.

The promise that AI agents will reshape the internet is gradually becoming reality. They have moved beyond experimental chat tools to become an integral part of our daily operations—from clearing inboxes and scheduling meetings to responding to support tickets. They are quietly boosting productivity, often unnoticed.

And this growth is not mere hype.

In 2025, automated traffic surpassed human traffic, making up 51% of total online activity. AI-driven traffic to U.S. retail sites alone has grown 4,700% year-over-year. AI agents now operate across systems; many can access data, trigger workflows, and even initiate transactions.

However, trust in fully autonomous agents has fallen from 43% to 22% within a year, largely due to rising security incidents. Nearly half of enterprises still use shared API keys for agent authentication, a method never designed for autonomous systems to transfer value or act independently.

The problem is: the pace of agent expansion outstrips the infrastructure meant to govern them.

In response, new protocol layers are emerging. Stablecoins, card network integrations, and standards like x402 are enabling machine-initiated transactions. Simultaneously, new identity and verification layers are under development to help agents recognize themselves and operate within structured environments.

But enabling payments does not equal enabling an economy. Once agents can transfer value, more fundamental questions arise: How do they discover suitable services in a machine-readable way? How do they prove their identity and authorization? How do we verify that the operations they claim to perform actually occurred?

This article explores the infrastructure needed for large-scale, agent-driven economic execution and assesses whether these layers are mature enough to support persistent, autonomous participants operating at machine speed.

Agents Can’t Buy What They Can’t See

Before agents can pay for services, they must first discover those services. This sounds simple but is currently the most friction-laden part.

The internet was built for humans to read pages. Search engines return ranked links based on human-centric optimization. These pages are filled with layouts, trackers, ads, navigation bars, and stylistic elements—meaningful to humans but mostly “noise” to machines.

When agents request the same pages, they receive raw HTML. A typical blog post or product page might contain around 16,000 tokens in this form. When converted into clean Markdown files, token count drops to about 3,000—a reduction of 80%. For a single request, this difference may be negligible. But when agents make thousands of such requests across multiple services, the processing overhead compounds into delays, costs, and increased reasoning complexity.
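The gap described above is easy to demonstrate: stripping markup from a page leaves only the text an agent actually needs to reason over. A minimal sketch using Python's standard-library HTML parser, with a deliberately crude characters-per-token heuristic (real tokenizers vary):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script/style blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def rough_tokens(text: str) -> int:
    # Crude heuristic: roughly 1 token per 4 characters.
    return max(1, len(text) // 4)

html = ("<html><body><nav>Home | About</nav><script>track()</script>"
        "<article><h1>Pricing</h1><p>The API costs $5 per 1M tokens.</p>"
        "</article></body></html>")

extractor = TextExtractor()
extractor.feed(html)
clean = "\n".join(extractor.parts)

print(rough_tokens(html), "->", rough_tokens(clean))
```

Even on this toy page, the cleaned text is a fraction of the raw markup; on a real product page laden with trackers and layout code, the ratio is far larger.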

[Chart source: Cloudflare]

Ultimately, agents spend significant computational effort stripping interface elements to access the core information needed to act. This effort does not improve output quality; it merely compensates for a web designed without their needs in mind.

As agent-driven traffic grows, this inefficiency becomes more apparent. AI-driven crawling on retail and software sites has surged over the past year, now constituting a large portion of total web activity.

Meanwhile, about 79% of major news and content sites block at least one AI crawler. From their perspective, this is understandable. Agents extract content without engaging with ads, subscriptions, or traditional conversion funnels. Blocking them protects revenue.

The problem is, the web lacks a reliable way to distinguish malicious scrapers from legitimate procurement agents. Both appear as automated traffic, both originate from cloud infrastructure, and to the system, they look identical.

A deeper issue is that agents are not trying to “consume” pages—they are trying to discover actionable opportunities.

When humans search “tickets under $500,” a ranked list of links suffices. They can compare options and decide. When agents receive the same query, they need something entirely different: knowledge of which services accept bookings, input formats, pricing models, and whether payments can be programmatically settled. Few services openly publish this information clearly.

[Image source: Towards AI]

This is why the shift is happening from Search Engine Optimization (SEO) to Agent Engine Optimization (AEO). If the end user is an agent, ranking on search pages matters less. What matters is whether a service can describe its capabilities in a form agents can interpret without guesswork. Without this, services risk becoming invisible in the growing agent economy.
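What such an agent-readable description might look like is a structured capability manifest rather than a web page. The schema below is purely illustrative (the service name, field names, and fee are invented for this sketch, not drawn from any published standard):

```python
import json

# Hypothetical capability manifest a flight-booking service might publish
# at a well-known URL so agents can discover it without scraping HTML.
manifest = {
    "service": "example-flights",
    "capabilities": [
        {
            "action": "search_flights",
            "inputs": {"origin": "IATA code", "destination": "IATA code",
                       "max_price_usd": "number"},
            "pricing": {"model": "per_booking_fee", "amount_usd": 2.50},
            "settlement": ["card", "stablecoin"],
        }
    ],
}

def supports(manifest: dict, action: str, rail: str) -> bool:
    """Check whether the service offers an action payable over a given rail."""
    return any(c["action"] == action and rail in c["settlement"]
               for c in manifest["capabilities"])

print(supports(manifest, "search_flights", "stablecoin"))  # True
```

An agent answering "tickets under $500" can filter such manifests directly, with no ranking pages, layout parsing, or guesswork about whether payment can be settled programmatically.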

Agents Need Identity

[Image source: Hackernoon]

Once agents can discover services and initiate transactions, the next major challenge is ensuring the other end knows who they are dealing with—identity.

Today’s financial systems already handle far more machine identities than human ones: in finance, non-human identities outnumber human identities by roughly 96 to 1. API keys, service accounts, automation scripts, and internal agents dominate institutional infrastructure. But most were never designed to hold capital discretion; they execute predefined commands and cannot negotiate, choose vendors, or initiate payments on open networks.

Autonomous agents are changing this boundary. If an agent can move stablecoins or trigger settlement without manual confirmation, the core question shifts from “Can it pay?” to “Who authorized it to pay?”

This is where identity becomes fundamental, and where the concept of “Know Your Agent” (KYA) emerges.

Just as financial institutions verify clients before allowing trading, services interacting with autonomous agents must verify three things before granting access to capital or sensitive operations:

  • Cryptographic authenticity: Does this agent truly control the keys it claims?
  • Delegation permissions: Who granted this agent permission, and what are its limits?
  • Real-world linkage: Is this agent associated with a legally responsible entity?
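The three checks above can be sketched as a single gate a service runs before honoring an agent's request. This is illustrative only: real agent identity would use asymmetric signatures (e.g. ed25519) anchored to an on-chain profile, but HMAC stands in here so the sketch runs on the standard library alone, and the delegation fields are invented for the example:

```python
import hashlib
import hmac
import json
import time

# Stand-in for a real key pair; see caveat in the lead-in above.
SHARED_KEY = b"demo-key"

def sign_request(payload: dict) -> str:
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

def verify_agent(payload: dict, signature: str, delegation: dict) -> bool:
    # 1. Cryptographic authenticity: does the signature match the payload?
    msg = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    # 2. Delegation permissions: within the spend limit and before expiry?
    if payload["amount_usd"] > delegation["spend_limit_usd"]:
        return False
    if time.time() > delegation["expires_at"]:
        return False
    # 3. Real-world linkage: the delegation names an accountable principal.
    return bool(delegation.get("principal"))

delegation = {"principal": "Acme Corp", "spend_limit_usd": 100,
              "expires_at": time.time() + 3600}
payload = {"action": "pay_invoice", "amount_usd": 40}
print(verify_agent(payload, sign_request(payload), delegation))  # True
```

A request that exceeds the delegated spend limit, or carries a bad signature, fails the same gate, which is exactly the property shared API keys lack.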

These checks form an identity stack:

  • The base layer is cryptographic keys and signatures. Standards like ERC-8004 formalize how agents anchor their identity on verifiable on-chain profiles.
  • The middle layer is identity providers linking keys to real-world entities—registered companies, financial institutions, or verified individuals. Without this binding, signatures only prove control, not accountability.
  • The edge layer is verification infrastructure—payment processors, CDNs, or application servers that validate signatures, check credentials, and enforce permissions in real time. Protocols like Visa’s Trusted Agent Protocol enable merchants to verify whether an agent is authorized to act on behalf of a specific user. Stripe’s ACP is pushing similar checks into programmable settlement and stablecoin flows.

Meanwhile, standards like the Universal Commerce Protocol (UCP), led by Google and Shopify, enable merchants to publish “capability lists” that agents can discover and negotiate. These act as orchestration layers, expected to integrate into Google Search and Gemini.

[Image source: Fintech Brainfood]

A key subtlety is that permissionless and permissioned systems will coexist.

On public blockchains, agents can transact without centralized gatekeeping, increasing speed and composability but also regulatory pressure. The acquisition of Bridge by Stripe highlights this tension. Stablecoins enable instant cross-border transfers, but compliance obligations do not vanish just because settlement occurs on-chain.

This tension inevitably involves regulators. Once autonomous agents can initiate financial transactions and interact with markets without direct human oversight, accountability issues become unavoidable. Financial systems cannot allow capital to flow through unverified or unauthorized actors—even if those actors are software fragments.

Regulatory frameworks are being adopted. For example, Colorado’s AI Act, effective February 1, 2026, introduces accountability requirements for high-risk automation systems, with similar legislation advancing globally. As agents begin executing financial decisions at scale, identity will no longer be optional. If discoverability makes agents visible, then identity becomes the credential that grants recognition.

Verifying Agent Performance and Reputation

Once agents start executing tasks involving money, contracts, or sensitive data, merely having an identity is insufficient. A verified agent can still hallucinate, distort its work, leak information, or perform poorly.

The key question then is: how can we prove that an agent truly completed what it claimed?

If an agent reports analyzing 1,000 files, detecting fraud patterns, or executing trades, there must be a way to verify that this computation actually occurred and that the output was not forged or corrupted. For this, we need a performance layer.

Currently, three approaches exist:

  • Trusted Execution Environments (TEEs): The first approach relies on hardware attestation via platforms like AWS Nitro Enclaves and Intel SGX. In this mode, agents run inside secure enclaves that produce cryptographic attestations confirming that specific code ran on specific data without tampering. The overhead is modest (roughly 5-10% added latency), acceptable for high-integrity enterprise and financial use cases.
  • Zero-Knowledge Machine Learning (ZKML): The second is mathematical. ZKML enables agents to generate cryptographic proofs that their outputs derive from a specific model without revealing the model weights or private inputs. Recent demonstrations, like DeepProve-1 from Lagrange Labs, show GPT-2 inference with full zero-knowledge proofs, 54-158 times faster than previous methods.
  • Restake Security: The third approach enforces correctness economically rather than computationally. Protocols like EigenLayer introduce stake-based security, where validators back the agent’s output with staked capital. If the output is challenged and proven false, the stake is slashed. This system does not prove every computation but makes dishonest behavior economically irrational.
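The economic approach in the last bullet reduces to a simple state machine: capital is bonded behind a claimed output, and a successful challenge destroys the bond. The sketch below is a toy model of that incentive, not the mechanics of any specific protocol such as EigenLayer; the flat slash fraction and field names are assumptions:

```python
class StakedClaim:
    """Toy model of stake-backed verification of an agent's output."""

    def __init__(self, output_hash: str, stake: float):
        self.output_hash = output_hash  # hash of the claimed output
        self.stake = stake              # capital bonded behind the claim
        self.slashed = False

    def challenge(self, recomputed_hash: str, slash_fraction: float = 1.0) -> float:
        """A challenger recomputes the task; disagreement slashes the bond.

        Returns the amount slashed (0.0 if the claim survives).
        """
        if recomputed_hash != self.output_hash and not self.slashed:
            penalty = self.stake * slash_fraction
            self.stake -= penalty
            self.slashed = True
            return penalty
        return 0.0

claim = StakedClaim(output_hash="abc123", stake=1000.0)
print(claim.challenge("abc123"))  # honest output: nothing slashed
print(claim.challenge("zzz999"))  # disputed output: the full bond is lost
```

The point of the design is in the payoff matrix: as long as the bond exceeds the profit from cheating, lying about an output is economically irrational even though no individual computation is ever proven.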

These mechanisms address the same core issue from different angles. But proof of execution is episodic: it verifies individual tasks, while markets need cumulative evidence. That is where reputation becomes critical.

Reputation transforms isolated proofs into a long-term performance history. Emerging systems aim to make agent efficacy portable and cryptographically anchored, rather than relying on platform-specific ratings or opaque internal dashboards.

Ethereum Attestation Service (EAS) allows users or services to publish signed, on-chain attestations about agent behavior. Successful task completion, accurate predictions, or compliant transactions can be recorded immutably and carried across applications.
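In spirit, such an attestation is just a signed, content-addressed record; the schematic below shows the shape of one. The field names are illustrative and do not follow the actual EAS schema format, and a real attestation would carry the attester's signature and live on-chain:

```python
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class Attestation:
    """Schematic attestation record; fields are illustrative, not EAS's schema."""
    attester: str   # who is vouching (a service, user, or auditor)
    agent_id: str   # the agent the claim is about
    claim: str      # what is being attested
    timestamp: float

    def uid(self) -> str:
        """Content-addressed ID so the record can be referenced immutably."""
        blob = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

a = Attestation("0xService", "agent-42", "completed_task:invoice_batch",
                1700000000.0)
print(a.uid()[:16])
```

Because the ID is derived from the content, any party holding the record can verify it has not been altered, which is what makes such histories portable across applications.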

[Image source: Ethereum Attestation Service]

Competitive benchmarking environments are also emerging. Agent Arenas evaluate agents based on standardized tasks, using Elo or similar scoring systems. Recall Network reports over 110,000 participants generating 5.88 million predictions, creating measurable performance data. As these systems expand, they resemble real rating markets for AI agents.
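The Elo scoring mentioned above is the standard chess-rating update applied to head-to-head agent comparisons; the formula itself is well established, though the K-factor of 32 here is just a common default:

```python
def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Standard Elo update: score_a is 1 for a win, 0.5 for a draw, 0 for a loss."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Two equally rated agents: the winner gains exactly k/2 points.
new_a, new_b = elo_update(1500, 1500, score_a=1.0)
print(round(new_a), round(new_b))  # 1516 1484
```

The appeal for agent markets is the same as in chess: ratings converge from many pairwise outcomes, so an agent's number summarizes its whole history of standardized tasks rather than any single benchmark run.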

This enables reputation to be portable across platforms.

In traditional finance, agencies like Moody’s provide credit ratings to signal trustworthiness. The agent economy will need an equivalent layer to rate non-human actors. Markets will want to assess whether an agent is reliable enough to entrust with capital, whether its outputs are statistically consistent, and whether its behavior remains stable over time.

Conclusion

As agents gain real authority, markets will require a clear way to measure their reliability. Agents will carry verifiable performance records based on execution validation and benchmarks, with scores adjusted for quality, and permissions traceable to explicit authorizations. Insurers, merchants, and compliance systems will rely on this data to decide which agents can access capital, data, or regulated workflows.

In summary, these layers are beginning to form the infrastructure of the agent economy:

  • Discoverability: Agents must be able to find services in a machine-readable way, or opportunities cannot be discovered.
  • Identity: Agents must prove who they are and who authorized them, or they cannot participate.
  • Reputation: Agents must build verifiable records demonstrating trustworthiness, earning ongoing economic trust.
