China and the US unify AI "measurement standards": will good assets become easier to find?

By Zhang Feng

This article compares AI standardization in China and the United States, examining how maturing standardization infrastructure is reshaping industry development and fundamentally changing the valuation logic of AI companies.

In recent years, artificial intelligence (AI) has moved rapidly from cutting-edge laboratory research to commercial applications across industries. Behind the technological enthusiasm, however, the valuation logic of AI companies has long been contested, with market judgments often mixed with boundless hopes for the future. As applications enter deeper waters, risks and uncertainties have become increasingly prominent, prompting policymakers, regulators, and investors to seek more stable and sustainable development paths.

Against this backdrop, regulators and industry players in China and the U.S. have simultaneously turned their attention to AI standardization and risk management. It is clear that standardization is becoming a key driver for the AI industry to shift from “storytelling” to “practical action.”

  1. The US AI Dictionary and Standardized Risk Prevention Features

Recently, the U.S. Department of the Treasury released two new resources to guide AI applications in the financial sector: the Shared AI Dictionary and the Financial Services AI Risk Management Framework (FS AI RMF). This initiative supports the President’s “AI Action Plan,” which calls for clear standards, shared understanding, and risk-based governance to ensure the safe and responsible deployment of AI.

“Implementing the President’s AI Action Plan requires not only idealistic statements but also tangible resources that institutions can utilize,” said Deputy Secretary of the Treasury Derek Thaler. “By establishing a common AI language and a tailored financial AI risk management framework, these deliverables help protect consumers while supporting responsible innovation.”

The U.S. demonstrates a pragmatic and collaborative approach to AI standardization, especially in critical areas like finance. Its core lies in translating macro national strategies into actionable guidelines for micro-level entities through the development of universal language and operational frameworks, encouraging innovation while maintaining safety and stability.

First, the release of the “Shared AI Dictionary” marks a significant step in addressing fundamental AI governance challenges. For a long time, terminology in AI has varied significantly due to disciplinary backgrounds, application scenarios, and stakeholder interests. Terms like “model interpretability” used by developers, “algorithm transparency” emphasized by legal compliance teams, and “decision logic” understood by business units often refer to different issues. This inconsistency hampers cross-departmental and cross-agency communication and poses regulatory challenges. The AI Dictionary aims to break this “Tower of Babel” by establishing official, unified definitions for key AI concepts, capabilities, and risk categories, enabling regulators, technical experts, legal advisors, and business leaders to speak the same language. This not only fosters internal consensus within financial institutions but also provides clear benchmarks for external regulation, supporting more consistent and predictable implementation. Standardizing language itself reflects the U.S. emphasis on foundational AI governance and is a cornerstone for building complex risk prevention systems.

Second, the “Financial Services AI Risk Management Framework” (FS AI RMF) is a practical manual built upon the unified language. It refines and adapts the macro-level AI risk management framework issued by the National Institute of Standards and Technology (NIST) to fit the specific context of financial services. This “tailored” approach demonstrates American regulatory flexibility and precision. The FS AI RMF covers the entire AI lifecycle—from design, development, validation, deployment, monitoring, to updates—guiding institutions on identifying AI use cases, assessing risks, and embedding accountability, transparency, and operational resilience at every stage. Importantly, the framework is designed to be scalable and flexible, accommodating organizations of different sizes and complexities, from startups to large multinational banks. For example, small fintech firms can use simplified tools for initial risk assessments, while systemic banks may need more sophisticated governance structures. This “custom-fit” design greatly enhances the likelihood of widespread industry adoption.
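A lifecycle risk register of the kind the FS AI RMF calls for can be sketched in a few lines. The stage names follow the lifecycle listed in the text; the risk items, scoring scheme, and escalation threshold are invented for illustration, not taken from the framework itself.

```python
# Minimal sketch of a per-use-case, lifecycle-stage risk register, in the
# spirit of the FS AI RMF. Scoring and threshold values are illustrative only.
from dataclasses import dataclass, field

LIFECYCLE = ["design", "development", "validation",
             "deployment", "monitoring", "update"]

@dataclass
class RiskItem:
    description: str
    likelihood: int   # 1 (rare) .. 5 (frequent)
    impact: int       # 1 (minor) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

@dataclass
class UseCaseRegister:
    use_case: str
    risks: dict = field(default_factory=lambda: {s: [] for s in LIFECYCLE})

    def add(self, stage: str, item: RiskItem) -> None:
        if stage not in self.risks:
            raise ValueError(f"unknown lifecycle stage: {stage}")
        self.risks[stage].append(item)

    def top_risks(self, threshold: int = 12):
        """Risks whose score crosses the (illustrative) escalation threshold."""
        return [(s, r) for s in LIFECYCLE for r in self.risks[s]
                if r.score >= threshold]
```

The scalability point in the text maps naturally onto this structure: a small fintech might stop at a register like this, while a systemic bank would layer approval workflows and independent validation on top of it.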

Finally, the promotion of AI standardization in the U.S. features a clear “public-private partnership and multi-stakeholder governance” model. Whether it is the dictionary or the risk management framework, their development is not solely dictated by regulators but involves collaboration among financial and banking infrastructure committees, AI oversight groups under the Financial Services Sector Coordinating Council, industry associations like the Cyber Risk Institute, and other stakeholders. Industry praise from organizations like the Cyber Risk Institute further confirms the framework’s industry acceptance. This multi-party participation ensures that standards reflect both regulatory concerns for safety and stability and industry interests in innovation and cost-efficiency. The ultimate goal is to “support faster and broader AI adoption in finance” by enhancing cybersecurity and operational resilience, empowering the industry rather than creating barriers.

  2. Chinese AI Terminology and Risk Management Framework Features

China has established official terminology standards and a national AI safety governance/risk management system comparable to the U.S. AI Dictionary and risk frameworks, forming a multi-layered, full-process governance structure. Its core features can be summarized as “promoting development through standards, ensuring safety through regulation,” aiming to establish rule-making dominance in the fierce global AI competition and safeguard the healthy, orderly development of domestic industries.

The main content centers on the national standards “Information Technology - Artificial Intelligence Terminology” (GB/T 41867-2022) and the “AI Safety Governance Framework” (Version 2.0, September 2025), supported by the “AI Risk Management Capability Assessment” (GB/T 46347-2025), which provides organizational-level AI risk management grading, assessment procedures, and compliance guidelines. Additionally, the “Interim Measures for Generative AI Service Management” (2023) specify mandatory requirements for safety assessments, filing, content review, and data compliance for generative AI services. There are also best-practice norms issued by key industries such as finance, healthcare, and education, covering application risk management.

Compared to the pragmatic, industry-segmented, incremental approach of the U.S., China’s AI terminology and risk management frameworks exhibit a stronger top-level design, faster pace of development, and closer alignment with national strategic goals.

First, in terminology standardization, China adopts a “systematic and forward-looking” strategy. Led by the Standardization Administration of China, efforts are underway to build a comprehensive AI standard system covering foundational concepts, supporting technologies, products and services, industry applications, and safety management. For example, the published “Artificial Intelligence Terminology” national standard aims to provide a basic “common language” for the entire AI field.

Unlike the U.S. focus on finance-specific terminology, China’s terminology standardization aims for a holistic approach, clarifying core concepts, technical classifications, and developmental stages across the AI domain. This provides a unified “foundation” for subsequent industry-specific standards, preventing conflicts and contradictions among different sectors, reflecting China’s “concentrated efforts for major tasks” institutional advantage. Moreover, these standards are closely aligned with international trends, aiming to incorporate China’s AI practices into global standard systems and enhance its influence in international AI governance.

Second, in risk management frameworks, China emphasizes “ethics first, safety foremost.” Its AI governance is deeply influenced by cybersecurity, data security, and personal information protection laws. Regulatory agencies like the Cyberspace Administration of China, Ministry of Industry and Information Technology, and Ministry of Public Security have issued numerous normative documents targeting algorithms, deep synthesis, and generative AI, forming a multi-layered regulatory matrix. For example, China has pioneered algorithm filing and safety assessment systems for generative AI, requiring providers to ensure the legality of training data, fairness of algorithms, and authenticity of generated content.

This regulatory approach is more mandatory and bottom-line oriented compared to the U.S. FS AI RMF’s emphasis on internal governance and self-assessment. It clearly delineates “red lines” for AI development, especially concerning data security, ideological security, and citizens’ rights, reflecting high regulatory standards. China’s risk management framework functions more as “external compliance constraints,” prompting enterprises to establish internal risk control systems to meet regulatory requirements.

Finally, China’s AI standardization efforts are highly coordinated with industry development and national strategic objectives. Standards are viewed as key infrastructure to promote AI empowerment of the real economy and high-quality growth. For instance, the People’s Bank of China’s “Financial Technology Development Plan” explicitly calls for strengthening AI financial application standards, covering intelligent risk control, marketing, and customer service. These standards aim not only to control risks but also to improve efficiency and financial inclusion.

The underlying logic is to reduce cooperation costs along the industry chain through standardized interfaces, data formats, and evaluation methods, facilitating large-scale AI deployment in finance. Additionally, leading tech firms actively participate in standard-setting, turning mature technologies into industry norms and consolidating market position. This “standards-driven industry” approach makes China’s AI standardization process a tool for regulation, industry upgrading, and cultivating new productivity.

  3. Comparison of AI Standardization Infrastructure in China and the U.S.

Despite both countries recognizing the importance of AI standardization and taking active steps, fundamental differences in political systems, market environments, innovation cultures, and regulatory philosophies lead to distinct paths, core features, and implementation effects in their AI standardization infrastructure.

In terms of top-level design and underlying drivers, China’s AI standardization is a typical “government-led, top-down” model. The national strategy clearly guides AI development, with standardization as a key support managed by the Standardization Administration and coordinated across ministries. Standard-setting priorities align closely with national industrial policies and technological breakthroughs, with strong guidance and enforcement. This model’s advantages include high efficiency and strong execution, enabling rapid establishment of a broad standard system.

In contrast, the U.S. approach is more “market-driven, bottom-up.” The government acts mainly as a “convenor” and “promoter,” issuing guidelines, frameworks, and best practices to guide industry consensus. The process emphasizes multi-stakeholder participation and consensus-building, respecting market innovation and professional judgment. The development of the FS AI RMF exemplifies this, resulting in more “recommendatory guidance” rather than “mandatory regulations.” Its strengths are flexibility and adaptability, fostering innovation but potentially leading to fragmented standards and slower dissemination, requiring government coordination.

Regarding core focus, China’s AI standards, especially in risk management, emphasize “security and controllability” and “ethical compliance,” driven by concerns over cybersecurity, data sovereignty, and social stability. Standards often impose strict requirements on data legality, algorithm fairness, content authenticity, and system accountability, closely linked to laws like the Cybersecurity Law, Data Security Law, and Personal Information Protection Law. Regulatory agencies prefer pre-emptive or process-based oversight through clear rules, filings, and assessments.

The U.S. risk management framework, while also considering safety and fairness, centers on “risk-based” self-governance. It aims to help organizations identify, evaluate, and manage operational, reputational, and compliance risks aligned with business goals. It emphasizes dynamic, ongoing risk management processes based on organizational risk appetite and application scenarios, rather than rigid adherence to fixed rules. This reflects a fundamental difference: China’s approach seeks to prevent systemic risks through unified rules, whereas the U.S. trusts market entities’ self-regulation.

In the interaction between standards and industry, China’s model uses standards to “drive” industry development. Leading AI firms, especially top tech giants, actively participate in standard-setting, which reflects their technological strength and helps build industry ecosystems and competitive advantages. Standards act as catalysts for technology diffusion and large-scale application.

In the U.S., standards are more about “summarizing” and “elevating” industry best practices. The FS AI RMF incorporates risk management experiences from financial institutions and tech companies, ensuring standards stay aligned with industry frontiers and avoiding lagging behind technological progress. However, this can lead to fragmentation, requiring government efforts for coordination.

On international influence and compatibility, both countries aim to promote their standards globally. China leverages its large market and industrial strength to actively participate in international standardization platforms like ISO/IEC JTC 1/SC 42, exporting its standards and practices. The U.S., with its dominant position in global technology, exerts influence through frameworks like NIST, which have a strong “soft law” impact worldwide. Future global AI governance is likely to feature a complex landscape of competing but partially overlapping Chinese and American standards.

  4. Impact of AI Standardization Infrastructure on Industry Development and Valuation Logic

Whether through China’s top-down systematic approach or the U.S.’s industry consensus-driven model, one fact is clear: improving AI standardization infrastructure is profoundly reshaping industry trajectories and overturning the irrational “storytelling” logic that once supported inflated valuations.

First, standardization significantly reduces transaction costs and entry barriers, promoting the ubiquitous application of AI across the economy. Unified terminology and interface standards enable different companies’ AI components to be flexibly integrated and deployed. This “plug-and-play” standardization accelerates AI from lab research to factory floors, farms, and bank counters. The industry focus shifts from “how to build AI” to “how to use AI.”
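The "plug-and-play" idea can be illustrated with a shared interface: once components agree on one contract, they become interchangeable regardless of vendor. The `RiskScorer` protocol and the two vendor models below are hypothetical, not any published standard.

```python
# Illustrative sketch of plug-and-play integration behind a shared interface.
# The protocol, vendors, and scoring rules are invented for illustration.
from typing import Protocol

class RiskScorer(Protocol):
    def score(self, features: dict[str, float]) -> float: ...

class VendorAModel:
    def score(self, features: dict[str, float]) -> float:
        # Toy rule: risk grows with the applicant's debt ratio.
        return min(1.0, 0.1 * features.get("debt_ratio", 0.0))

class VendorBModel:
    def score(self, features: dict[str, float]) -> float:
        # Toy rule: flag applicants with more than two late payments.
        return 0.5 if features.get("late_payments", 0) > 2 else 0.2

def assess(model: RiskScorer, applicant: dict[str, float]) -> str:
    # Any component implementing the shared interface can be swapped in
    # without touching the calling code.
    return "review" if model.score(applicant) >= 0.5 else "approve"
```

The caller `assess` never learns which vendor it is talking to; that decoupling is what lets standardized interfaces shift the industry's focus from "how to build AI" to "how to use AI."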

This shift implies that companies with only algorithmic expertise but lacking deep industry understanding and application capabilities will see their valuations re-evaluated downward. Conversely, “AI + industry” solution providers that deeply understand industry pain points and combine standardized AI technology with specific business processes to generate tangible value will be favored.

Second, the establishment of risk management frameworks provides a universal yardstick for assessing the “health” of AI companies. Previously, risk assessments were vague and subjective. Now, frameworks like the U.S. FS AI RMF and China’s regulatory requirements in finance and cybersecurity offer concrete dimensions for evaluating a company’s sustainability.

Investors increasingly focus on: Does the AI model have bias risks? Are training data sources legal and compliant? Is the decision-making process interpretable? Has the company established comprehensive risk management across the AI lifecycle? These previously overlooked “soft skills” are now critical to success. Companies that ensure data privacy, algorithm fairness, and system security while providing efficient AI services will have more resilient and sustainable business models, deserving valuation premiums.
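The diligence questions above can be turned into a simple scored checklist. The questions mirror the text, but the keys, weights, and scoring scheme are illustrative assumptions, not any investor's actual methodology.

```python
# Hedged sketch: the diligence questions from the text as a weighted
# checklist. Keys and weights are invented for illustration.

CHECKLIST = [
    ("bias_tested",        "Has the model been tested for bias risks?",          3),
    ("data_provenance",    "Are training data sources legal and compliant?",     3),
    ("explainable",        "Is the decision-making process interpretable?",      2),
    ("lifecycle_controls", "Is risk management in place across the lifecycle?",  2),
]

def governance_score(answers: dict) -> float:
    """Weighted fraction of checklist items answered 'yes', in [0, 1]."""
    total = sum(w for _, _, w in CHECKLIST)
    earned = sum(w for key, _, w in CHECKLIST if answers.get(key, False))
    return earned / total
```

However the weights are chosen, the point stands: once the "soft skills" are itemized and scored, they stop being vague impressions and become comparable inputs to valuation.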

Third, standardization and compliance are becoming key selection criteria in AI industry competition. Meeting increasingly complex regulatory requirements demands significant resources, creating a “compliance threshold” that favors larger, resource-rich, well-managed leading firms.

At the same time, standards provide a basis for customer trust. AI products certified to national standards or aligned with internationally recognized risk frameworks are more likely to gain customer confidence. This trust, based on standards, becomes a vital part of brand reputation, further consolidating market leadership. Future AI competition will increasingly involve governance, compliance, and reputation, beyond just technology and algorithms.

Ultimately, all these changes point to a fundamental shift: the core of AI company valuation is moving from “possibility” to “certainty.” In early AI development, markets eagerly chased stories depicting “future worlds,” supporting early investments and high valuations but also fueling bubbles.

The improvement of AI standardization infrastructure acts as a bubble-squeezing process. It requires companies to break down grand visions into measurable, manageable, verifiable indicators. Company value will depend less on founder visions or top-tier academic papers and more on healthy revenue growth, customer success stories, technological barriers, risk management effectiveness, and compliance records.

In conclusion, despite different paths, China and the U.S. are both exploring AI standardization toward a clear future: AI evolving from a speculative gold rush into a mature industry with clear rules, infrastructure, and risk controls. The release of AI dictionaries eliminates communication noise; the implementation of risk frameworks defines boundaries; and the development of standardization infrastructure builds a sustainable ecosystem. In this grand context, the valuation logic of AI companies will undergo profound transformation. Those capable of navigating conceptual fog and building safe, trustworthy, efficient, and genuinely valuable AI applications on a solid standardization foundation will be the winners of the new era. The once-popular “storytelling” logic will ultimately be abandoned by the market.
