Gate News, April 29 — AI researcher Aran Komatsuzaki conducted a comparative analysis of tokenization efficiency across six major AI models by translating Rich Sutton's influential essay “The Bitter Lesson” into nine languages and running each version through the tokenizers of OpenAI, Gemini, Qwen, DeepSeek, Kimi, and Claude. Using the English version's token count on OpenAI as the baseline (1x), the study revealed significant disparities: the same content in Chinese consumed 1.65x tokens on Claude versus only 1.15x on OpenAI, and Hindi on Claude was even more extreme, exceeding the English baseline by more than 3x. Claude ranked least efficient among the six tokenizers tested.
Critically, when the identical Chinese text was processed across models, all measured against the same English baseline, the results diverged dramatically: Kimi consumed only 0.81x tokens (fewer than the English original), Qwen 0.85x, and Claude 1.65x. This gap points to a pure tokenization efficiency problem rather than anything inherent to the language: the Chinese-developed models handled Chinese far more efficiently, indicating the disparity stems from tokenizer optimization, not from Chinese itself.
The practical implications for users are substantial: increased token consumption directly raises API costs, extends model response latency, and depletes context windows more rapidly. Tokenization efficiency depends on the linguistic composition of a model’s training data—models trained predominantly on English compress English text more efficiently, while languages with lower data representation are tokenized into smaller, less efficient fragments.
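For readers who want to see how such a comparison works in practice, the measurement reduces to counting tokens for parallel translations of the same passage and normalizing by the English count on OpenAI's encoder. The sketch below is a minimal illustration under stated assumptions: tiktoken supplies the OpenAI encoding, a Hugging Face tokenizer (the Qwen2.5-7B checkpoint is an illustrative choice, not necessarily what Komatsuzaki used) stands in for an open Chinese model, and Claude's tokenizer, which is not publicly distributed, is omitted. The excerpt strings are placeholders, not the full essay.

```python
# Minimal sketch of the normalization described above.
# Assumptions: o200k_base as the OpenAI encoding, Qwen2.5-7B as an
# illustrative open-model tokenizer; Claude's tokenizer is not public.
import tiktoken
from transformers import AutoTokenizer

# Parallel translations of the same passage (placeholder excerpts).
texts = {
    "en": "The biggest lesson from 70 years of AI research is that general methods win.",
    "zh": "七十年人工智能研究给我们的最大教训是：通用方法最终胜出。",
}

openai_enc = tiktoken.get_encoding("o200k_base")            # GPT-4o-family encoding
qwen_tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")  # illustrative checkpoint

# The English text on the OpenAI encoder defines the 1x baseline.
baseline = len(openai_enc.encode(texts["en"]))

for lang, text in texts.items():
    counts = {
        "openai": len(openai_enc.encode(text)),
        "qwen": len(qwen_tok.encode(text, add_special_tokens=False)),
    }
    for model, n in counts.items():
        print(f"{model}/{lang}: {n} tokens = {n / baseline:.2f}x baseline")
```

On full-length parallel texts, ratios like the 0.81x and 1.65x figures reported above fall out of exactly this normalization; short excerpts like these placeholders will be noisier.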
Komatsuzaki’s conclusion underscores a fundamental principle: market size determines tokenization efficiency. Larger markets receive better optimization, while underrepresented languages face significantly higher token costs.