According to monitoring by Beating, a billing bug in Anthropic's Claude Code service caused a Max 20x subscriber to be charged $200.98 in extra usage fees despite having used only 13% of their monthly quota. The bug was triggered when the string "HERMES.md" appeared in a user's git commit history: the system bypassed the subscription's limits and billed the activity as additional pay-as-you-go usage instead. Anthropic's abuse-prevention system was performing string matching on git logs injected into system prompts, likely in an attempt to detect unofficial API clients, and mistakenly flagged legitimate users whose repositories mentioned the Hermes Agent configuration file.
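The failure mode described above can be illustrated with a minimal sketch. This is hypothetical code, not Anthropic's actual implementation; the marker list and function name are assumptions. It shows why a naive substring check over raw git-log text flags any commit message that happens to mention a blocklisted name:

```python
# Illustrative sketch (hypothetical, not Anthropic's actual code): a naive
# abuse filter that substring-matches blocklisted markers against git-log
# text injected into a system prompt.

SUSPICIOUS_MARKERS = {"HERMES"}  # assumed blocklist entry for an unofficial client

def is_flagged(git_log: str) -> bool:
    """Flag a session if any blocklisted marker appears anywhere in the log."""
    upper = git_log.upper()
    return any(marker in upper for marker in SUSPICIOUS_MARKERS)

# A legitimate commit that merely mentions a config file trips the filter:
legit_log = "commit a1b2c3\n    docs: update HERMES.md agent configuration"
print(is_flagged(legit_log))  # True -- a false positive

# An unrelated commit passes:
print(is_flagged("commit d4e5f6\n    fix: handle empty input"))  # False
```

Because the check has no notion of context (filename vs. client identifier, user content vs. tool output), any repository whose history mentions the marker is indistinguishable from actual abuse.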
After the user submitted a refund request, Anthropic's AI customer service initially denied compensation, stating that refunds could not be issued for billing routing errors. The denial drew backlash on GitHub and Hacker News, where the thread reached the front page. Claude Code team member Thariq subsequently announced that all affected users would receive full refunds plus equivalent credit compensation.
Disclaimer: The information on this page may come from third parties and does not represent the views or opinions of Gate. The content displayed on this page is for reference only and does not constitute any financial, investment, or legal advice. Gate does not guarantee the accuracy or completeness of the information and shall not be liable for any losses arising from the use of this information. Virtual asset investments carry high risks and are subject to significant price volatility. You may lose all of your invested principal. Please fully understand the relevant risks and make prudent decisions based on your own financial situation and risk tolerance. For details, please refer to the Disclaimer.
Related Articles
China's Cyberspace Administration Launches 4-Month Campaign to Curb AI Application Chaos on April 30
According to CCTV News, China's Cyberspace Administration launched a nationwide four-month campaign on April 30 to address AI application chaos. The initiative, deployed across two phases, targets issues including missing model registrations, inadequate platform safety and review capabilities…
GateNews · 7m ago
Forefront Tech Completes $100M IPO Pricing, Nasdaq Listing Under Code FTHAU
According to ChainCatcher, special purpose acquisition company Forefront Tech completed a $100 million IPO pricing on April 30 and will list on Nasdaq under ticker symbol FTHAU. The company plans to use the proceeds to pursue merger and acquisition opportunities in blockchain, fintech, artificial intelligence…
GateNews · 1h ago
DeepSeek Introduces Visual Primitives Method to Enhance Multimodal Reasoning on April 30
According to DeepSeek's technical report, on April 30, the company introduced Visual Primitives, a method that embeds basic visual units such as points and bounding boxes into reasoning chains to address the Reference Gap problem in multimodal tasks. The method reduces image token consumption…
GateNews · 2h ago
NVIDIA Releases Cosmos-Reason2-32B Flagship Model Weights, Expands Context Window to 256K Tokens
According to Beating, NVIDIA has released the weights for Cosmos-Reason2-32B, the flagship version of its physical AI reasoning vision-language model (VLM) designed to help robots and autonomous driving systems understand spatial, temporal, and physical principles. The 32-billion-parameter model…
GateNews · 2h ago
OpenAI reveals why Codex is not allowed to talk about “goblins”: the nerd persona reward went out of control
OpenAI's official blog explains that Codex is barred from discussing "banter goblins" and similar creatures because the reward signal used in nerd-persona training favored biological metaphors, leading to cross-persona contamination and RLHF misdirection. The issue came to light when Barron Roth surfaced a system prompt. OpenAI responded with two strategies, a short-term hard-coded fix and the long-term removal of the reward signal, cautioned that reward design is fragile, and said post-training audits need to be more granular.
ChainNews ABMedia · 3h ago
Alibaba's Qwen Open-Sources Qwen-Scope Interpretability Module Covering 7 Models on April 30
According to PANews, on April 30, Alibaba's Qwen announced the open-sourcing of Qwen-Scope, an interpretability module trained on Qwen3 and Qwen3.5 series models. The release covers 7 large language models across dense and mixture-of-experts variants, with 14 sets of sparse autoencoder…
GateNews · 3h ago