
According to a Bloomberg News report dated Tuesday, April 22, 2026, which cited documents and people familiar with the matter, Anthropic's Claude Mythos Preview AI model was accessed by a small group of unauthorized users on the same day Anthropic first publicly announced plans to open Mythos testing to a limited number of enterprises.
Bloomberg Report Details: A Small Number of Users Received Access to Claude Mythos Preview
According to the Bloomberg report, a small number of users in a private online forum obtained access to Claude Mythos Preview on the same day Anthropic announced its plan to open Mythos testing to a limited number of enterprises. The group has continued to use the model regularly since then, though Bloomberg noted its use was not for cybersecurity purposes. During Bloomberg's reporting on the matter, the group provided screenshots and a live demonstration of the model in use.
An Anthropic spokesperson confirmed that an investigation has been launched but did not provide further details on its progress.
Background on Claude Mythos and Project Glasswing
According to Anthropic's public information, Claude Mythos Preview was announced on April 7, 2026, as part of "Project Glasswing," a controlled program that allows specific organizations to use the unreleased model for defensive cybersecurity work. Anthropic has previously said that Mythos has unprecedented capabilities for identifying digital security vulnerabilities and potential misuse, and that the model has drawn the attention of regulators.
Frequently Asked Questions
When did Bloomberg report the unauthorized access to Claude Mythos Preview?
According to a Bloomberg News report dated Tuesday, April 22, 2026, which cited documents and people familiar with the matter, Claude Mythos Preview was accessed without authorization by a small number of users in a private forum on the same day Anthropic announced plans to open testing to a limited number of enterprises.
What official response did Anthropic give to this incident?
According to an Anthropic spokesperson statement cited by Bloomberg, Anthropic said: “We are investigating a report that claims someone accessed Claude Mythos Preview without authorization through one of our third-party provider environments.”
Which program does Claude Mythos Preview belong to, and what is its intended use?
According to Anthropic’s public information, Claude Mythos Preview was announced on April 7, 2026, and is part of the “Project Glasswing” program, allowing certain organizations to use the unreleased model for defensive cybersecurity work.