I've just noticed something quite concerning. Over the past few years, the security of top executives has become an issue that can no longer be ignored. Since the incident involving the CEO of UnitedHealthcare late last year, attacks targeting executives of large companies have surged 225% compared with 2023. Average executive-security spending this year is around $130,000, up 20% from the previous year.
In the AI industry, this trend is even more pronounced. Combined security costs for the CEOs of leading AI companies exceeded $45 million in 2024. Google spent over $8 million on Sundar Pichai's security, a 22% increase, while NVIDIA spent up to $3.5 million on its CEO's, a 59% increase.
Sam Altman, CEO of OpenAI, is not immune to this trend. His home was attacked twice within four days in April: the first was an arson attack, the second involved gunfire. The suspect in the first incident had posted on social media that he was concerned about the existential risks of AGI, a concept OpenAI itself has consistently emphasized in public communications.
What's interesting is that this points to a clear contradiction. Publicly, Altman talks about AI as the greatest opportunity. Yet back in 2016 he built a bunker in Wyoming, stocked with weapons and enough food to supply a militia. It's a two-sided bet: openly claiming AI will succeed while quietly preparing for the possibility that it spins out of control.
The two attacks came after OpenAI signed a contract with the U.S. Department of Defense to bring ChatGPT onto classified national security networks. The backlash included anti-AI protests in major cities and a 295% single-day spike in ChatGPT uninstalls.
What he calls the "narrative" about the existential risks of AI has been useful for fundraising and regulatory negotiations. But ultimately, that tool has come knocking at his own door: the fears he helped propagate have become a driving force for some people to rise up in opposition.