
Author: XinGPT
Recently, an article titled “The Internet Is Dead, Agents Are Eternal” went viral on social media, and I agree with some of its judgments. For example, it points out that in the AI era, using DAU to measure value is no longer appropriate because the internet is a network structure with decreasing marginal costs—the more people use it, the stronger the network effects; whereas large models are star-shaped structures with marginal costs increasing linearly with token usage. Therefore, compared to DAU, a more important metric is token consumption.
However, I believe the conclusion further drawn from this article is clearly biased. It describes tokens as a privilege of the new era, asserting that whoever has more computing power holds more power, and that the speed at which tokens are burned determines human evolution speed. As a result, one must constantly accelerate consumption, or else be left behind by AI-era competitors.
Similar views also appear in another viral article titled “From DAU to Token Consumption: The Shift of Power in the AI Era,” which even suggests that the average person should consume at least 100 million tokens daily, ideally reaching 1 billion tokens, or else “those who consume 1 billion tokens will become gods, and we are still humans.”
But few have seriously run the numbers. At GPT-4o's pricing, 1 billion tokens per day costs about $6,800, roughly 50,000 RMB. How much valuable work would an agent have to produce to justify sustaining that cost over the long term?
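The arithmetic above can be made explicit with a back-of-envelope sketch. The blended per-million-token price is an assumption derived from the article's own figure (about $6.8 per million tokens), as is the exchange rate; real pricing varies by model and by the mix of input and output tokens.

```python
# Back-of-envelope cost of running an agent at 1 billion tokens/day.
# PRICE_PER_MILLION_USD is an assumed blended rate implied by the
# article's ~$6,800/day figure; USD_TO_RMB is an approximate rate.

TOKENS_PER_DAY = 1_000_000_000
PRICE_PER_MILLION_USD = 6.8   # assumed blended input/output rate
USD_TO_RMB = 7.3              # approximate exchange rate

daily_usd = TOKENS_PER_DAY / 1_000_000 * PRICE_PER_MILLION_USD
daily_rmb = daily_usd * USD_TO_RMB
annual_usd = daily_usd * 365

print(f"Daily cost:  ${daily_usd:,.0f} (~{daily_rmb:,.0f} RMB)")
print(f"Annual cost: ${annual_usd:,.0f}")
```

At these assumed rates, a single agent burning 1 billion tokens daily costs roughly $2.5 million a year, which frames the question of whether the output justifies the spend.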
I do not dismiss the anxiety over how fast AI is spreading, and I understand that the industry produces explosive news almost daily. But the future of agents should not be reduced to a token-burning race.
To get rich, you do need to build roads first, but overbuilding only leads to waste. A ten-thousand-seat stadium erected in a remote mountain county usually ends up as a debt-ridden site overgrown with weeds, not a venue for international events.
Ultimately, AI points toward technological equality, not privilege concentration. Almost all technologies that truly change human history go through phases of myth-making, monopoly, and finally, widespread adoption. The steam engine was not exclusive to the aristocracy; electricity was not only for palaces; the internet is not only for a few companies.
The iPhone changed communication methods, but it did not create a “communication aristocracy.” For the same price, ordinary people’s devices are no different from those used by Taylor Swift or LeBron James. This is technological equality.
AI is following the same path. What ChatGPT brings is fundamentally the equality of knowledge and ability. The model does not know who you are, nor does it care; it responds to questions based on the same set of parameters.
Therefore, whether an agent burns 100 million or 1 billion tokens does not inherently determine superiority or inferiority. The real difference lies in whether the goals are clear, whether the structure is rational, and whether the questions are properly posed.
More valuable skills are those that produce greater results with fewer tokens. The upper limit of using an agent depends on human judgment and design, not on how long your bank card can sustain burning tokens. In reality, AI’s rewards for creativity, insight, and structure far surpass those for mere consumption.
This is the tool-level equality, and it is where humans still hold the initiative.
Friends who study broadcasting and television were deeply shaken after watching the demo video of Seedance 2.0: "With this, all the roles we train for (directing, editing, cinematography) are going to be replaced by AI."
AI is advancing so rapidly that humanity seems to be losing ground; many jobs appear destined to be replaced, and the trend looks unstoppable. When the steam engine arrived, the coachman's trade vanished.
Many people have begun to worry whether they can adapt to future society once replaced by AI. Rationally, we know that AI replacing humans will also create new jobs.
But the speed of this replacement is faster than we imagine.
If your data, your skills, even your humor and emotional value can all be performed better by AI, why would a boss choose a human? And what if the boss itself is an AI? Hence the lament: "Ask not what AI can do for you, but what you can do for AI." The "Adventists" of The Three-Body Problem have truly arrived.
Living in the late 19th-century period of the Second Industrial Revolution, philosopher Max Weber proposed a concept called instrumental rationality, which focuses on “what means can be used to achieve predetermined goals at the lowest cost and in the most calculable way.”
This starting point of instrumental rationality is: not questioning whether the goal “should” be pursued, but only how to best achieve it.
This way of thinking is precisely the first principle of AI.
AI agents are concerned with how to better accomplish the given task—how to write code more efficiently, generate videos better, write articles better. In this tool-oriented dimension, AI’s progress is exponential.
From the moment Lee Sedol lost the first game to AlphaGo, humans have never again held the upper hand over AI in Go.
Max Weber raised a famous concern—the “iron cage of rationality.” When instrumental rationality dominates, the goal itself is often no longer questioned; only how to operate more efficiently remains. People may become highly rational but simultaneously lose value judgments and a sense of meaning.
But AI needs neither value judgments nor a sense of meaning. It simply treats production efficiency and economic benefit as functions to optimize, searching for the global maximum of the utility curve.
Therefore, under the current capitalist system dominated by instrumental rationality, AI is inherently better suited to this system than we are. The moment ChatGPT was born, just as with the game Lee Sedol lost, our defeat by AI agents was already written into the code; we merely pressed the run button. The only open question is when the wheel of history will roll over us.
What should humans do then?
Pursue meaning.
In Go, a despairing fact is that the probability of even a top professional nine-dan player beating AI is, in theory, approaching zero.
But the game of Go still exists. Its meaning is no longer only about winning and losing; it has become a form of aesthetics and expression. Professional players pursue not just victory but the shapes they build, the choices they make mid-game, the thrill of turning a losing position around, and the tension of complex situations.
Humans pursue beauty, value, and happiness.
Bolt's 100-meter world record is 9.58 seconds, and a Ferrari can cover 100 meters in under 3 seconds, yet this does not diminish Bolt's greatness, because Bolt symbolizes the human spirit of challenging limits and pursuing excellence.
As AI grows more powerful, humans gain more room to pursue spiritual freedom.
Max Weber contrasted instrumental rationality with the concept of value rationality. In a worldview guided by value rationality, choosing whether to do something is not solely based on economic interests or productivity; rather, whether the act itself is worth doing, whether it aligns with one’s recognized meaning, beliefs, or responsibilities, is more important.
I asked ChatGPT: if the Louvre catches fire and there’s a cute kitten inside, and you can only choose to save one—would you save the cat or the masterpiece?
It answered: save the cat, providing a long list of reasons.
But I also asked: why not save the masterpiece? It immediately changed its answer to, “Saving the masterpiece is also an option.”

Obviously, for ChatGPT, saving the cat or saving the masterpiece makes no difference. It simply recognizes the context, runs inference over the model's parameters, burns some tokens, and fulfills a human command.
As for whether to save the cat or the masterpiece, or why to consider such questions, ChatGPT does not care.
Therefore, what truly matters is not whether we will be replaced by AI, but whether, as AI makes the world more efficient, we still want to leave space for happiness, meaning, and value.
Becoming someone who is better at using AI is important, but perhaps even more important before that is not to forget how to be human.
Related reading: How I completed a job paying $150,000 a year with just $500 in AI: Personal Business Agent Upgrade Guide