Work output has increased significantly, but fatigue is accumulating at an even faster rate. AI tools greatly shorten task execution times, yet they do not reduce human decision-making burdens; instead, those burdens are growing. As technology keeps telling us "we can go faster," perhaps the more pressing question is: can we go slower? This article is based on a Tencent Technology essay, compiled and translated by Foresight News.
(Background: Annual salary of 1.5 million yuan, using $500 AI to get the job done, full breakdown of a one-person agent system)
(Additional context: End of antivirus software? Claude AI uncovers 500 zero-day vulnerabilities, frightening Wall Street; CrowdStrike plummets 18%)
Why do people feel more exhausted even as AI tools become more powerful? Perhaps this is the real question worth asking in this efficiency revolution.
In early 2026, a fascinating scene emerged in the software engineering field.
A new generation of AI programming tools, represented by Claude Opus 4.6, is pushing developer productivity to unprecedented heights. Microsoft internal data shows that when engineers were free to choose their own tools, Claude Code quickly became the dominant choice, a result some observers read as natural selection of the "path of least resistance."
Meanwhile, discussions about “occupational burnout” are intensifying within developer communities. Steve Yegge, a former engineer at Google and Amazon, recently described a phenomenon he calls “nap attacks”: after long periods of immersive coding, he suddenly falls asleep during the day without warning.
A post by Steve Yegge, a software engineer with 40 years of Silicon Valley experience
Today, more and more software engineers are openly sharing a common experience: work output has greatly increased, but fatigue is accumulating at an even faster rate. Technology has significantly shortened task execution times, yet it has not reduced human decision burdens; instead, these burdens are increasing.
Image sourced from the internet
In Yegge’s view, earlier discussions about “AI’s limited help in actual work” became obsolete once Claude Code was deployed with Opus 4.5 and 4.6. The combination sharply reduces the cost of going from problem definition to runnable code, letting a skilled engineer produce several times more output in the same time frame.
Yegge points out that when productivity increases by more than about 2 times, a phenomenon he calls the “vampire effect” begins to manifest: technology is no longer just a tool but starts to inversely shape the user’s work rhythm and mental state.
Yegge’s drawing of the “AI Vampire Extraction Device”
Siddhant Khare, a software engineer who documents the experience in detail on his blog, wrote in “AI Fatigue Is Real” that his code delivery last quarter reached a career peak, and so did his mental exhaustion.
Khare describes a fundamental shift in work patterns. Before using AI, he would focus deeply on a single problem all day, maintaining a coherent train of thought. After introducing AI, he needs to juggle five or six different problem domains simultaneously. Each problem, aided by AI, takes only about an hour to address individually. But the frequent switching between problems creates a new cognitive load.
“AI doesn’t get tired between problems,” he writes, “but I do.”
Khare describes his new role as a “quality inspector on an assembly line.” Pull requests keep flooding in, each requiring review, decision, and approval. The process never stops, but decision-making authority never shifts. He remains fixed in the judge’s seat, with AI delivering the cases, and humans bearing the responsibility.
A recent study published in Harvard Business Review provides empirical support for this phenomenon.
Researchers tracked 200 employees at an American tech company and found that while AI use initially significantly sped up task completion, it also triggered a chain reaction: increased speed raised organizational expectations for delivery cycles, which in turn led employees to rely more heavily on AI. This deeper reliance expanded the scope of tasks employees tried to handle, further increasing work density and cognitive load.
The researchers describe this mechanism as “workload creep.” It is not driven by directives but is a self-reinforcing cycle of efficiency gains and expectation adjustments.
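The self-reinforcing cycle the researchers describe can be sketched as a toy feedback loop. Every parameter below is invented for illustration; the study reports the mechanism (speed raises expectations, expectations expand scope, scope drives load), not these numbers.

```python
# Toy model of "workload creep": execution keeps getting faster,
# yet the cognitive-load index climbs every quarter.
# All coefficients here are illustrative assumptions.

def simulate_creep(quarters: int = 4, speedup: float = 1.3) -> list[float]:
    """Return the cognitive-load index for each quarter."""
    speed = expectation = load = 1.0
    history = []
    for _ in range(quarters):
        speed *= speedup                       # AI accelerates execution
        expectation = max(expectation, speed)  # delivery expectations ratchet up
        scope = expectation                    # workers expand scope to keep pace
        load = 0.9 * scope + 0.1 * load        # load tracks scope, with some inertia
        history.append(load)
    return history

loads = simulate_creep()
print([round(x, 2) for x in loads])  # load rises monotonically
```

Nothing in the loop ever pushes the load back down: each speedup only ratchets expectations higher, which is exactly why the researchers call it creep rather than a trade-off.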
Samo Korošec, who works in digital product design, responded to Yegge on LinkedIn with a similar experience.
He pointed out that social media is flooded with demos of “generating ten UI options in one minute.” These are repeatedly pushed to practitioners and managers, creating an implicit standard.
The implied standard is that since tools can produce solutions this quickly, delivery should be equally fast. Yet these demos rarely show the subsequent filtering, implementation, and cross-functional coordination, costs that remain entirely human.
Technology compresses the time spent on production steps, but it does not compress the time spent on decisions. The new bottleneck is human attention and willpower.
Yegge proposes a simplified analytical framework.
Suppose an engineer, after mastering AI tools, increases their output per unit time tenfold. Who captures the extra ninefold value? That depends on how the engineer allocates their labor supply.
In Scenario A, the engineer keeps the same working hours and delivers all the additional output to the employer. The employer gains nearly 10 times the productivity at the same labor cost; the engineer’s income is unchanged, while workload and mental strain rise sharply. Yegge calls this “being squeezed dry.”
In Scenario B, the engineer drastically cuts working hours, completing the same amount of work in only 10% of the previous time. The extra value accrues entirely to the individual as leisure. But this state is hard to sustain in a competitive environment: if everyone adopts it, the organization’s output falls behind competitors, threatening its long-term survival.
Yegge suggests that the ideal lies somewhere between these two extremes. But in current organizational structures, the control over the dial is asymmetrical. Organizations tend to push the needle toward Scenario A, while individuals need to actively exert counterpressure.
This framework transforms the issue of technological efficiency into a distribution problem. AI does not change the fundamental fact that “value is created by labor,” but it alters the scale of value that can be generated per unit of labor. When this scale jumps, the existing balance of distribution is inevitably disrupted.
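Yegge’s two scenarios, and the dial between them, can be put in numbers. The sketch below is an illustrative parameterization, not anything from his post: a multiplier `m` means m-fold output per hour, and a dial `d` in [0, 1] decides how much of the gain goes to the employer as extra output versus to the worker as reclaimed time.

```python
# A minimal sketch of Yegge's "dial" between Scenario A and Scenario B.
# BASELINE figures and the linear dial are illustrative assumptions.

BASELINE_HOURS = 40    # hours per week before AI
BASELINE_OUTPUT = 1.0  # normalized weekly output before AI

def split_gains(multiplier: float, dial: float) -> tuple[float, float]:
    """Return (hours_worked, output_delivered) for a dial setting.

    dial = 1.0 -> Scenario A: same hours, all gains go to the employer.
    dial = 0.0 -> Scenario B: same output, all gains become free time.
    """
    # Hours needed to reproduce the old output at the new productivity:
    min_hours = BASELINE_HOURS / multiplier
    # The dial linearly interpolates hours between the two extremes.
    hours = min_hours + dial * (BASELINE_HOURS - min_hours)
    output = BASELINE_OUTPUT * multiplier * (hours / BASELINE_HOURS)
    return hours, output

print(split_gains(10, 1.0))  # Scenario A: 40 hours, ~10x output
print(split_gains(10, 0.0))  # Scenario B: 4 hours, same output
print(split_gains(10, 0.5))  # a midpoint: both sides share the gain
```

The asymmetry Yegge describes lives outside this arithmetic: the math treats `dial` as a free parameter, but in practice organizations hold the knob and push it toward 1.0.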
Yegge recalls his experience working at Amazon in 2001. His team faced high delivery pressures with highly uncertain rewards. In a discussion, he wrote a formula on a whiteboard: $ / hour. He explained that the numerator (annual fixed salary) is hard to change in the short term, but the denominator (actual working hours) has considerable flexibility.
He advocated shifting the focus from “how to earn more” to “how to work fewer hours.” The idea felt unfamiliar to some colleagues at the time, but weeks later, on repeated visits to the meeting room, he found the formula still on the whiteboard.
Twenty-five years later, Yegge believes this formula still applies in the AI era. The difference is that AI greatly amplifies the impact of changes in the denominator on the numerator, but individuals’ control over the denominator has not increased proportionally.
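The whiteboard formula is simple enough to compute directly. In the sketch below, the salary and hour figures are invented for illustration; the point is only how sensitive the effective rate is to the denominator.

```python
# Yegge's whiteboard formula: effective rate = fixed salary / actual hours.
# ANNUAL_SALARY and the schedules below are assumed figures, not his.

ANNUAL_SALARY = 200_000  # assumed fixed annual salary, in dollars

def effective_hourly_rate(hours_per_week: float, weeks_per_year: int = 48) -> float:
    """Dollars earned per hour actually worked."""
    return ANNUAL_SALARY / (hours_per_week * weeks_per_year)

# The numerator is fixed in the short term; only the denominator
# is under the worker's control.
print(round(effective_hourly_rate(50), 2))  # grind schedule: ~83.33 $/hour
print(round(effective_hourly_rate(20), 2))  # compressed schedule: ~208.33 $/hour
```

Halving the denominator doubles the rate at the same pay, which is Yegge’s point: AI makes the denominator far more compressible, even though individual control over it has not grown to match.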
LinkedIn user Joseph Emison offers another perspective.
He observed that most successful professionals in creative fields (writers, designers, researchers) typically do no more than four hours of effective work a day. The rest of the time goes to recovery, wandering, and taking in new input. This is not an efficiency problem but a physiological limit of cognitive activity.
If AI further separates “work” from “effective work,” then what we need to redefine may not be how tools are used, but the length of the “workday.”
Yegge admits that he is part of the problem.
He has over forty years of engineering experience, has led large teams, reads quickly, and has ample time and resources for experimentation. He can spend dozens of hours straight building a working system with Claude Code and releasing it publicly. His work circulates widely, and some managers treat it as the “standard” engineers should reach.
He writes: “Employers are likely to start looking at me—and at us, the outliers—and say: ‘Hey, all your employees can do that too.’”
On platforms like LinkedIn, early adopters openly share their AI usage intensity: some report that their organizations pay thousands of dollars a month for a few accounts; others show off dozens of chat sessions running simultaneously. These posts draw attention across the tech community and subtly shape a reference standard among managers.
Yegge calls this an “unrealistic beauty standard.” He admits he is not representative; his pace is hard for most to replicate, and even he is unsure if he can sustain it long-term. But when he speaks on stage or writes a book, the message (at least from the audience’s perspective) is simplified to “this is achievable.”
Leigh Aschoff, a LinkedIn user, takes the question deeper. He believes the way people interact with AI reflects a long-standing difficulty with boundaries in human relationships. Many people lack the ability to recognize and express their limits in interactions, and that deficit carries over into human-machine relationships. Tools neither stop on their own nor sense user fatigue.
As technology continually expands the upper limits of capability, the ability to recognize the lower limits becomes even scarcer.
Yegge proposes a concrete idea: the effective workday in the AI era should be shortened to three or four hours.
This is not a rigorously verified number but an inference from experience. His observation is that AI automates many executable tasks while leaving the high-level cognitive work of decision-making, judgment, and problem restructuring to humans. These activities consume far more attention and emotional resources, and they are hard to parallelize or to recover from quickly.
During a visit to a tech park, Yegge saw an environment he calls “dial set to the right position”—an open space, plenty of natural light, scattered social and rest areas, where employees freely switch between working and recovering. He is unsure whether this setup can still maintain balance after full AI integration. But he is convinced that current models—simply increasing output density without adjusting work hours—are unsustainable.
He no longer attributes the problem solely to “AI as a vampire,” but to “I need to better understand my limits.”
Yegge ends by saying he is trying to turn the dial down. He has cut back on public appearances, declined many meetings, and stopped chasing every visible technology trend. He still writes, builds products, and trades ideas with peers. But he also closes his laptop in the afternoon and takes walks with his family. He doesn’t know how far he can turn the dial back, but he is sure the direction is right.
For the broader workforce, this issue has yet to enter the collective agenda. The dominant narrative still focuses on AI’s productivity gains, while fatigue remains a personal and fragmented discussion. But increasing signals suggest these two curves are converging.
Technology shortens task paths but not the workday. Tools share execution but not responsibility. Efficiency accelerates delivery but also consumption.
As AI keeps telling us “we can go faster,” perhaps the more urgent question to be heard is: can we go slower?