When AI Abilities Reach 240: The Paradox of Narrowing and Widening Human Differences

The internet exploded when I tried to frame an age-old question quantitatively: Is the gap between humans sometimes greater than the gap between humans and dogs? This time, I’ll tackle it with numbers, though I should warn you—these are ballpark figures designed to spark conversation, not gospel truth.
Let’s start with a straightforward scoring system. Imagine we rate cognitive ability on a scale: an elementary school student scores 10 points, a PhD holder hits 60, a university professor reaches 75, and Einstein sits at the apex with 100 points. The difference between 10 and 100 is staggering—a full 10x gap, genuinely comparable to the cognitive distance between humans and canines.
The Cognitive Scoring Framework: From 10 Points to Beyond
Now introduce artificial intelligence into this equation. By 2025-2026, AI’s cognitive value sits conservatively at 40 points—though accounting for AI’s broad knowledge base versus specialists’ deep expertise, a realistic assessment puts it around 80 points. Watch what happens when we add AI to each group:
Elementary school student + AI = 90 points
PhD + AI = 140 points
University professor + AI = 155 points
Einstein + AI = 180 points
Here’s the revealing part: while the absolute gap between a student and Einstein remains 90 points, the relative gap collapses from 10x to just 2x. By this logic, AI genuinely narrows human cognitive disparities.
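The arithmetic behind this scenario can be sketched in a few lines of Python; the point values are the illustrative figures above, not measurements of anything:

```python
# Illustrative cognitive scores from the scenario above (ballpark figures, not data).
HUMANS = {"student": 10, "phd": 60, "professor": 75, "einstein": 100}

def with_ai(human_score: int, ai_score: int) -> int:
    """Total capability when a person has full access to an AI of the given score."""
    return human_score + ai_score

def relative_gap(ai_score: int) -> float:
    """Ratio of the strongest to the weakest human when both get the same AI boost."""
    return with_ai(HUMANS["einstein"], ai_score) / with_ai(HUMANS["student"], ai_score)

# Without AI: 100 / 10 = 10x. With an 80-point AI: 180 / 90 = 2x.
```

Notice that the absolute gap (180 − 90 = 90 points) never moves; only the ratio does, which is the whole argument in miniature.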
The 200-240 Threshold: Why Proficiency Matters Right Now
But wait. Critics—and they’re right to be skeptical—point out a fatal flaw. Not everyone uses AI the same way. Like Luffy mastering different Gear levels in One Piece, a novice might extract only 20% of AI’s potential value, while an expert “overclocks” it to capture 100% or more through intensive prompt engineering and vibe coding. Suddenly:
Elementary school student + novice AI user = 30 points
Einstein + AI expert = 200 points
The gap has exploded to 170 points. By this framework, AI actually widens human differences—at least for now.
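A minimal sketch of this proficiency-adjusted version, assuming (as the 30- and 200-point figures imply) an overclocked ceiling of 100 points, with the novice extracting 20% of it and the expert the full 100%:

```python
AI_POTENTIAL = 100  # overclocked ceiling implied by the figures above (an assumption)

def boosted(human_score: float, utilization: float) -> float:
    """Human score plus the fraction of AI potential this user actually extracts."""
    return human_score + AI_POTENTIAL * utilization

novice_student = boosted(10, 0.20)    # 10 + 20 = 30
expert_einstein = boosted(100, 1.00)  # 100 + 100 = 200
proficiency_gap = expert_einstein - novice_student  # 170 points
```

The only change from the first sketch is the `utilization` factor, and it alone flips the conclusion from "AI narrows gaps" to "AI widens them."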
This isn’t wrong. Teachers Lao Bai and Alvin articulated this perfectly. Yet here’s where I diverge: I believe this apparent contradiction dissolves once you account for AI’s evolution trajectory, which unfolds along two critical dimensions.
The Evolution Double-Track: Smarter and Simpler
First, AI will become exponentially smarter. Second, it will become radically easier to use. These aren’t separate trends—they’re interconnected. As AI advances, the barrier to mastery collapses.
Consider what happens as AI approaches 240-point capability levels. A more sophisticated AI automatically compensates for user inexperience. The proficiency ceiling—what experts can extract—rises to 240-280 points, but equally important, the floor rises. An ordinary person now accesses 200 points almost by default, simply by asking questions naturally.
Elementary school student + AI (240 level) = 210 points
Einstein + AI (240 level) = 380 points
The gap widens in absolute terms (170 points), but shrinks relative to overall capability levels—now a 1.8x multiplier instead of 2x.
The 1000-Point Future: When AI Democratization Erases Human Gaps
Now project forward a decade in an optimistic scenario: AI achieves 1000-point-equivalent cognitive capability.
Elementary school student = 1010 points
Einstein = 1100 points
The absolute gap is back to the human baseline of 90 points, since both people now get the same boost. But relatively? It’s barely 1.09x (1100 / 1010). Even Einstein becomes practically indistinguishable from an elementary school student: the relative difference approaches zero as the denominator explodes.
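The compression claim is really just a limit statement: with a shared AI score a, the ratio (100 + a) / (10 + a) falls toward 1 as a grows. A quick sketch:

```python
def relative_gap(ai_score: float, top: float = 100, bottom: float = 10) -> float:
    """Strongest-to-weakest ratio when both people share the same AI boost."""
    return (top + ai_score) / (bottom + ai_score)

# The ratio shrinks monotonically as the shared AI term dominates both scores.
for a in (0, 80, 1000):
    print(f"AI = {a:4d}: {relative_gap(a):.2f}x")
```

This reproduces the article's three equal-access scenarios: 10x with no AI, 2x at 80 points, about 1.09x at 1000 points.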
Why Training Teachers Will Become Obsolete
Some worry that AI expertise will become a permanent elite skill set, dividing the population for good. But this concern misses a crucial point: the very people who profit from teaching “how to extract 100% of AI’s potential” will become irrelevant. Why? Because AI itself will be the tutor. As AI becomes smarter and more intuitive, the training burden dissolves. We’ve already witnessed this pattern: writers, illustrators, dancers, and visual artists have been displaced by AI systems that democratize their capabilities. Why would teaching AI optimization prove immune to this trend?
The future norm won’t be isolated cases of people achieving 80-120% AI utilization—it will be a universal standard. The gap between the best and worst at using AI will compress, not expand.
The Martial Arts Master Paradox
Think of it this way: imagine two martial arts masters suddenly granted access to shoulder-mounted rocket launchers in combat. One studied weaponry for 10 years, the other for 15. How much does that prior experience matter? The firepower overwhelms expertise differentials entirely.
That’s the future with sufficiently advanced, user-friendly AI. Human cognitive disparities, while not eliminated absolutely, become negligible relative to AI-amplified capability. The smarter AI becomes, the less human intervention matters. The easier it becomes to use, the more the gap between people shrinks.
The widening we see today is a transitional phase. It’s real, but temporary—a symptom of immature technology, not a permanent condition. In the long arc, AI doesn’t amplify human inequality. It compresses it.