University of California professor pushes back on Vibe Coding: generative AI coding efficiency drops by 20%

MarketWhisper


Professor Sarah Chasins of UC Berkeley, analyzing generative AI for GQ Taiwan, points out that LLMs are essentially "fill-in-the-blank games" trained with roughly 300-400 years' worth of computation. Vibe Coding can handle routine content but struggles with innovation. Studies show that users who believe their efficiency increased by 20% were actually about 20% slower. Her recommended approach has three steps: minimize the problem to about five lines of code, describe it with pseudocode, and develop a validation plan.

The Truth Behind the Fill-in-the-Blank Game Behind ChatGPT

Professor Sarah Chasins first explains, in accessible terms, how ChatGPT works. Built on a large language model (LLM), its core logic is quite simple: it is a program that strings words together in plausible combinations. LLM developers first collect human-written documents and web pages from across the internet, which represent the word combinations humans consider reasonable.

Then the program undergoes large-scale "fill-in-the-blank" training. For example, it might see a sentence like "The dog has four ___", where the natural human answer is "legs." If the program guesses incorrectly, the developers correct it until it gets it right. After the equivalent of roughly 300 to 400 years of computation, the training ultimately produces an enormous "cheat sheet," known in the field as the model's "parameters."

Next, feeding the program documents formatted as dialogues turns this fill-in-the-blank engine into a chatbot that automatically completes human questions with plausible answers. This "fill-in-the-blank" explanation reveals the core limitation of generative AI: it can only recombine patterns seen in its training data; it cannot truly understand semantics or think creatively.
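The fill-in-the-blank idea described above can be sketched as a toy next-word predictor. This is a deliberately simplified illustration, not how production LLMs actually work: real models use neural networks with billions of parameters, but the training objective, predicting the next token from prior context, is the same in spirit. The corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# Toy "fill-in-the-blank" model: count which word follows each word
# in a tiny made-up corpus, then predict the most frequent continuation.
corpus = ("the dog has four legs . the cat has four legs . "
          "the table has four legs").split()

follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training data."""
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("four"))  # prints "legs"
```

Note that the model can only ever emit words it has seen in context before, which is exactly the limitation the professor describes: recombination of training patterns, not understanding.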

This principle is crucial for understanding the limitations of Vibe Coding. When you ask AI to write “login functionality” code, it performs well because there are thousands of examples online. But when you ask it to create an innovative algorithm never before implemented, AI can only piece together similar fragments, often resulting in logical errors or code that cannot run at all.

The Illusion of Efficiency in Vibe Coding

Regarding the recent trend of using LLMs to generate code directly, rather than manually coding, Professor Sarah Chasins remains cautious. She analyzes that these tools perform adequately with routine content that humans have written countless times, but are generally ineffective for any attempt at innovation.

The research data are even more surprising. The professor cites studies indicating that users who rely on LLM tools believe their efficiency has increased by 20%, yet their actual development speed is 20% slower than that of those who do not use such tools. This stark gap between subjective perception and objective reality exposes the biggest trap of Vibe Coding: it creates an illusion of high efficiency, while much of the supposed time saved is spent debugging and fixing AI-generated errors.

This shows that over-reliance on tools can create a false sense of productivity. When faced with novel programming requirements, someone who lacks fundamental skills in logical decomposition cannot correct the AI's errors, which ends up costing even more time. To illustrate: LLMs are like high-end self-driving cars. They handle common routes well, but if you don't understand how to break down the route or the physics of how the vehicle operates, an unfamiliar, treacherous curve can make the autonomous system fail, and without those basic skills you won't know how to recover.

Four Major Reasons Vibe Coding Fails

Zero innovation capability: Can only combine existing patterns from training data, unable to produce truly novel solutions

Hard to detect errors: Generated code may look reasonable but contain logical mistakes that require expertise to identify

Debugging time skyrockets: Fixing AI errors takes longer than writing code yourself, negating speed advantages

Training data bias: LLM training data is mostly in developer language; everyday language descriptions can easily lead to misunderstandings

From a cognitive psychology perspective, this efficiency illusion stems from the “fluency illusion.” When AI rapidly generates large amounts of code, users feel progress is fast and perceive increased efficiency. However, the quality of this code may be poor, requiring more time later to fix. In contrast, human programmers may work slower but produce higher-quality code, resulting in shorter overall time.

Three Steps to Correctly Use Generative AI

Faced with the powerful capabilities of AI tools, many question the necessity of learning to code. The professor believes that the core skill in programming education is “problem decomposition”—breaking down a vague, large problem into smaller parts until each can be solved with a few lines of code. Without this training, users will struggle to leverage AI tools to produce truly functional complex programs.

Furthermore, since LLM training data is mostly in engineer-style language, everyday language used by non-professionals often does not match the training data, making it difficult for AI to generate useful code. To maximize the benefits of generative AI in coding, Chasins recommends following three steps:

Step 1: Minimize the problem—break it down into about 5 lines of code. This is a critical decomposition skill; when you can split complex problems into small units, AI can effectively assist in implementing each part.
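Step 1 can be made concrete with a hypothetical example (the task and function names below are invented for illustration, not taken from the professor's talk). A vague request like "summarize the errors in a log file" decomposes into units small enough that each fits in a few lines, and each unit is then a tractable prompt for an AI assistant:

```python
# Hypothetical decomposition of "summarize errors in a log file"
# into pieces of roughly five lines each.

def read_lines(path):
    """Load a log file as a list of lines."""
    with open(path) as f:
        return f.read().splitlines()

def is_error(line):
    """Decide whether a single line reports an error."""
    return "ERROR" in line

def count_errors(lines):
    """Count error lines in an already-loaded log."""
    return sum(1 for line in lines if is_error(line))
```

Each function is small enough to ask an AI for, to read in one glance, and to test in isolation, which is the point of the decomposition skill.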

Step 2: Use pseudocode—describe the logic in a notation that mixes programming keywords with plain phrasing. Although it resembles natural language, pseudocode is not everyday language; its purpose is to express the logic precisely enough for the model to translate it into working code.
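As an illustrative sketch (this example is ours, not the professor's), the difference between an everyday-language prompt and pseudocode looks like this. "Tell me how many errors are in my log" is ambiguous; the pseudocode below is not:

```
# Pseudocode: precise enough to guide an LLM, without full syntax
set error_count to 0
for each line in log_file:
    if line contains "ERROR":
        increment error_count
return error_count
```

The pseudocode pins down the loop, the matching condition, and the return value, exactly the details an everyday-language description leaves for the model to guess.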

Step 3: Develop a validation plan—use extensive testing or professional review to ensure the correctness of AI outputs. This step is often overlooked but is crucial. Many Vibe Coding users take AI-generated code directly into production without sufficient testing, leading to serious errors in live environments.
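A minimal validation plan can be sketched as human-written assertions run against the model's output. In this hypothetical example, `merge_sorted` stands in for a function returned by an LLM; the edge cases are chosen by the reviewer, not by the model:

```python
# Hypothetical illustration of Step 3: validate AI-generated code
# with tests the human writes independently.

def merge_sorted(a, b):
    """Pretend this implementation came back from the model."""
    return sorted(a + b)

# Edge cases chosen by the human reviewer, not the model:
assert merge_sorted([], []) == []
assert merge_sorted([1, 3], [2, 4]) == [1, 2, 3, 4]
assert merge_sorted([1], [1]) == [1, 1]
print("validation plan passed")
```

The key design choice is independence: the expected outputs are written from the problem statement, so a plausible-looking but wrong AI implementation fails loudly instead of reaching production.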
