Odaily Planet Daily reports that Vitalik Buterin has published an article laying out a layered view of blockchain scalability, ranking the difficulty of scaling, from lowest to highest, as computation, data, and state.
Vitalik states that computation is the easiest to scale: it can be parallelized, assisted by "tips" provided by block builders, or largely replaced by proofs such as zero-knowledge proofs. Data scalability is of moderate difficulty. If the system requires data availability guarantees, that requirement cannot be avoided, but it can be optimized through data partitioning, erasure coding (as in PeerDAS), and similar techniques, which also support "graceful degradation": nodes with less data capacity can still produce correspondingly smaller blocks.
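To illustrate the erasure-coding idea in the simplest possible terms: PeerDAS uses Reed-Solomon codes over extended blob data, but a single XOR parity chunk (a trivial (k+1, k) erasure code) already shows the principle that redundant encoding lets any one missing chunk be reconstructed from the rest. This is a toy sketch, not the actual PeerDAS construction.

```python
from functools import reduce

def add_parity(chunks):
    """Extend k equal-length data chunks with one XOR parity chunk.
    This is a trivial (k+1, k) erasure code: any single lost chunk
    can be rebuilt from the remaining k."""
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)
    return chunks + [parity]

def recover(coded, missing_index):
    """Rebuild the chunk at `missing_index` by XOR-ing all the others."""
    present = [c for i, c in enumerate(coded) if i != missing_index]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), present)

data = [b"aaaa", b"bbbb", b"cccc"]   # hypothetical equal-length data chunks
coded = add_parity(data)

# Lose any one chunk; the survivors still determine it.
assert recover(coded, 1) == b"bbbb"
```

Real data-availability sampling schemes use Reed-Solomon codes precisely because they tolerate the loss of many chunks, not just one, which is what lets light nodes verify availability by random sampling.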
State, by contrast, is the hardest part to scale. Vitalik points out that verifying even a single transaction requires the full state: even if the state is abstracted into a tree and only the root is stored, updating that root still depends on the full state. Methods for partitioning state do exist, but they usually require significant architectural changes and are not universal solutions.
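A minimal sketch of why storing only the root is not enough (the account names and balances are hypothetical, and real chains use Merkle-Patricia or Verkle trees rather than this plain binary Merkle tree):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Merkle root of a power-of-two list of leaf values."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Four accounts' states as leaves (illustrative only).
state = [b"alice:10", b"bob:20", b"carol:30", b"dave:40"]
root = merkle_root(state)

# A transaction changes one leaf...
state[1] = b"bob:15"
new_root = merkle_root(state)

# ...but computing new_root here walked over every leaf: a node that kept
# only `root` cannot derive `new_root` by itself; it needs the full state
# (or at least the sibling hashes along bob's path, supplied as a witness).
assert new_root != root
```

This is why stateless-client designs attach witnesses (the sibling hashes) to transactions: they shift the state burden onto whoever builds the proof, which is exactly the kind of architectural change the article says partitioning the state tends to require.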
Based on this, Vitalik concludes: if state can be replaced with data without introducing new centralization assumptions, that trade should be prioritized; likewise, if computation can replace data without new centralization assumptions, it should also be seriously considered.