Now, scaling.
There are two buckets here: short-term and long-term.
Short-term scaling I've written about elsewhere. Basically:
There is a multi-stage roadmap for multidimensional gas.
First, in Glamsterdam, we separate "state creation" costs from "execution and calldata" costs. Today, an SSTORE that changes a slot from nonzero -> nonzero costs 5000 gas, and an SSTORE that changes zero -> nonzero costs 20000. One of the Glamsterdam repricings greatly increases the zero -> nonzero cost (eg. to 60000 total); the goal of doing this alongside gas limit increases is to scale execution capacity much more than we scale state size growth, for reasons I've written about before ( ethresear.ch/t/hyper-scaling-… ). So in Glamsterdam, that SSTORE will charge 5000 "regular" gas and (eg.) 55000 "state creation" gas.
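The split above can be sketched in a few lines. This is a hedged illustration, not protocol code: the function name `sstore_charge` and the 60000 figure are assumptions taken from the example numbers, not final Glamsterdam values.

```python
# Illustrative sketch of splitting a zero -> nonzero SSTORE into
# "regular" gas and "state creation" gas. Constants are example values.

REGULAR_SSTORE_COST = 5000    # nonzero -> nonzero write cost
NEW_SLOT_TOTAL_COST = 60000   # example repriced zero -> nonzero total

def sstore_charge(old_value: int, new_value: int) -> dict:
    """Return the per-dimension gas charged for an SSTORE."""
    charge = {"regular": REGULAR_SSTORE_COST, "state_creation": 0}
    if old_value == 0 and new_value != 0:
        # The surcharge for creating a new slot goes to its own dimension.
        charge["state_creation"] = NEW_SLOT_TOTAL_COST - REGULAR_SSTORE_COST
    return charge

print(sstore_charge(0, 1))  # {'regular': 5000, 'state_creation': 55000}
print(sstore_charge(1, 2))  # {'regular': 5000, 'state_creation': 0}
```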
State creation gas will NOT count toward the ~16 million tx gas cap, so creating large contracts (larger than today) will be possible.
One challenge is: how does this work in the EVM? The EVM opcodes (GAS, CALL...) all assume a single gas dimension. Here is our approach. We maintain two invariants:
We create N+1 "dimensions" of gas, where by default N=1 (state creation), and we call the extra dimension the "reservoir". EVM execution consumes from a specialized dimension when it can, and otherwise spills over into the reservoir. So eg. if you have (100000 state creation gas, 100000 reservoir) and you use SSTORE to create new state three times, your remaining gas goes (100000, 100000) -> (45000, 95000) -> (0, 80000) -> (0, 20000). The two invariants: (i) GAS returns the reservoir balance; (ii) CALL passes along the specified gas amount from the reservoir, plus _all_ non-reservoir gas.
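The spillover rule can be simulated directly. A minimal sketch, assuming the example per-SSTORE costs of 55000 state-creation gas plus 5000 regular gas; `consume` is a hypothetical helper, not an actual client API:

```python
def consume(gas: dict, dim: str, amount: int) -> dict:
    """Consume `amount` from dimension `dim`, spilling any shortfall
    into the reservoir dimension."""
    g = dict(gas)
    from_dim = min(g[dim], amount)
    g[dim] -= from_dim
    g["reservoir"] -= amount - from_dim
    assert g["reservoir"] >= 0, "out of gas"
    return g

# Reproduce the three-SSTORE example from the text.
gas = {"state_creation": 100000, "reservoir": 100000}
for _ in range(3):
    gas = consume(gas, "state_creation", 55000)  # new-slot surcharge
    gas = consume(gas, "reservoir", 5000)        # regular execution gas
    print(gas)
# remaining gas: (45000, 95000) -> (0, 80000) -> (0, 20000)
```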
Later, we switch to multi-dimensional _pricing_, where different dimensions can have different floating gas prices. This gives us long-term economic sustainability and optimality (see vitalik.eth.limo/general/2024… ). The reservoir mechanism solves the sub-call problem described at the end of that article.
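One way to picture "different floating gas prices" is an EIP-1559-style basefee per dimension, each adjusting against its own per-block target. The sketch below is a loose assumption about how that could look; the targets, starting fees, and 1/8 adjustment bound are illustrative, not proposed parameters:

```python
# Per-dimension floating basefee sketch (EIP-1559-style update rule).
# All numbers are illustrative assumptions.

TARGETS = {"execution": 15_000_000, "state_creation": 1_000_000}
MAX_CHANGE = 1 / 8  # max fractional basefee change per block

def update_basefees(basefees: dict, used: dict) -> dict:
    """Move each dimension's basefee toward balance with its own target."""
    new = {}
    for dim, target in TARGETS.items():
        delta = (used[dim] - target) / target        # fractional deviation
        factor = 1 + MAX_CHANGE * max(-1.0, min(1.0, delta))
        new[dim] = max(1, int(basefees[dim] * factor))
    return new

fees = {"execution": 10_000_000_000, "state_creation": 10_000_000_000}
# A block that overuses execution but underuses state creation moves the
# two prices in opposite directions:
fees = update_basefees(fees, {"execution": 30_000_000, "state_creation": 500_000})
print(fees)
```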
Now, for long-term scaling, there are two parts: ZK-EVM, and blobs.
For blobs, the plan is to continue to iterate on PeerDAS, and get it to an eventual end-state where it can ideally handle ~8 MB/sec of data. That is enough for Ethereum's needs, without attempting to be some kind of global data layer. Today, blobs are for L2s. In the future, the plan is for Ethereum block data to go directly into blobs. This is necessary to enable someone to validate a hyperscaled Ethereum chain without personally downloading and re-executing it: ZK-SNARKs remove the need to re-execute, and PeerDAS on blobs lets you verify availability without personally downloading.
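The "verify availability without downloading" claim rests on a simple back-of-envelope argument. With erasure coding, a block is unrecoverable only if more than half of the extended data is withheld, so each random sample a client makes has at most a 1/2 chance of hitting available data in that worst case. A hedged sketch of the failure probability (the exact PeerDAS parameters differ; this is the standard DAS argument, not the spec):

```python
# Probability that k independent random samples all succeed even though
# >= 50% of the erasure-coded data is withheld (i.e. the client is fooled).

def miss_probability(k: int) -> float:
    return 0.5 ** k

for k in (10, 20, 30):
    print(k, miss_probability(k))
# ~30 samples already push the failure chance below one in a billion.
```

This is why a light client can gain near-certainty of availability with a few dozen small queries instead of downloading megabytes per second.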
For ZK-EVM, the goal is to step up our "comfort" relying on it in stages: