Part 2: Technical Overview
The Incentive Layer
- The stake in proof of stake provides three things: an anti-Sybil mechanism, an accountability mechanism, and an incentive alignment mechanism.
- The 32 ETH stake size is a trade-off between network overhead, number of validators, and time to finality.
- Combined with the Casper FFG rules, stakes provide economic finality: a quantifiable measure of the security of the chain.
A stake is the deposit that a full participant of the Ethereum 2 protocol must lock up. The stake is lodged permanently in the deposit contract on the Ethereum chain, and reflected in a balance in the validator's record on the beacon chain. The stake entitles a validator to propose blocks, to attest to blocks and checkpoints, and to participate in sync committees, all in return for rewards that accrue to its beacon chain balance.
In Ethereum 2 the stake has three key roles.
First, the stake is an anti-Sybil mechanism. Ethereum 2 is a permissionless system that anyone can participate in. Permissionless systems must find a way to allocate influence among their participants. There must be some cost to creating an identity in the protocol, otherwise individuals could cheaply create vast numbers of duplicate identities and overwhelm the chain. In Proof of Work chains a participant's influence is proportional to its hash power, a limited resource¹. In Proof of Stake chains participants must stake some of the chain's coin, which is again a limited resource. The influence of each staker in the protocol is proportional to the stake that they lock up.
Second, the stake provides accountability. There is a direct cost to acting in a harmful way in Ethereum 2. Specific types of harmful behaviour can be uniquely attributed to the stakers that performed them, and their stakes can be reduced or taken away entirely in a process called slashing. This allows us to quantify the economic security of the protocol in terms of what it would cost an attacker to do something harmful.
Third, the stake aligns incentives. Stakers necessarily own some of what they are guarding, and are incentivised to guard it well.
The size of the stake in Ethereum 2 is 32 ETH per validator.
This value is a compromise. It tries to be as small as possible to allow wide participation, while remaining large enough that we don't end up with too many validators. In short, if we reduced the stake, we would potentially be forcing stakers to run more expensive hardware on higher bandwidth networks, thus increasing the forces of centralisation.
The main practical constraint on the number of validators in a monolithic² L1 blockchain is the messaging overhead required to achieve finality. Like other PBFT-style consensus algorithms, Casper FFG requires two rounds of all-to-all communication to achieve finality: that is, for all nodes to agree on a block that will never be reverted.
Following Vitalik's notation, if we can tolerate a network overhead of ω messages per second, and we want a time to finality of f seconds, then we can have participation from at most n validators, where

n = ωf / 2
We would like to keep ω small to allow the broadest possible participation by validators, including those on slower networks. And we would like f to be as short as possible, since a shorter time to finality is much more useful than a longer time³. Taken together, these requirements imply a cap on n, the total number of validators.
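The relation can be sketched numerically. This is a toy calculation rather than protocol code (the function names are illustrative), using the figures that appear later in this section:

```python
# Trade-off between message overhead (omega, messages/second),
# time to finality (f, seconds), and validator count (n): n = omega * f / 2.
# The factor of two reflects the two rounds of all-to-all messaging
# that Casper FFG needs to reach finality.

def max_validators(omega: float, finality_time: float) -> float:
    """Largest validator set supportable at a given message rate and finality time."""
    return omega * finality_time / 2

def required_overhead(n: int, finality_time: float) -> float:
    """Messages per second needed for n validators to finalise within finality_time."""
    return 2 * n / finality_time

print(max_validators(9375, 768))          # 3600000.0 validators
print(required_overhead(3_600_000, 768))  # 9375.0 messages per second
```

Holding any one of the three quantities fixed makes the tension between the other two explicit: halving the finality time, for instance, doubles the message rate for the same validator count.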
This is a classic scalability trilemma. Personally, I don't find these pictures of triangles very intuitive, but they have become the canonical way to represent the trade-offs.
- Our ideal might be to have high participation (large n) with low overhead (low ω) – lots of stakers on low-spec machines – but finality would take a long time since message exchange would be slow.
- We could have very fast finality and high participation, but would need to mandate that stakers run high spec machines on high bandwidth networks in order to participate.
- Or we could have fast finality on reasonably modest machines by severely limiting the number of participants.
It's not clear exactly how to place Ethereum 2 on such a diagram, but we definitely favour participation over time to finality: maybe "x" marks the spot. One complexity is that participation and overhead are not entirely independent: we could decrease the stake to encourage participation, but that would increase the hardware and networking requirements (the overhead), which will tend to reduce the number of people able or willing to participate.⁴
To put this in concrete terms, the hard limit on the number of validators is the total Ether supply divided by the stake size. With a 32 ETH stake, that's about 3.6 million validators today, which is consistent with a time to finality of 768 seconds (two epochs) and a message overhead of 9375 messages per second⁵. That's a substantial number of messages per second to handle. However, we don't ever expect all the Ether to be staked; perhaps only 10-20% of it will be. In addition, due to the use of BLS aggregate signatures, messages are highly compressed, to an asymptotic one bit per validator.
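As a quick sanity check on these figures (the total supply value below is an assumption for illustration, roughly the amount of Ether in existence at the time of writing):

```python
# Hard cap on validator numbers: total Ether supply divided by the 32 ETH stake.
TOTAL_SUPPLY_ETH = 115_000_000  # assumed figure, for illustration only
STAKE_ETH = 32

hard_cap = TOTAL_SUPPLY_ETH // STAKE_ETH
print(hard_cap)  # 3593750, about 3.6 million validators

# Message overhead implied by the n = omega * f / 2 relation,
# with finality in two epochs (768 seconds):
finality_seconds = 768
omega = 2 * 3_600_000 / finality_seconds
print(omega)  # 9375.0 messages per second
```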
Given the capacity of current p2p networks, 32 ETH per stake is about as low as we can go while delivering finality in two epochs. Anecdotally, my staking node continually consumes about 3.5 Mb/s in both up and down bandwidth. That's about 30% of my upstream bandwidth on residential ADSL. If the protocol were any more chatty it would rule out home staking for many.
An alternative approach might be to cap the number of validators active at any one time to put an upper bound on the number of messages exchanged. With something like that in place, we could explore reducing the stake below 32 ETH, allowing many more validators to participate, but each participating only on a part-time basis.
Note that this analysis overlooks the distinction between nodes (which actually have to handle the messages) and validators (a large number of which can be hosted by a single node). A design goal of the Ethereum 2 protocol is to minimise any economies of scale, putting the solo staker on as equal a footing as possible with staking pools. Thus, we ought to be careful to apply our analyses to the most distributed case, that of one validator per node.
Fun fact: the original hybrid Casper FFG PoS proposal (EIP-1011) called for a minimum deposit size of 1500 ETH as the system design could handle up to around 900 active validators. While 32 ETH now represents a great deal of money for most people, decentralised staking pools that can take less than 32 ETH are now becoming available.
The requirement for validators to lock up stakes, together with the introduction of slashing conditions, allows us to quantify the security of the beacon chain in some sense.
The main attack we wish to prevent is one that rewrites the history of the chain. The cost of such an attack parameterises the security of the chain. In proof of work, this is the cost of acquiring an overwhelming proportion (51%) of the hash power for a period of time. Interestingly, a successful 51% attack in proof of work costs essentially nothing, since the attacker claims all the block rewards on the rewritten chain.
In Ethereum's proof of stake protocol we can measure security in terms of economic finality. That is, if an attacker wished to revert a finalised block on the chain, what would be the cost?
This turns out to be easy to quantify. To quote Vitalik's Parametrizing Casper,
State H is economically finalized if enough validators sign a message attesting to H, with the property that if both H and a conflicting H′ are finalized, then there is evidence that can be used to prove that at least 1/3 of validators were malicious and therefore destroy their entire deposits.
Ethereum's proof of stake protocol has this property. In order to finalise a checkpoint (H), two-thirds of the validators must have attested to it. To finalise a conflicting checkpoint (H′) requires two-thirds of validators to attest to that as well. Thus, at least one-third of validators must have attested to both checkpoints. Since individual validators sign their attestations, this is both detectable and attributable: it's easy to submit the evidence on-chain that those validators contradicted themselves, and they can be punished by the protocol.
If one-third of validators were to be slashed simultaneously, they would have their entire effective balances burned (up to 32 ETH each). With, say, fifteen million ETH staked in total, the cost of reverting a finalised block would be five million of the attackers' ETH permanently burned, with the attackers expelled from the network.
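The quorum-intersection argument and the resulting cost can be sketched as a toy calculation (the function name is illustrative, not anything from the protocol):

```python
from fractions import Fraction

# Two conflicting checkpoints each need at least 2/3 of validators to finalise.
# By inclusion-exclusion, the overlap (validators who signed both, and are
# therefore slashable) is at least 2/3 + 2/3 - 1 = 1/3 of the validator set.
min_slashable = Fraction(2, 3) + Fraction(2, 3) - 1
assert min_slashable == Fraction(1, 3)

def cost_to_revert_finality(total_staked_eth: int) -> Fraction:
    """Minimum ETH burned if two conflicting checkpoints are both finalised."""
    return total_staked_eth * min_slashable

print(cost_to_revert_finality(15_000_000))  # 5000000: five million ETH slashed
```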
It is obligatory at this point to quote (or paraphrase) Vlad Zamfir: comparing proof of stake to proof of work, "it's as though your ASIC farm burned down if you participated in a 51% attack".
For more on the mechanics of economic finality, see below under Slashing, and for more on the rationale and justification, see the section on Casper FFG. [TODO: link to Casper FFG when written.]
- Parametrizing Casper: the decentralization/finality time/overhead tradeoff presents some early reasoning about the trade-offs for different stake sizes. Things have moved on somewhat since then, most notably with the advent of BLS aggregate signatures.
- Why 32 ETH validator sizes? from Vitalik's Serenity Design Rationale.
- Vitalik's discussion document around achieving single slot finality looks at the participation/overhead/finality trade-off space from a different perspective.
In the Bitcoin white paper, Satoshi wrote that, "Proof-of-work is essentially one-CPU-one-vote", although ASICs and mining farms have long subverted this. Proof of Stake is one-stake-one-vote.↩
A monolithic blockchain is one in which all nodes process all information, be it transactions or consensus-related. Pretty much all blockchains to date, including Ethereum, have been monolithic. One way to escape the scalability trilemma is to go "modular".
- More on the general scalability trilemma: Why sharding is great by Vitalik.
- More on modularity: Modular Blockchains: A Deep Dive by Alec Chen of Volt Capital.
In an unfinished paper Vitalik attempts to quantify the "protocol utility" for different times to finality.
...a blockchain with some finality time f has utility roughly −log(f), or in other words increasing the finality time of a blockchain by a constant factor causes a constant loss of utility. The utility difference between 1 minute and 2 minute finality is the same as the utility difference between 1 hour and 2 hour finality.
He goes on to make a justification for this (p.10).↩
Exercise for the reader: try placing some of the other monolithic L1 blockchains within the trade-off space.↩
Vitalik's estimate of 5461 is too low since he omits the factor of two in the calculation.↩