# Part 2: Technical Overview

## The Building Blocks

### Randomness

- Assigning beacon chain duties unpredictably is an important defence against some attacks.
- The beacon chain maintains a RANDAO to accumulate randomness.
- Duties such as proposing blocks, committee assignments, and sync committee participation are assigned based on the RANDAO, with a limited lookahead period.
- Block proposers verifiably contribute randomness to the RANDAO via BLS signatures over the epoch number.
- Validators are able to bias the RANDAO to a small extent, but this is not a significant problem in practice.

#### Introduction

An element of randomness is an important part of a permissionless blockchain protocol, both for security and for fairness.

A protocol that is fully predictable could work well in a benign environment. But we must assume that our protocols will come under attack, and predictability provides attackers with opportunities - just as the bad guys in crime thrillers often take advantage of their victims' predictable routines.

An attacker with advance knowledge of which validators will be active in different roles has a significant foothold for mounting an attack: for example, selectively mounting denial of service attacks against future proposers, bribing members of a particular committee, registering especially advantageous validator numbers that allow them to take over a future committee, or simply censoring transactions.^{1}

To quote from a paper by Brown-Cohen et al^{2},

> Intuitively, it is good for protocols to be unpredictable in the sense that miners do not learn that they are eligible to mine a block until shortly before it is due to be mined. Many attacks, such as double-spending, or selfish-mining, can become much more profitable if miners know in advance when they become eligible to mine.

Unpredictability, arising from randomness, is an excellent first line of defence against many attacks.

Unpredictability in proof of work comes from the process used to mine blocks. A block is valid only if it satisfies a certain condition, and the only way to satisfy that condition is through trial and error. Miners make a random guess, test it, and try again if it's not correct - this is the "work" in proof of work. Only if the guess is correct is the block valid, and the miner gets to extend the chain. As I write, the difficulty of the Ethereum PoW chain is around 12.5 Peta hashes. That means that mining an Ethereum block requires $1.25 \times 10^{16}$ guesses on average. This is similar to the odds of rolling 21 dice until they all come up six on the same roll. It is fabulously unlikely, yet somewhere on the Ethereum network somebody manages to do it every 13 seconds or so. Since the process is uniform - nobody is better at guessing (rolling dice) than anyone else - it provides fairness. Every Giga hash per second is equivalent to every other Giga hash per second (although there are other sources of unfairness in proof of work). And since guessing is random it provides unpredictability, which mitigates the attacks mentioned above.

Randomness^{3} in Ethereum's proof of stake protocol is used to bring unpredictability to the selection of block proposers, and to the membership of the committees that attest to blocks and sign sync data.

In this section we will look at the way that randomness is introduced into the beacon chain, some ways in which it is used, and finally some issues with the current scheme.

#### The RANDAO

The beacon chain design has always used a RANDAO^{4} mechanism to provide its in-protocol randomness. A RANDAO is simply an accumulator that incrementally gathers randomness from contributors. So, with each block, the proposer mixes in a random contribution to the existing RANDAO value.

To unpack that a little, the beacon chain maintains a RANDAO value. Every block included in the chain contains a verifiable random value provided by the validator that proposed it, its `randao_reveal`. As each block is processed the beacon chain's RANDAO value is mixed with the `randao_reveal` from the block. Thus, over time, the RANDAO accumulates randomness from all the block proposers.

If $R_n$ is the RANDAO value after $n$ contributions, and $r_n$ is the $n$th `randao_reveal`, then the following holds.

$$R_n = R_{n-1} \oplus \mathrm{hash}(r_n)$$

Here we are mixing in the new contribution using the `xor` function, $\oplus$. Alternatives might be to use a sum or a hash, but `xor` is simple and has useful properties.

We can think of a RANDAO as being like a deck of cards that's passed round the table, each person shuffling it in turn: the deck gets repeatedly re-randomised. Even if one contributor's randomness is weak, the cumulative result has a high level of entropy.
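As an illustrative sketch (not spec code), here is how such an accumulator behaves, standing in SHA-256 for the beacon chain's hash function:

```python
from hashlib import sha256

def mix_in(randao: bytes, reveal: bytes) -> bytes:
    # Hash the contribution down to 32 bytes, then xor it into the accumulator
    digest = sha256(reveal).digest()
    return bytes(a ^ b for a, b in zip(randao, digest))

# Accumulate contributions from a sequence of proposers
acc = bytes(32)
for contribution in [b"reveal-1", b"reveal-2", b"reveal-3"]:
    acc = mix_in(acc, contribution)

# One of xor's useful properties: it is its own inverse, so mixing
# the same contribution twice cancels it out exactly
assert mix_in(mix_in(bytes(32), b"x"), b"x") == bytes(32)
```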

Current and past RANDAO values are stored in the beacon state in the `randao_mixes` field. The current value is updated by `process_randao` with every block that the beacon chain processes. If there is no block in a slot then the RANDAO is not updated. In addition to the RANDAO's current value, `EPOCHS_PER_HISTORICAL_VECTOR` (minus one) past values of the RANDAO at the ends of epochs are also stored in the state. These can be used to recalculate past committee assignments, which allows historical attestations to be slashed even months later.

#### Source of randomness

Every block includes a field `randao_reveal` that is its proposer's contribution to be mixed in to the RANDAO.

This contribution needs to satisfy two properties: it should be unpredictable by any other node, yet it should be verifiable by all nodes.

"Verifiable" means that, although random (read pseudo-random), the RANDAO contribution value must not be arbitrary. The proposer must not be able to pick and choose its contribution, otherwise it will just choose a value that gives itself some sort of advantage. There must be a single valid contribution that the proposer can make in any given block, and all the other nodes must be able to verify that contribution.

##### The old: hash onions

Early ideas for verifiable randomness had each validator pre-committing to a "hash onion". Before joining the beacon chain a validator would generate a random number. When registering its initial deposit the validator would include the result of repeatedly cryptographically hashing that number a large number (thousands) of times as a commitment. Then when proposing a block the `randao_reveal` would be the pre-image of that commitment: one layer would be "peeled off the onion". Since a cryptographic hash is not invertible, only the proposer could calculate this value, but it's easily verifiable by everyone. Then the reveal gets stored as the new commitment and so on.
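As a minimal sketch of the commit-reveal structure (illustrative only, not the actual registration format), the hash onion might look like this:

```python
from hashlib import sha256

def build_onion(seed: bytes, layers: int) -> list:
    # Repeatedly hash the seed; the outermost layer is the public commitment
    onion = [seed]
    for _ in range(layers):
        onion.append(sha256(onion[-1]).digest())
    return onion

def verify_reveal(reveal: bytes, commitment: bytes) -> bool:
    # Anyone can check that a reveal is the pre-image of the current commitment
    return sha256(reveal).digest() == commitment

onion = build_onion(b"validator's secret", 1000)
commitment = onion[-1]  # included with the validator's initial deposit
reveal = onion[-2]      # peeled off at the validator's first proposal
assert verify_reveal(reveal, commitment)
```

After each proposal the reveal becomes the new commitment, so an onion of $n$ layers supports $n$ proposals.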

This scheme is viable, but has complexities and edge cases - for example, if a block gets orphaned, everybody (except the beacon chain) can now see the proposer's reveal - that make it clunky to implement in practice.

##### The new: BLS signatures

An alternative to the hash onion became available when we moved to using BLS signatures in the protocol. With the BLS scheme every validator already has a closely guarded random value: the secret key that it uses for signing blocks and attestations. As far as anyone knows the signatures produced are uniformly random.

The signature for an agreed message nicely satisfies our two desired properties for the RANDAO contribution. It is unpredictable to the other validators since they do not know the proposer's private key, but it is easily verifiable since all validators know the proposer's public key.

The elegance and simplicity of reusing the BLS key infrastructure for the RANDAO makes it a considerable improvement on the original hash onion design.

There is a further nice benefit to using BLS signatures that may not be obvious. The aggregation property of the signatures allows the contribution to be derived via a multi-party computation. That is, signatures from multiple validators can be combined into a threshold signature so that they can effectively act as a single validator. We do not use this property within the core Eth2 protocol, but it enables Distributed Validator Technology, which would be very difficult with the old hash onion approach.

For all these reasons, we now use a BLS signature as the entropy contribution to the RANDAO, that is, the `randao_reveal`.

##### Where does the entropy come from?

Evidently the predominant source of randomness in the Ethereum 2 protocol is the secret keys of the validators. If every validator key is generated uniformly randomly and independently then each contributes 256 bits of entropy to the overall pool. However, keys are sometimes not independently generated^{5}. EIP-2333 provides a way to derive multiple validator keys from a single entropy seed, and large stakers are likely to have done this. Thus, the total entropy from $N$ validator keys will be less than $N \times 256$ bits, but we don't know how much less.

Some other sources of entropy for the RANDAO are noted in EIP-4399.

- Missed or orphaned block proposals directly affect the RANDAO's output. Network conditions, node faults, or maintenance downtime can all lead to missed block proposals that have a degree of randomness.
- The total number of active validators in an epoch affects the selection of proposers which in turn affects participation in the RANDAO. Thus, deposits and exits (both voluntary and forced) contribute entropy.
- A validator's effective balance affects its likelihood of being selected to propose a block. Thus, changes in effective balances (perhaps due to one or more validators being offline for a period of time) add entropy.

#### Updating the RANDAO

When a validator proposes a block, it includes a field `randao_reveal` which has `BLSSignature` type. This is the proposer's signature over the epoch number, using its normal signing secret key.

The `randao_reveal` is computed by the proposer as follows, the `privkey` input being the validator's random secret key.

```
def get_epoch_signature(state: BeaconState, block: BeaconBlock, privkey: int) -> BLSSignature:
    domain = get_domain(state, DOMAIN_RANDAO, compute_epoch_at_slot(block.slot))
    signing_root = compute_signing_root(compute_epoch_at_slot(block.slot), domain)
    return bls.Sign(privkey, signing_root)
```

When a block is processed, the `randao_reveal` is mixed into the RANDAO like this:

```
def process_randao(state: BeaconState, body: BeaconBlockBody) -> None:
    epoch = get_current_epoch(state)
    # Verify RANDAO reveal
    proposer = state.validators[get_beacon_proposer_index(state)]
    signing_root = compute_signing_root(epoch, get_domain(state, DOMAIN_RANDAO))
    assert bls.Verify(proposer.pubkey, signing_root, body.randao_reveal)
    # Mix in RANDAO reveal
    mix = xor(get_randao_mix(state, epoch), hash(body.randao_reveal))
    state.randao_mixes[epoch % EPOCHS_PER_HISTORICAL_VECTOR] = mix
```

Two things are going on in the processing of the `randao_reveal` signature.

First, the signature is verified using the proposer's public key before being mixed in. This means that the proposer has almost no choice about what it contributes to the RANDAO: it either contributes a single verifiable value - the correct signature over the epoch number - or it withholds its block and contributes nothing. (Equivalently, a block with an incorrect reveal is invalid.)

Second, the hash of the signature is mixed in to the beacon state's RANDAO using an `xor` operation. We apply the hash operation to reduce the length of the RANDAO accumulator from 768 bits - the length of a compressed BLS signature, an inconvenient number of bits to work with - to 256 bits. The uniformity of the output of hash functions is also better established than that of BLS signatures.

We could have mixed in the reveal by hashing it directly with the RANDAO accumulator, however we choose to mix it in via an `xor` operation. The combination of using the epoch number as the signed quantity and using `xor` to mix it in leads to a subtle, albeit tiny, improvement in the attack-resistance of the RANDAO. Justin Drake explains in his notes:

> Using `xor` in `process_randao` is (slightly) more secure than using `hash`. To illustrate why, imagine an attacker can grind randomness in the current epoch such that two of his validators are the last proposers, in a different order, in two resulting samplings of the next epochs. The commutativity of `xor` makes those two samplings equivalent, hence reducing the attacker's grinding opportunity for the next epoch versus `hash` (which is not commutative). The strict security improvement may simplify the derivation of RANDAO security formal lower bounds.

We will see shortly that it can be advantageous to an attacker to have control of the last slots of an epoch. Justin's point is that, under the current scheme, the attacker having validators $V_0, V_1$ in the two last slots of an epoch is equivalent to it having $V_1, V_0$ with respect to the `randao_reveal`s. This fractionally reduces an attacker's choices when it comes to influencing the RANDAO. If we used `hash` rather than `xor`, or if we signed over the slot number rather than the epoch number, these orderings would result in different outcomes from each other, giving an attacker more choice and therefore more power.
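A quick illustration of the point (with hypothetical 32-byte stand-ins for the two reveals): `xor` mixing gives the same RANDAO outcome for both orderings, whereas chained hashing distinguishes them.

```python
from hashlib import sha256

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

randao = sha256(b"prior RANDAO state").digest()
r0 = sha256(b"V0's reveal").digest()  # stand-in for V0's contribution
r1 = sha256(b"V1's reveal").digest()  # stand-in for V1's contribution

# xor mixing: the order of the last two contributions is irrelevant
assert xor_bytes(xor_bytes(randao, r0), r1) == xor_bytes(xor_bytes(randao, r1), r0)

# hash chaining: the two orderings give different final values
assert (sha256(sha256(randao + r0).digest() + r1).digest()
        != sha256(sha256(randao + r1).digest() + r0).digest())
```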

#### Lookahead

We started this section with a discussion of unpredictability. Ideally, it should not be possible to predict the duties for any block proposer or committee member until the moment they become active. However, in practice, proposers and committee members need a little advance notice of their duties to allow them to join the right p2p network subnets and do whatever other preparation they need to do.

The RANDAO seed at the end of epoch $N$ is used to compute validator duties for the whole of epoch $N+2$. This interval is controlled by `MIN_SEED_LOOKAHEAD` via the `get_seed()` function. Thus, validators have at least one full epoch to prepare themselves for any duties, but no more than two.

Under normal circumstances, then, an attacker is not able to predict the duty assignments more than two epochs in advance. However, if an attacker has a large proportion of the stake or is, for example, able to mount a DoS attack against block proposers for a while, then it might be possible for the attacker to predict the output of the RANDAO further ahead than `MIN_SEED_LOOKAHEAD` would normally allow. The attacker might then use this foreknowledge to strategically exit validators or make deposits^{6} in order to gain control of a committee, or a large number of block proposal slots.

It's certainly not an easy attack. Nonetheless, it is easy to defend against, so we might as well do so.

To prevent this, we assume a maximum feasible lookahead that an attacker might achieve, `MAX_SEED_LOOKAHEAD`, and delay all activations and exits by this amount. This allows time for new randomness to come in via block proposals from honest validators, making irrelevant any manipulation by the entering or exiting validators. With `MAX_SEED_LOOKAHEAD` set to 4, if only 10% of validators are online and honest, then the chance that an attacker can succeed in forecasting the seed beyond (`MAX_SEED_LOOKAHEAD` - `MIN_SEED_LOOKAHEAD`) = 3 epochs is $0.9^{3\times 32}$, which is about 1 in 25,000.
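As a quick numerical check of that figure: for the attacker's forecast to survive, all $3 \times 32 = 96$ proposal slots must fall to non-honest (or offline) validators, each with probability 0.9.

```python
# Probability that no honest proposal lands in 3 epochs of 32 slots
p = 0.9 ** (3 * 32)

# About 1 in 25,000
assert 1 / 30000 < p < 1 / 20000
```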

##### Single Secret Leader Election

As currently implemented, both the minimum and maximum lookaheads smell a little of engineering hackery. In a perfect design only the block proposer would know ahead of time that it has been chosen to propose in that slot. Once its block is revealed then the rest of the network would be able to verify that, yes, this was indeed the chosen proposer. This feature is called Single Secret Leader Election. We do not yet have it in the Ethereum protocol, and I shall write about it elsewhere. Meanwhile, some good progress is being made towards making it practical.

#### RANDAO biasability

The RANDAO value for an epoch is set at the end of the previous epoch, and duty assignments for the entire epoch (proposals and committee memberships) depend on that value. (Actually - due to `MIN_SEED_LOOKAHEAD` - on the RANDAO value at the end of the last-but-one epoch, but we'll overlook that in what follows.)

Thus, when a validator happens to be assigned to propose a block in the last slot of an epoch, it gains a small amount of control over the assignments for the next epoch. This is because it can choose to reveal its block, which mixes in its RANDAO reveal, or it can choose (at a cost) to withhold its block and keep the existing RANDAO value, knowing that there will be no subsequent RANDAO change before duties are calculated. In this way, a validator is able to exert a little influence over the proposer and committee assignments in the next epoch. This is called "one bit of influence" over the RANDAO as the validator has a choice of two outcomes.

If an attacker gets a string of proposals at the end of an epoch then it has more power. Having $k$ consecutive proposals at the end of an epoch gives the attacker $2^k$ choices for the ultimate value of the RANDAO that will be used to compute future validator duties. In this scenario the attacker has "$k$ bits of influence" over the RANDAO.
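To make the $2^k$ choices concrete, here is a sketch of enumerating the attacker's options, under the simplifying assumptions that each tail slot is held by a distinct validator and that withholding one block does not prevent the later tail blocks from being proposed:

```python
from hashlib import sha256
from itertools import product

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def candidate_randaos(randao: bytes, reveals: list) -> set:
    # Each publish/withhold pattern over the k tail blocks yields a
    # candidate final RANDAO value for the attacker to choose between
    outcomes = set()
    for pattern in product([False, True], repeat=len(reveals)):
        acc = randao
        for publish, reveal in zip(pattern, reveals):
            if publish:
                acc = xor_bytes(acc, sha256(reveal).digest())
        outcomes.add(acc)
    return outcomes

reveals = [b"V0", b"V1", b"V2"]  # hypothetical reveals for a tail of k = 3
options = candidate_randaos(bytes(32), reveals)
assert len(options) == 2 ** len(reveals)  # 8 distinct outcomes
```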

#### Biasability analyses

This section is fully optional. I got a bit carried away with the maths; it's fine to skip to the next section.

To make discussion of RANDAO biasability more concrete I shall try to quantify what it means in practice with a couple of examples. In each case the entity "cheating" or "attacking" has control over a proportion of the stake $r$, either directly or through some sort of collusion, and we will assume that the remaining validators are all acting independently and correctly. We will also assume, of course, that individual `randao_reveal`s are uniformly random.

In the first example, I will try to gain control of the RANDAO by permanently acquiring proposals in the last slots of an epoch. In the second example I will try to improve my expected number of block proposals by biasing the RANDAO when I get the opportunity to do so. In both cases I will be selectively making and withholding proposals having computed the best outcome: a process of "grinding" the RANDAO.

These examples are intended only as illustrations. They are not academic studies, and there are lots of loose ends. It's very likely I've messed something up: probability is *hard*. I'd be very interested if anyone wanted to make them more rigorous and complete. Some related work, more simulation based, was previously done by Runtime Verification.

##### RANDAO takeover

If I control a proportion $r$ of the total stake, how much can I boost my influence over the protocol by manipulating the RANDAO?

The ability to influence the RANDAO depends on controlling a consecutive string of block proposals at the end of an epoch. We shall call this property "having a tail", and the tail will have a length $k$ from 0 to a maximum of 32, an entire epoch.

Our question can be framed like this: if I have a tail of length $k$ in one epoch, what is my expected length of tail in the next epoch? With a tail of length $k$ I have $2^k$ opportunities to reshuffle the RANDAO by selectively making or withholding block proposals. Can I grind through the possibilities to increase my tail length next time, and eventually take over the whole epoch?

In the absence of any manipulation, my probability of having a tail of length exactly $k$ in any given epoch is $(1-r)r^k$ for $k < 32$, and $r^{32}$ when $k = 32$. This is the chance that I make $k$ proposals in the tail positions preceded by a proposal that I did not make.

So the expected tail length for someone controlling a proportion $r$ of the stake is,

$$E(r) = \sum_{k=0}^{31} k(1-r)r^k + 32r^{32}$$
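As a numerical sketch (a hypothetical helper, built directly from the tail-length probabilities given above):

```python
def expected_tail(r: float, n: int = 32) -> float:
    # E(r): sum over k of k * P(tail length == k), with no RANDAO manipulation
    return sum(k * (1 - r) * r**k for k in range(n)) + n * r**n

# With a third of the stake, the expected unmanipulated tail is about half a slot
assert 0.49 < expected_tail(1 / 3) < 0.51
```

For modest $r$ this is very close to $r/(1-r)$, the limit of the geometric sum.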

Now we will calculate $E^{(k)}(r)$, the expected length of tail I can achieve in the next epoch by using my previous tail of length $k$ to grind the options.

Consider the case where I have a tail of length $k = 1$ in some epoch. This gives me two options: I can publish my RANDAO contribution, or I can withhold my RANDAO contribution (by withholding my block). My strategy is to choose the longest tail for the next epoch that I can gain via either of these options.

The probability, $p^{(1)}_j$, of gaining a tail of exactly length $j$ as a result of having a tail of length 1 is,

$$p^{(1)}_j = q_j^2 + 2q_j\sum_{i=0}^{j-1}q_i$$

where $q_i = (1-r)r^i$ for $i < 32$, and $q_{32} = r^{32}$, is the probability of a tail of length exactly $i$.

We can think about this as follows. With $k = 1$ we get two attempts, therefore $q$ appears twice in each product. To calculate $p^{(1)}_j$ we need the sum over all the combinations of the probability of getting a tail of length exactly $j$ (that is, $q_j$) multiplied by the probability of getting a tail of $j$ or less (that is, not getting a tail longer than $j$, otherwise we would have chosen that length instead of $j$).

Visually, calculating $p^{(1)}_2$ looks like the sum of the values in the shaded area of the next diagram.

This example with tail length $k = 1$ results in a two-dimensional square since we have two possibilities to try. One way to calculate $p^{(1)}_j$ is to take the difference between the sum of all the products in the square side $j + 1$ and the sum of all the products in the square side $j$.

Thinking of it like this helps us to generalise to the cases when $k > 1$. In those cases we are dealing with a hyper-cube of dimension $2^k$; each element is the product of $2^k$ values of $q$. To calculate $p^{(k)}_j$ we can find the difference between the sum of all the products in the $2^k$-dimensional cube side $j + 1$ and the sum of all the products in the $2^k$-dimensional cube side $j$. This is tedious to write down and involves a mind-boggling number of calculations even for quite small $k$, but see my example code for an efficient way to calculate it.
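The $k = 1$ case can be cross-checked numerically with the difference-of-squares view (hypothetical helpers, using the tail probabilities $q_j$ from above):

```python
def q(r: float, j: int, n: int = 32) -> float:
    # Probability of a tail of length exactly j with no manipulation
    return (1 - r) * r**j if j < n else r**j

def p1(r: float, j: int, n: int = 32) -> float:
    # P(best of two independent tails == j) = F(j)^2 - F(j-1)^2,
    # where F is the cumulative distribution of the tail length
    F = lambda m: sum(q(r, i, n) for i in range(m + 1))
    return F(j) ** 2 - F(j - 1) ** 2

# The p1 values form a probability distribution over j = 0..32
assert abs(sum(p1(0.3, j) for j in range(33)) - 1.0) < 1e-9
```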

Now, finally, we can calculate the expected tail length in the next epoch given that we have a tail of length $k$ in this epoch.

$$E^{(k)}(r) = \sum_{j=0}^{32} j\,p^{(k)}_j$$

Graphing this for various values of $k$ we get the following. Note that the solid, $k = 0$, line is the same as $E(r)$ above - the expected tail with no manipulation. That is, $E^{(0)}(r) = E(r)$ as you'd expect.

We see that, if I end up with any length of tail in an epoch, I can always grind my RANDAO contributions to improve my expected length of tail in the next epoch when compared with not grinding the RANDAO. And the longer the tail I have, the better the tail I can expect to have in the next epoch. These results are not surprising.

The important question is, under what circumstances can I use this ability in order to indefinitely increase my expected tail length, so that I can eventually gain full control of the RANDAO?

To investigate this, consider the following graph. Here, for each $k$ line we have plotted $E^{(k)}(r) - k$. This allows us to see whether our expected tail in the next epoch is greater or less than our current tail. If $E^{(k)}(r) - k$ is negative then I can expect to have fewer proposals in the next epoch than I have in this one.

We can see that for $r$ less than around 0.5, especially as $k$ grows, we expect our tail length to shrink rather than grow, despite our best RANDAO grinding efforts. However, for $r$ greater than 0.5, we expect our tail length to grow as a result of our grinding, whatever tail length we start with.

For completeness, we shouldn't only look at expectations, but also at probabilities. The following graph shows the probability that if I have a tail of length $k$ then I will have a tail of length less than $k$ in the next epoch. As $k$ increases you can see that a step function is forming: for a proportion of stake less than about 50% it becomes practically certain that my tail will decrease in length from one epoch to the next despite my best efforts to grow it; conversely, for a proportion of stake greater than a little over 50% it becomes practically certain that I can maintain or grow my tail of block proposals.

###### Discussion of RANDAO takeover

What can we conclude from this? If I control less than about half the stake, then I cannot expect to be able to climb the ladder of increasing tail length: with high probability the length of tail I have will decrease rather than increase. Whereas, if I have more than half the stake, my expected length of tail increases each epoch, so I am likely to be able to eventually take over the RANDAO completely. With high enough $r$, the $2^k$ options I have for grinding the RANDAO overwhelm the probability of losing tail proposals. For large values of $k$ it will not be practical to grind through all these options. However, we need to arrive at only one good combination in order to succeed, so we might not need to do the full calculation.

The good news is that, if attackers control more than half the stake, they have more interesting attacks available, such as taking over the LMD fork choice rule. So we generally assume in the protocol that any attacker has less than half the stake, in which case the RANDAO takeover attack appears to be infeasible.

As a final observation, we have ignored cases where two or more of the tail proposals are from the same validator. As discussed above, these proposals would each result in the same RANDAO contribution and reduce my grinding options. However, with a large number of validators in the system this is a reasonable approximation to make.

###### Code for calculating the length of tail with cheating

Here is the code for generating the data for the graphs above. The length of tail goes up to $k = 12$. Feel free to increase that, although it gets quite compute intensive. Twelve is enough to see the general picture.

```
N = 32  # number of slots per epoch

def prob_tail_eq(r, k):
    return (1 - r) * r**k if k < N else r**k

# The sum of the products of all the q_i in the hypercube of side j and dim k
# Recursive is cooler, but written iteratively so that python doesn't run out of stack
def hyper(q, j, k):
    h = 1
    for n in range(1, k + 1):
        h = sum([q[i] * h for i in range(j)])
    return h

# Smoke test.
assert abs(hyper([0.9, 0.09, 0.009, 0.0009, 0.00009, 0.00001], 6, 32) - 1.0) < 1e-9
```