Part 3: Annotated Specification

Beacon Chain State Transition Function

Epoch processing

def process_epoch(state: BeaconState) -> None:
    process_justification_and_finalization(state)  # [Modified in Altair]
    process_inactivity_updates(state)  # [New in Altair]
    process_rewards_and_penalties(state)  # [Modified in Altair]
    process_registry_updates(state)
    process_slashings(state)  # [Modified in Altair]
    process_eth1_data_reset(state)
    process_effective_balance_updates(state)
    process_slashings_reset(state)
    process_randao_mixes_reset(state)
    process_historical_summaries_update(state)  # [Modified in Capella]
    process_participation_flag_updates(state)  # [New in Altair]
    process_sync_committee_updates(state)  # [New in Altair]

The long laundry list of things that need to be done at the end of an epoch. You can see from the comments that a bunch of extra work was added in the Altair upgrade.

Used by process_slots()
Uses All the things below

Justification and finalization

def process_justification_and_finalization(state: BeaconState) -> None:
    # Initial FFG checkpoint values have a `0x00` stub for `root`.
    # Skip FFG updates in the first two epochs to avoid corner cases that might result in modifying this stub.
    if get_current_epoch(state) <= GENESIS_EPOCH + 1:
        return
    previous_indices = get_unslashed_participating_indices(state, TIMELY_TARGET_FLAG_INDEX, get_previous_epoch(state))
    current_indices = get_unslashed_participating_indices(state, TIMELY_TARGET_FLAG_INDEX, get_current_epoch(state))
    total_active_balance = get_total_active_balance(state)
    previous_target_balance = get_total_balance(state, previous_indices)
    current_target_balance = get_total_balance(state, current_indices)
    weigh_justification_and_finalization(state, total_active_balance, previous_target_balance, current_target_balance)

I believe the corner cases mentioned in the comments are related to Issue 8491. In any case, skipping justification and finalisation calculations during the first two epochs definitely simplifies things.

For the purposes of the Casper FFG finality calculations, we want attestations that have both source and target votes we agree with. If the source vote is incorrect, then the attestation is never processed into the state, so we just need the validators that voted for the correct target, according to their participation flag indices.

Since correct target votes can be included up to 32 slots after they are made, we collect votes from both the previous epoch and the current epoch to ensure that we have them all.

Once we know which validators voted for the correct source and target in the current and previous epochs, we add up their effective balances (not actual balances). total_active_balance is the sum of the effective balances for all validators that ought to have voted during the current epoch. Slashed, but not yet exited, validators are not included in these calculations.

These aggregate balances are passed to weigh_justification_and_finalization() to do the actual work of updating justification and finalisation.

Used by process_epoch(), compute_pulled_up_tip()
Uses get_unslashed_participating_indices(), get_total_active_balance(), get_total_balance(), weigh_justification_and_finalization()
See also participation flag indices

def weigh_justification_and_finalization(state: BeaconState,
                                         total_active_balance: Gwei,
                                         previous_epoch_target_balance: Gwei,
                                         current_epoch_target_balance: Gwei) -> None:
    previous_epoch = get_previous_epoch(state)
    current_epoch = get_current_epoch(state)
    old_previous_justified_checkpoint = state.previous_justified_checkpoint
    old_current_justified_checkpoint = state.current_justified_checkpoint

    # Process justifications
    state.previous_justified_checkpoint = state.current_justified_checkpoint
    state.justification_bits[1:] = state.justification_bits[:JUSTIFICATION_BITS_LENGTH - 1]
    state.justification_bits[0] = 0b0
    if previous_epoch_target_balance * 3 >= total_active_balance * 2:
        state.current_justified_checkpoint = Checkpoint(epoch=previous_epoch,
                                                        root=get_block_root(state, previous_epoch))
        state.justification_bits[1] = 0b1
    if current_epoch_target_balance * 3 >= total_active_balance * 2:
        state.current_justified_checkpoint = Checkpoint(epoch=current_epoch,
                                                        root=get_block_root(state, current_epoch))
        state.justification_bits[0] = 0b1

    # Process finalizations
    bits = state.justification_bits
    # The 2nd/3rd/4th most recent epochs are justified, the 2nd using the 4th as source
    if all(bits[1:4]) and old_previous_justified_checkpoint.epoch + 3 == current_epoch:
        state.finalized_checkpoint = old_previous_justified_checkpoint
    # The 2nd/3rd most recent epochs are justified, the 2nd using the 3rd as source
    if all(bits[1:3]) and old_previous_justified_checkpoint.epoch + 2 == current_epoch:
        state.finalized_checkpoint = old_previous_justified_checkpoint
    # The 1st/2nd/3rd most recent epochs are justified, the 1st using the 3rd as source
    if all(bits[0:3]) and old_current_justified_checkpoint.epoch + 2 == current_epoch:
        state.finalized_checkpoint = old_current_justified_checkpoint
    # The 1st/2nd most recent epochs are justified, the 1st using the 2nd as source
    if all(bits[0:2]) and old_current_justified_checkpoint.epoch + 1 == current_epoch:
        state.finalized_checkpoint = old_current_justified_checkpoint

This routine handles justification first, and then finalisation.

Justification

A supermajority link is a vote with a justified source checkpoint $C_m$ and a target checkpoint $C_n$ that was made by validators controlling more than two-thirds of the stake. If a checkpoint has a supermajority link pointing to it then we consider it justified. So, if more than two-thirds of the validators agree that checkpoint 3 was justified (their source vote) and have checkpoint 4 as their target vote, then we justify checkpoint 4.

We know that all the attestations have source votes that we agree with. The first if statement tries to justify the previous epoch's checkpoint by checking whether the attestations with that (source, target) pair amount to a supermajority link. The second if statement does the same for the current epoch's checkpoint. Note that the previous epoch's checkpoint might already have been justified; this is not checked but does not affect the logic.

The justification status of the last four epochs is stored in an array of bits in the state. After shifting the bits along by one at the outset of the routine, the justification status of the current epoch is stored in element 0, the previous in element 1, and so on.

Note that the total_active_balance is the current epoch's total balance, so it may not be strictly correct for calculating the supermajority for the previous epoch. However, the rate at which the validator set can change between epochs is tightly constrained, so this is not a significant issue.

Finalisation

The version of Casper FFG described in the Gasper paper uses $k$-finality, which extends the handling of finality in the original Casper FFG paper. See the k-finality section in the chapter on Consensus for more on how it interacts with the safety guarantees of Casper FFG.

In $k$-finality, if we have a consecutive set of $k$ justified checkpoints $\{C_j, \ldots, C_{j+k-1}\}$, and a supermajority link from $C_j$ to $C_{j+k}$, then $C_j$ is finalised. Also note that this justifies $C_{j+k}$, by the rules above.

The Casper FFG version of this is $1$-finality. So, a supermajority link from a justified checkpoint $C_n$ to the very next checkpoint $C_{n+1}$ both justifies $C_{n+1}$ and finalises $C_n$.

On the beacon chain we are using $2$-finality, since target votes may be included up to an epoch late. In $2$-finality, we keep records of checkpoint justification status for four epochs and have the following conditions for finalisation, where the checkpoint for the current epoch is $C_n$. Note that we have already updated the justification status of $C_n$ and $C_{n-1}$ in this routine, which implies the existence of supermajority links pointing to them if the corresponding bits are set, respectively.

  1. Checkpoints $C_{n-3}$ and $C_{n-2}$ are justified, and there is a supermajority link from $C_{n-3}$ to $C_{n-1}$: finalise $C_{n-3}$.
  2. Checkpoint $C_{n-2}$ is justified, and there is a supermajority link from $C_{n-2}$ to $C_{n-1}$: finalise $C_{n-2}$. This is equivalent to $1$-finality applied to the previous epoch.
  3. Checkpoints $C_{n-2}$ and $C_{n-1}$ are justified, and there is a supermajority link from $C_{n-2}$ to $C_n$: finalise $C_{n-2}$.
  4. Checkpoint $C_{n-1}$ is justified, and there is a supermajority link from $C_{n-1}$ to $C_n$: finalise $C_{n-1}$. This is equivalent to $1$-finality applied to the current epoch.

A diagram of the four 2-finality scenarios.

The four cases of 2-finality. In each case the supermajority link causes the checkpoint at its start (the source) to become finalised and the checkpoint at its end (the target) to become justified. Checkpoint numbers are along the bottom.

Almost always we would expect to see only the $1$-finality cases, in particular case 4. The $2$-finality cases would occur only in situations where many attestations are delayed, or when we are very close to the two-thirds participation threshold. Note that these evaluations stack, so it is possible for rule 2 to finalise $C_{n-2}$ and then for rule 4 to immediately finalise $C_{n-1}$, for example.

For the uninitiated, in Python's array slice syntax, bits[1:4] means bits 1, 2, and 3 (but not 4). This always trips me up.
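To make the bit-shifting and slicing above concrete, here is a toy version of the update using a plain Python list in place of the SSZ Bitvector (illustration only, not spec code):

# Toy model of the justification bits update, with a plain Python list
# standing in for the SSZ Bitvector[JUSTIFICATION_BITS_LENGTH].
JUSTIFICATION_BITS_LENGTH = 4

bits = [1, 1, 0, 1]  # hypothetical starting state

# Shift everything along by one, discarding the oldest epoch's bit...
bits[1:] = bits[:JUSTIFICATION_BITS_LENGTH - 1]
# ...and clear the slot for the current epoch, to be set later if it gets justified.
bits[0] = 0

assert bits == [0, 1, 1, 0]
# bits[1:4] covers elements 1, 2, and 3 only: the three epochs before the current one.
assert bits[1:4] == [1, 1, 0]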

Used by process_justification_and_finalization()
Uses get_block_root()
See also JUSTIFICATION_BITS_LENGTH, Checkpoint

Inactivity scores

def process_inactivity_updates(state: BeaconState) -> None:
    # Skip the genesis epoch as score updates are based on the previous epoch participation
    if get_current_epoch(state) == GENESIS_EPOCH:
        return

    for index in get_eligible_validator_indices(state):
        # Increase the inactivity score of inactive validators
        if index in get_unslashed_participating_indices(state, TIMELY_TARGET_FLAG_INDEX, get_previous_epoch(state)):
            state.inactivity_scores[index] -= min(1, state.inactivity_scores[index])
        else:
            state.inactivity_scores[index] += INACTIVITY_SCORE_BIAS
        # Decrease the inactivity score of all eligible validators during a leak-free epoch
        if not is_in_inactivity_leak(state):
            state.inactivity_scores[index] -= min(INACTIVITY_SCORE_RECOVERY_RATE, state.inactivity_scores[index])

Since the Altair upgrade, each validator has an individual inactivity score in the beacon state which is updated as follows.

  • At the end of epoch $N$, irrespective of whether the chain is in an inactivity leak:
    • decrease the score by one (to a floor of zero) for each validator that made a correct and timely target vote in epoch $N-1$;
    • increase the score by INACTIVITY_SCORE_BIAS for each eligible validator that did not.
  • When not in an inactivity leak, additionally decrease every eligible validator's score by INACTIVITY_SCORE_RECOVERY_RATE (again, to a floor of zero).

Flowchart showing how inactivity score updates are calculated.

How each validator's inactivity score is updated. The happy flow is right through the middle. "Active", when updating the scores at the end of epoch $N$, means having made a correct and timely target vote in epoch $N-1$.

There is a floor of zero on the score. So, outside a leak, validators' scores will rapidly return to zero and stay there, since INACTIVITY_SCORE_RECOVERY_RATE is greater than INACTIVITY_SCORE_BIAS.
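The following sketch models these dynamics for a single validator, assuming the mainnet configuration values INACTIVITY_SCORE_BIAS = 4 and INACTIVITY_SCORE_RECOVERY_RATE = 16 (illustration only, not spec code):

INACTIVITY_SCORE_BIAS = 4
INACTIVITY_SCORE_RECOVERY_RATE = 16

def next_score(score: int, participated: bool, in_leak: bool) -> int:
    """Apply one epoch's inactivity score update to a single validator."""
    if participated:
        score -= min(1, score)
    else:
        score += INACTIVITY_SCORE_BIAS
    if not in_leak:
        score -= min(INACTIVITY_SCORE_RECOVERY_RATE, score)
    return score

score = 0
# Ten epochs offline during a leak: the score climbs by 4 per epoch.
for _ in range(10):
    score = next_score(score, participated=False, in_leak=True)
assert score == 40
# Once the leak ends and the validator participates again, the score
# falls by up to 17 per epoch and quickly returns to zero.
for _ in range(3):
    score = next_score(score, participated=True, in_leak=False)
assert score == 0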

Used by process_epoch()
Uses get_eligible_validator_indices(), get_unslashed_participating_indices(), is_in_inactivity_leak()
See also INACTIVITY_SCORE_BIAS, INACTIVITY_SCORE_RECOVERY_RATE

Reward and penalty calculations

Without wanting to go full Yellow Paper on you, I am going to adopt a little notation to help analyse the rewards.

We will define a base reward $B$ that we will see turns out to be the expected long-run average income of an optimally performing validator per epoch (ignoring validator set size changes). The total number of active validators is $N$.

The base reward is calculated from a base reward per increment, $b$. An "increment" is a unit of effective balance in terms of EFFECTIVE_BALANCE_INCREMENT. Since MAX_EFFECTIVE_BALANCE = 32 * EFFECTIVE_BALANCE_INCREMENT, we have $B = 32b$.

Other quantities we will use in rewards calculation are the incentivization weights: $W_s$, $W_t$, $W_h$, and $W_y$ being the weights for correct source, target, head, and sync committee votes respectively; $W_p$ being the proposer weight; and the weight denominator $W_{\Sigma}$ which is the sum of the weights.

Issuance for regular rewards happens in four ways:

  • $I_A$ is the maximum total reward for all validators attesting in an epoch;
  • $I_{A_P}$ is the maximum reward issued to proposers in an epoch for including attestations;
  • $I_S$ is the maximum total reward for all sync committee participants in an epoch; and
  • $I_{S_P}$ is the maximum reward issued to proposers in an epoch for including sync aggregates.

Under get_flag_index_deltas(), process_attestation(), and process_sync_aggregate() we find that these work out as follows in terms of $B$ and $N$:

$$\begin{aligned}
I_A &= \frac{W_s + W_t + W_h}{W_{\Sigma}}NB \\
I_{A_P} &= \frac{W_p}{W_{\Sigma} - W_p}I_A \\
I_S &= \frac{W_y}{W_{\Sigma}}NB \\
I_{S_P} &= \frac{W_p}{W_{\Sigma} - W_p}I_S
\end{aligned}$$

To find the total optimal issuance per epoch, we can first sum $I_A$ and $I_S$,

$$I_A + I_S = \frac{W_s + W_t + W_h + W_y}{W_{\Sigma}}NB = \frac{W_{\Sigma} - W_p}{W_{\Sigma}}NB$$

Now adding in the proposer rewards,

$$I_A + I_S + I_{A_P} + I_{S_P} = \frac{W_{\Sigma} - W_p}{W_{\Sigma}}\left(1 + \frac{W_p}{W_{\Sigma} - W_p}\right)NB = \left(\frac{W_{\Sigma} - W_p}{W_{\Sigma}} + \frac{W_p}{W_{\Sigma}}\right)NB = NB$$

So, we see that every epoch, $NB$ Gwei is awarded to $N$ validators. Every validator participates in attesting, and proposing and sync committee duties are uniformly random, so the long-term expected income per optimally performing validator per epoch is $B$ Gwei.
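As a quick sanity check of this result, the following snippet re-derives it numerically from the Altair weight constants (re-declared here rather than imported from the spec; the validator count is an arbitrary example):

from fractions import Fraction

# Altair incentivization weights (out of WEIGHT_DENOMINATOR = 64).
TIMELY_SOURCE_WEIGHT = 14  # W_s
TIMELY_TARGET_WEIGHT = 26  # W_t
TIMELY_HEAD_WEIGHT = 14    # W_h
SYNC_REWARD_WEIGHT = 2     # W_y
PROPOSER_WEIGHT = 8        # W_p
WEIGHT_DENOMINATOR = 64    # W_sigma

N, B = 500_000, Fraction(1)  # illustrative validator count; base reward of 1 unit

I_A = Fraction(TIMELY_SOURCE_WEIGHT + TIMELY_TARGET_WEIGHT + TIMELY_HEAD_WEIGHT,
               WEIGHT_DENOMINATOR) * N * B
I_S = Fraction(SYNC_REWARD_WEIGHT, WEIGHT_DENOMINATOR) * N * B
I_AP = Fraction(PROPOSER_WEIGHT, WEIGHT_DENOMINATOR - PROPOSER_WEIGHT) * I_A
I_SP = Fraction(PROPOSER_WEIGHT, WEIGHT_DENOMINATOR - PROPOSER_WEIGHT) * I_S

# Attestation, sync committee, and proposer rewards together sum to exactly N * B per epoch.
assert I_A + I_S + I_AP + I_SP == N * B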

Helpers

def get_base_reward_per_increment(state: BeaconState) -> Gwei:
    return Gwei(EFFECTIVE_BALANCE_INCREMENT * BASE_REWARD_FACTOR // integer_squareroot(get_total_active_balance(state)))

The base reward per increment is the fundamental unit of reward in terms of which all other regular rewards and penalties are calculated. We will denote the base reward per increment $b$.

As I noted under BASE_REWARD_FACTOR, this is the big knob to turn if we wish to increase or decrease the total reward for participating in Eth2, otherwise known as the issuance rate of new Ether.

An increment is a single unit of a validator's effective balance, denominated in terms of EFFECTIVE_BALANCE_INCREMENT, which happens to be one Ether. So, an increment is 1 Ether of effective balance, and a maximally effective validator has 32 increments.

The base reward per increment is inversely proportional to the square root of the total balance of all active validators. This means that, as the number $N$ of validators increases, the reward per validator decreases as $\frac{1}{\sqrt{N}}$, and the overall issuance per epoch increases as $\sqrt{N}$.

The decrease with increasing $N$ in per-validator rewards provides a price discovery mechanism: the idea is that an equilibrium will be found where the total number of validators results in a reward similar to returns available elsewhere for similar risk. A different curve could have been chosen for the rewards profile. For example, the inverse of total balance rather than its square root would keep total issuance constant. The section on Issuance has a deeper exploration of these topics.
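To get a feel for the numbers, here is the same calculation sketched outside the spec for a hypothetical total active balance of 10 million ETH, using the real constants BASE_REWARD_FACTOR = 64 and EFFECTIVE_BALANCE_INCREMENT = 10^9 Gwei:

from math import isqrt

EFFECTIVE_BALANCE_INCREMENT = 10**9  # Gwei (1 ETH)
BASE_REWARD_FACTOR = 64

# Hypothetical total active balance of 10 million ETH, in Gwei.
total_active_balance = 10_000_000 * 10**9

# Base reward per increment, as computed by get_base_reward_per_increment().
b = EFFECTIVE_BALANCE_INCREMENT * BASE_REWARD_FACTOR // isqrt(total_active_balance)
assert b == 640  # Gwei per increment per epoch

# A validator with the full 32 ETH effective balance has 32 increments,
# so its base reward B is 32 * b.
B = 32 * b
assert B == 20480  # Gwei per epoch, roughly 0.00002 ETH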

Used by get_base_reward(), process_sync_aggregate()
Uses integer_squareroot(), get_total_active_balance()

def get_base_reward(state: BeaconState, index: ValidatorIndex) -> Gwei:
    """
    Return the base reward for the validator defined by ``index`` with respect to the current ``state``.
    """
    increments = state.validators[index].effective_balance // EFFECTIVE_BALANCE_INCREMENT
    return Gwei(increments * get_base_reward_per_increment(state))

The base reward is the reward that an optimally performing validator can expect to earn on average per epoch, over the long term. It is proportional to the validator's effective balance; a validator with MAX_EFFECTIVE_BALANCE can expect to receive the full base reward $B = 32b$ per epoch on a long-term average.

Used by get_flag_index_deltas(), process_attestation()
Uses get_base_reward_per_increment()
See also EFFECTIVE_BALANCE_INCREMENT

def get_finality_delay(state: BeaconState) -> uint64:
    return get_previous_epoch(state) - state.finalized_checkpoint.epoch

Returns the number of epochs since the last finalised checkpoint (minus one). In ideal running this ought to be zero: during epoch processing we aim to have justified the checkpoint in the current epoch and finalised the checkpoint in the previous epoch. A delay in finalisation suggests a chain split or a large fraction of validators going offline.

Used by is_in_inactivity_leak()

def is_in_inactivity_leak(state: BeaconState) -> bool:
    return get_finality_delay(state) > MIN_EPOCHS_TO_INACTIVITY_PENALTY

If the beacon chain has not managed to finalise a checkpoint for MIN_EPOCHS_TO_INACTIVITY_PENALTY epochs (that is, four epochs), then the chain enters the inactivity leak. In this mode, penalties for non-participation are heavily increased, with the goal of reducing the proportion of stake controlled by non-participants, and eventually regaining finality.

Used by get_flag_index_deltas(), process_inactivity_updates()
Uses get_finality_delay()
See also inactivity leak, MIN_EPOCHS_TO_INACTIVITY_PENALTY

def get_eligible_validator_indices(state: BeaconState) -> Sequence[ValidatorIndex]:
    previous_epoch = get_previous_epoch(state)
    return [
        ValidatorIndex(index) for index, v in enumerate(state.validators)
        if is_active_validator(v, previous_epoch) or (v.slashed and previous_epoch + 1 < v.withdrawable_epoch)
    ]

These are the validators that were subject to rewards and penalties in the previous epoch.

The list differs from the active validator set returned by get_active_validator_indices() by including slashed but not fully exited validators in addition to the ones marked active. Slashed validators are subject to penalties right up to when they become withdrawable and are thus fully exited.

Used by get_flag_index_deltas(), process_inactivity_updates(), get_inactivity_penalty_deltas()
Uses is_active_validator()

Inactivity penalty deltas

def get_inactivity_penalty_deltas(state: BeaconState) -> Tuple[Sequence[Gwei], Sequence[Gwei]]:
    """
    Return the inactivity penalty deltas by considering timely target participation flags and inactivity scores.
    """
    rewards = [Gwei(0) for _ in range(len(state.validators))]
    penalties = [Gwei(0) for _ in range(len(state.validators))]
    previous_epoch = get_previous_epoch(state)
    matching_target_indices = get_unslashed_participating_indices(state, TIMELY_TARGET_FLAG_INDEX, previous_epoch)
    for index in get_eligible_validator_indices(state):
        if index not in matching_target_indices:
            penalty_numerator = state.validators[index].effective_balance * state.inactivity_scores[index]
            penalty_denominator = INACTIVITY_SCORE_BIAS * INACTIVITY_PENALTY_QUOTIENT_BELLATRIX
            penalties[index] += Gwei(penalty_numerator // penalty_denominator)
    return rewards, penalties

Validators receive penalties proportional to their individual inactivity scores, even when the beacon chain is not in an inactivity leak. However, these scores reduce to zero fairly rapidly outside a leak. This is a change from Phase 0 in which inactivity penalties were applied only during leaks.

All unslashed validators that made a correct and timely target vote in the previous epoch are identified by get_unslashed_participating_indices(), and all other eligible validators receive a penalty, including slashed validators.

The penalty is proportional to the validator's effective balance and its inactivity score. See INACTIVITY_PENALTY_QUOTIENT_BELLATRIX for more details of the calculation, and INACTIVITY_SCORE_RECOVERY_RATE for some charts of how the penalties accrue.
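As a rough worked example, the following applies the penalty formula to a 32 ETH validator with a hypothetical inactivity score of 100, assuming the mainnet values INACTIVITY_SCORE_BIAS = 4 and INACTIVITY_PENALTY_QUOTIENT_BELLATRIX = 2**24:

INACTIVITY_SCORE_BIAS = 4
INACTIVITY_PENALTY_QUOTIENT_BELLATRIX = 2**24

effective_balance = 32 * 10**9   # 32 ETH in Gwei
inactivity_score = 100           # hypothetical score, e.g. after ~25 epochs offline in a leak

penalty_numerator = effective_balance * inactivity_score
penalty_denominator = INACTIVITY_SCORE_BIAS * INACTIVITY_PENALTY_QUOTIENT_BELLATRIX
penalty = penalty_numerator // penalty_denominator

assert penalty == 47683  # Gwei per epoch, about 0.0000477 ETH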

The returned rewards array always contains only zeros. It's here just to make the Python syntax simpler in the calling routine.

Used by process_rewards_and_penalties()
Uses get_unslashed_participating_indices(), get_eligible_validator_indices()
See also Inactivity Scores, INACTIVITY_PENALTY_QUOTIENT_BELLATRIX, INACTIVITY_SCORE_RECOVERY_RATE

Process rewards and penalties

def process_rewards_and_penalties(state: BeaconState) -> None:
    # No rewards are applied at the end of `GENESIS_EPOCH` because rewards are for work done in the previous epoch
    if get_current_epoch(state) == GENESIS_EPOCH:
        return

    flag_deltas = [get_flag_index_deltas(state, flag_index) for flag_index in range(len(PARTICIPATION_FLAG_WEIGHTS))]
    deltas = flag_deltas + [get_inactivity_penalty_deltas(state)]
    for (rewards, penalties) in deltas:
        for index in range(len(state.validators)):
            increase_balance(state, ValidatorIndex(index), rewards[index])
            decrease_balance(state, ValidatorIndex(index), penalties[index])

This is where validators are rewarded and penalised according to their attestation records.

Attestations included in beacon blocks were processed by process_attestation as blocks were received, and flags were set in the beacon state according to their timeliness and correctness. These flags are now processed into rewards and penalties for each validator by calling get_flag_index_deltas() for each of the flag types.

Once the normal attestation rewards and penalties have been calculated, additional penalties based on validators' inactivity scores are accumulated.

As noted elsewhere, rewards and penalties are handled separately from each other since we don't do negative numbers.

For reference, the only other places where rewards and penalties are applied are as follows:

  • proposer rewards for including attestations, applied during block processing in process_attestation();
  • sync committee participant rewards and penalties, and the proposer reward for including the sync aggregate, applied in process_sync_aggregate();
  • the initial slashing penalty and the proposer and whistleblower rewards, applied in slash_validator(); and
  • the correlated slashing penalty, applied in process_slashings().

Used by process_epoch()
Uses get_flag_index_deltas(), get_inactivity_penalty_deltas(), increase_balance(), decrease_balance()
See also ParticipationFlags, PARTICIPATION_FLAG_WEIGHTS

Registry updates

def process_registry_updates(state: BeaconState) -> None:
    # Process activation eligibility and ejections
    for index, validator in enumerate(state.validators):
        if is_eligible_for_activation_queue(validator):
            validator.activation_eligibility_epoch = get_current_epoch(state) + 1

        if (
            is_active_validator(validator, get_current_epoch(state))
            and validator.effective_balance <= EJECTION_BALANCE
        ):
            initiate_validator_exit(state, ValidatorIndex(index))

    # Queue validators eligible for activation and not yet dequeued for activation
    activation_queue = sorted([
        index for index, validator in enumerate(state.validators)
        if is_eligible_for_activation(state, validator)
        # Order by the sequence of activation_eligibility_epoch setting and then index
    ], key=lambda index: (state.validators[index].activation_eligibility_epoch, index))
    # Dequeued validators for activation up to churn limit
    for index in activation_queue[:get_validator_churn_limit(state)]:
        validator = state.validators[index]
        validator.activation_epoch = compute_activation_exit_epoch(get_current_epoch(state))

The Registry is the part of the beacon state that stores Validator records. These particular updates are, for the most part, concerned with moving validators through the activation queue.

is_eligible_for_activation_queue() finds validators that have deposited enough to reach the maximum effective balance, but whose activation_eligibility_epoch is still set to FAR_FUTURE_EPOCH. These will be at most the validators for which deposits were processed during the last epoch, potentially up to MAX_DEPOSITS * SLOTS_PER_EPOCH, which is 512 (fewer if some deposits are partial and don't yet add up to the full 32 ETH). These have their activation_eligibility_epoch set to the next epoch. They will become eligible for activation once that epoch is finalised – "eligible for activation" means only that they can be added to the activation queue; they will not become active until they reach the end of the queue.

Next, any active validators whose effective balance has fallen to EJECTION_BALANCE or below have their exit initiated.

is_eligible_for_activation() selects validators whose activation_eligibility_epoch has just been finalised. The list of these is ordered by eligibility epoch, and then by index. There might be multiple eligibility epochs in the list if finalisation got delayed for some reason.

Finally, the first get_validator_churn_limit() validators in the list get their activation epochs set to compute_activation_exit_epoch().

On first sight, you'd think that the activation epochs of the whole queue could be set here, rather than just a single epoch's worth. But at some point, get_validator_churn_limit() will change unpredictably (we don't know when validators will exit), which makes that infeasible. Though, curiously, that is exactly what initiate_validator_exit() does. Anyway, clients could optimise this by persisting the sorted activation queue rather than recalculating it.
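For a sense of how many validators get dequeued each epoch, here is a sketch of the churn limit calculation, assuming the mainnet values MIN_PER_EPOCH_CHURN_LIMIT = 4 and CHURN_LIMIT_QUOTIENT = 2**16 that applied at the time of writing:

MIN_PER_EPOCH_CHURN_LIMIT = 4
CHURN_LIMIT_QUOTIENT = 2**16

def churn_limit(active_validator_count: int) -> int:
    """Validators that may be activated (or exited) per epoch, as in get_validator_churn_limit()."""
    return max(MIN_PER_EPOCH_CHURN_LIMIT, active_validator_count // CHURN_LIMIT_QUOTIENT)

assert churn_limit(100_000) == 4   # small validator set: the floor of 4 applies
assert churn_limit(500_000) == 7   # ~500k validators: 7 activations per epoch, 1575 per day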

Used by process_epoch()
Uses is_eligible_for_activation_queue(), is_active_validator(), initiate_validator_exit(), is_eligible_for_activation(), get_validator_churn_limit(), compute_activation_exit_epoch()
See also Validator, EJECTION_BALANCE

Slashings

def process_slashings(state: BeaconState) -> None:
    epoch = get_current_epoch(state)
    total_balance = get_total_active_balance(state)
    adjusted_total_slashing_balance = min(
        sum(state.slashings) * PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX,
        total_balance
    )
    for index, validator in enumerate(state.validators):
        if validator.slashed and epoch + EPOCHS_PER_SLASHINGS_VECTOR // 2 == validator.withdrawable_epoch:
            increment = EFFECTIVE_BALANCE_INCREMENT  # Factored out from penalty numerator to avoid uint64 overflow
            penalty_numerator = validator.effective_balance // increment * adjusted_total_slashing_balance
            penalty = penalty_numerator // total_balance * increment
            decrease_balance(state, ValidatorIndex(index), penalty)

Slashing penalties are applied in two stages: the first stage is in slash_validator(), immediately on detection; the second stage is here.

In slash_validator() the withdrawable epoch is set EPOCHS_PER_SLASHINGS_VECTOR in the future, so in this function we are considering all slashed validators that are halfway to being withdrawable, that is, completely exited from the protocol. Equivalently, they were slashed EPOCHS_PER_SLASHINGS_VECTOR // 2 epochs ago (about 18 days).

To calculate the additional slashing penalty, we do the following:

  1. Find the sum of the effective balances (at the time of the slashing) of all validators that were slashed in the previous EPOCHS_PER_SLASHINGS_VECTOR epochs (36 days). These are stored as a vector in the state.
  2. Multiply this sum by PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX, but cap the result at total_balance, the total active balance of all validators.
  3. For each slashed validator being considered, multiply its effective balance by the result of #2 and then divide by the total_balance. This results in an amount between zero and the full effective balance of the validator. That amount is subtracted from its actual balance as the penalty. Note that the effective balance could exceed the actual balance in odd corner cases, but decrease_balance() ensures the balance does not go negative.

If only a single validator were slashed within the 36 days, then this secondary penalty is tiny (actually zero, see below). If one-third of validators are slashed (the minimum required to finalise conflicting blocks), then, with PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX set to three, a successful chain attack will result in the attackers losing their entire effective balances.

Interestingly, due to the way the integer arithmetic is constructed in this routine, in particular the factoring out of increment, the result of this calculation will be zero if validator.effective_balance * adjusted_total_slashing_balance is less than total_balance * EFFECTIVE_BALANCE_INCREMENT. Effectively, the penalty is rounded down to the nearest whole amount of Ether. Issues 1322 and 2161 discuss this. In the end, the consequence is that when there are few slashings there is no extra correlated slashing penalty at all, which is probably a good thing.
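The following sketch (not spec code) reproduces this arithmetic with illustrative numbers, assuming a hypothetical 10 million ETH of total active balance, and shows both the rounding-to-zero case and the large-scale-slashing case:

EFFECTIVE_BALANCE_INCREMENT = 10**9  # Gwei
PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX = 3

total_balance = 10_000_000 * 10**9   # hypothetical total active stake, in Gwei
effective_balance = 32 * 10**9       # the slashed validator's effective balance

def correlated_penalty(slashed_sum: int) -> int:
    """The process_slashings() arithmetic for one validator, given the sum of
    effective balances slashed over the last EPOCHS_PER_SLASHINGS_VECTOR epochs."""
    adjusted = min(slashed_sum * PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX, total_balance)
    increment = EFFECTIVE_BALANCE_INCREMENT
    penalty_numerator = effective_balance // increment * adjusted
    return penalty_numerator // total_balance * increment

# A lone 32 ETH slashing within the window: the penalty rounds down to zero whole ETH.
assert correlated_penalty(32 * 10**9) == 0

# Slightly over a third of the stake slashed: the adjusted slashing balance hits the
# total_balance cap and the validator's entire 32 ETH effective balance is lost.
assert correlated_penalty(total_balance // 3 + 1) == 32 * 10**9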

Used by process_epoch()
Uses get_total_active_balance(), decrease_balance()
See also slash_validator(), EPOCHS_PER_SLASHINGS_VECTOR, PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX

Eth1 data votes updates

def process_eth1_data_reset(state: BeaconState) -> None:
    next_epoch = Epoch(get_current_epoch(state) + 1)
    # Reset eth1 data votes
    if next_epoch % EPOCHS_PER_ETH1_VOTING_PERIOD == 0:
        state.eth1_data_votes = []

There is a fixed period during which beacon block proposers vote on their view of the Eth1 deposit contract and try to come to a simple majority agreement. At the end of the period, the record of votes is cleared and voting begins again, whether or not agreement was reached during the period.

Used by process_epoch()
See also EPOCHS_PER_ETH1_VOTING_PERIOD, Eth1Data

Effective balances updates

def process_effective_balance_updates(state: BeaconState) -> None:
    # Update effective balances with hysteresis
    for index, validator in enumerate(state.validators):
        balance = state.balances[index]
        HYSTERESIS_INCREMENT = uint64(EFFECTIVE_BALANCE_INCREMENT // HYSTERESIS_QUOTIENT)
        DOWNWARD_THRESHOLD = HYSTERESIS_INCREMENT * HYSTERESIS_DOWNWARD_MULTIPLIER
        UPWARD_THRESHOLD = HYSTERESIS_INCREMENT * HYSTERESIS_UPWARD_MULTIPLIER
        if (
            balance + DOWNWARD_THRESHOLD < validator.effective_balance
            or validator.effective_balance + UPWARD_THRESHOLD < balance
        ):
            validator.effective_balance = min(balance - balance % EFFECTIVE_BALANCE_INCREMENT, MAX_EFFECTIVE_BALANCE)

Each validator's balance is represented twice in the state: once accurately in a list separate from validator records, and once in a coarse-grained format within the validator's record. Only effective balances are used in calculations within the spec, but rewards and penalties are applied to actual balances. This routine is where effective balances are updated once per epoch to follow the actual balances.

A hysteresis mechanism is used when calculating the effective balance of a validator when its actual balance changes. See Hysteresis Parameters for more discussion of this, and the values of the related constants. With the current values, a validator's effective balance drops to X ETH when its actual balance drops below X.75 ETH, and increases to Y ETH when its actual balance rises above Y.25 ETH. The hysteresis mechanism ensures that effective balances change infrequently, which means that the list of validator records needs to be re-hashed only infrequently when calculating the state root, saving considerably on work.
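Here is a sketch of those thresholds in action (the hysteresis constants are the real mainnet values; the balances are arbitrary examples):

EFFECTIVE_BALANCE_INCREMENT = 10**9  # Gwei (1 ETH)
MAX_EFFECTIVE_BALANCE = 32 * 10**9
HYSTERESIS_QUOTIENT = 4
HYSTERESIS_DOWNWARD_MULTIPLIER = 1
HYSTERESIS_UPWARD_MULTIPLIER = 5

def new_effective_balance(balance: int, effective_balance: int) -> int:
    """Apply the hysteresis rule from process_effective_balance_updates()."""
    hysteresis_increment = EFFECTIVE_BALANCE_INCREMENT // HYSTERESIS_QUOTIENT  # 0.25 ETH
    downward = hysteresis_increment * HYSTERESIS_DOWNWARD_MULTIPLIER           # 0.25 ETH
    upward = hysteresis_increment * HYSTERESIS_UPWARD_MULTIPLIER               # 1.25 ETH
    if balance + downward < effective_balance or effective_balance + upward < balance:
        return min(balance - balance % EFFECTIVE_BALANCE_INCREMENT, MAX_EFFECTIVE_BALANCE)
    return effective_balance

eth = 10**9
# Dropping to 31.76 ETH leaves the effective balance at 32; 31.74 ETH triggers a drop to 31.
assert new_effective_balance(31_760_000_000, 32 * eth) == 32 * eth
assert new_effective_balance(31_740_000_000, 32 * eth) == 31 * eth
# Rising to 32.24 ETH leaves it at 31; 32.26 ETH restores it to the 32 ETH maximum.
assert new_effective_balance(32_240_000_000, 31 * eth) == 31 * eth
assert new_effective_balance(32_260_000_000, 31 * eth) == 32 * eth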

Used by process_epoch()
See also Hysteresis Parameters

Slashings balances updates

def process_slashings_reset(state: BeaconState) -> None:
    next_epoch = Epoch(get_current_epoch(state) + 1)
    # Reset slashings
    state.slashings[next_epoch % EPOCHS_PER_SLASHINGS_VECTOR] = Gwei(0)

state.slashings is a circular list of length EPOCHS_PER_SLASHINGS_VECTOR that contains the total of the effective balances of all validators that have been slashed at each epoch. These are used to apply a correlated slashing penalty to slashed validators before they are exited. Each epoch we overwrite the oldest entry with zero, and it becomes the current entry.

Used by process_epoch()
See also process_slashings(), EPOCHS_PER_SLASHINGS_VECTOR

Randao mixes updates

def process_randao_mixes_reset(state: BeaconState) -> None:
    current_epoch = get_current_epoch(state)
    next_epoch = Epoch(current_epoch + 1)
    # Set randao mix
    state.randao_mixes[next_epoch % EPOCHS_PER_HISTORICAL_VECTOR] = get_randao_mix(state, current_epoch)

state.randao_mixes is a circular list of length EPOCHS_PER_HISTORICAL_VECTOR. The current value of the RANDAO, which is updated with every block that arrives, is stored at position state.randao_mixes[current_epoch % EPOCHS_PER_HISTORICAL_VECTOR], as per get_randao_mix().

At the end of every epoch, the final value of the RANDAO for the current epoch is copied over to become the starting value of the RANDAO mix for the next epoch, preserving the remaining entries as historical values.

Used by process_epoch()
Uses get_randao_mix()
See also process_randao(), EPOCHS_PER_HISTORICAL_VECTOR

Historical summaries updates

def process_historical_summaries_update(state: BeaconState) -> None:
    # Set historical block root accumulator.
    next_epoch = Epoch(get_current_epoch(state) + 1)
    if next_epoch % (SLOTS_PER_HISTORICAL_ROOT // SLOTS_PER_EPOCH) == 0:
        historical_summary = HistoricalSummary(
            block_summary_root=hash_tree_root(state.block_roots),
            state_summary_root=hash_tree_root(state.state_roots),
        )
        state.historical_summaries.append(historical_summary)

This routine replaced process_historical_roots_update() at the Capella upgrade.

Previously, both the state.block_roots and state.state_roots lists were Merkleized together into a single root before being added to the state.historical_roots double batched accumulator. Now they are separately Merkleized and appended to state.historical_summaries via the HistoricalSummary container. The Capella upgrade changed this to make it possible to validate past block history without having to know the state history.

The summary is appended to the list every SLOTS_PER_HISTORICAL_ROOT slots. At 64 bytes per summary, the list will grow at the rate of 20 KB per year. The corresponding block and state root lists in the beacon state are circular and just get overwritten in the next period.
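A back-of-the-envelope check of that growth rate, assuming 12-second slots:

SLOTS_PER_HISTORICAL_ROOT = 8192
SECONDS_PER_SLOT = 12
SUMMARY_SIZE = 64  # bytes: two 32-byte roots per HistoricalSummary

seconds_per_year = 365.25 * 24 * 3600
summaries_per_year = seconds_per_year / (SLOTS_PER_HISTORICAL_ROOT * SECONDS_PER_SLOT)
# Roughly 321 summaries and about 20 KiB appended per year.
print(f"{summaries_per_year:.0f} summaries/year, {summaries_per_year * SUMMARY_SIZE / 1024:.1f} KiB/year")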

The process_historical_roots_update() function that this replaces remains documented in the Bellatrix edition.

Used by process_epoch()
See also HistoricalSummary, SLOTS_PER_HISTORICAL_ROOT

Participation flags updates

def process_participation_flag_updates(state: BeaconState) -> None:
    state.previous_epoch_participation = state.current_epoch_participation
    state.current_epoch_participation = [ParticipationFlags(0b0000_0000) for _ in range(len(state.validators))]

Two epochs' worth of validator participation flags (that record validators' attestation activity) are stored. At the end of every epoch the current becomes the previous, and a new empty list becomes current.

Used by process_epoch()
See also ParticipationFlags

Sync committee updates

def process_sync_committee_updates(state: BeaconState) -> None:
    next_epoch = get_current_epoch(state) + Epoch(1)
    if next_epoch % EPOCHS_PER_SYNC_COMMITTEE_PERIOD == 0:
        state.current_sync_committee = state.next_sync_committee
        state.next_sync_committee = get_next_sync_committee(state)

Sync committees are rotated every EPOCHS_PER_SYNC_COMMITTEE_PERIOD. The next sync committee is ready and waiting so that validators can prepare in advance by subscribing to the necessary subnets. That becomes the current sync committee, and the next is calculated.

Used by process_epoch()
Uses get_next_sync_committee()
See also EPOCHS_PER_SYNC_COMMITTEE_PERIOD


Created by Ben Edgington. Licensed under CC BY-SA 4.0. Published 2023-09-29 14:16 UTC. Commit ebfcf50.