Part 3: Annotated Specification
Helper Functions
Beacon State Accessors
As the name suggests, these functions access the beacon state to calculate various useful things, without modifying it.
get_current_epoch
def get_current_epoch(state: BeaconState) -> Epoch:
"""
Return the current epoch.
"""
return compute_epoch_at_slot(state.slot)
A getter for the current epoch, as calculated by compute_epoch_at_slot().
Used by | Everywhere |
Uses | compute_epoch_at_slot() |
get_previous_epoch
def get_previous_epoch(state: BeaconState) -> Epoch:
"""`
Return the previous epoch (unless the current epoch is ``GENESIS_EPOCH``).
"""
current_epoch = get_current_epoch(state)
return GENESIS_EPOCH if current_epoch == GENESIS_EPOCH else Epoch(current_epoch - 1)
Return the previous epoch number as an Epoch type. Returns GENESIS_EPOCH if we are in the genesis epoch, since it has no prior, and we don't do negative numbers.
Used by | Everywhere |
Uses | get_current_epoch() |
See also | GENESIS_EPOCH |
get_block_root
def get_block_root(state: BeaconState, epoch: Epoch) -> Root:
"""
Return the block root at the start of a recent ``epoch``.
"""
return get_block_root_at_slot(state, compute_start_slot_at_epoch(epoch))
The Casper FFG part of consensus deals in Checkpoints that are the first slot of an epoch. get_block_root is a specialised version of get_block_root_at_slot() that returns the block root of the checkpoint, given only an epoch.
Used by | get_attestation_participation_flag_indices() , weigh_justification_and_finalization() |
Uses | get_block_root_at_slot() , compute_start_slot_at_epoch() |
See also | Root |
get_block_root_at_slot
def get_block_root_at_slot(state: BeaconState, slot: Slot) -> Root:
"""
Return the block root at a recent ``slot``.
"""
assert slot < state.slot <= slot + SLOTS_PER_HISTORICAL_ROOT
return state.block_roots[slot % SLOTS_PER_HISTORICAL_ROOT]
Recent block roots are stored in a circular list in state, with a length of SLOTS_PER_HISTORICAL_ROOT slots (currently ~27 hours).
get_block_root_at_slot() is used by get_attestation_participation_flag_indices() to check whether an attestation has voted for the correct chain head. It is also used in process_sync_aggregate() to find the block that the sync committee is signing off on.
Used by | get_block_root() , get_attestation_participation_flag_indices() , process_sync_aggregate() |
See also | SLOTS_PER_HISTORICAL_ROOT , Root |
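The circular-buffer behaviour and the bounds check can be illustrated with a minimal standalone sketch (not the spec's actual types; the buffer here just records the slot each root was written at):

```python
SLOTS_PER_HISTORICAL_ROOT = 8192  # mainnet preset: 8192 slots, ~27 hours

def get_block_root_at_slot(block_roots, state_slot, slot):
    # Only roots from the most recent SLOTS_PER_HISTORICAL_ROOT slots are
    # available, and the current slot's root is not yet stored.
    assert slot < state_slot <= slot + SLOTS_PER_HISTORICAL_ROOT
    return block_roots[slot % SLOTS_PER_HISTORICAL_ROOT]

# Simulate 20000 slots of writes into the circular list.
roots = [None] * SLOTS_PER_HISTORICAL_ROOT
for s in range(20000):
    roots[s % SLOTS_PER_HISTORICAL_ROOT] = s

state_slot = 20000
print(get_block_root_at_slot(roots, state_slot, 19999))         # 19999
print(get_block_root_at_slot(roots, state_slot, 20000 - 8192))  # 11808, the oldest available
```

Asking for a slot older than state_slot - SLOTS_PER_HISTORICAL_ROOT trips the assert rather than silently returning a root that has since been overwritten.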
get_randao_mix
def get_randao_mix(state: BeaconState, epoch: Epoch) -> Bytes32:
"""
Return the randao mix at a recent ``epoch``.
"""
return state.randao_mixes[epoch % EPOCHS_PER_HISTORICAL_VECTOR]
RANDAO mixes are stored in a circular list of length EPOCHS_PER_HISTORICAL_VECTOR. They are used when calculating the seed for assigning beacon proposers and committees.
The RANDAO mix for the current epoch is updated on a block-by-block basis as new RANDAO reveals come in. The mixes for previous epochs are the frozen RANDAO values at the end of the epoch.
Used by | get_seed() , process_randao_mixes_reset() , process_randao() |
See also | EPOCHS_PER_HISTORICAL_VECTOR |
get_active_validator_indices
def get_active_validator_indices(state: BeaconState, epoch: Epoch) -> Sequence[ValidatorIndex]:
"""
Return the sequence of active validator indices at ``epoch``.
"""
return [ValidatorIndex(i) for i, v in enumerate(state.validators) if is_active_validator(v, epoch)]
Steps through the entire list of validators and returns the list of only the active ones. That is, the list of validators that have been activated but not exited, as determined by is_active_validator().
This function is heavily used, and I'd expect it to be memoised in practice.
Used by | Many places |
Uses | is_active_validator() |
get_validator_churn_limit
def get_validator_churn_limit(state: BeaconState) -> uint64:
"""
Return the validator churn limit for the current epoch.
"""
active_validator_indices = get_active_validator_indices(state, get_current_epoch(state))
return max(MIN_PER_EPOCH_CHURN_LIMIT, uint64(len(active_validator_indices)) // CHURN_LIMIT_QUOTIENT)
The "churn limit" applies when activating and exiting validators and acts as a rate-limit on changes to the validator set. The value returned by this function provides the number of validators that may become active in an epoch, and the number of validators that may exit in an epoch.
Some small amount of churn is always allowed, set by MIN_PER_EPOCH_CHURN_LIMIT, and the amount of per-epoch churn allowed increases by one for every extra CHURN_LIMIT_QUOTIENT validators that are currently active (once the minimum has been exceeded).
In concrete terms, with 500,000 validators, this means that up to seven validators can enter or exit the active validator set each epoch (1,575 per day). At 524,288 active validators the limit will rise to eight per epoch (1,800 per day).
Used by | initiate_validator_exit() , process_registry_updates() |
Uses | get_active_validator_indices() |
See also | MIN_PER_EPOCH_CHURN_LIMIT , CHURN_LIMIT_QUOTIENT |
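The numbers in the text can be checked with a short sketch, using the mainnet values MIN_PER_EPOCH_CHURN_LIMIT = 4 and CHURN_LIMIT_QUOTIENT = 65536 (225 is the number of epochs per day):

```python
MIN_PER_EPOCH_CHURN_LIMIT = 4   # mainnet value at Altair
CHURN_LIMIT_QUOTIENT = 65536    # mainnet value

def churn_limit(active_validator_count):
    # Validators that may enter (or exit) the active set per epoch.
    return max(MIN_PER_EPOCH_CHURN_LIMIT,
               active_validator_count // CHURN_LIMIT_QUOTIENT)

for n in (100_000, 500_000, 524_288):
    per_epoch = churn_limit(n)
    print(n, per_epoch, per_epoch * 225)  # per epoch, and per day (225 epochs)
```

This reproduces the figures above: seven per epoch (1,575 per day) at 500,000 validators, rising to eight per epoch (1,800 per day) at 524,288.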
get_seed
def get_seed(state: BeaconState, epoch: Epoch, domain_type: DomainType) -> Bytes32:
"""
Return the seed at ``epoch``.
"""
mix = get_randao_mix(state, Epoch(epoch + EPOCHS_PER_HISTORICAL_VECTOR - MIN_SEED_LOOKAHEAD - 1)) # Avoid underflow
return hash(domain_type + uint_to_bytes(epoch) + mix)
Used in get_beacon_committee(), get_beacon_proposer_index(), and get_next_sync_committee_indices() to provide the randomness for computing proposers and committees. domain_type is DOMAIN_BEACON_ATTESTER, DOMAIN_BEACON_PROPOSER, and DOMAIN_SYNC_COMMITTEE respectively.
RANDAO mixes are stored in a circular list of length EPOCHS_PER_HISTORICAL_VECTOR. The seed for an epoch is based on the RANDAO mix from MIN_SEED_LOOKAHEAD epochs ago. This is to limit the forward visibility of randomness: see the explanation there.
The seed returned is not based only on the domain and the randao mix, but the epoch number is also mixed in. This is to handle the pathological case of no blocks being seen for more than two epochs, in which case we run out of randao updates. That could lock in forever a non-participating set of block proposers. Mixing in the epoch number means that fresh committees and proposers can continue to be selected.
Used by | get_beacon_committee() , get_beacon_proposer_index() , get_next_sync_committee_indices() |
Uses | get_randao_mix() |
See also | EPOCHS_PER_HISTORICAL_VECTOR , MIN_SEED_LOOKAHEAD |
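The index arithmetic in get_seed() looks odd at first sight; a small sketch (using the mainnet values EPOCHS_PER_HISTORICAL_VECTOR = 65536 and MIN_SEED_LOOKAHEAD = 1) shows that the added term exists only to keep the uint64 arithmetic from underflowing:

```python
EPOCHS_PER_HISTORICAL_VECTOR = 65536  # mainnet value
MIN_SEED_LOOKAHEAD = 1                # mainnet value

def randao_mix_index(epoch):
    # The "+ EPOCHS_PER_HISTORICAL_VECTOR" term avoids uint64 underflow at
    # small epoch numbers; it vanishes under the modulo that get_randao_mix()
    # applies to its epoch argument.
    return (epoch + EPOCHS_PER_HISTORICAL_VECTOR - MIN_SEED_LOOKAHEAD - 1) \
        % EPOCHS_PER_HISTORICAL_VECTOR

print(randao_mix_index(100))  # 98: the mix frozen at the end of epoch 98
print(randao_mix_index(0))    # 65534: wraps around rather than underflowing
```

So the seed for epoch N is built from the RANDAO mix of epoch N - MIN_SEED_LOOKAHEAD - 1, which was frozen at the end of that epoch.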
get_committee_count_per_slot
def get_committee_count_per_slot(state: BeaconState, epoch: Epoch) -> uint64:
"""
Return the number of committees in each slot for the given ``epoch``.
"""
return max(uint64(1), min(
MAX_COMMITTEES_PER_SLOT,
uint64(len(get_active_validator_indices(state, epoch))) // SLOTS_PER_EPOCH // TARGET_COMMITTEE_SIZE,
))
Every slot in a given epoch has the same number of beacon committees, as calculated by this function.
As far as the LMD GHOST consensus protocol is concerned, all the validators attesting in a slot effectively act as a single large committee. However, organising them into multiple committees gives two benefits.
- Having multiple smaller committees reduces the load on the aggregators that collect and aggregate the attestations from committee members. This is important, as validating the signatures and aggregating them takes time. The downside is that blocks need to be larger, as, in the best case, there are up to 64 aggregate attestations to store per block rather than a single large aggregate signature over all attestations.
- It maps well onto the future plans for data shards, when each committee will be responsible for committing to a block on one shard in addition to its current duties.
Since the original Phase 1 sharding design that required these committees has now been abandoned, the second of these points no longer applies.
There is always at least one committee per slot, and never more than MAX_COMMITTEES_PER_SLOT, currently 64.
Subject to these constraints, the actual number of committees per slot is ⌊n / SLOTS_PER_EPOCH / TARGET_COMMITTEE_SIZE⌋ = ⌊n / 4096⌋, where n is the total number of active validators.
The intended behaviour looks like this:
- The ideal case is that there are MAX_COMMITTEES_PER_SLOT = 64 committees per slot. This maps to one committee per slot per shard once data sharding has been implemented. These committees will be responsible for voting on shard crosslinks. There must be at least 262,144 active validators to achieve this.
- If there are fewer active validators, then the number of committees per slot is reduced below 64 in order to maintain a minimum committee size of TARGET_COMMITTEE_SIZE = 128. In this case, not every shard will get crosslinked at every slot (once sharding is in place).
- Finally, only if the number of active validators falls below 4096 will the committee size be reduced to less than 128. With so few validators, the chain has no meaningful security in any case.
Used by | get_beacon_committee() , process_attestation() |
Uses | get_active_validator_indices() |
See also | MAX_COMMITTEES_PER_SLOT , TARGET_COMMITTEE_SIZE |
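The three regimes described above can be seen directly in a standalone version of the calculation, using the mainnet values for the constants:

```python
MAX_COMMITTEES_PER_SLOT = 64  # mainnet value
SLOTS_PER_EPOCH = 32          # mainnet value
TARGET_COMMITTEE_SIZE = 128   # mainnet value

def committee_count_per_slot(active_validators):
    # Clamp n // 32 // 128 to the range [1, 64].
    return max(1, min(MAX_COMMITTEES_PER_SLOT,
                      active_validators // SLOTS_PER_EPOCH // TARGET_COMMITTEE_SIZE))

print(committee_count_per_slot(500_000))  # 64: above the 262,144 threshold
print(committee_count_per_slot(100_000))  # 24: fewer committees, size kept >= 128
print(committee_count_per_slot(2_000))    # 1: the floor of one committee per slot
```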
get_beacon_committee
def get_beacon_committee(state: BeaconState, slot: Slot, index: CommitteeIndex) -> Sequence[ValidatorIndex]:
"""
Return the beacon committee at ``slot`` for ``index``.
"""
epoch = compute_epoch_at_slot(slot)
committees_per_slot = get_committee_count_per_slot(state, epoch)
return compute_committee(
indices=get_active_validator_indices(state, epoch),
seed=get_seed(state, epoch, DOMAIN_BEACON_ATTESTER),
index=(slot % SLOTS_PER_EPOCH) * committees_per_slot + index,
count=committees_per_slot * SLOTS_PER_EPOCH,
)
Beacon committees vote on the beacon block at each slot via attestations. There are up to MAX_COMMITTEES_PER_SLOT beacon committees per slot, and each committee is active exactly once per epoch.
This function returns the list of committee members given a slot number and an index within that slot to select the desired committee, relying on compute_committee() to do the heavy lifting.
Note that, since this uses get_seed(), we can obtain committees only up to EPOCHS_PER_HISTORICAL_VECTOR epochs into the past (minus MIN_SEED_LOOKAHEAD).
get_beacon_committee is used by get_attesting_indices() and process_attestation() when processing attestations coming from a committee, and by validators when checking their committee assignments and aggregation duties.
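The index and count arguments handed to compute_committee() flatten the pair (slot within epoch, committee index within slot) into a single index over all committees in the epoch. A small sketch of just that mapping:

```python
SLOTS_PER_EPOCH = 32  # mainnet value

def global_committee_index(slot, index, committees_per_slot):
    # get_beacon_committee() passes this as compute_committee()'s `index`,
    # alongside count = committees_per_slot * SLOTS_PER_EPOCH.
    return (slot % SLOTS_PER_EPOCH) * committees_per_slot + index

committees_per_slot = 64
print(global_committee_index(0, 0, committees_per_slot))    # 0: first committee of the epoch
print(global_committee_index(1, 0, committees_per_slot))    # 64: first committee of slot 1
print(global_committee_index(31, 63, committees_per_slot))  # 2047: the last of 32 * 64 committees
```

compute_committee() then takes the corresponding contiguous slice of the shuffled active validator list.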
get_beacon_proposer_index
def get_beacon_proposer_index(state: BeaconState) -> ValidatorIndex:
"""
Return the beacon proposer index at the current slot.
"""
epoch = get_current_epoch(state)
seed = hash(get_seed(state, epoch, DOMAIN_BEACON_PROPOSER) + uint_to_bytes(state.slot))
indices = get_active_validator_indices(state, epoch)
return compute_proposer_index(state, indices, seed)
Each slot, exactly one of the active validators is randomly chosen to be the proposer of the beacon block for that slot. The probability of being selected is weighted by the validator's effective balance in compute_proposer_index().
The chosen block proposer does not need to be a member of one of the beacon committees for that slot: it is chosen from the entire set of active validators for that epoch.
The RANDAO seed returned by get_seed() is updated once per epoch. The slot number is mixed into the seed using a hash to allow us to choose a different proposer at each slot. This also protects us in the case that there is an entire epoch of empty blocks. If that were to happen, the RANDAO would not be updated, but we would still be able to select a different set of proposers for the next epoch via this slot number mix-in process.
There is a chance of the same proposer being selected in two consecutive slots, or more than once per epoch. If every validator has the same effective balance, then the probability of being selected in a particular slot is simply 1/N, independent of any other slot, where N is the number of active validators in the epoch corresponding to the slot.
Currently, neither get_beacon_proposer_index() nor compute_proposer_index() filters out slashed validators. This could result in a slashed validator, prior to its exit, being selected to propose a block. Its block would, however, be invalid due to the check in process_block_header(). A fix for this has been proposed so as to avoid many missed slots (slots with invalid blocks) in the event of a mass slashing.
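The per-slot mix-in can be sketched with the spec's hash function (SHA-256) and an arbitrary illustrative epoch seed; uint_to_bytes serialises the slot as eight little-endian bytes:

```python
import hashlib

def slot_seed(epoch_seed: bytes, slot: int) -> bytes:
    # Per-slot proposer seed: the epoch-level RANDAO seed with the slot
    # number (eight little-endian bytes, like uint_to_bytes) hashed in.
    return hashlib.sha256(epoch_seed + slot.to_bytes(8, "little")).digest()

epoch_seed = b"\x22" * 32  # arbitrary illustrative value
# Different slots give unrelated seeds, so proposer selection differs each
# slot even when the RANDAO mix is unchanged for a whole epoch.
print(slot_seed(epoch_seed, 100) != slot_seed(epoch_seed, 101))  # True
```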
get_total_balance
def get_total_balance(state: BeaconState, indices: Set[ValidatorIndex]) -> Gwei:
"""
Return the combined effective balance of the ``indices``.
``EFFECTIVE_BALANCE_INCREMENT`` Gwei minimum to avoid divisions by zero.
Math safe up to ~10B ETH, after which this overflows uint64.
"""
return Gwei(max(EFFECTIVE_BALANCE_INCREMENT, sum([state.validators[index].effective_balance for index in indices])))
A simple utility that returns the total balance of all validators in the list, indices, passed in.
As an aside, there is an interesting example of some fragility in the spec lurking here. This function used to return a minimum of 1 Gwei to avoid a potential division by zero in the calculation of rewards and penalties. However, the rewards calculation was modified to avoid a possible integer overflow condition without modifying this function, which re-introduced the possibility of a division by zero. This was later fixed by returning a minimum of EFFECTIVE_BALANCE_INCREMENT. The formal verification of the specification is helpful in avoiding issues like this.
Used by | get_total_active_balance() , get_flag_index_deltas() , process_justification_and_finalization() |
See also | EFFECTIVE_BALANCE_INCREMENT |
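A stripped-down sketch of the floor behaviour, with EFFECTIVE_BALANCE_INCREMENT at its mainnet value of 1 ETH (10^9 Gwei):

```python
EFFECTIVE_BALANCE_INCREMENT = 10**9  # 1 ETH in Gwei (mainnet value)

def total_balance(effective_balances):
    # Floored at EFFECTIVE_BALANCE_INCREMENT so later divisions are safe.
    return max(EFFECTIVE_BALANCE_INCREMENT, sum(effective_balances))

print(total_balance([32 * 10**9, 31 * 10**9]))  # 63000000000
print(total_balance([]))                        # 1000000000: never zero
```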
get_total_active_balance
def get_total_active_balance(state: BeaconState) -> Gwei:
"""
Return the combined effective balance of the active validators.
Note: ``get_total_balance`` returns ``EFFECTIVE_BALANCE_INCREMENT`` Gwei minimum to avoid divisions by zero.
"""
return get_total_balance(state, set(get_active_validator_indices(state, get_current_epoch(state))))
Uses get_total_balance() to calculate the sum of the effective balances of all active validators in the current epoch.
This quantity is frequently used in the spec. For example, Casper FFG uses the total active balance to judge whether the 2/3 majority threshold of attestations has been reached in justification and finalisation. And it is a fundamental part of the calculation of rewards and penalties. The base reward is proportional to the reciprocal of the square root of the total active balance. Thus, validator rewards are higher when little balance is at stake (few active validators) and lower when much balance is at stake (many active validators).
Since it is calculated from effective balances, total active balance does not change during an epoch, so is a great candidate for being cached.
get_domain
def get_domain(state: BeaconState, domain_type: DomainType, epoch: Epoch=None) -> Domain:
"""
Return the signature domain (fork version concatenated with domain type) of a message.
"""
epoch = get_current_epoch(state) if epoch is None else epoch
fork_version = state.fork.previous_version if epoch < state.fork.epoch else state.fork.current_version
return compute_domain(domain_type, fork_version, state.genesis_validators_root)
get_domain() pops up whenever signatures need to be verified, since a DomainType is always mixed in to the signed data. For the science behind domains, see Domain types and compute_domain().
Except for DOMAIN_DEPOSIT, domains are always combined with the fork version before being used in signature generation. This is to distinguish messages from different chains, and to ensure that validators don't get slashed if they choose to participate on two independent forks. (That is, deliberate forks, aka hard forks. Participating on both branches of temporary consensus forks is punishable: that's basically the whole point of slashing.)
Note that a message signed under one fork version will be valid during the next fork version, but not thereafter. So, for example, voluntary exit messages signed during Altair will be valid after the Bellatrix beacon chain upgrade, but not after the Capella upgrade. Voluntary exit messages signed during Phase 0 were valid under Altair but were made invalid by the Bellatrix upgrade1.
get_indexed_attestation
def get_indexed_attestation(state: BeaconState, attestation: Attestation) -> IndexedAttestation:
"""
Return the indexed attestation corresponding to ``attestation``.
"""
attesting_indices = get_attesting_indices(state, attestation.data, attestation.aggregation_bits)
return IndexedAttestation(
attesting_indices=sorted(attesting_indices),
data=attestation.data,
signature=attestation.signature,
)
Lists of validators within committees occur in two forms in the specification.
- They can be compressed into a bitlist, in which each bit represents the presence or absence of a validator from a particular committee. The committee is referenced by slot, and committee index within that slot. This is how sets of validators are represented in Attestations.
- Or they can be listed explicitly by their validator indices, as in IndexedAttestations. Note that the list of indices is sorted: an attestation is invalid if not.
get_indexed_attestation() converts from the former representation to the latter. The slot number and the committee index are provided by the AttestationData and are used to reconstruct the committee members via get_beacon_committee(). The supplied bitlist will have come from an Attestation.
Attestations are aggregatable, which means that attestations from multiple validators making the same vote can be rolled up into a single attestation through the magic of BLS signature aggregation. However, in order to be able to verify the signature later, a record needs to be kept of which validators actually contributed to the attestation. This is so that those validators' public keys can be aggregated to match the construction of the signature.
The conversion from the bitlist format to the list format is performed by get_attesting_indices(), below.
Used by | process_attestation() |
Uses | get_attesting_indices() |
See also | Attestation , IndexedAttestation |
get_attesting_indices
def get_attesting_indices(state: BeaconState,
data: AttestationData,
bits: Bitlist[MAX_VALIDATORS_PER_COMMITTEE]) -> Set[ValidatorIndex]:
"""
Return the set of attesting indices corresponding to ``data`` and ``bits``.
"""
committee = get_beacon_committee(state, data.slot, data.index)
return set(index for i, index in enumerate(committee) if bits[i])
As described under get_indexed_attestation(), lists of validators come in two forms. This routine converts from the compressed form, in which validators are represented as a subset of a committee with their presence or absence indicated by a 1 bit or a 0 bit respectively, to an explicit list of ValidatorIndex types.
Used by | get_indexed_attestation() , process_attestation() |
Uses | get_beacon_committee() |
See also | AttestationData , IndexedAttestation |
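The conversion itself is one line; a standalone sketch with a made-up five-member committee (the validator indices here are purely illustrative):

```python
def attesting_indices(committee, bits):
    # committee: validator indices in committee order; bits: one bit per member.
    return set(index for i, index in enumerate(committee) if bits[i])

committee = [513, 77, 4040, 262, 912]  # hypothetical committee membership
bits = [1, 0, 1, 1, 0]                 # first, third, and fourth members attested
print(sorted(attesting_indices(committee, bits)))  # [262, 513, 4040]
```

Note that the committee's order comes from the shuffling, so the resulting indices must be sorted before being placed in an IndexedAttestation.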
get_next_sync_committee_indices
def get_next_sync_committee_indices(state: BeaconState) -> Sequence[ValidatorIndex]:
"""
Return the sync committee indices, with possible duplicates, for the next sync committee.
"""
epoch = Epoch(get_current_epoch(state) + 1)
MAX_RANDOM_BYTE = 2**8 - 1
active_validator_indices = get_active_validator_indices(state, epoch)
active_validator_count = uint64(len(active_validator_indices))
seed = get_seed(state, epoch, DOMAIN_SYNC_COMMITTEE)
i = 0
sync_committee_indices: List[ValidatorIndex] = []
while len(sync_committee_indices) < SYNC_COMMITTEE_SIZE:
shuffled_index = compute_shuffled_index(uint64(i % active_validator_count), active_validator_count, seed)
candidate_index = active_validator_indices[shuffled_index]
random_byte = hash(seed + uint_to_bytes(uint64(i // 32)))[i % 32]
effective_balance = state.validators[candidate_index].effective_balance
if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte:
sync_committee_indices.append(candidate_index)
i += 1
return sync_committee_indices
get_next_sync_committee_indices() is used to select the subset of validators that will make up a sync committee. The committee size is SYNC_COMMITTEE_SIZE, and the committee is allowed to contain duplicates, that is, the same validator more than once. This is to handle gracefully the situation of there being fewer active validators than SYNC_COMMITTEE_SIZE.
Similarly to being chosen to propose a block, the probability of any validator being selected for a sync committee is proportional to its effective balance. Thus, the algorithm is almost the same as that of compute_proposer_index(), except that this one exits only after finding SYNC_COMMITTEE_SIZE members, rather than exiting as soon as a candidate is found. Both routines use the try-and-increment method to weight the probability of selection with the validators' effective balances.
It's fairly clear why block proposers are selected with a probability proportional to their effective balances: block production is subject to slashing, and proposers with less at stake have less to slash, so we reduce their influence accordingly. It is not so clear why the probability of being in a sync committee is also proportional to a validator's effective balance; sync committees are not subject to slashing. It has to do with keeping calculations for light clients simple. We don't want to burden light clients with summing up validators' balances to judge whether a 2/3 supermajority of stake in the committee has voted for a block. Ideally, they can just count the participation flags. To make this somewhat reliable, we weight the probability that a validator participates in proportion to its effective balance.
Used by | get_next_sync_committee() |
Uses | get_active_validator_indices() , get_seed() , compute_shuffled_index() , uint_to_bytes() |
See also | SYNC_COMMITTEE_SIZE , compute_proposer_index() |
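The acceptance test inside the loop is what implements the balance weighting. Isolating it shows that a candidate is kept with probability (approximately) effective_balance / MAX_EFFECTIVE_BALANCE:

```python
MAX_EFFECTIVE_BALANCE = 32 * 10**9  # 32 ETH in Gwei (mainnet value)
MAX_RANDOM_BYTE = 2**8 - 1

def accepted(effective_balance, random_byte):
    # The try-and-increment acceptance test from the committee selection loops.
    return effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte

# A 32 ETH validator is accepted for every possible random byte...
print(all(accepted(32 * 10**9, b) for b in range(256)))  # True
# ...whereas a 16 ETH validator is accepted for 128 of the 256 byte values.
print(sum(accepted(16 * 10**9, b) for b in range(256)))  # 128
```

Rejected candidates are simply skipped and the loop moves on to the next shuffled index, so lower-balance validators appear in committees proportionally less often.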
get_next_sync_committee
Note: The function get_next_sync_committee should only be called at sync committee period boundaries and when upgrading state to Altair.
The random seed that generates the sync committee is based on the number of the next epoch. get_next_sync_committee_indices() doesn't contain any check that the epoch corresponds to a sync-committee change boundary, which allowed the timing of the Altair upgrade to be more flexible. But a consequence is that you will get an incorrect committee if you call get_next_sync_committee() at the wrong time.
def get_next_sync_committee(state: BeaconState) -> SyncCommittee:
"""
Return the next sync committee, with possible pubkey duplicates.
"""
indices = get_next_sync_committee_indices(state)
pubkeys = [state.validators[index].pubkey for index in indices]
aggregate_pubkey = eth_aggregate_pubkeys(pubkeys)
return SyncCommittee(pubkeys=pubkeys, aggregate_pubkey=aggregate_pubkey)
get_next_sync_committee() is a simple wrapper around get_next_sync_committee_indices() that packages everything up into a nice SyncCommittee object.
See the SyncCommittee type for an explanation of how the aggregate_pubkey is intended to be used.
Used by | process_sync_committee_updates() , initialize_beacon_state_from_eth1() |
Uses | get_next_sync_committee_indices() , eth_aggregate_pubkeys() |
See also | SyncCommittee |
get_unslashed_participating_indices
def get_unslashed_participating_indices(state: BeaconState, flag_index: int, epoch: Epoch) -> Set[ValidatorIndex]:
"""
Return the set of validator indices that are both active and unslashed for the given ``flag_index`` and ``epoch``.
"""
assert epoch in (get_previous_epoch(state), get_current_epoch(state))
if epoch == get_current_epoch(state):
epoch_participation = state.current_epoch_participation
else:
epoch_participation = state.previous_epoch_participation
active_validator_indices = get_active_validator_indices(state, epoch)
participating_indices = [i for i in active_validator_indices if has_flag(epoch_participation[i], flag_index)]
return set(filter(lambda index: not state.validators[index].slashed, participating_indices))
get_unslashed_participating_indices() returns the list of validators that made a timely attestation with the type flag_index during the epoch in question.
It is used with the TIMELY_TARGET_FLAG_INDEX flag in process_justification_and_finalization() to calculate the proportion of stake that voted for the candidate checkpoint in the current and previous epochs.
It is also used with the TIMELY_TARGET_FLAG_INDEX flag for applying inactivity penalties in process_inactivity_updates() and get_inactivity_penalty_deltas(). If a validator misses a correct target vote during an inactivity leak then it is considered not to have participated at all (it is not contributing anything useful).
And it is used in get_flag_index_deltas() for calculating rewards due for each type of correct vote.
Slashed validators are ignored. Once slashed, validators no longer receive rewards or participate in consensus, although they are subject to penalties until they have finally been exited.
get_attestation_participation_flag_indices
def get_attestation_participation_flag_indices(state: BeaconState,
data: AttestationData,
inclusion_delay: uint64) -> Sequence[int]:
"""
Return the flag indices that are satisfied by an attestation.
"""
if data.target.epoch == get_current_epoch(state):
justified_checkpoint = state.current_justified_checkpoint
else:
justified_checkpoint = state.previous_justified_checkpoint
# Matching roots
is_matching_source = data.source == justified_checkpoint
is_matching_target = is_matching_source and data.target.root == get_block_root(state, data.target.epoch)
is_matching_head = is_matching_target and data.beacon_block_root == get_block_root_at_slot(state, data.slot)
assert is_matching_source
participation_flag_indices = []
if is_matching_source and inclusion_delay <= integer_squareroot(SLOTS_PER_EPOCH):
participation_flag_indices.append(TIMELY_SOURCE_FLAG_INDEX)
if is_matching_target and inclusion_delay <= SLOTS_PER_EPOCH:
participation_flag_indices.append(TIMELY_TARGET_FLAG_INDEX)
if is_matching_head and inclusion_delay == MIN_ATTESTATION_INCLUSION_DELAY:
participation_flag_indices.append(TIMELY_HEAD_FLAG_INDEX)
return participation_flag_indices
This is called by process_attestation() during block processing, and is the heart of the mechanism for recording validators' votes as contained in their attestations. It filters the given attestation against the beacon state's current view of the chain, and returns participation flag indices only for the votes that are both correct and timely.
data is an AttestationData object that contains the source, target, and head votes of the validators that contributed to the attestation. The attestation may represent the votes of one or more validators.
inclusion_delay is the difference between the current slot on the beacon chain and the slot for which the attestation was created. For the block containing the attestation to be valid, inclusion_delay must be between MIN_ATTESTATION_INCLUSION_DELAY and SLOTS_PER_EPOCH inclusive. In other words, attestations must be included in the next block, or in any block up to 32 slots later, after which they are ignored.
Since the attestation may be up to 32 slots old, it might have been generated in the current epoch or the previous epoch, so the first thing we do is to check the attestation's target vote epoch to see which epoch we should be looking at in the beacon state.
Next, we check whether each of the votes in the attestation is correct:
- Does the attestation's source vote match what we believe to be the justified checkpoint in the epoch in question?
- If so, does the attestation's target vote match the head block at the epoch's checkpoint, that is, the first slot of the epoch?
- If so, does the attestation's head vote match what we believe to be the head block at the attestation's slot? Note that the slot may not contain a block – it may be a skip slot – in which case the last known block is considered to be the head.
These three build on each other, so that it is not possible to have a correct target vote without a correct source vote, and it is not possible to have a correct head vote without a correct target vote.
The assert statement is interesting. If an attestation does not have the correct source vote, the block containing it is invalid and is discarded. Having an incorrect source vote means that the block proposer disagrees with me about the last justified checkpoint, which is an irreconcilable difference.
After checking the validity of the votes, the timeliness of each vote is checked. Let's take them in reverse order.
- Correct head votes must be included immediately, that is, in the very next slot.
  - Head votes, used for LMD GHOST consensus, are not useful after one slot.
- Correct target votes must be included within 32 slots, one epoch.
  - Target votes are useful at any time, but it is simpler if they don't span more than a couple of epochs, so 32 slots is a reasonable limit. This check is actually redundant since attestations in blocks cannot be older than 32 slots.
- Correct source votes must be included within 5 slots (integer_squareroot(32)).
  - This is the geometric mean of 1 (the timely head threshold) and 32 (the timely target threshold). This is an arbitrary choice. Vitalik's view2 is that, with this setting, the cumulative timeliness rewards most closely match an exponentially decreasing curve, which "feels more logical".
The timely inclusion requirements are new in Altair. In Phase 0, all correct votes received a reward, and there was an additional reward for inclusion that was proportional to the reciprocal of the inclusion distance. This led to an oddity where it was always more profitable to vote for a correct head, even if that meant waiting longer and risking not being included in the next slot.
Used by | process_attestation() |
Uses | get_block_root() , get_block_root_at_slot() , integer_squareroot() |
See also | Participation flag indices, AttestationData , MIN_ATTESTATION_INCLUSION_DELAY |
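The three timeliness checks can be condensed into a standalone sketch, using the mainnet values SLOTS_PER_EPOCH = 32 and MIN_ATTESTATION_INCLUSION_DELAY = 1 (and noting that integer_squareroot(32) is 5):

```python
SLOTS_PER_EPOCH = 32               # mainnet value
MIN_ATTESTATION_INCLUSION_DELAY = 1

def timely_flags(inclusion_delay, matching_source, matching_target, matching_head):
    # Mirrors the three checks in get_attestation_participation_flag_indices();
    # integer_squareroot(SLOTS_PER_EPOCH) == 5.
    flags = []
    if matching_source and inclusion_delay <= 5:
        flags.append("TIMELY_SOURCE")
    if matching_target and inclusion_delay <= SLOTS_PER_EPOCH:
        flags.append("TIMELY_TARGET")
    if matching_head and inclusion_delay == MIN_ATTESTATION_INCLUSION_DELAY:
        flags.append("TIMELY_HEAD")
    return flags

print(timely_flags(1, True, True, True))   # all three flags
print(timely_flags(5, True, True, True))   # source and target only
print(timely_flags(20, True, True, True))  # target only
```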
get_flag_index_deltas
def get_flag_index_deltas(state: BeaconState, flag_index: int) -> Tuple[Sequence[Gwei], Sequence[Gwei]]:
"""
Return the deltas for a given ``flag_index`` by scanning through the participation flags.
"""
rewards = [Gwei(0)] * len(state.validators)
penalties = [Gwei(0)] * len(state.validators)
previous_epoch = get_previous_epoch(state)
unslashed_participating_indices = get_unslashed_participating_indices(state, flag_index, previous_epoch)
weight = PARTICIPATION_FLAG_WEIGHTS[flag_index]
unslashed_participating_balance = get_total_balance(state, unslashed_participating_indices)
unslashed_participating_increments = unslashed_participating_balance // EFFECTIVE_BALANCE_INCREMENT
active_increments = get_total_active_balance(state) // EFFECTIVE_BALANCE_INCREMENT
for index in get_eligible_validator_indices(state):
base_reward = get_base_reward(state, index)
if index in unslashed_participating_indices:
if not is_in_inactivity_leak(state):
reward_numerator = base_reward * weight * unslashed_participating_increments
rewards[index] += Gwei(reward_numerator // (active_increments * WEIGHT_DENOMINATOR))
elif flag_index != TIMELY_HEAD_FLAG_INDEX:
penalties[index] += Gwei(base_reward * weight // WEIGHT_DENOMINATOR)
return rewards, penalties
This function is used during epoch processing to assign rewards and penalties to individual validators based on their voting record in the previous epoch. Rewards for block proposers for including attestations are calculated during block processing. The "deltas" in the function name are the separate lists of rewards and penalties returned. Rewards and penalties are always treated separately to avoid negative numbers.
The function is called once for each of the flag types corresponding to correct attestation votes: timely source, timely target, timely head.
The list of validators returned by get_unslashed_participating_indices() contains the ones that will be rewarded for making this vote type in a timely and correct manner. That routine uses the flags set in state for each validator by process_attestation() during block processing and returns the validators for which the corresponding flag is set.
Every active validator is expected to make an attestation exactly once per epoch, so we then cycle through the entire set of active validators, rewarding them if they appear in unslashed_participating_indices, as long as we are not in an inactivity leak. If we are in a leak, no validator is rewarded for any of its votes, but penalties still apply to non-participating validators.
Notice that the reward is weighted with unslashed_participating_increments, which is proportional to the total stake of the validators that made a correct vote with this flag. This means that, if participation by other validators is lower, then my rewards are lower, even if I perform my duties perfectly. The reason for this is to do with discouragement attacks (see also this nice explainer3). In short, with this mechanism, validators are incentivised to help each other out (e.g. by forwarding gossip messages, or aggregating attestations well) rather than to attack or censor one another.
Validators that did not make a correct and timely vote are penalised with a full weighted base reward for each flag that they missed, except for missing the head vote. Head votes have only a single slot to get included, so a missing block in the next slot is sufficient to cause a miss, but is completely outside the attester's control. Thus, head votes are only rewarded, not penalised. This also allows perfectly performing validators to break even during an inactivity leak, when we expect at least a third of blocks to be missing: they receive no rewards, but ideally no penalties either.
Untangling the arithmetic, the maximum total issuance due to rewards for attesters in an epoch, $I_A$, comes out as follows, in the notation described later.

$$I_A = \frac{W_s + W_t + W_h}{W_{\Sigma}}NB$$
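The per-flag reward line can be checked with a worked example. TIMELY_TARGET_WEIGHT = 26 and WEIGHT_DENOMINATOR = 64 are the Altair constants; the base reward of 16,000 Gwei and the validator set of 500,000 at 32 ETH are purely illustrative figures:

```python
WEIGHT_DENOMINATOR = 64   # Altair value
TIMELY_TARGET_WEIGHT = 26 # Altair weight for the target flag

def flag_reward(base_reward, weight, participating_increments, active_increments):
    # The reward line from get_flag_index_deltas(), outside an inactivity leak.
    return base_reward * weight * participating_increments \
        // (active_increments * WEIGHT_DENOMINATOR)

base_reward = 16_000       # Gwei; an assumed illustrative per-epoch base reward
active = 500_000 * 32      # increments: 500k validators at 32 ETH each
# With 100% participation the full weighted share of the base reward is paid...
print(flag_reward(base_reward, TIMELY_TARGET_WEIGHT, active, active))            # 6500
# ...while at 90% participation every attester's reward drops by 10%.
print(flag_reward(base_reward, TIMELY_TARGET_WEIGHT, active * 9 // 10, active))  # 5850
```

This makes the discouragement-attack resistance concrete: the second figure is lower than the first even for a validator that performed perfectly.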
- There is some discussion around changing this to make voluntary exit messages fork-agnostic in future, but that has not yet been implemented.↩
- From a conversation on the Ethereum Research Discord server.↩
- Unfortunately, the original page, https://hackingresear.ch/discouragement-attacks/, seems to be unavailable now. The link in the text is to archive.org, but their version is a bit broken.↩