Part 3: Annotated Specification
Beacon Chain State Transition Function
```python
def process_block(state: BeaconState, block: BeaconBlock) -> None:
    process_block_header(state, block)
    if is_execution_enabled(state, block.body):
        process_withdrawals(state, block.body.execution_payload)  # [New in Capella]
        process_execution_payload(state, block.body.execution_payload, EXECUTION_ENGINE)  # [Modified in Capella]
    process_randao(state, block.body)
    process_eth1_data(state, block.body)
    process_operations(state, block.body)  # [Modified in Capella]
    process_sync_aggregate(state, block.body.sync_aggregate)
```
These are the tasks that the beacon node performs in order to process a block and update the state. If any of the called functions triggers the failure of an
assert statement, or any other kind of exception, then the entire block is invalid, and any state changes must be rolled back.
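One way a client might realise the rollback requirement is to apply the transition to a copy of the state and discard the copy on failure. A minimal sketch of that pattern (process_block here stands in for the spec function above; real clients use more efficient copy-on-write structures rather than a full deep copy):

```python
import copy

def state_transition_block(state, block):
    # Work on a copy so that a failed assertion leaves the original state intact.
    candidate = copy.deepcopy(state)
    try:
        process_block(candidate, block)
    except AssertionError:
        return state      # block is invalid; keep the pre-block state
    return candidate      # block is valid; adopt the updated state
```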
Note: the call to process_execution_payload() must happen before the call to process_randao() as the former depends on the randao_mix computed with the reveal of the previous block.
The call to process_execution_payload() was added in the Bellatrix pre-Merge upgrade. The EXECUTION_ENGINE object is not really defined in the beacon chain spec, but corresponds to an API that calls out to an attached execution client (formerly Eth1 client) that will do most of the payload validation.
process_operations() covers the processing of any slashing reports (proposer and attester) in the block, any attestations, any deposits, and any voluntary exits.
```python
def process_block_header(state: BeaconState, block: BeaconBlock) -> None:
    # Verify that the slots match
    assert block.slot == state.slot
    # Verify that the block is newer than latest block header
    assert block.slot > state.latest_block_header.slot
    # Verify that proposer index is the correct index
    assert block.proposer_index == get_beacon_proposer_index(state)
    # Verify that the parent matches
    assert block.parent_root == hash_tree_root(state.latest_block_header)
    # Cache current block as the new latest block
    state.latest_block_header = BeaconBlockHeader(
        slot=block.slot,
        proposer_index=block.proposer_index,
        parent_root=block.parent_root,
        state_root=Bytes32(),  # Overwritten in the next process_slot call
        body_root=hash_tree_root(block.body),
    )
    # Verify proposer is not slashed
    proposer = state.validators[block.proposer_index]
    assert not proposer.slashed
```
A straightforward set of validity conditions for the block header data.
The version of the block header object that this routine stores in the state is a duplicate of the incoming block's header, but with its state_root set to its default empty Bytes32() value. See process_slot() for the explanation of this.
```python
def get_expected_withdrawals(state: BeaconState) -> Sequence[Withdrawal]:
    epoch = get_current_epoch(state)
    withdrawal_index = state.next_withdrawal_index
    validator_index = state.next_withdrawal_validator_index
    withdrawals: List[Withdrawal] = []
    bound = min(len(state.validators), MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP)
    for _ in range(bound):
        validator = state.validators[validator_index]
        balance = state.balances[validator_index]
        if is_fully_withdrawable_validator(validator, balance, epoch):
            withdrawals.append(Withdrawal(
                index=withdrawal_index,
                validator_index=validator_index,
                address=ExecutionAddress(validator.withdrawal_credentials[12:]),
                amount=balance,
            ))
            withdrawal_index += WithdrawalIndex(1)
        elif is_partially_withdrawable_validator(validator, balance):
            withdrawals.append(Withdrawal(
                index=withdrawal_index,
                validator_index=validator_index,
                address=ExecutionAddress(validator.withdrawal_credentials[12:]),
                amount=balance - MAX_EFFECTIVE_BALANCE,
            ))
            withdrawal_index += WithdrawalIndex(1)
        if len(withdrawals) == MAX_WITHDRAWALS_PER_PAYLOAD:
            break
        validator_index = ValidatorIndex((validator_index + 1) % len(state.validators))
    return withdrawals
```
This is used in both block processing and block building to construct the list of automatic validator withdrawals that we expect to see in the block.
At most MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP validators will be considered for a withdrawal. As described under that heading, this serves to bound the load on nodes when eligible validators are few and far between.
Picking up where the previous sweep left off (state.next_withdrawal_validator_index), we consider validators in turn, in increasing order of their validator indices. If a validator is eligible for a full withdrawal, then a withdrawal transaction for its entire balance is added to the list. If a validator is eligible for a partial withdrawal, then a withdrawal transaction for its excess balance above MAX_EFFECTIVE_BALANCE is added to the list.

The next_withdrawal_index and next_withdrawal_validator_index counters in the beacon state are not updated here, but in the calling function.
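For reference, the two eligibility tests used in the sweep are defined elsewhere in the Capella spec. A validator is fully withdrawable once it has passed its withdrawable_epoch, and partially withdrawable when its balance exceeds MAX_EFFECTIVE_BALANCE; both require 0x01 "Eth1" withdrawal credentials.

```python
def has_eth1_withdrawal_credential(validator: Validator) -> bool:
    """Check if ``validator`` has an 0x01 prefixed "eth1" withdrawal credential."""
    return validator.withdrawal_credentials[:1] == ETH1_ADDRESS_WITHDRAWAL_PREFIX

def is_fully_withdrawable_validator(validator: Validator, balance: Gwei, epoch: Epoch) -> bool:
    """Check if ``validator`` is fully withdrawable."""
    return (
        has_eth1_withdrawal_credential(validator)
        and validator.withdrawable_epoch <= epoch
        and balance > 0
    )

def is_partially_withdrawable_validator(validator: Validator, balance: Gwei) -> bool:
    """Check if ``validator`` is partially withdrawable."""
    has_max_effective_balance = validator.effective_balance == MAX_EFFECTIVE_BALANCE
    has_excess_balance = balance > MAX_EFFECTIVE_BALANCE
    return has_eth1_withdrawal_credential(validator) and has_max_effective_balance and has_excess_balance
```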
```python
def process_withdrawals(state: BeaconState, payload: ExecutionPayload) -> None:
    expected_withdrawals = get_expected_withdrawals(state)
    assert len(payload.withdrawals) == len(expected_withdrawals)

    for expected_withdrawal, withdrawal in zip(expected_withdrawals, payload.withdrawals):
        assert withdrawal == expected_withdrawal
        decrease_balance(state, withdrawal.validator_index, withdrawal.amount)

    # Update the next withdrawal index if this block contained withdrawals
    if len(expected_withdrawals) != 0:
        latest_withdrawal = expected_withdrawals[-1]
        state.next_withdrawal_index = WithdrawalIndex(latest_withdrawal.index + 1)

    # Update the next validator index to start the next withdrawal sweep
    if len(expected_withdrawals) == MAX_WITHDRAWALS_PER_PAYLOAD:
        # Next sweep starts after the latest withdrawal's validator index
        next_validator_index = ValidatorIndex((expected_withdrawals[-1].validator_index + 1) % len(state.validators))
        state.next_withdrawal_validator_index = next_validator_index
    else:
        # Advance sweep by the max length of the sweep if there was not a full set of withdrawals
        next_index = state.next_withdrawal_validator_index + MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP
        next_validator_index = ValidatorIndex(next_index % len(state.validators))
        state.next_withdrawal_validator_index = next_validator_index
```
The withdrawal transactions in a block appear in its ExecutionPayload, since they span both the consensus and execution layers. When processing the withdrawals, we first check that they match what we expect to see. This is taken care of by the call to get_expected_withdrawals() and the pairwise comparison within the for loop.¹ If any of the assert tests fails then the entire block is invalid and all changes, including balance updates already made, must be rolled back. For each withdrawal, the corresponding validator's balance is decreased; the execution client will add the same amount to the validator's Eth1 withdrawal address on the execution layer.
After that we have some trickery for updating the values of next_withdrawal_index and next_withdrawal_validator_index in the beacon state.

For next_withdrawal_index, which just counts the number of withdrawals ever made, we take the index of the last withdrawal in our list and add one. Adding the length of the list to our current value would be equivalent.

For next_withdrawal_validator_index, we have two cases. If we have a full list of MAX_WITHDRAWALS_PER_PAYLOAD withdrawal transactions then we know that this is the condition that terminated the sweep. Therefore the first validator we need to consider next time is the one after the validator in the last withdrawal transaction. Otherwise, the sweep was terminated by reaching MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP, and the first validator we need to consider next time is the one after the last validator the sweep considered.
I can't help thinking that it would have been easier to return these both from get_expected_withdrawals(), where they have just been calculated independently.
See also: WithdrawalIndex, ValidatorIndex.
```python
def process_execution_payload(state: BeaconState, payload: ExecutionPayload, execution_engine: ExecutionEngine) -> None:
    # Verify consistency of the parent hash with respect to the previous execution payload header
    if is_merge_transition_complete(state):
        assert payload.parent_hash == state.latest_execution_payload_header.block_hash
    # Verify prev_randao
    assert payload.prev_randao == get_randao_mix(state, get_current_epoch(state))
    # Verify timestamp
    assert payload.timestamp == compute_timestamp_at_slot(state, state.slot)
    # Verify the execution payload is valid
    assert execution_engine.notify_new_payload(payload)
    # Cache execution payload header
    state.latest_execution_payload_header = ExecutionPayloadHeader(
        parent_hash=payload.parent_hash,
        fee_recipient=payload.fee_recipient,
        state_root=payload.state_root,
        receipts_root=payload.receipts_root,
        logs_bloom=payload.logs_bloom,
        prev_randao=payload.prev_randao,
        block_number=payload.block_number,
        gas_limit=payload.gas_limit,
        gas_used=payload.gas_used,
        timestamp=payload.timestamp,
        extra_data=payload.extra_data,
        base_fee_per_gas=payload.base_fee_per_gas,
        block_hash=payload.block_hash,
        transactions_root=hash_tree_root(payload.transactions),
        withdrawals_root=hash_tree_root(payload.withdrawals),  # [New in Capella]
    )
```
Since the Merge, the execution payload (formerly an Eth1 block) now forms part of the beacon block.
There isn't much beacon chain processing to be done for execution payloads as they are for the most part opaque blobs of data that are meaningful only to the execution client. However, the beacon chain does need to know whether the execution payload is valid in the view of the execution client. An execution payload that is invalid by the rules of the execution (Eth1) chain makes the beacon block containing it invalid.
Some initial sanity checks are performed:
- Unless this is the very first execution payload that we have seen, its parent_hash must match the block_hash that we have in the beacon state, that of the last execution payload we processed. This ensures that the chain of execution payloads is continuous, since it is essentially a blockchain within a blockchain.
- We check that the prev_randao value is correctly set, otherwise a block proposer could trivially control the randomness on the execution layer.
- The timestamp on the execution payload must match the slot timestamp (see the sketch below). Again, this prevents proposers manipulating the execution layer time for any smart contracts that depend on it.
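The slot timestamp is fully determined by the chain's genesis time, so there is nothing for the proposer to choose. For reference, the spec's helper computes it as follows (SECONDS_PER_SLOT is 12 on mainnet):

```python
def compute_timestamp_at_slot(state: BeaconState, slot: Slot) -> uint64:
    slots_since_genesis = slot - GENESIS_SLOT
    return uint64(state.genesis_time + slots_since_genesis * SECONDS_PER_SLOT)
```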
Next we send the payload over to the execution engine via the Engine API, using the notify_new_payload() function it provides. This serves two purposes: first, it requests that the execution client check the validity of the payload; second, if the payload is valid, it allows the execution layer to update its own state by running the transactions contained in the payload.

Finally, the header of the execution payload is stored in the beacon state, primarily so that the parent_hash check can be made next time this function is called. The remainder of the execution header data is not currently used in the beacon chain specification, despite being stored.
This function was added in the Bellatrix pre-Merge upgrade.
```python
def process_randao(state: BeaconState, body: BeaconBlockBody) -> None:
    epoch = get_current_epoch(state)
    # Verify RANDAO reveal
    proposer = state.validators[get_beacon_proposer_index(state)]
    signing_root = compute_signing_root(epoch, get_domain(state, DOMAIN_RANDAO))
    assert bls.Verify(proposer.pubkey, signing_root, body.randao_reveal)
    # Mix in RANDAO reveal
    mix = xor(get_randao_mix(state, epoch), hash(body.randao_reveal))
    state.randao_mixes[epoch % EPOCHS_PER_HISTORICAL_VECTOR] = mix
```
A good source of randomness is foundational to the operation of the beacon chain. Security of the protocol depends significantly on being able to unpredictably and uniformly select block proposers and committee members. In fact, the very name "beacon chain" was inspired by Dfinity's concept of a randomness beacon.
The current mechanism for providing randomness is a RANDAO, in which each block proposer provides some randomness and all the contributions are mixed together over the course of an epoch. This is not unbiasable (a malicious proposer may choose to skip a block if it is to its advantage to do so), but is good enough. In future, Ethereum might use a verifiable delay function (VDF) to provide unbiasable randomness.
Early designs had the validators pre-committing to "hash onions", peeling off one layer of hashing at each block proposal. This was changed to using a BLS signature over the epoch number as the entropy source. Using signatures is both a simplification, and an enabler for multi-party (distributed) validators. The (reasonable) assumption is that sufficient numbers of validators generated their secret keys with good entropy to ensure that the RANDAO's entropy is adequate.
The process_randao() function simply uses the proposer's public key to verify that the RANDAO reveal in the block is indeed the epoch number signed with the proposer's private key. It then mixes the hash of the reveal into the current epoch's RANDAO accumulator. The hash is used in order to reduce the signature down from 96 to 32 bytes, and to make it uniform.

The EPOCHS_PER_HISTORICAL_VECTOR most recent end-of-epoch values of the RANDAO accumulator are stored in the state.
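To illustrate the accumulator update concretely, here is a minimal standalone sketch of the last two lines of process_randao(), with sha256 standing in for the spec's hash function and made-up 96-byte "reveals":

```python
from hashlib import sha256

def update_mix(current_mix: bytes, randao_reveal: bytes) -> bytes:
    # Hash the 96-byte BLS signature down to a uniform 32 bytes,
    # then xor it into the accumulator.
    reveal_hash = sha256(randao_reveal).digest()
    return bytes(a ^ b for a, b in zip(current_mix, reveal_hash))

mix = b'\x00' * 32                            # accumulator at the start of an epoch
for reveal in (b'\x01' * 96, b'\x02' * 96):   # two proposers' (fake) reveals
    mix = update_mix(mix, reveal)
```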
From Justin Drake's notes:
Using xor in process_randao is (slightly) more secure than using hash. To illustrate why, imagine an attacker can grind randomness in the current epoch such that two of his validators are the last proposers, in a different order, in two resulting samplings of the next epochs. The commutativity of xor makes those two samplings equivalent, hence reducing the attacker's grinding opportunity for the next epoch versus hash (which is not commutative). The strict security improvement may simplify the derivation of RANDAO security formal lower bounds.
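The commutativity point can be checked in a couple of lines (a toy demo; the inputs are arbitrary): mixing two reveals with xor gives the same result in either order, while chaining a hash does not.

```python
from hashlib import sha256

def xor32(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

a, b = sha256(b"reveal_1").digest(), sha256(b"reveal_2").digest()
mix = b"\x00" * 32

# xor accumulation is order-independent...
assert xor32(xor32(mix, a), b) == xor32(xor32(mix, b), a)
# ...whereas hash chaining is order-dependent.
chain_ab = sha256(sha256(mix + a).digest() + b).digest()
chain_ba = sha256(sha256(mix + b).digest() + a).digest()
assert chain_ab != chain_ba
```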
Note that the assert statement means that the whole block is invalid if the RANDAO reveal is incorrectly formed.
```python
def process_eth1_data(state: BeaconState, body: BeaconBlockBody) -> None:
    state.eth1_data_votes.append(body.eth1_data)
    if state.eth1_data_votes.count(body.eth1_data) * 2 > EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH:
        state.eth1_data = body.eth1_data
```
Blocks may contain Eth1Data, which is supposed to be the proposer's best view of the Eth1 chain and the deposit contract at the time. There is no incentive to get this data correct, nor any penalty for it being incorrect.

If a simple majority of the same vote is cast by proposers during each voting period of EPOCHS_PER_ETH1_VOTING_PERIOD epochs (6.8 hours), then the Eth1 data is committed to the beacon state. This updates the chain's view of the deposit contract, and new deposits since the last update will start being processed.
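Concretely, with the mainnet constants, a successful vote needs a strict majority of all slots in the period, whether or not blocks were actually produced in them. A quick check:

```python
EPOCHS_PER_ETH1_VOTING_PERIOD = 64
SLOTS_PER_EPOCH = 32

period_slots = EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH  # 2048
# The test is count(vote) * 2 > 2048, so at least 1025 identical votes are
# needed within the period - a majority of slots, not of blocks produced.
min_votes_to_commit = period_slots // 2 + 1
assert min_votes_to_commit == 1025
```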
This mechanism has proved to be fragile in the past, but appears to be workable if not perfect.
```python
def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
    # Verify that outstanding deposits are processed up to the maximum number of deposits
    assert len(body.deposits) == min(MAX_DEPOSITS, state.eth1_data.deposit_count - state.eth1_deposit_index)

    def for_ops(operations: Sequence[Any], fn: Callable[[BeaconState, Any], None]) -> None:
        for operation in operations:
            fn(state, operation)

    for_ops(body.proposer_slashings, process_proposer_slashing)
    for_ops(body.attester_slashings, process_attester_slashing)
    for_ops(body.attestations, process_attestation)
    for_ops(body.deposits, process_deposit)
    for_ops(body.voluntary_exits, process_voluntary_exit)
    for_ops(body.bls_to_execution_changes, process_bls_to_execution_change)  # [New in Capella]
```
Just a dispatcher for handling the various optional contents in a block.
Deposits are optional only in the sense that some blocks have them and some don't. However, as per the assert statement, if, according to the beacon chain's view of the Eth1 chain, there are deposits pending, then the block must include them, otherwise the block is invalid.
Regarding incentives for block proposers to include each of these elements:
- Proposers are explicitly rewarded for including any available attestations and slashing reports.
- There is a validity condition, and thus an implicit reward, related to including deposit messages.
- The incentive for including voluntary exits is that a smaller validator set means higher rewards for the remaining validators.
- There is no incentive, implicit or explicit, for including BLS withdrawal credential change messages. These are handled on a purely altruistic basis.
```python
def process_proposer_slashing(state: BeaconState, proposer_slashing: ProposerSlashing) -> None:
    header_1 = proposer_slashing.signed_header_1.message
    header_2 = proposer_slashing.signed_header_2.message

    # Verify header slots match
    assert header_1.slot == header_2.slot
    # Verify header proposer indices match
    assert header_1.proposer_index == header_2.proposer_index
    # Verify the headers are different
    assert header_1 != header_2
    # Verify the proposer is slashable
    proposer = state.validators[header_1.proposer_index]
    assert is_slashable_validator(proposer, get_current_epoch(state))
    # Verify signatures
    for signed_header in (proposer_slashing.signed_header_1, proposer_slashing.signed_header_2):
        domain = get_domain(state, DOMAIN_BEACON_PROPOSER, compute_epoch_at_slot(signed_header.message.slot))
        signing_root = compute_signing_root(signed_header.message, domain)
        assert bls.Verify(proposer.pubkey, signing_root, signed_header.signature)

    slash_validator(state, header_1.proposer_index)
```
A ProposerSlashing is a proof that a proposer has signed two blocks at the same height. Up to MAX_PROPOSER_SLASHINGS of them may be included in a block. It contains the evidence in the form of a pair of SignedBeaconBlockHeaders.

The proof is simple: the two proposals come from the same slot, have the same proposer, but differ in one or more of parent_root, state_root, or body_root. In addition, they were both signed by the proposer. The conflicting blocks do not need to be valid: any pair of headers that meet the criteria, irrespective of the blocks' contents, are grounds for slashing.
As ever, the assert statements ensure that the containing block is invalid if it contains any invalid slashing claims.
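As a minimal illustration of the criteria, here is a conflicting pair sketched with made-up field values (the containers are the spec's SSZ types, and each header would additionally need to be signed by validator 42):

```python
header_1 = BeaconBlockHeader(
    slot=Slot(4_000_000),
    proposer_index=ValidatorIndex(42),
    parent_root=Root(b'\x11' * 32),
    state_root=Root(b'\x22' * 32),
    body_root=Root(b'\x33' * 32),
)
header_2 = BeaconBlockHeader(
    slot=Slot(4_000_000),               # same slot
    proposer_index=ValidatorIndex(42),  # same proposer
    parent_root=Root(b'\x11' * 32),
    state_root=Root(b'\x22' * 32),
    body_root=Root(b'\x44' * 32),       # differs: slashable
)
assert header_1.slot == header_2.slot
assert header_1.proposer_index == header_2.proposer_index
assert header_1 != header_2
```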
Fun fact: the first slashing to occur on the beacon chain was a proposer slashing. Two clients running side-by-side with the same keys will often produce the same attestations since the protocol is designed to encourage that. Independently producing the same block is very unlikely as blocks contain much more data.
```python
def process_attester_slashing(state: BeaconState, attester_slashing: AttesterSlashing) -> None:
    attestation_1 = attester_slashing.attestation_1
    attestation_2 = attester_slashing.attestation_2
    assert is_slashable_attestation_data(attestation_1.data, attestation_2.data)
    assert is_valid_indexed_attestation(state, attestation_1)
    assert is_valid_indexed_attestation(state, attestation_2)

    slashed_any = False
    indices = set(attestation_1.attesting_indices).intersection(attestation_2.attesting_indices)
    for index in sorted(indices):
        if is_slashable_validator(state.validators[index], get_current_epoch(state)):
            slash_validator(state, index)
            slashed_any = True
    assert slashed_any
```
AttesterSlashings are similar to proposer slashings in that they just provide the evidence of the two aggregate IndexedAttestations that conflict with each other. Up to MAX_ATTESTER_SLASHINGS of them may be included in a block.
The validity checking is done by is_slashable_attestation_data(), which checks the double vote and surround vote conditions, and by is_valid_indexed_attestation(), which verifies the signatures on the attestations.
Any validators that appear in both attestations are slashed. If no validator is slashed, then the attester slashing claim was not valid after all, and therefore its containing block is invalid.
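For reference, the double vote and surround vote conditions are captured in the Phase 0 spec as follows:

```python
def is_slashable_attestation_data(data_1: AttestationData, data_2: AttestationData) -> bool:
    return (
        # Double vote: two different attestations with the same target epoch
        (data_1 != data_2 and data_1.target.epoch == data_2.target.epoch)
        # Surround vote: data_1 surrounds data_2
        or (data_1.source.epoch < data_2.source.epoch and data_2.target.epoch < data_1.target.epoch)
    )
```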
```python
def process_attestation(state: BeaconState, attestation: Attestation) -> None:
    data = attestation.data
    assert data.target.epoch in (get_previous_epoch(state), get_current_epoch(state))
    assert data.target.epoch == compute_epoch_at_slot(data.slot)
    assert data.slot + MIN_ATTESTATION_INCLUSION_DELAY <= state.slot <= data.slot + SLOTS_PER_EPOCH
    assert data.index < get_committee_count_per_slot(state, data.target.epoch)

    committee = get_beacon_committee(state, data.slot, data.index)
    assert len(attestation.aggregation_bits) == len(committee)

    # Participation flag indices
    participation_flag_indices = get_attestation_participation_flag_indices(state, data, state.slot - data.slot)

    # Verify signature
    assert is_valid_indexed_attestation(state, get_indexed_attestation(state, attestation))

    # Update epoch participation flags
    if data.target.epoch == get_current_epoch(state):
        epoch_participation = state.current_epoch_participation
    else:
        epoch_participation = state.previous_epoch_participation

    proposer_reward_numerator = 0
    for index in get_attesting_indices(state, data, attestation.aggregation_bits):
        for flag_index, weight in enumerate(PARTICIPATION_FLAG_WEIGHTS):
            if flag_index in participation_flag_indices and not has_flag(epoch_participation[index], flag_index):
                epoch_participation[index] = add_flag(epoch_participation[index], flag_index)
                proposer_reward_numerator += get_base_reward(state, index) * weight

    # Reward proposer
    proposer_reward_denominator = (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT) * WEIGHT_DENOMINATOR // PROPOSER_WEIGHT
    proposer_reward = Gwei(proposer_reward_numerator // proposer_reward_denominator)
    increase_balance(state, get_beacon_proposer_index(state), proposer_reward)
```
Block proposers are rewarded here for including attestations during block processing, while attesting validators receive their rewards and penalties during epoch processing.
This routine processes each attestation included in the block. First a bunch of validity checks are performed. If any of these fails, then the whole block is invalid (it is most likely from a proposer on a different fork, and so useless to us):
- The target vote of the attestation must be either the previous epoch's checkpoint or the current epoch's checkpoint.
- The target checkpoint and the attestation's slot must belong to the same epoch.
- The attestation must be no newer than MIN_ATTESTATION_INCLUSION_DELAY slots, which is one. So this condition rules out attestations from the current or future slots.
- The attestation must be no older than SLOTS_PER_EPOCH slots, which is 32.²
- The attestation must come from a committee that existed when the attestation was created.
- The size of the committee and the size of the aggregate must match (aggregation_bits).
- The (aggregate) signature on the attestation must be valid and must correspond to the aggregated public keys of the validators that it claims to be signed by. This (and other criteria) is checked by is_valid_indexed_attestation().
Once the attestation has passed the checks it is processed by converting the votes from validators that it contains into flags in the state.
It's easy to skip over amidst all the checking, but the actual attestation processing is done by get_attestation_participation_flag_indices(). This takes the source, target, and head votes of the attestation, along with its inclusion delay (how many slots late it was included in a block), and returns a list of up to three flags corresponding to the votes that were both correct and timely, in participation_flag_indices.
For each validator that signed the attestation, if each flag in participation_flag_indices is not already set for it in its epoch_participation record, then the flag is set, and the proposer is rewarded. Recall that the validator making the attestation is not rewarded until the end of the epoch. If the flag is already set in the corresponding epoch for a validator, no proposer reward is accumulated: the attestation for this validator was included in an earlier block.
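The participation flags are just bits in a single ParticipationFlags byte per validator; the helpers used above are defined in the Altair spec as follows:

```python
def add_flag(flags: ParticipationFlags, flag_index: int) -> ParticipationFlags:
    flag = ParticipationFlags(2**flag_index)
    return flags | flag

def has_flag(flags: ParticipationFlags, flag_index: int) -> bool:
    flag = ParticipationFlags(2**flag_index)
    return flags & flag == flag
```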
The proposer reward is accumulated, and weighted according to the weight assigned to each of the flags (timely source, timely target, timely head).
If a proposer includes all the attestations only for one slot, and all the relevant validators vote, then its reward will be, in the notation established earlier,

$$I_{A_P} = \frac{W_p}{32(W_{\Sigma} - W_p)} I_A$$

where $I_A$ is the total maximum reward per epoch for attesters, calculated in get_flag_index_deltas(). The total available reward in an epoch for proposers including attestations is 32 times this.
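Plugging in the mainnet weight values shows where the "one seventh" relationship between proposer and attester rewards comes from (a quick check, not spec code):

```python
from fractions import Fraction

TIMELY_SOURCE_WEIGHT = 14
TIMELY_TARGET_WEIGHT = 26
TIMELY_HEAD_WEIGHT = 14
PROPOSER_WEIGHT = 8
WEIGHT_DENOMINATOR = 64

# One validator whose attestation sets all three flags contributes
# 54 * base_reward to the proposer_reward_numerator.
flag_weights = TIMELY_SOURCE_WEIGHT + TIMELY_TARGET_WEIGHT + TIMELY_HEAD_WEIGHT  # 54

# The denominator used in process_attestation():
denominator = (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT) * WEIGHT_DENOMINATOR // PROPOSER_WEIGHT
assert denominator == 448

# The proposer earns 54/448 of a base reward per fully correct attester, while
# the attester itself can earn at most 54/64 of its base reward per epoch:
# the ratio is exactly W_p / (W_sigma - W_p) = 8/56 = 1/7.
proposer_share = Fraction(flag_weights, denominator) / Fraction(flag_weights, WEIGHT_DENOMINATOR)
assert proposer_share == Fraction(1, 7)
```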
The code in this section handles deposit transactions that were included in a block. A deposit is created when a user transfers one or more ETH to the deposit contract. We need to check that the data sent with the deposit is valid. If it is, we either create a new validator record (for the first deposit for a validator) or update an existing record.
```python
def get_validator_from_deposit(pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64) -> Validator:
    effective_balance = min(amount - amount % EFFECTIVE_BALANCE_INCREMENT, MAX_EFFECTIVE_BALANCE)

    return Validator(
        pubkey=pubkey,
        withdrawal_credentials=withdrawal_credentials,
        activation_eligibility_epoch=FAR_FUTURE_EPOCH,
        activation_epoch=FAR_FUTURE_EPOCH,
        exit_epoch=FAR_FUTURE_EPOCH,
        withdrawable_epoch=FAR_FUTURE_EPOCH,
        effective_balance=effective_balance,
    )
```
Create a newly initialised validator object based on deposit data. This was factored out of process_deposit() for better code reuse between the Phase 0 spec and the (now deprecated) sharding spec.
The pubkey is supplied in the initial deposit transaction. The depositor generates the validator's public key from its private key.
```python
def apply_deposit(state: BeaconState,
                  pubkey: BLSPubkey,
                  withdrawal_credentials: Bytes32,
                  amount: uint64,
                  signature: BLSSignature) -> None:
    validator_pubkeys = [validator.pubkey for validator in state.validators]
    if pubkey not in validator_pubkeys:
        # Verify the deposit signature (proof of possession) which is not checked by the deposit contract
        deposit_message = DepositMessage(
            pubkey=pubkey,
            withdrawal_credentials=withdrawal_credentials,
            amount=amount,
        )
        domain = compute_domain(DOMAIN_DEPOSIT)  # Fork-agnostic domain since deposits are valid across forks
        signing_root = compute_signing_root(deposit_message, domain)
        # Initialize validator if the deposit signature is valid
        if bls.Verify(pubkey, signing_root, signature):
            state.validators.append(get_validator_from_deposit(pubkey, withdrawal_credentials, amount))
            state.balances.append(amount)
            # [New in Altair]
            state.previous_epoch_participation.append(ParticipationFlags(0b0000_0000))
            state.current_epoch_participation.append(ParticipationFlags(0b0000_0000))
            state.inactivity_scores.append(uint64(0))
    else:
        # Increase balance by deposit amount
        index = ValidatorIndex(validator_pubkeys.index(pubkey))
        increase_balance(state, index, amount)
```
Deposits are signed with the private key of the depositing validator, and the corresponding public key is included in the deposit data. This constitutes a "proof of possession" of the private key, and prevents nastiness like the rogue key attack. Note that compute_domain() is used directly here when validating the deposit's signature, rather than the more usual get_domain() wrapper. This is because deposit messages are valid across beacon chain upgrades (such as Phase 0, Altair, and Bellatrix), so we don't want to mix the fork version into the domain. In addition, deposits can be made before genesis_validators_root is known.
An interesting quirk of this routine is that only the first deposit for a validator needs to be signed. Subsequent deposits for the same public key do not have their signatures checked. This could allow one staker (the key holder) to make an initial deposit (1 ETH, say), and for that to be topped up by others who do not have the private key. I don't know of any practical uses for this feature, but would be glad to hear of any. It slightly reduces the risk for stakers making multiple deposits for the same validator as they don't need to worry about incorrectly signing any but the first deposit.
Similarly, once a validator's withdrawal credentials have been set by the initial deposit transaction, the withdrawal credentials of subsequent deposits for the same validator are ignored. Only the credentials appearing on the initial deposit are stored on the beacon chain. This is an important security measure. If an attacker steals a validator's signing key (which signs deposit transactions), we don't want them to be able to change the withdrawal credentials in order to steal the stake for themselves. However, it works both ways, and a vulnerability was identified for staking pools in which a malicious operator could potentially front-run a deposit transaction with a 1 ETH deposit to set the withdrawal credentials to their own.
Note that the withdrawal_credentials in the deposit data are not checked in any way. It's up to the depositor to ensure that they are using the correct prefix and contents to be able to receive their rewards and retrieve their stake back after exiting the consensus layer.
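For illustration, the two credential formats look like this (the helper names here are mine, not the spec's; the layouts match the checks in process_bls_to_execution_change() below):

```python
from hashlib import sha256

BLS_WITHDRAWAL_PREFIX = b'\x00'
ETH1_ADDRESS_WITHDRAWAL_PREFIX = b'\x01'

def bls_credentials(withdrawal_pubkey: bytes) -> bytes:
    # 0x00 prefix followed by the last 31 bytes of the hash of the
    # 48-byte BLS withdrawal public key.
    return BLS_WITHDRAWAL_PREFIX + sha256(withdrawal_pubkey).digest()[1:]

def eth1_credentials(address: bytes) -> bytes:
    # 0x01 prefix, 11 zero bytes of padding, then a 20-byte execution address.
    return ETH1_ADDRESS_WITHDRAWAL_PREFIX + b'\x00' * 11 + address
```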
```python
def process_deposit(state: BeaconState, deposit: Deposit) -> None:
    # Verify the Merkle branch
    assert is_valid_merkle_branch(
        leaf=hash_tree_root(deposit.data),
        branch=deposit.proof,
        depth=DEPOSIT_CONTRACT_TREE_DEPTH + 1,  # Add 1 for the List length mix-in
        index=state.eth1_deposit_index,
        root=state.eth1_data.deposit_root,
    )

    # Deposits must be processed in order
    state.eth1_deposit_index += 1

    apply_deposit(
        state=state,
        pubkey=deposit.data.pubkey,
        withdrawal_credentials=deposit.data.withdrawal_credentials,
        amount=deposit.data.amount,
        signature=deposit.data.signature,
    )
```
Here, we process a deposit from a block. If the deposit is valid, either a new validator is created or the deposit amount is added to an existing validator.
The call to is_valid_merkle_branch() ensures that it is not possible to fake a deposit. The eth1_data.deposit_root from the deposit contract has been agreed by the beacon chain and includes all pending deposits visible to the beacon chain. The deposit itself contains a Merkle proof that it is included in that root. The state.eth1_deposit_index counter ensures that deposits are processed in order. In short, the proposer provides leaf and branch, but neither index nor root.
If the Merkle branch check fails, then the whole block is invalid. However, individual deposits can fail the signature check without invalidating the block.
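For reference, the Merkle verification helper from the Phase 0 spec walks the branch from the leaf up to the root, hashing left or right at each level according to the bits of the index:

```python
def is_valid_merkle_branch(leaf: Bytes32, branch: Sequence[Bytes32], depth: uint64, index: uint64, root: Root) -> bool:
    value = leaf
    for i in range(depth):
        if index // (2**i) % 2:
            value = hash(branch[i] + value)
        else:
            value = hash(value + branch[i])
    return value == root
```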
Deposits must be processed in order, and all available deposits must be included in the block (up to MAX_DEPOSITS, as checked in process_operations()). This ensures that the beacon chain cannot censor deposit transactions, except at the expense of stopping block production entirely.³
```python
def process_voluntary_exit(state: BeaconState, signed_voluntary_exit: SignedVoluntaryExit) -> None:
    voluntary_exit = signed_voluntary_exit.message
    validator = state.validators[voluntary_exit.validator_index]
    # Verify the validator is active
    assert is_active_validator(validator, get_current_epoch(state))
    # Verify exit has not been initiated
    assert validator.exit_epoch == FAR_FUTURE_EPOCH
    # Exits must specify an epoch when they become valid; they are not valid before then
    assert get_current_epoch(state) >= voluntary_exit.epoch
    # Verify the validator has been active long enough
    assert get_current_epoch(state) >= validator.activation_epoch + SHARD_COMMITTEE_PERIOD
    # Verify signature
    domain = get_domain(state, DOMAIN_VOLUNTARY_EXIT, voluntary_exit.epoch)
    signing_root = compute_signing_root(voluntary_exit, domain)
    assert bls.Verify(validator.pubkey, signing_root, signed_voluntary_exit.signature)
    # Initiate exit
    initiate_validator_exit(state, voluntary_exit.validator_index)
```
A voluntary exit message is submitted by a validator to indicate that it wishes to cease being an active validator. A proposer receives voluntary exit messages via gossip or via its own API and then includes the message in a block so that it can be processed by the network.
Most of the checks are straightforward, as per the comments in the code. Note the following.
- Voluntary exits are invalid if they are included in blocks before the given epoch, so nodes should buffer any future-dated exits they see before putting them in a block.
- A validator must have been active for at least SHARD_COMMITTEE_PERIOD epochs (27 hours). See there for the rationale.
- Voluntary exits are signed with the validator's usual signing key. There is some discussion about changing this to also allow signing of a voluntary exit with the validator's withdrawal key.
If the voluntary exit message is valid then the validator is added to the exit queue by calling initiate_validator_exit().
```python
def process_bls_to_execution_change(state: BeaconState, signed_address_change: SignedBLSToExecutionChange) -> None:
    address_change = signed_address_change.message

    assert address_change.validator_index < len(state.validators)

    validator = state.validators[address_change.validator_index]

    assert validator.withdrawal_credentials[:1] == BLS_WITHDRAWAL_PREFIX
    assert validator.withdrawal_credentials[1:] == hash(address_change.from_bls_pubkey)[1:]

    # Fork-agnostic domain since address changes are valid across forks
    domain = compute_domain(DOMAIN_BLS_TO_EXECUTION_CHANGE, genesis_validators_root=state.genesis_validators_root)
    signing_root = compute_signing_root(address_change, domain)
    assert bls.Verify(address_change.from_bls_pubkey, signing_root, signed_address_change.signature)

    validator.withdrawal_credentials = (
        ETH1_ADDRESS_WITHDRAWAL_PREFIX
        + b'\x00' * 11
        + address_change.to_execution_address
    )
```
The Capella upgrade provides a one-time operation to allow stakers to change their withdrawal credentials from BLS type (BLS_WITHDRAWAL_PREFIX), which do not allow withdrawals, to Eth1 style (ETH1_ADDRESS_WITHDRAWAL_PREFIX), which enable automatic withdrawals.
Stakers can make the change by signing a BLSToExecutionChange message and broadcasting it to the network. At some point a proposer will include the change message in a block, and it will arrive at this function in the state transition.
For BLS credentials the withdrawal credential contains the last 31 bytes of the SHA256 hash of a public key. That public key is the validator's withdrawal key, distinct from its signing key, although often derived from the same mnemonic. By checking its hash, we are confirming that the public key provided in the change message is the same one that created the withdrawal credential in the initial deposit.
Once we are satisfied that the public key is the same one previously committed to, we can use it to verify the signature on the change message. Again, this message must be signed with the validator's withdrawal private key, not its usual signing key.
Having verified the signature, we can finally, and irrevocably, update the validator's withdrawal credentials from BLS style to Eth1 style.
```python
def process_sync_aggregate(state: BeaconState, sync_aggregate: SyncAggregate) -> None:
    # Verify sync committee aggregate signature signing over the previous slot block root
    committee_pubkeys = state.current_sync_committee.pubkeys
    participant_pubkeys = [pubkey for pubkey, bit in zip(committee_pubkeys, sync_aggregate.sync_committee_bits) if bit]
    previous_slot = max(state.slot, Slot(1)) - Slot(1)
    domain = get_domain(state, DOMAIN_SYNC_COMMITTEE, compute_epoch_at_slot(previous_slot))
    signing_root = compute_signing_root(get_block_root_at_slot(state, previous_slot), domain)
    assert eth_fast_aggregate_verify(participant_pubkeys, signing_root, sync_aggregate.sync_committee_signature)

    # Compute participant and proposer rewards
    total_active_increments = get_total_active_balance(state) // EFFECTIVE_BALANCE_INCREMENT
    total_base_rewards = Gwei(get_base_reward_per_increment(state) * total_active_increments)
    max_participant_rewards = Gwei(total_base_rewards * SYNC_REWARD_WEIGHT // WEIGHT_DENOMINATOR // SLOTS_PER_EPOCH)
    participant_reward = Gwei(max_participant_rewards // SYNC_COMMITTEE_SIZE)
    proposer_reward = Gwei(participant_reward * PROPOSER_WEIGHT // (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT))

    # Apply participant and proposer rewards
    all_pubkeys = [v.pubkey for v in state.validators]
    committee_indices = [ValidatorIndex(all_pubkeys.index(pubkey)) for pubkey in state.current_sync_committee.pubkeys]
    for participant_index, participation_bit in zip(committee_indices, sync_aggregate.sync_committee_bits):
        if participation_bit:
            increase_balance(state, participant_index, participant_reward)
            increase_balance(state, get_beacon_proposer_index(state), proposer_reward)
        else:
            decrease_balance(state, participant_index, participant_reward)
```
Similarly to how attestations are handled, the beacon block proposer includes in its block an aggregation of sync committee votes that agree with its local view of the chain. Specifically, the sync committee votes are for the head block that the proposer saw in the previous slot. (If the previous slot is empty, then the head block will be from an earlier slot.)
We validate these votes against our local view of the chain, and if they agree then we reward the participants that voted. If they do not agree with our local view, then the entire block is invalid: it is on another branch.
To perform the validation, we form the signing root of the block at the previous slot, with DOMAIN_SYNC_COMMITTEE mixed in. Then we check whether the aggregate signature received in the SyncAggregate verifies against it, using the aggregate public key of the validators who claimed to have signed it. If either the signing root (that is, the head block) is wrong, or the list of participants is wrong, then the verification will fail and the block is invalid.
Like proposer rewards, but unlike attestation rewards, sync committee rewards are not weighted with the participants' effective balances. This is already taken care of by the committee selection process that weights the probability of selection with the effective balance of the validator.
Running through the calculations:

- total_active_increments: the sum of the effective balances of the entire active validator set, normalised with EFFECTIVE_BALANCE_INCREMENT to give the total number of increments.
- total_base_rewards: the maximum rewards that will be awarded to all validators for all duties this epoch; we will write it as $B$ below.
- max_participant_rewards: the amount of the total reward to be given to the entire sync committee in this slot.
- participant_reward: the reward per participating validator, and the penalty per non-participating validator.
- proposer_reward: one seventh of the participant reward.
Each committee member that voted receives a reward of participant_reward, and the proposer receives one seventh of this in addition. Each committee member that failed to vote receives a penalty of participant_reward, and the proposer receives nothing.
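Plugging in the mainnet constants gives a feel for the sizes (a sketch, with total_base_rewards supplied as an input):

```python
SYNC_REWARD_WEIGHT = 2
PROPOSER_WEIGHT = 8
WEIGHT_DENOMINATOR = 64
SLOTS_PER_EPOCH = 32
SYNC_COMMITTEE_SIZE = 512

def sync_rewards_per_slot(total_base_rewards: int):
    max_participant_rewards = (total_base_rewards * SYNC_REWARD_WEIGHT
                               // WEIGHT_DENOMINATOR // SLOTS_PER_EPOCH)
    participant_reward = max_participant_rewards // SYNC_COMMITTEE_SIZE
    proposer_reward = participant_reward * PROPOSER_WEIGHT // (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT)
    return participant_reward, proposer_reward

# Each participant receives total_base_rewards / 524288 per slot
# (i.e. 2 / (64 * 32 * 512)), and the proposer 1/7 of that per participant.
participant, proposer = sync_rewards_per_slot(524_288_000)
assert participant == 1000 and proposer == 142
```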
In our notation, the maximum issuance (reward) due to sync committees per slot is as follows.

$$I_S = \frac{W_y}{32 W_{\Sigma}} B$$

The per-epoch reward is thirty-two times this. The maximum reward for the proposer in respect of sync aggregates is

$$I_{S_P} = \frac{W_p}{W_{\Sigma} - W_p} I_S$$
1. The use of zip() here is quite Pythonic, but just means that with two lists of equal length we take their elements pairwise in turn. ↩
2. This is due to change in EIP-7045, scheduled for inclusion in the Deneb upgrade. The change will allow attestations to be included from the whole of the current and previous epochs. ↩
3. EIP-6110 is a potential future upgrade that would allow deposits to be processed more or less instantly, rather than having to go through the Eth1 follow distance and Eth1 voting period as they do now. ↩