```python
def is_active_validator(validator: Validator, epoch: Epoch) -> bool:
    """
    Check if ``validator`` is active.
    """
    return validator.activation_epoch <= epoch < validator.exit_epoch
```
Validators don't explicitly track their own state (eligible for activation, active, exited, withdrawable); the sole exception is the flag recording whether or not they have been slashed. Instead, a validator's state is calculated by looking at the fields in the
Validator record that store the epoch numbers of its state transitions.
In this case, if the validator was activated in the past and has not yet exited, then it is active.
This is used a few times in the spec, most notably in
get_active_validator_indices(), which returns the indices of all validators active at a given epoch.
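As a minimal, hypothetical sketch, assuming a cut-down Validator stub that carries only the two epoch fields this predicate reads (and plain ints in place of the spec's Epoch type), the predicate and a simplified mirror of the get_active_validator_indices() helper look like this:

```python
from dataclasses import dataclass

FAR_FUTURE_EPOCH = 2**64 - 1

# Hypothetical stub: only the fields is_active_validator consults.
@dataclass
class Validator:
    activation_epoch: int
    exit_epoch: int

def is_active_validator(validator: Validator, epoch: int) -> bool:
    return validator.activation_epoch <= epoch < validator.exit_epoch

def get_active_validator_indices(validators, epoch):
    # Mirrors the spec helper: indices of all validators active at ``epoch``.
    return [i for i, v in enumerate(validators) if is_active_validator(v, epoch)]

validators = [
    Validator(activation_epoch=0, exit_epoch=FAR_FUTURE_EPOCH),   # active
    Validator(activation_epoch=10, exit_epoch=FAR_FUTURE_EPOCH),  # not yet active at epoch 5
    Validator(activation_epoch=0, exit_epoch=5),                  # exits at epoch 5
]
print(get_active_validator_indices(validators, 5))  # [0]
```

Note that the interval is half-open: a validator is not active in its exit epoch itself.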
```python
def is_eligible_for_activation_queue(validator: Validator) -> bool:
    """
    Check if ``validator`` is eligible to be placed into the activation queue.
    """
    return (
        validator.activation_eligibility_epoch == FAR_FUTURE_EPOCH
        and validator.effective_balance == MAX_EFFECTIVE_BALANCE
    )
```
It is possible to deposit any amount over
MIN_DEPOSIT_AMOUNT (currently 1 Ether) into the deposit contract. However, validators do not become eligible for activation until their effective balance is equal to
MAX_EFFECTIVE_BALANCE, which corresponds to an actual balance of 32 Ether or more.
This predicate is used during epoch processing to find validators that have acquired the minimum necessary balance, but have not yet been added to the queue for activation. These validators are then marked as eligible for activation by setting the
validator.activation_eligibility_epoch to the next epoch.
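The epoch-processing step described above can be sketched as follows. This is a hypothetical, trimmed-down illustration: the Validator stub holds only the two fields the predicate reads, balances are in Gwei, and MAX_EFFECTIVE_BALANCE matches the spec's value of 32 Ether.

```python
from dataclasses import dataclass

FAR_FUTURE_EPOCH = 2**64 - 1
MAX_EFFECTIVE_BALANCE = 32 * 10**9  # 32 Ether, denominated in Gwei

# Hypothetical stub with just the fields this predicate consults.
@dataclass
class Validator:
    effective_balance: int
    activation_eligibility_epoch: int = FAR_FUTURE_EPOCH

def is_eligible_for_activation_queue(validator: Validator) -> bool:
    return (validator.activation_eligibility_epoch == FAR_FUTURE_EPOCH
            and validator.effective_balance == MAX_EFFECTIVE_BALANCE)

# During epoch processing, newly funded validators are stamped with
# the next epoch as their activation eligibility epoch.
current_epoch = 100
validators = [Validator(16 * 10**9), Validator(32 * 10**9)]
for v in validators:
    if is_eligible_for_activation_queue(v):
        v.activation_eligibility_epoch = current_epoch + 1

# Only the fully funded validator is marked eligible (epoch 101);
# the under-funded one keeps FAR_FUTURE_EPOCH.
```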
```python
def is_eligible_for_activation(state: BeaconState, validator: Validator) -> bool:
    """
    Check if ``validator`` is eligible for activation.
    """
    return (
        # Placement in queue is finalized
        validator.activation_eligibility_epoch <= state.finalized_checkpoint.epoch
        # Has not yet been activated
        and validator.activation_epoch == FAR_FUTURE_EPOCH
    )
```
A validator for which
is_eligible_for_activation() returns true has had its
activation_eligibility_epoch set, but its
activation_epoch is not yet set.
To avoid any ambiguity or confusion on the validator side about its state, we wait until its activation eligibility epoch has been finalised before adding it to the activation queue by setting its
activation_epoch. Otherwise, it might at one point become active, and then the beacon chain could flip to a fork in which it is not active. This could happen if the latter fork had fewer blocks and had thus processed fewer deposits.
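A toy illustration of the finality condition, assuming hypothetical stubs for Validator and a BeaconState that carries only the finalised checkpoint epoch (flattened here to a plain finalized_epoch field rather than the spec's finalized_checkpoint.epoch):

```python
from dataclasses import dataclass

FAR_FUTURE_EPOCH = 2**64 - 1

# Hypothetical stubs, reduced to the fields this predicate reads.
@dataclass
class Validator:
    activation_eligibility_epoch: int = FAR_FUTURE_EPOCH
    activation_epoch: int = FAR_FUTURE_EPOCH

@dataclass
class BeaconState:
    finalized_epoch: int  # stands in for state.finalized_checkpoint.epoch

def is_eligible_for_activation(state: BeaconState, validator: Validator) -> bool:
    return (validator.activation_eligibility_epoch <= state.finalized_epoch
            and validator.activation_epoch == FAR_FUTURE_EPOCH)

state = BeaconState(finalized_epoch=100)
waiting = Validator(activation_eligibility_epoch=99)   # eligibility finalised: may be queued
too_new = Validator(activation_eligibility_epoch=101)  # eligibility not yet finalised
print(is_eligible_for_activation(state, waiting), is_eligible_for_activation(state, too_new))  # True False
```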
```python
def is_slashable_validator(validator: Validator, epoch: Epoch) -> bool:
    """
    Check if ``validator`` is slashable.
    """
    return (not validator.slashed) and (validator.activation_epoch <= epoch < validator.withdrawable_epoch)
```
An unslashed validator remains eligible to be slashed from when it becomes active right up until it becomes withdrawable. This is
MIN_VALIDATOR_WITHDRAWABILITY_DELAY epochs (around 27 hours) after it has exited from being a validator and ceased validation duties.
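A small sketch contrasting the active window with the longer slashable window, using a hypothetical Validator stub and the spec's 256-epoch MIN_VALIDATOR_WITHDRAWABILITY_DELAY for the exit-to-withdrawable gap:

```python
from dataclasses import dataclass

# Hypothetical stub with just the fields the two predicates read.
@dataclass
class Validator:
    slashed: bool
    activation_epoch: int
    exit_epoch: int
    withdrawable_epoch: int

def is_active_validator(v: Validator, epoch: int) -> bool:
    return v.activation_epoch <= epoch < v.exit_epoch

def is_slashable_validator(v: Validator, epoch: int) -> bool:
    return (not v.slashed) and (v.activation_epoch <= epoch < v.withdrawable_epoch)

# Exited at epoch 100; withdrawable 256 epochs later (MIN_VALIDATOR_WITHDRAWABILITY_DELAY).
v = Validator(slashed=False, activation_epoch=0, exit_epoch=100, withdrawable_epoch=356)

# At epoch 200 the validator is no longer active, yet still slashable.
print(is_active_validator(v, 200), is_slashable_validator(v, 200))  # False True
```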
```python
def is_slashable_attestation_data(data_1: AttestationData, data_2: AttestationData) -> bool:
    """
    Check if ``data_1`` and ``data_2`` are slashable according to Casper FFG rules.
    """
    return (
        # Double vote
        (data_1 != data_2 and data_1.target.epoch == data_2.target.epoch) or
        # Surround vote
        (data_1.source.epoch < data_2.source.epoch and data_2.target.epoch < data_1.target.epoch)
    )
```
There are two ways for validators to get slashed under Casper FFG:
- A double vote: voting more than once for the same target epoch, or
- A surround vote: the source–target interval of one attestation entirely contains the source–target interval of a second attestation from the same validator or validators. The reporting block proposer needs to take care to order the
IndexedAttestations within the
AttesterSlashing object so that the first set of votes surrounds the second. (The opposite ordering also describes a slashable offence, but is not checked for here.)
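Both conditions can be exercised with a toy example. This is a hypothetical sketch in which AttestationData is reduced to the source/target checkpoint epochs that the Casper FFG rules inspect, plus a block root so that two distinct votes for the same target compare unequal:

```python
from dataclasses import dataclass

# Hypothetical stubs: just enough structure for the FFG slashing rules.
@dataclass(frozen=True)
class Checkpoint:
    epoch: int

@dataclass(frozen=True)
class AttestationData:
    source: Checkpoint
    target: Checkpoint
    beacon_block_root: bytes = b""

def is_slashable_attestation_data(data_1: AttestationData, data_2: AttestationData) -> bool:
    return (
        # Double vote: different data, same target epoch
        (data_1 != data_2 and data_1.target.epoch == data_2.target.epoch)
        # Surround vote: data_1's source-target interval contains data_2's
        or (data_1.source.epoch < data_2.source.epoch
            and data_2.target.epoch < data_1.target.epoch)
    )

# Double vote: two different votes for target epoch 10.
a = AttestationData(Checkpoint(9), Checkpoint(10), b"A")
b = AttestationData(Checkpoint(9), Checkpoint(10), b"B")
# Surround vote: the interval (5, 12) strictly contains (6, 11).
c = AttestationData(Checkpoint(5), Checkpoint(12))
d = AttestationData(Checkpoint(6), Checkpoint(11))

print(is_slashable_attestation_data(a, b))  # True  (double vote)
print(is_slashable_attestation_data(c, d))  # True  (surround vote)
print(is_slashable_attestation_data(d, c))  # False (wrong argument order)
```

The last line demonstrates the ordering requirement on the proposer: the surrounding attestation must come first.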
```python
def is_valid_indexed_attestation(state: BeaconState, indexed_attestation: IndexedAttestation) -> bool:
    """
    Check if ``indexed_attestation`` is not empty, has sorted and unique indices and has a valid aggregate signature.
    """
    # Verify indices are sorted and unique
    indices = indexed_attestation.attesting_indices
    if len(indices) == 0 or not indices == sorted(set(indices)):
        return False
    # Verify aggregate signature
    pubkeys = [state.validators[i].pubkey for i in indices]
    domain = get_domain(state, DOMAIN_BEACON_ATTESTER, indexed_attestation.data.target.epoch)
    signing_root = compute_signing_root(indexed_attestation.data, domain)
    return bls.FastAggregateVerify(pubkeys, signing_root, indexed_attestation.signature)
```
An IndexedAttestation passes this validity test only if all of the following apply.
- There is at least one validator index present.
- The list of validators contains no duplicates (the Python
set function performs deduplication).
- The indices of the validators are sorted. (It is not clear to me why this is required. It's used in the duplicate check here, but that could just be replaced by checking the set size.)
- Its aggregated signature verifies against the aggregated public keys of the listed validators.
Verifying the signature uses the magic of aggregated BLS signatures. The indexed attestation contains a BLS signature that is supposed to be the combined individual signatures of each of the validators listed in the attestation. This is verified by passing it to
bls.FastAggregateVerify() along with the list of public keys from the same validators. The verification succeeds only if exactly the same set of validators signed the message (
signing_root) as appear in the list of public keys. Note that
get_domain() mixes in the fork version, so that attestations are not valid across forks.
No check is done here that the
attesting_indices (which are the global validator indices) are all members of the correct committee for this attestation. In
process_attestation() they must be, by construction. In
process_attester_slashing() it doesn't matter: any validator signing conflicting attestations is liable to be slashed.
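The index checks on their own are easy to demonstrate. The following is a hypothetical sketch with the BLS signature verification omitted (it needs a real BLS library); has_valid_indices is an illustrative helper name, not part of the spec:

```python
# Hypothetical helper isolating just the index checks from
# is_valid_indexed_attestation; signature verification is not shown.
def has_valid_indices(attesting_indices) -> bool:
    indices = list(attesting_indices)
    # Non-empty, and equal to its own sorted, deduplicated form.
    return len(indices) > 0 and indices == sorted(set(indices))

print(has_valid_indices([3, 7, 42]))  # True:  sorted, unique, non-empty
print(has_valid_indices([]))          # False: empty
print(has_valid_indices([7, 3]))      # False: not sorted
print(has_valid_indices([3, 3, 7]))   # False: duplicate index
```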
See also: IndexedAttestation, Attestation
```python
def is_valid_merkle_branch(leaf: Bytes32, branch: Sequence[Bytes32], depth: uint64, index: uint64, root: Root) -> bool:
    """
    Check if ``leaf`` at ``index`` verifies against the Merkle ``root`` and ``branch``.
    """
    value = leaf
    for i in range(depth):
        if index // (2**i) % 2:
            value = hash(branch[i] + value)
        else:
            value = hash(value + branch[i])
    return value == root
```
This is the classic algorithm for verifying a Merkle branch (also called a Merkle proof). Nodes are iteratively hashed as the tree is traversed from leaves to root. The bits of
index select whether we are the right or left child of our parent at each level. The result should match the given
root of the tree.
In this way we prove that we know that
leaf is the value at position
index in the list of leaves, and that we know the whole structure of the rest of the tree, as summarised in branch.
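A runnable sketch of the algorithm, using SHA-256 in place of the spec's hash function and a tiny depth-2 tree over four leaves (hash_pair is a hypothetical helper name):

```python
from hashlib import sha256

def hash_pair(a: bytes, b: bytes) -> bytes:
    # Stand-in for the spec's hash(); concatenate then SHA-256.
    return sha256(a + b).digest()

def is_valid_merkle_branch(leaf, branch, depth, index, root) -> bool:
    value = leaf
    for i in range(depth):
        if index // (2**i) % 2:
            # We are the right child at this level; sibling goes on the left.
            value = hash_pair(branch[i], value)
        else:
            value = hash_pair(value, branch[i])
    return value == root

# Build a depth-2 tree over four 32-byte leaves.
leaves = [bytes([i]) * 32 for i in range(4)]
l01 = hash_pair(leaves[0], leaves[1])
l23 = hash_pair(leaves[2], leaves[3])
root = hash_pair(l01, l23)

# Prove leaves[2] sits at index 2: its sibling leaves[3],
# then the neighbouring subtree root l01 at the next level up.
branch = [leaves[3], l01]
print(is_valid_merkle_branch(leaves[2], branch, 2, 2, root))  # True
```

Each level of the branch supplies the one sibling hash needed to climb towards the root, so a proof is only depth hashes long however many leaves the tree has.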
We use this function in
process_deposit() to check whether the deposit data we've received is correct or not. Based on the deposit data they have seen, Eth2 clients build a replica of the Merkle tree of deposits in the deposit contract. The proposer of the block that includes the deposit constructs the Merkle proof using its view of the deposit contract, and all other nodes use
is_valid_merkle_branch() to check that their view matches the proposer's. It is a consensus failure if there is a mismatch, perhaps due to one client considering a deposit valid while another considers it invalid for some reason.