Table of Contents
Intro
This book will contain code walkthroughs for the most popular L2s. Its main purpose is to act as the internal knowledge base for L2BEAT, but it can be used by anyone.
Arbitrum
The BoLD proof system
Table of Contents
- High-level overview
- The RollupUserLogic contract
  - stakeOnNewAssertion function
  - newStakeOnNewAssertion function
  - newStake function
  - confirmAssertion function
  - returnOldDeposit and returnOldDepositFor functions
  - withdrawStakerFunds function
  - addToDeposit function
  - reduceDeposit function
  - removeWhitelistAfterValidatorAfk function
  - removeWhitelistAfterFork function
- Fast withdrawals
- The EdgeChallengeManager contract
- [WIP] The OneStepProofEntry contract
High-level overview
Each pending assertion is backed by a single stake. A stake on an assertion also counts as a stake on all of its ancestors in the assertions tree. If an assertion has a child made by someone else, its stake can be moved elsewhere, since there is already some stake backing it. Implicitly, the stake is tracked as being on the latest assertion a staker is staked on, and the surrounding logic makes sure that a new assertion can only be created under the proper conditions. In other words, it is made impossible for one actor to be staked on multiple assertions at the same time. If the last assertion of a staker has a child or is confirmed, then the staker is considered "inactive". If conflicting assertions are created, then one stake amount will be moved to a "loser stake escrow", as the protocol guarantees that only one stake will eventually remain active and that the other will be slashed. The token used for staking is defined in the stakeToken onchain value.
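The "one stake per staker, implicitly on their latest assertion" rule can be illustrated with a toy model. This is a simplified sketch, not the actual contract logic; all names are hypothetical:

```python
class TinyRollup:
    """Toy model of BoLD stake accounting: one stake per staker,
    implicitly tracked on the latest assertion they staked on."""

    def __init__(self):
        self.children = {}        # assertion hash -> number of children
        self.latest_staked = {}   # staker -> latest assertion staked on
        self.confirmed = set()    # confirmed assertion hashes

    def stake_on_new_assertion(self, staker, parent, new_assertion):
        latest = self.latest_staked[staker]
        # The stake can advance only if the staker is staked on the parent,
        # or if their latest assertion already has a child backed by
        # someone else's stake.
        if latest != parent and self.children.get(latest, 0) == 0:
            raise ValueError("staker cannot move their stake")
        self.children[parent] = self.children.get(parent, 0) + 1
        self.latest_staked[staker] = new_assertion

    def is_inactive(self, staker):
        # A staker is inactive once their latest assertion is confirmed
        # or has at least one child.
        latest = self.latest_staked[staker]
        return latest in self.confirmed or self.children.get(latest, 0) > 0
```

In this model, as soon as someone else's assertion lands as a child of a staker's latest assertion, that staker becomes inactive and may move or withdraw their stake.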
The RollupUserLogic contract
Calls to the Rollup proxy are forwarded to this contract if the msg.sender
is not the designated proxy admin.
stakeOnNewAssertion function
The entry point to propose new state roots, given that the staker is already staked on some other assertion on the same branch, is the stakeOnNewAssertion
function in the RollupProxy
contract, more specifically in the RollupUserLogic
implementation contract.
function stakeOnNewAssertion(
AssertionInputs calldata assertion,
bytes32 expectedAssertionHash
) public onlyValidator(msg.sender) whenNotPaused
The function is gated by the onlyValidator modifier, which checks whether the validator whitelist is disabled or whether the caller is whitelisted. Use of the whitelist is recommended for all chains that do not secure very large amounts of value. Realistically, only Arbitrum One will operate without a whitelist.
It is then checked that the caller is staked by querying the _stakerMap
mapping, which maps from addresses to Staker
struct, defined as:
struct Staker {
uint256 amountStaked;
bytes32 latestStakedAssertion;
uint64 index;
bool isStaked;
address withdrawalAddress;
}
In particular, the isStaked field is checked to be true.
The AssertionInputs
struct is defined as:
struct AssertionInputs {
// Additional data used to validate the before state
BeforeStateData beforeStateData;
AssertionState beforeState;
AssertionState afterState;
}
The BeforeStateData
struct is defined as:
struct BeforeStateData {
// The assertion hash of the prev of the beforeState(prev)
bytes32 prevPrevAssertionHash;
// The sequencer inbox accumulator asserted by the beforeState(prev)
bytes32 sequencerBatchAcc;
// below are the components of config hash
ConfigData configData;
}
The ConfigData
struct is defined as:
struct ConfigData {
bytes32 wasmModuleRoot;
uint256 requiredStake;
address challengeManager;
uint64 confirmPeriodBlocks;
uint64 nextInboxPosition;
}
It is then verified that the amountStaked is at least the required amount. This is checked against the user-supplied requiredStake in the configData of the beforeStateData. The correspondence of the user-provided data will later be checked against the one already stored onchain.
The AssertionState
struct is defined as:
struct AssertionState {
GlobalState globalState;
MachineStatus machineStatus;
bytes32 endHistoryRoot;
}
An assertion hash (as in the expectedAssertionHash
param) is calculated by calling the assertionHash
function of the RollupLib
library, which takes as input the previous assertion hash, the current state hash and the current sequencer inbox accumulator. In the case of the beforeStateData
, the previous assertion hash is the prevPrevAssertionHash
, the current state hash is the beforeState
hash and the current sequencer inbox accumulator is the sequencerBatchAcc
of the beforeStateData
. On a high level, this corresponds to hashing the previous state with the current state and the inputs leading from the previous to the current state.
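Conceptually, the assertion hash chains the previous assertion hash, the state hash, and the inbox accumulator. A minimal sketch of this idea follows, using sha3-256 from the standard library as a stand-in for Solidity's keccak256 (the real RollupLib hashes abi-encoded values with keccak256; the exact encoding here is illustrative only):

```python
import hashlib

def assertion_hash(prev_assertion_hash: bytes, state_hash: bytes,
                   inbox_acc: bytes) -> bytes:
    # Stand-in for RollupLib.assertionHash: commit to the previous
    # assertion, the current state, and the inputs leading to it.
    return hashlib.sha3_256(prev_assertion_hash + state_hash + inbox_acc).digest()
```

For the beforeStateData, the inputs would be the prevPrevAssertionHash, the beforeState hash, and the sequencerBatchAcc.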
The _assertions
mapping maps from assertion hashes to AssertionNode
struct, which are defined as:
struct AssertionNode {
// This value starts at zero and is set to a value when the first child is created. After that it is constant until the assertion is destroyed or the owner destroys pending assertions
uint64 firstChildBlock;
// This value starts at zero and is set to a value when the second child is created. After that it is constant until the assertion is destroyed or the owner destroys pending assertions
uint64 secondChildBlock;
// The block number when this assertion was created
uint64 createdAtBlock;
// True if this assertion is the first child of its prev
bool isFirstChild;
// Status of the Assertion
AssertionStatus status;
// A hash of the context available at the time of this assertions creation. It should contain information that is not specific
// to this assertion, but instead to the environment at the time of creation. This is necessary to store on the assertion
// as this environment can change and we need to know what it was like at the time this assertion was created. An example
// of this is the wasm module root which determines the state transition function on the L2. If the wasm module root
// changes we need to know that previous assertions were made under a different root, so that we can understand that they
// were valid at the time. So when resolving a challenge by one step, the edge challenge manager finds the wasm module root
// that was recorded on the prev of the assertions being disputed and uses it to resolve the one step proof.
bytes32 configHash;
}
The function will check that such previous assertion hash already exists in the _assertions
mapping by verifying that the status
is different than NoAssertion
. The possible statuses are NoAssertion
, Pending
or Confirmed
.
To effectively move their stake, the function then checks that the last assertion the msg.sender is staked on is the previous assertion hash claimed during this call, or that the last assertion they are staked on has at least one child (checked via the firstChildBlock field), meaning that someone else has decided to back the claim with another assertion.
Before creating the new assertion, it is made sure that the config data of the claimed previous state matches the one that is already stored in the _assertions
mapping. This is necessary because the assertion hashes do not contain the config data and the previous check does not cover this case. It is then checked that the final machine status is either FINISHED
or ERRORED
as a sanity check [^2]. The possible machine statuses are RUNNING
, FINISHED
or ERRORED
. An ERRORED
state is considered valid because it proves that something went wrong during execution and governance has to intervene to resolve the issue.
Then, the correspondence between the values in assertion
and the previous assertion hash is again checked, but this check was confirmed to be redundant as the previous assertion hash is already calculated from the assertion
values.
The beforeState
's machineStatus
must be FINISHED
as it is not possible to advance from an ERRORED
state.
The GlobalState
struct is defined as:
struct GlobalState {
bytes32[2] bytes32Vals;
uint64[2] u64Vals;
}
where u64Vals[0] represents an inbox position, u64Vals[1] represents a position within a message, bytes32Vals[0] represents a block hash, and bytes32Vals[1] represents a send root. It is checked that the position of the afterState is greater than the position of the beforeState, where positions are first compared by inbox position and, if equal, by position in message, to verify that the claim processes at least some new messages. It is then verified that the beforeStateData's nextInboxPosition is greater than or equal to the afterState's inbox position. The nextInboxPosition can be seen as a "target" for the next assertion to process messages up to. If the current assertion didn't manage to process all messages up to the target, it is considered an "overflow" assertion. It is also checked that the current assertion doesn't claim to process more messages than currently posted by the sequencer.
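The position comparison on GlobalState values is lexicographic over (inbox position, position in message). A minimal sketch of the progress check (hypothetical helper, mirroring the rule described above):

```python
def makes_progress(before: tuple, after: tuple) -> bool:
    """Each state is (inbox_position, position_in_message), i.e. the
    two u64Vals of a GlobalState. The afterState must process at least
    one new message: a strictly greater inbox position, or the same
    inbox position with a strictly greater position in message."""
    return after > before  # tuples compare lexicographically
```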
The nextInboxPosition
is prepared for the next assertion to be either the current sequencer message count (as per bridge.sequencerMessageCount()
), or, if the current assertion already processed all messages, to the current sequencer message count plus one. In this way, all assertions are forced to process at least one message, and in this case, the next assertion will process exactly one message before updating the nextInboxPosition
again. The afterInboxPosition
is then checked to be non-zero. The newAssertionHash
is calculated given the previousAssertionHash
already checked, the afterState
and the sequencerBatchAcc
calculated given the afterState
's inbox position in its globalState
. It is checked that this calculated hash is equal to the expectedAssertionHash, and that it doesn't already exist in the _assertions mapping.
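The preparation of the next target described above can be sketched as follows (hypothetical helper name; the rule is the one stated in the text):

```python
def next_inbox_position(after_inbox_position: int,
                        sequencer_message_count: int) -> int:
    """Target for the next assertion. Every assertion must be able to
    process at least one message: if the current assertion already
    caught up with the sequencer, target one message past the current
    count; otherwise target the current count."""
    if after_inbox_position == sequencer_message_count:
        return sequencer_message_count + 1
    return sequencer_message_count
```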
The new assertion is then created using the AssertionNodeLib.createAssertion
function, which properly constructs the AssertionNode
struct. The isFirstChild
field is set to true
only if the prevAssertion
's firstChildBlock
is zero, meaning that there is none. The assertion status will be Pending
, the createdAtBlock
at the current block number, and the configHash
will contain the current onchain wasm module root, the current onchain base stake, the current onchain challenge period length (confirmPeriodBlocks
), the current onchain challenge manager contract reference and the nextInboxPosition
as previously calculated. It is then saved in the previous assertion that a child has been created, and that the _assertions
mapping is updated with the new assertion hash.
The _stakerMap
is then updated to store the new latest assertion. If the assertion is not an overflow assertion, i.e. it processed all messages up to the target set by the previous assertion, a minimumAssertionPeriod gets enforced, meaning that validators cannot arbitrarily post assertions at any time and of any size.
If the assertion is not a first child, then the stake already present in this contract is transferred to the loserStakeEscrow contract, as only one stake needs to remain in this contract ready to be refunded.
newStakeOnNewAssertion function
This function is used to create a new assertion and stake on it if the staker is not already staked on any assertion on the same branch.
function newStakeOnNewAssertion(
uint256 tokenAmount,
AssertionInputs calldata assertion,
bytes32 expectedAssertionHash,
address _withdrawalAddress
) public
It first checks that the validator is in the whitelist or that the whitelist is disabled, and that it is not already staked. Both the stakerList
and the stakerMap
mappings are updated with the new staker information. In particular, the latest confirmed assertion is used as the latest staked assertion. Any pending assertion trivially sits on the same branch as this one. After this, the function flow follows the same as the stakeOnNewAssertion
function. Finally, the tokens are transferred from the staker to the contract.
An alternative function signature can be found, where the msg.sender
is passed as the withdrawal address:
function newStakeOnNewAssertion(
uint256 tokenAmount,
AssertionInputs calldata assertion,
bytes32 expectedAssertionHash
) external
newStake function
This function is used to join the staker set without adding a new assertion.
function newStake(
uint256 tokenAmount,
address _withdrawalAddress
) external whenNotPaused
As above, under the hood, the latest confirmed assertion is used as the latest staked assertion for this staker. The funds are then transferred from the staker to the contract.
confirmAssertion function
The function is used to confirm an assertion, making withdrawals and, in general, L2-to-L1 messages executable on L1.
function confirmAssertion(
bytes32 assertionHash,
bytes32 prevAssertionHash,
AssertionState calldata confirmState,
bytes32 winningEdgeId,
ConfigData calldata prevConfig,
bytes32 inboxAcc
) external onlyValidator(msg.sender) whenNotPaused
It is first checked that the challenge period has passed by comparing the current block time, the createdAtBlock
value of the assertion to be confirmed and the confirmPeriodBlocks
of the config of the previous assertion. The previous assertion must be the latest confirmed assertion, meaning that assertions must be confirmed in order. It is checked whether the previous assertion has only one child or not. If not, it means that a challenge took place, so it is verified that the assertion to be confirmed is the winner. To assert this, a winningEdgeId
is provided to fetch an edge from the challengeManager
contract, specified again in the config of the previous assertion.
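The timing and ordering conditions can be sketched as follows (hypothetical helper; simplified from the checks described above):

```python
def can_start_confirmation(current_block: int, created_at_block: int,
                           confirm_period_blocks: int,
                           prev_assertion: str, latest_confirmed: str) -> bool:
    """The challenge period must have elapsed since the assertion's
    creation, and assertions must be confirmed in order: the previous
    assertion has to be the latest confirmed one."""
    period_passed = current_block >= created_at_block + confirm_period_blocks
    in_order = prev_assertion == latest_confirmed
    return period_passed and in_order
```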
The ChallengeEdge
struct is defined as:
struct ChallengeEdge {
/// @notice The origin id is a link from the edge to an edge or assertion at a lower level.
/// Intuitively all edges with the same origin id agree on the information committed to in the origin id
/// For a SmallStep edge the origin id is the 'mutual' id of the length one BigStep edge being claimed by the zero layer ancestors of this edge
/// For a BigStep edge the origin id is the 'mutual' id of the length one Block edge being claimed by the zero layer ancestors of this edge
/// For a Block edge the origin id is the assertion hash of the assertion that is the root of the challenge - all edges in this challenge agree
/// that that assertion hash is valid.
/// The purpose of the origin id is to ensure that only edges that agree on a common start position
/// are being compared against one another.
bytes32 originId;
/// @notice A root of all the states in the history up to the startHeight
bytes32 startHistoryRoot;
/// @notice The height of the start history root
uint256 startHeight;
/// @notice A root of all the states in the history up to the endHeight. Since endHeight > startHeight, the startHistoryRoot must
/// commit to a prefix of the states committed to by the endHistoryRoot
bytes32 endHistoryRoot;
/// @notice The height of the end history root
uint256 endHeight;
/// @notice Edges can be bisected into two children. If this edge has been bisected the id of the
/// lower child is populated here, until that time this value is 0. The lower child has startHistoryRoot and startHeight
/// equal to this edge, but endHistoryRoot and endHeight equal to some prefix of the endHistoryRoot of this edge
bytes32 lowerChildId;
/// @notice Edges can be bisected into two children. If this edge has been bisected the id of the
/// upper child is populated here, until that time this value is 0. The upper child has startHistoryRoot and startHeight
/// equal to some prefix of the endHistoryRoot of this edge, and endHistoryRoot and endHeight equal to this edge
bytes32 upperChildId;
/// @notice The edge or assertion in the upper level that this edge claims to be true.
/// Only populated on zero layer edges
bytes32 claimId;
/// @notice The entity that supplied a mini-stake accompanying this edge
/// Only populated on zero layer edges
address staker;
/// @notice The block number when this edge was created
uint64 createdAtBlock;
/// @notice The block number at which this edge was confirmed
/// Zero if not confirmed
uint64 confirmedAtBlock;
/// @notice Current status of this edge. All edges are created Pending, and may be updated to Confirmed
/// Once Confirmed they cannot transition back to Pending
EdgeStatus status;
/// @notice The level of this edge.
/// Level 0 is type Block
/// Last level (defined by NUM_BIGSTEP_LEVEL + 1) is type SmallStep
/// All levels in between are of type BigStep
uint8 level;
/// @notice Set to true when the staker has been refunded. Can only be set to true if the status is Confirmed
/// and the staker is non zero.
bool refunded;
/// @notice TODO
uint64 totalTimeUnrivaledCache;
}
where EdgeStatus
can either be Pending
or Confirmed
.
In particular, the claimId
is checked to be the assertion hash to be confirmed, the status
has to be Confirmed
and the confirmedAtBlock
value should not be zero. On top of the challenge period, it is required that the confirmedAtBlock
value is at least challengeGracePeriodBlocks
old, with the purpose of being able to recover in case an invalid assertion is confirmed because of a bug.
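The conditions on the winning edge can be summarized in a sketch (dictionary keys mirror the ChallengeEdge fields defined above; the helper name is hypothetical):

```python
def is_valid_winning_edge(edge: dict, assertion_hash: str,
                          current_block: int,
                          grace_period_blocks: int) -> bool:
    """The winning edge must claim the assertion being confirmed, be
    Confirmed itself, and have been confirmed at least
    challengeGracePeriodBlocks ago (to allow recovery if an invalid
    assertion wins because of a bug)."""
    return (
        edge["claimId"] == assertion_hash
        and edge["status"] == "Confirmed"
        and edge["confirmedAtBlock"] != 0
        and current_block >= edge["confirmedAtBlock"] + grace_period_blocks
    )
```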
The current assertion is checked to be Pending
, as opposed to NoAssertion
or Confirmed
. An external call to the Outbox is made by passing the sendRoot
and blockHash
saved in the current assertion's globalState
. Finally, the _latestConfirmed
assertion is updated with the current one and the status is updated to Confirmed.
returnOldDeposit and returnOldDepositFor functions
These functions are used to initiate a refund of the staker's deposit when their latest assertion either has a child or is confirmed.
function returnOldDeposit() external override onlyValidator(msg.sender) whenNotPaused
function returnOldDepositFor(
address stakerAddress
) external override onlyValidator(stakerAddress) whenNotPaused
In the first case, it is checked that the msg.sender is the validator itself, while in the second case that the sender is the designated withdrawal address for the staker. Then it is verified that the staker is actively staked, and that it is "inactive". A staker is defined as inactive when their latest assertion is either confirmed or has at least one child, meaning that there is some other stake backing it.
At this point the _withdrawableFunds
mapping value is increased by the staker's deposit for its withdrawal address, as well as the totalWithdrawableFunds
value. The staker is then deleted from the _stakerList
and _stakerMap
mappings. The funds are not actually transferred at this point.
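The bookkeeping step can be sketched like this (hypothetical helper; the two-phase withdrawal, bookkeeping now and transfer later, mirrors the description above):

```python
def return_old_deposit(staker_map: dict, withdrawable_funds: dict,
                       staker: str) -> None:
    """Credit the staker's deposit to their withdrawal address and
    remove them from the staker set. No tokens move yet; the actual
    transfer happens later in withdrawStakerFunds."""
    info = staker_map.pop(staker)
    addr = info["withdrawalAddress"]
    withdrawable_funds[addr] = (
        withdrawable_funds.get(addr, 0) + info["amountStaked"]
    )
```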
withdrawStakerFunds function
This function is used to finalize the withdrawal of uncommitted funds from this contract to the msg.sender
.
function withdrawStakerFunds() external override whenNotPaused returns (uint256)
This is done by checking the _withdrawableFunds
mapping, which maps from addresses to uint256
amounts. The mapping is then set to zero, and the totalWithdrawableFunds
value is updated accordingly. Finally, the funds are transferred to the msg.sender
.
addToDeposit function
This function is used to add funds to the staker's deposit.
function addToDeposit(
address stakerAddress,
address expectedWithdrawalAddress,
uint256 tokenAmount
) external whenNotPaused
The staker is supposed to be already staked when calling this function. In particular, the amountStaked
is increased by the amount sent.
reduceDeposit function
This function is used to reduce the staker's deposit.
function reduceDeposit(
uint256 target
) external onlyValidator(msg.sender) whenNotPaused
The staker is required to be inactive. The difference between the current deposit and the target
is then added to the amount of withdrawable funds.
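A minimal sketch of this bookkeeping (hypothetical helper returning the new deposit and the newly withdrawable amount):

```python
def reduce_deposit(amount_staked: int, target: int) -> tuple:
    """Lower the deposit to `target`; the difference becomes
    withdrawable. Returns (new_deposit, newly_withdrawable)."""
    if target > amount_staked:
        raise ValueError("target exceeds current deposit")
    return target, amount_staked - target
```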
removeWhitelistAfterValidatorAfk function
If a whitelist is enabled, the system allows for its removal if all validators are inactive for a certain amount of time. The function checks whether the latest confirmed assertion, or its first child if present, is older than validatorAfkBlocks
. If the validatorAfkBlocks
onchain value is set to 0, this mechanism is disabled.
function removeWhitelistAfterValidatorAfk() external
If the validatorAfkBlocks
is set to be greater than the challenge period (or more precisely, two times the challenge period in the worst case), then the child will be confirmed (if valid) before being used for the calculation. The first child check is likely used in case the validatorAfkBlocks
is set to be smaller than the challenge period.
It's important to note that this function is quite different from its pre-BoLD version.
There is an edge case in case the minimumAssertionPeriod
is set lower than the difference between the challenge period and the validatorAfkBlocks
, where the whitelist gets removed no matter what.
Under standard deployments, the validatorAfkBlocks
value is set to be around twice the maximum delay caused by the challenge protocol, which is two times the challenge period.
removeWhitelistAfterFork function
This function is used to remove the whitelist in case the chain id of the underlying chain changes.
function removeWhitelistAfterFork() external
It simply checks that the deploymentTimeChainId, which is stored onchain, no longer matches the block.chainid value.
Fast withdrawals
Fast withdrawals is a feature introduced in nitro-contracts v2.1.0 for AnyTrust chains. It allows to specify a anyTrustFastConfirmer
address that can propose and confirm assertions without waiting for the challenge period to pass.
fastConfirmAssertion and fastConfirmNewAssertion functions
To immediately confirm an already proposed assertion, the fastConfirmAssertion
function is used in the RollupUserLogic
contract:
function fastConfirmAssertion(
bytes32 assertionHash,
bytes32 parentAssertionHash,
AssertionState calldata confirmState,
bytes32 inboxAcc
) public whenNotPaused
The function checks that the msg.sender is the anyTrustFastConfirmer address and that the assertion is pending. The assertion is then confirmed as in the confirmAssertion function.
The anyTrustFastConfirmer
is also allowed to propose new assertions without staker checks, and also immediately confirm such assertions. To do so, the fastConfirmNewAssertion
function is used:
function fastConfirmNewAssertion(
AssertionInputs calldata assertion,
bytes32 expectedAssertionHash
) external whenNotPaused
Both functions, in practice, act very similarly to the admin-gated forceCreateAssertion and forceConfirmAssertion functions in the RollupAdminLogic contract; see Admin operations for more details.
The EdgeChallengeManager contract
This contract implements the challenge protocol for the BoLD proof system.
createLayerZeroEdge function
This function is used to initiate a challenge between sibling assertions. All "layer zero" edges have a starting "height" of zero and a starting "length" of one.
function createLayerZeroEdge(
CreateEdgeArgs calldata args
) external returns (bytes32)
The CreateEdgeArgs
struct is defined as:
struct CreateEdgeArgs {
/// @notice The level of edge to be created. Challenges are decomposed into multiple levels.
/// The first (level 0) being of type Block, followed by n (set by NUM_BIGSTEP_LEVEL) levels of type BigStep, and finally
/// followed by a single level of type SmallStep. Each level is bisected until an edge
/// of length one is reached before proceeding to the next level. The first edge in each level (the layer zero edge)
/// makes a claim about an assertion or assertion in the lower level.
/// Finally in the last level, a SmallStep edge is added that claims a lower level length one BigStep edge, and these
/// SmallStep edges are bisected until they reach length one. A length one small step edge
/// can then be directly executed using a one-step proof.
uint8 level;
/// @notice The end history root of the edge to be created
bytes32 endHistoryRoot;
/// @notice The end height of the edge to be created.
/// @dev End height is deterministic for different levels but supplying it here gives the
/// caller a bit of extra security that they are supplying data for the correct level of edge
uint256 endHeight;
/// @notice The edge, or assertion, that is being claimed correct by the newly created edge.
bytes32 claimId;
/// @notice Proof that the start history root commits to a prefix of the states that
/// end history root commits to
bytes prefixProof;
/// @notice Edge type specific data
/// For Block type edges this is the abi encoding of:
/// bytes32[]: Inclusion proof - proof to show that the end state is the last state in the end history root
/// AssertionStateData: the before state of the edge
/// AssertionStateData: the after state of the edge
/// bytes32 predecessorId: id of the prev assertion
/// bytes32 inboxAcc: the inbox accumulator of the assertion
/// For BigStep and SmallStep edges this is the abi encoding of:
/// bytes32: Start state - first state the edge commits to
/// bytes32: End state - last state the edge commits to
/// bytes32[]: Claim start inclusion proof - proof to show the start state is the first state in the claim edge
/// bytes32[]: Claim end inclusion proof - proof to show the end state is the last state in the claim edge
/// bytes32[]: Inclusion proof - proof to show that the end state is the last state in the end history root
bytes proof;
}
In practice, the number of levels is usually set to be 3, with NUM_BIGSTEP_LEVEL
set to 1. The claimId
corresponds to an assertion hash.
If a whitelist is enabled in the system being validated, then the msg.sender
must be whitelisted. The whitelist is referenced through the assertionChain
onchain value. The type of the edge is fetched based on the level: if 0
then the type is Block
, if 1
then the type is BigStep
, if 2
then the type is SmallStep
. This section will first discuss layer zero edges of type Block
.
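The level-to-type mapping described above can be sketched as follows (assuming NUM_BIGSTEP_LEVEL = 1, as in common deployments; the helper name is hypothetical):

```python
def edge_type(level: int, num_bigstep_levels: int = 1) -> str:
    """Level 0 is Block, the last level (num_bigstep_levels + 1) is
    SmallStep, and everything in between is BigStep."""
    if level == 0:
        return "Block"
    if level == num_bigstep_levels + 1:
        return "SmallStep"
    if 0 < level <= num_bigstep_levels:
        return "BigStep"
    raise ValueError("invalid level")
```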
Block-level layer zero edges
If the edge is of type Block
, then the proof
field is decoded to fetch two AssertionStateData
structs, one for the predecessorStateData
and the other for the claimStateData
.
The AssertionStateData
struct is defined as:
struct AssertionStateData {
/// @notice An execution state
AssertionState assertionState;
/// @notice assertion Hash of the prev assertion
bytes32 prevAssertionHash;
/// @notice Inbox accumulator of the assertion
bytes32 inboxAcc;
}
It is checked that the claimStateData
produces the same hash as the claimId
, and that the predecessorStateData
produces the same hash as the claimStateData
's prevAssertionHash
. It is then checked that the provided endHistoryRoot matches the one in the claimStateData's assertionState.
The claimStateData
's previousAssertionHash
should be seen as a link to the information rivals agree on, which corresponds to the predecessorStateData
.
An AssertionReferenceData
struct is created, which is defined as:
struct AssertionReferenceData {
/// @notice The id of the assertion - will be used in a sanity check
bytes32 assertionHash;
/// @notice The predecessor of the assertion
bytes32 predecessorId;
/// @notice Is the assertion pending
bool isPending;
/// @notice Does the assertion have a sibling
bool hasSibling;
/// @notice The execution state of the predecessor assertion
AssertionState startState;
/// @notice The execution state of the assertion being claimed
AssertionState endState;
}
which is instantiated in the following way:
ard = AssertionReferenceData(
args.claimId,
claimStateData.prevAssertionHash,
assertionChain.isPending(args.claimId),
assertionChain.getSecondChildCreationBlock(claimStateData.prevAssertionHash) > 0,
predecessorStateData.assertionState,
claimStateData.assertionState
)
The assertion must be Pending
for its edge to be created and it has to have a rival, i.e. a sibling. It is checked that both the machineStatus
of the startState
and endState
is not RUNNING
.
The proof
is then decoded to fetch an inclusionProof
. Hashes of both the startState
and endState
are computed. The startHistoryRoot
is computed just by appending the startState
hash to an empty merkle tree, as it is the initial state of a layer zero node. It is checked that the endState
hash is included in the endHistoryRoot
using the inclusionProof
. The position of such hash is saved in the LAYERZERO_BLOCKEDGE_HEIGHT
constant. Then it is checked that the previously computed startHistoryRoot
is a prefix of endHistoryRoot
by using the prefixProof
.
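The idea behind the history roots can be illustrated with a toy commitment scheme. The real contracts use a specialized merkle accumulator (MerkleTreeLib) and keccak256; the sketch below uses sha3-256 and a plain binary tree, and only illustrates committing to a list of state hashes and checking prefix consistency:

```python
import hashlib

def h(data: bytes) -> bytes:
    # sha3-256 as a stand-in for keccak256
    return hashlib.sha3_256(data).digest()

def history_root(state_hashes: list) -> bytes:
    """Root of a simplified binary merkle tree over state hashes.
    A single-leaf tree (the layer zero start history root) is just
    the leaf itself."""
    level = list(state_hashes)
    if not level:
        raise ValueError("empty history")
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # pad odd levels by duplication
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def commits_to_prefix(prefix_states: list, full_states: list) -> bool:
    """The start history root must commit to a prefix of the states
    committed to by the end history root."""
    return full_states[: len(prefix_states)] == list(prefix_states)
```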
Finally, a ChallengeEdge
is created using the endState
's prevAssertionHash
as the originId
, the startHistoryRoot
computed before, a startHeight
of zero, the endHistoryRoot
provided, the proper endHeight
, the claimId
, the msg.sender
as the staker, the appropriate level, the status
is set to Pending
, the createdAtBlock
is set to the current block number, and the confirmedAtBlock
, lowerChildId
, upperChildId
fields are initialized to zero and the refunded
field is set to false
.
If the whitelist is enabled, then it is checked that a single party cannot create two layer zero edges that rival each other. If the whitelist is disabled, this check is not effective, as an attacker can simply use a different address.
The edge is then added to the onchain EdgeStore after it is checked that it doesn't already exist. The mutualId is calculated, which identifies all rival edges. If there is no rival, the edge is saved as UNRIVALED (representing a dummy edge id) in the firstRivals mapping; otherwise, the current edge is saved into it.
Finally, a stake is requested to be sent to this address if there are no rivals, or to the excessStakeReceiver otherwise, which corresponds to the loserStakeEscrow contract. It is important to note that for the Block level the stake is set to zero, while for the other levels it is set to be some fraction of the bond needed to propose an assertion.
Non-block-level layer zero edges
If the edge is not of type Block, it means that a claim at a deeper level is being made, and it must link to an edge at a lower level (with Block being the lowest one). It is possible to create a non-Block level layer zero edge only if the lower level edge is of length one and is rivaled. It is checked that such an edge is also Pending and that its level is exactly one lower than the one being proposed.
The proof is then decoded in the following manner:
(
bytes32 startState,
bytes32 endState,
bytes32[] memory claimStartInclusionProof,
bytes32[] memory claimEndInclusionProof,
bytes32[] memory edgeInclusionProof
) = abi.decode(args.proof, (bytes32, bytes32, bytes32[], bytes32[], bytes32[]));
It is verified that the startState
is part of the startHistoryRoot
of the lower level edge and that the endState
is part of the endHistoryRoot
of the lower level edge, so that the current edge can be considered a more fine-grained version of the lower level edge. It's important to note that it is still possible to propose an invalid higher-level edge for a valid lower-level edge, so it must be possible to propose multiple higher-level edges for the same lower-level edge.
The rest of the checks follow the same path as for Block level edges, starting from the creation of the startHistoryRoot as a length one merkle tree, followed by the check that the endState is included in the endHistoryRoot using the edgeInclusionProof, and so on.
bisectEdge function
This function is used to bisect an edge into two children to break the dispute down into smaller steps. No new stake is required, as any new edge is checked against the history root of the parent edge.
function bisectEdge(
bytes32 edgeId,
bytes32 bisectionHistoryRoot,
bytes calldata prefixProof
) external returns (bytes32, bytes32)
It is checked that the edge being bisected is still Pending
and that it is rivaled. It is then verified that the bisectionHistoryRoot
is a prefix of the endHistoryRoot
of the edge being bisected.
Then both the lower and upper children are created, using the startHistoryRoot
and bisectionHistoryRoot
for the lower child, and bisectionHistoryRoot
and endHistoryRoot
root for the upper child. The children are then saved for the parent edge under the lowerChildId
and upperChildId
fields.
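The shape of a bisection can be sketched as follows (dictionary keys mirror the ChallengeEdge fields; the bisection height itself is chosen by the protocol and is taken as an input here):

```python
def bisect(edge: dict, bisection_root: bytes, bisection_height: int) -> tuple:
    """Split an edge into a lower child sharing its start and an upper
    child sharing its end; both children meet at the bisection point."""
    lower = {
        "startHistoryRoot": edge["startHistoryRoot"],
        "startHeight": edge["startHeight"],
        "endHistoryRoot": bisection_root,
        "endHeight": bisection_height,
    }
    upper = {
        "startHistoryRoot": bisection_root,
        "startHeight": bisection_height,
        "endHistoryRoot": edge["endHistoryRoot"],
        "endHeight": edge["endHeight"],
    }
    return lower, upper
```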
confirmEdgeByOneStepProof function
This function is used to confirm an edge of length one with a one-step proof.
function confirmEdgeByOneStepProof(
bytes32 edgeId,
OneStepData calldata oneStepData,
ConfigData calldata prevConfig,
bytes32[] calldata beforeHistoryInclusionProof,
bytes32[] calldata afterHistoryInclusionProof
) public
The function builds an ExecutionContext
struct, which is defined as:
struct ExecutionContext {
uint256 maxInboxMessagesRead;
IBridge bridge;
bytes32 initialWasmModuleRoot;
}
where the maxInboxMessagesRead
is filled with the nextInboxPosition
of the config of the previous assertion, the bridge
reference is taken from assertionChain
, and the initialWasmModuleRoot
is again taken from the config of the previous assertion. It is checked that the edge exists, that its type is SmallStep
, and that its length is one.
Then the appropriate data to pass to the oneStepProofEntry
contract for the onchain one step execution is prepared. In particular, the machine step corresponding to the start height of this edge is computed. Machine steps reset to zero with new blocks, so there's no need to fetch the corresponding Block
level edge. The machine step of a SmallStep
edge corresponds to its startHeight
plus the startHeight
of its BigStep
edge. Previous level edges are fetched through the originId
field stored in each edge and the firstRivals
mapping. It is necessary to go through the firstRivals
mapping as the originId
stores a mutual id of the edge and not an edge id, which is needed to fetch the startHeight
.
It is verified that the beforeHash
inside oneStepData
is included in the startHistoryRoot
at position machineStep
. The OneStepData
struct is defined as:
struct OneStepData {
/// @notice The hash of the state that's being executed from
bytes32 beforeHash;
/// @notice Proof data to accompany the execution context
bytes proof;
}
The oneStepProofEntry.proveOneStep
function is then called passing the execution context, the machine step, the beforeHash
and the proof
to calculate the afterHash
. It is then checked that the afterHash
is included in the endHistoryRoot
at position machineStep + 1
.
Finally, the edge status is updated to Confirmed
, and the confirmedAtBlock
is set to the current block number. Moreover, it is checked that no other rival is already confirmed through the confirmedRivals
mapping inside the store
, and if not the edge is saved there under its mutual id.
confirmEdgeByTime
function
This function is used to confirm an edge when enough time has passed, i.e. one challenge period on the player's clock.
function confirmEdgeByTime(bytes32 edgeId, AssertionStateData calldata claimStateData) public
Only layer zero edges can be confirmed by time.
If the edge is block-level and the claim is the first child of its predecessor, then the time between its assertion and the second child's assertion is counted towards this edge. If this were not done, the timer wouldn't start counting when the assertion is created, but only once it is challenged, which is absurd.
If the edge is unrivaled, then the time between the current block number and its creation is counted. If the edge is rivaled, and it was created before the rival, then the time between the rival's creation and this edge's creation is counted. If the edge is rivaled and it was created after the rival, then no time is counted.
If the edge has been bisected, i.e. it has children, then the minimum unrivaled time among its children is counted. The rationale is that if a child is correct but the parent is not, it would be incorrect to count the unrivaled time of the correct child towards the parent. If the honest party acts as fast as possible, then an incorrect claim's unrivaled time would always be close to zero. If an edge is confirmed by a one-step proof, then its unrivaled time is set to infinity (in practice type(uint64).max
).
Finally, if the total time unrivaled is greater than the challenge period (expressed with confirmationThresholdBlock
), then the edge is confirmed. Note that this value is a different variable compared to the confirmPeriodBlocks
in the RollupProxy
contract, which determines when an assertion can be confirmed if not challenged.
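The timer rules above can be condensed into a small Python model. This is an illustrative reconstruction, not the contract code: all field names are invented, and the treatment of a bisected edge's own time as *added* to the minimum of its children's timers is an assumption based on the description of the timer cache functions below.

```python
from dataclasses import dataclass, field
from typing import List, Optional

INFINITY = 2**64 - 1  # stands in for type(uint64).max

@dataclass
class Edge:
    created_at: int                       # block at which the edge was created
    first_rival_at: Optional[int] = None  # block of the earliest rival, if any
    children: List["Edge"] = field(default_factory=list)
    confirmed_by_osp: bool = False

def time_unrivaled(edge: Edge, current_block: int) -> int:
    if edge.confirmed_by_osp:
        return INFINITY                   # one-step-proven edges count as infinite
    if edge.first_rival_at is None:
        own = current_block - edge.created_at        # unrivaled: clock still runs
    elif edge.created_at < edge.first_rival_at:
        own = edge.first_rival_at - edge.created_at  # frozen when the rival appeared
    else:
        own = 0                                      # created after its rival
    if edge.children:
        # bisected edges also count the minimum of their children's timers
        own += min(time_unrivaled(c, current_block) for c in edge.children)
    return own
```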
The way that timers across different levels affect each other is explained in the following section.
updateTimerCacheByClaim
function
This function is used to update the timer cache with direct level inheritance.
function updateTimerCacheByClaim(
bytes32 edgeId,
bytes32 claimingEdgeId,
uint256 maximumCachedTime
) public
First, the total time unrivaled without level inheritance is calculated as explained in the confirmEdgeByTime
function. It is then checked that the provided claimingEdgeId
's claimId
corresponds to the edgeId
. The claimingEdgeId
unrivaled time is then added to the time unrivaled without level inheritance, and the edge unrivaled time is updated to this value only if it is greater than the current value.
Note that this effectively acts as taking the max unrivaled time of the children edges on the higher level, as any of them can be used to update the parent edge's timer cache. The rationale is that at least one correct corresponding higher-level edge is needed to confirm the parent edge in the lower level.
updateTimerCacheByChildren
function
This function is used to update the timer cache without direct level inheritance.
function updateTimerCacheByChildren(bytes32 edgeId, uint256 maximumCachedTime) public
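The signature alone gives little away. Based on the confirmEdgeByTime description, a plausible model is that the edge's cached timer is raised to its own unrivaled time plus the minimum of its children's cached timers, and that the cache only ever increases. This semantics is an assumption, and all names below are invented:

```python
def update_timer_cache_by_children(edge_id, lower_id, upper_id,
                                   own_time_unrivaled, cache):
    """Assumed model: raise the parent's cached timer using its children."""
    # Candidate timer: own unrivaled time plus the weaker child's cached timer.
    candidate = own_time_unrivaled + min(cache[lower_id], cache[upper_id])
    # The cache is monotonic: only update if the candidate is greater.
    if candidate > cache.get(edge_id, 0):
        cache[edge_id] = candidate
    return cache[edge_id]
```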
[WIP] The OneStepProofEntry
contract
This contract is used as the entry point to execute one-step proofs onchain.
proveOneStep
function
This function is called from the confirmEdgeByOneStepProof
function in the EdgeChallengeManager
contract.
function proveOneStep(
ExecutionContext calldata execCtx,
uint256 machineStep,
bytes32 beforeHash,
bytes calldata proof
) external view returns (bytes32 afterHash)
Admin operations
Table of Contents
- The RollupAdminLogic contract
  - setChallengeManager function
  - setValidatorWhitelistDisabled function
  - setInbox function
  - setSequencerInbox function
  - setDelayedInbox function
  - setOutbox function
  - removeOldOutbox function
  - setWasmModuleRoot function
  - setLoserStakeEscrow function
  - forceConfirmAssertion function
  - forceCreateAssertion function
  - forceRefundStaker function
  - setBaseStake function
  - setConfirmPeriodBlocks function
  - setValidatorAfkBlocks function
  - setMinimumAssertionPeriod function
  - setOwner function
  - setValidator function
  - pause and resume functions
  - setAnyTrustFastConfirmer function
The RollupAdminLogic
contract
Calls to the Rollup proxy are forwarded to this contract if the msg.sender
is the designated proxy admin.
setChallengeManager
function
This function allows the proxy admin to update the challenge manager contract reference.
function setChallengeManager(
address _challengeManager
) external
The challenge manager contract is used to determine whether an assertion can be considered a winner or not when attempting to confirm it.
setValidatorWhitelistDisabled
function
This function allows the proxy admin to disable the validator whitelist.
function setValidatorWhitelistDisabled(
bool _validatorWhitelistDisabled
) external
If the whitelist is enabled, only whitelisted validators can join the staker set and therefore propose new assertions.
setInbox
function
This function allows the proxy admin to update the inbox contract reference.1
function setInbox(
IInboxBase newInbox
) external
setSequencerInbox
function
This function allows the proxy admin to update the sequencer inbox contract reference.
function setSequencerInbox(
address _sequencerInbox
) external override
The call is forwarded to the bridge
contract, specifically by calling its setSequencerInbox
function. The bridge will only accept messages to be enqueued in the main sequencerInboxAccs
array if the call comes from the sequencerInbox
. The sequencerInboxAccs
is read when creating new assertions, in particular when assigning the nextInboxPosition
to the new assertion and when checking that the currently considered assertion doesn't claim to have processed more messages than actually posted by the sequencer.
setDelayedInbox
function
This function allows the proxy admin to activate or deactivate a delayed inbox.
function setDelayedInbox(address _inbox, bool _enabled) external override
The call is forwarded to the bridge
contract, specifically by calling its setDelayedInbox
function. The bridge
contract will only accept messages to be enqueued in the delayed inbox if the call comes from an authorized inbox.
setOutbox
function
This function allows the proxy admin to update the outbox contract reference.
function setOutbox(
IOutbox _outbox
) external override
The outbox contract is used to send messages from L2 to L1. The call is forwarded to the bridge
contract, specifically by calling its setOutbox
function.
removeOldOutbox
function
This function allows the proxy admin to remove an old outbox contract reference.
function removeOldOutbox(
address _outbox
) external override
The call is forwarded to the bridge
contract, specifically by calling its setOutbox
function.
setWasmModuleRoot
function
This function allows the proxy admin to update the wasm module root, which represents the offchain program being verified by the proof system.
function setWasmModuleRoot(
bytes32 newWasmModuleRoot
) external override
The wasmModuleRoot
is included in each assertion's configData
.
setLoserStakeEscrow
function
This function allows the proxy admin to update the loser stake escrow contract reference.
function setLoserStakeEscrow(
address newLoserStakerEscrow
) external override
The loser stake escrow is used to store the excess stake when a conflicting assertion is created.
forceConfirmAssertion
function
This function allows the proxy admin to confirm an assertion without waiting for the challenge period, and without most validation of the assertions.
function forceConfirmAssertion(
bytes32 assertionHash,
bytes32 parentAssertionHash,
AssertionState calldata confirmState,
bytes32 inboxAcc
) external override whenPaused
The function can only be called when the contract is paused. It is only checked that the assertion is Pending
.
forceCreateAssertion
function
This function allows the proxy admin to create a new assertion by skipping some of the validation checks.
function forceCreateAssertion(
bytes32 prevAssertionHash,
AssertionInputs calldata assertion,
bytes32 expectedAssertionHash
) external override whenPaused
The function can only be called when the contract is paused. It skips all checks related to staking, the check that the previous assertion exists and that the minimumAssertionPeriod
has passed. Since the configHash
of the previous assertion is fetched from the _assertions
mapping, and the current assertion's configData
in its beforeStateData
is still checked against it, then this effectively acts as an existence check.
A comment in the function suggests a possible emergency procedure during which this function might be used:
// To update the wasm module root in the case of a bug:
// 0. pause the contract
// 1. update the wasm module root in the contract
// 2. update the config hash of the assertion after which you wish to use the new wasm module root (functionality not written yet)
// 3. force refund the stake of the current leaf assertion(s)
// 4. create a new assertion using the assertion with the updated config has as a prev
// 5. force confirm it - this is necessary to set latestConfirmed on the correct line
// 6. unpause the contract
forceRefundStaker
function
This function allows the proxy admin to forcefully trigger refunds of stakers' deposits, bypassing the msg.sender
checks.
function forceRefundStaker(
address[] calldata staker
)
The function still checks that each staker is inactive before triggering the refund.
setBaseStake
function
This function allows the proxy admin to update the required stake to join the staker set and propose new assertions.
function setBaseStake(
uint256 newBaseStake
) external override
The function currently only allows increasing the base stake, not decreasing it, as otherwise an attacker might be able to steal honest stakers' funds from the contract.
setConfirmPeriodBlocks
function
This function allows the proxy admin to update the challenge period length.
function setConfirmPeriodBlocks(
uint64 newConfirmPeriod
) external override
The function only checks that the new value is greater than zero.
setValidatorAfkBlocks
function
This function allows the proxy admin to update the period after which the whitelist is removed if all validators are inactive.
function setValidatorAfkBlocks(
uint64 newAfkBlocks
) external override
setMinimumAssertionPeriod
function
This function allows the proxy admin to set the minimum time between two non-overflow assertions.
function setMinimumAssertionPeriod(
uint64 newPeriod
) external override
setOwner
function
This function allows the proxy admin to update the admin itself.
function setOwner(
address newOwner
) external override
It internally calls the _changeAdmin
function.
setValidator
function
This function allows the proxy admin to add or remove validators from the whitelist.
function setValidator(address[] calldata _validator, bool[] calldata _val) external override
pause
and resume
functions
These functions allow the proxy admin to pause and resume the contract.
function pause() external override
function resume() external override
setAnyTrustFastConfirmer
function
This function allows the proxy admin to set a fast confirmer that can confirm assertions without waiting for the challenge period and propose new assertions without staking.
function setAnyTrustFastConfirmer(
address _anyTrustFastConfirmer
) external
-
TODO: explain what it is and why it is referenced here. ↩
Optimism
Table of Contents
Intro
TODO
Table of Contents
Scroll
TODO: general scroll intro
Table of Contents
Sequencing
Scroll L2 operates a centralized sequencer that accepts transactions and generates new L2 blocks. The Sequencer exposes a JSON-RPC interface for accepting L2 transactions, and is built on a fork of Geth.
Until the Euclid upgrade (April 2025), Scroll L2 nodes used Clique, a Proof-of-Authority consensus, with the L2 Sequencer set as the authorized signer for block production. Since then, the L2 nodes read the authorized unsafe block signer from the new SystemConfig contract on L1.
The block time is set at 3 seconds and maintained on a best-effort basis, not enforced by the protocol.
Forced transactions
Messages appended to the message queue (L1MessageQueueV2
) are expected to be included into a bundle by the centralized operator. Messages in the queue cannot be skipped or dropped, but the sequencer can choose to finalize a bundle without processing any queued messages. Should the permissioned sequencer not process any queued messages within the SystemConfig.maxDelayMessageQueue
window, anyone can include queued messages, as committing and finalizing bundles become permissionless.
High-level flow
To force transactions on Scroll through L1, the following steps are taken:
- The EOA sends a message to the L2 through the sendTransaction function on the EnforcedTxGateway contract.
- The sendTransaction function calls the appendEnforcedTransaction function on the L1MessageQueue contract, which pushes the message to the queue through the messageRollingHashes (uint256 => bytes32, messageIndex => timestamp-rollingHash) mapping.
- At each finalization (finalizeBundlePostEuclidV2) the number of messages processed in the bundle (totalL1MessagesPoppedOverall) is passed as input.
- In the internal _finalizeBundlePostEuclidV2 function, the messageQueueHash is computed up to the totalL1MessagesPoppedOverall - 1 queue index.
- The messageQueueHash is passed as a public input to the verifier.
Should messages not be processed by the permissioned sequencer, the EOA waits for either:
- SystemConfig.maxDelayEnterEnforcedMode to pass since the last batch finalization, or
- SystemConfig.maxDelayMessageQueue to pass since the first unfinalized message enqueue time.

Then the EOA can finally submit a batch via commitAndFinalizeBatch and at the same time activate the permissionless sequencing mode (UpdateEnforcedBatchMode).
EnforcedTxGateway
: the sendTransaction
function
This function acts as the entry point to send L1 to L2 messages from an EOA. There are two variants:
function sendTransaction(
address _target,
uint256 _value,
uint256 _gasLimit,
bytes calldata _data
)
function sendTransaction(
address _sender,
address _target,
uint256 _value,
uint256 _gasLimit,
bytes calldata _data,
uint256 _deadline,
bytes memory _signature,
address _refundAddress
)
The first variant is for direct calls, while the second allows for signed messages. Both functions check that the contract is not paused and charge a fee based on the gas limit. For contract callers, L1-to-L2 address aliasing is applied. For EOAs and EIP-7702 delegated EOAs, the original address is used. The functions ultimately call appendEnforcedTransaction
on the L1MessageQueueV2
contract.
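The aliasing applied to contract callers follows the additive-offset scheme originally introduced by Arbitrum. The offset constant below is the standard one, but treating it as Scroll's exact value is an assumption:

```python
# Standard L1-to-L2 aliasing offset (assumed; verify against Scroll's contracts).
ALIAS_OFFSET = 0x1111000000000000000000000000000000001111
ADDRESS_MASK = 2**160 - 1  # addresses are 160-bit; arithmetic wraps around

def apply_l1_to_l2_alias(l1_address: int) -> int:
    """Address a contract caller appears as on L2."""
    return (l1_address + ALIAS_OFFSET) & ADDRESS_MASK

def undo_l1_to_l2_alias(l2_alias: int) -> int:
    """Recover the original L1 address from its alias."""
    return (l2_alias - ALIAS_OFFSET) & ADDRESS_MASK
```

The offset prevents an L1 contract from impersonating an L2 account at the same address, while EOAs keep their address so their L2 funds stay reachable.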
L1MessageQueueV2
: the appendEnforcedTransaction
function
The appendEnforcedTransaction
function can only be called by the authorized EnforcedTxGateway
contract.
function appendEnforcedTransaction(
address _sender,
address _target,
uint256 _value,
uint256 _gasLimit,
bytes calldata _data
) external
The function first validates that the gas limit is within the configured bounds in SystemConfig
. It then computes a transaction hash and stores it in the messageRollingHashes
mapping along with the current timestamp. The mapping uses a special encoding where the lower 32 bits store the timestamp and the upper 224 bits store a rolling hash of all messages.
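The packing described above can be sketched as follows. The exact truncation of the rolling hash and the hash function are assumptions (SHA-256 stands in for the onchain hash):

```python
import hashlib

TIMESTAMP_MASK = (1 << 32) - 1  # low 32 bits hold the enqueue timestamp

def pack(rolling_hash: int, timestamp: int) -> int:
    # Upper 224 bits: (truncated) rolling hash; lower 32 bits: timestamp.
    return (rolling_hash & ~TIMESTAMP_MASK) | (timestamp & TIMESTAMP_MASK)

def unpack(packed: int):
    return packed & ~TIMESTAMP_MASK, packed & TIMESTAMP_MASK

def roll(prev_rolling_hash: int, message_hash: int) -> int:
    # Rolling hash update: hash the previous rolling hash with the new message.
    data = prev_rolling_hash.to_bytes(32, "big") + message_hash.to_bytes(32, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big")
```

Packing both values into one storage slot saves a storage write per enqueued message compared to keeping two separate mappings.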
ScrollChain
: the commitAndFinalizeBatch
function
This function allows forcing inclusion of transactions when the enforced batch mode conditions are met.
function commitAndFinalizeBatch(
uint8 version,
bytes32 parentBatchHash,
FinalizeStruct calldata finalizeStruct
) external
where FinalizeStruct
is defined as:
/// @notice The struct for permissionless batch finalization.
/// @param batchHeader The header of this batch.
/// @param totalL1MessagesPoppedOverall The number of messages processed after this bundle.
/// @param postStateRoot The state root after this batch.
/// @param withdrawRoot The withdraw trie root after this batch.
/// @param zkProof The bundle proof for this batch (single-batch bundle).
/// @dev See `BatchHeaderV7Codec` for the batch header encoding.
struct FinalizeStruct {
bytes batchHeader;
uint256 totalL1MessagesPoppedOverall;
bytes32 postStateRoot;
bytes32 withdrawRoot;
bytes zkProof;
}
The function first checks if either delay condition is met:
- No batch has been finalized for maxDelayEnterEnforcedMode seconds
- No message has been included for maxDelayMessageQueue seconds
If either condition is met, it enables enforced batch mode by:
- Reverting any unfinalized batches
- Setting the enforced mode flag
- Allowing the batch to be committed and finalized with a ZK proof
Once in enforced mode, only batches with proofs can be submitted until the owner (Scroll Security Council) explicitly disables enforced mode. Moreover, the designated Sequencer can't commit or finalize batches anymore due to the whenEnforcedBatchNotEnabled
check.
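The two activation conditions can be modeled as follows. Parameter names follow SystemConfig, but the timestamp bookkeeping is simplified and the function name is invented:

```python
from typing import Optional

def can_enter_enforced_mode(
    now: int,
    last_finalized_batch_time: int,
    oldest_unfinalized_message_time: Optional[int],
    max_delay_enter_enforced_mode: int,
    max_delay_message_queue: int,
) -> bool:
    # Condition 1: finalization has stalled for too long.
    stale_finalization = (
        now - last_finalized_batch_time > max_delay_enter_enforced_mode
    )
    # Condition 2: the oldest queued message has waited for too long.
    stale_queue = (
        oldest_unfinalized_message_time is not None
        and now - oldest_unfinalized_message_time > max_delay_message_queue
    )
    return stale_finalization or stale_queue
```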
Table of Contents
Proof system [TO BE EXPANDED]
Scroll's proof system is built to validate and finalize batches of L2 transactions that are committed on L1. The system uses ZK proofs to validate state transitions and allows for both normal sequencing and enforced batch modes.
Batch Lifecycle
A batch goes through two main phases:
- Commitment: The batch is proposed and its data is made available on L1
- Finalization: The batch is proven valid with a ZK proof and finalized
Batch Commitment
Batches can be committed in two ways:
- Normal sequencing mode via commitBatchWithBlobProof() or commitBatches()
- Enforced batch mode via commitAndFinalizeBatch()
The key differences are:
- Normal mode requires the sequencer role
- Enforced mode can be triggered by anyone after certain delay conditions are met
- Normal mode separates commitment from finalization
- Enforced mode combines commitment and finalization in one transaction
Batch Finalization
Finalization requires a valid ZK proof and can happen through:
- finalizeBundleWithProof() - For pre-EuclidV2 batches
- finalizeBundlePostEuclidV2() - For post-EuclidV2 batches
- commitAndFinalizeBatch() - For enforced mode batches
The finalization process:
- Validates the batch exists and hasn't been finalized
- Verifies the ZK proof against the batch data
- Updates state roots and withdrawal roots
- Marks messages as finalized in the L1 message queue
Enforced Mode
The system can enter enforced mode when either:
- No batch has been finalized for maxDelayEnterEnforcedMode seconds
- No message has been included for maxDelayMessageQueue seconds
In enforced mode:
- The normal sequencer is disabled
- Anyone can submit batches with proofs via commitAndFinalizeBatch()
- Only the security council can disable enforced mode
This provides a permissionless fallback mechanism if the sequencer fails or misbehaves.
Batch Versions
The system supports multiple batch versions with different encodings:
- V0-V6: Pre-EuclidV2 formats using various chunk codecs
- V7+: Post-EuclidV2 formats using blob data
Key version transitions:
- V5: Special Euclid initial batch for ZKT/MPT transition
- V7: EuclidV2 upgrade introducing new batch format
The version determines:
- How batch data is encoded and validated
- Which finalization function to use
- What proofs are required
ZK Proof Verification
Proofs are verified by the RollupVerifier
contract which:
- Takes the batch data and proof as input
- Validates the proof matches the claimed state transition
- Returns success/failure
The proof format and verification logic varies by batch version.
Security Considerations
The system prioritizes security over liveness by allowing batch reversion and enforced mode activation only after a delay.
Admin operations
Table of Contents
- The ScrollChain contract
- The SystemConfig contract
- The EnforcedTxGateway contract
- The L1MessageQueueV2 contract
The ScrollChain
contract
The ScrollChain contract maintains data for the Scroll rollup and includes several admin operations that can only be executed by the contract owner.
addSequencer
function
This function allows the owner to add an account to the sequencer list.
function addSequencer(address _account) external onlyOwner
The account must be an EOA (Externally Owned Account) as external services rely on EOA sequencers to decode metadata directly from transaction calldata.
removeSequencer
function
This function allows the owner to remove an account from the sequencer list.
function removeSequencer(address _account) external onlyOwner
addProver
function
This function allows the owner to add an account to the prover list.
function addProver(address _account) external onlyOwner
Similar to sequencers, the account must be an EOA as external services rely on EOA provers to decode metadata from transaction calldata.
removeProver
function
This function allows the owner to remove an account from the prover list.
function removeProver(address _account) external onlyOwner
updateMaxNumTxInChunk
function
This function allows the owner to update the maximum number of transactions allowed in each chunk.
function updateMaxNumTxInChunk(uint256 _maxNumTxInChunk) external onlyOwner
setPause
function
This function allows the owner to pause or unpause the contract.
function setPause(bool _status) external onlyOwner
When paused, certain operations like committing and finalizing batches will be restricted.
disableEnforcedBatchMode
function
This function allows the owner to exit from enforced batch mode.
function disableEnforcedBatchMode() external onlyOwner
The enforced batch mode is automatically enabled when certain conditions are met (like message queue delays) and can only be disabled by the owner.
revertBatch
function
This function allows the owner to revert batches that haven't been finalized yet.
function revertBatch(bytes calldata batchHeader) external onlyOwner
This function can only revert version 7 batches and cannot revert finalized batches. During batch commitment, only the last batch hash is stored in storage, so intermediate batches cannot be reverted.
The SystemConfig
contract
The SystemConfig contract manages various system-wide parameters for the Scroll rollup. It includes several admin operations that can only be executed by the contract owner.
updateMessageQueueParameters
function
This function allows the owner to update parameters related to the message queue.
function updateMessageQueueParameters(MessageQueueParameters memory _params) external onlyOwner
The parameters include:
maxGasLimit
: The maximum gas limit allowed for each L1 messagebaseFeeOverhead
: The overhead used to calculate L2 base feebaseFeeScalar
: The scalar used to calculate L2 base fee
updateEnforcedBatchParameters
function
This function allows the owner to update parameters related to the enforced batch mode.
function updateEnforcedBatchParameters(EnforcedBatchParameters memory _params) external onlyOwner
The parameters include:
maxDelayEnterEnforcedMode
: If no batch has been finalized for this duration, batch submission becomes permissionlessmaxDelayMessageQueue
: If no message is included/finalized for this duration, batch submission becomes permissionless
updateSigner
function
This function allows the owner to update the authorized signer address.
function updateSigner(address _newSigner) external onlyOwner
The signer is an authorized address that can perform certain privileged operations in the system.
Initialization
The contract is initialized with the following parameters:
function initialize(
address _owner,
address _signer,
MessageQueueParameters memory _messageQueueParameters,
EnforcedBatchParameters memory _enforcedBatchParameters
) external initializer
This function can only be called once during contract deployment and sets up:
- The contract owner
- The initial authorized signer
- Initial message queue parameters
- Initial enforced batch parameters
The EnforcedTxGateway
contract
The EnforcedTxGateway contract manages enforced transactions that can be submitted to L2. It includes admin operations that can only be executed by the contract owner.
setPause
function
This function allows the owner to pause or unpause the contract.
function setPause(bool _status) external onlyOwner
When paused, users cannot submit enforced transactions through this gateway.
The L1MessageQueueV2
contract
The L1MessageQueueV2 contract manages the queue of L1 to L2 messages after the EuclidV2 upgrade. It includes several admin operations that can only be executed by authorized contracts.
Initialization
The contract is initialized with the following parameters:
function initialize() external initializer
This function can only be called once during contract deployment and sets up:
- The initial cross-domain message indices
- The next unfinalized queue index
- The ownership structure
Message Queue Parameters
The contract relies on parameters from the SystemConfig contract to manage message processing:
- Maximum gas limit for L1 messages
- Base fee overhead and scalar for L2 fee calculation
- Message queue delay parameters
Security Model
The contract implements a strict permission model where:
- Only the L1ScrollMessenger can append cross-domain messages
- Only the ScrollChain can finalize popped messages
- Only the EnforcedTxGateway can append enforced transactions
Permissions section
Table of Contents
- Overview
- Permissioned actors
- Permissioned actions
- Grouping actors by entity
- Possible future developments
Overview
The goal of the permissions section is to list ultimate permissioned actors such as EOAs, multisigs, governors, or equivalent, that can affect the system. What falls in the equivalent class is left to intuition and discussion.
Permissioned actors
The permissions section should not list nested multisigs when controlled by a single entity, as it's not in our scope of assessment to evaluate members within one entity. For example, the OPFoundationUpgradeSafe
, at the time of writing, is a 5/7 multisig that contains another 2/2 multisig as a member. Such a nested multisig should not be listed. On the other hand, the SuperchainProxyAdminOwner
is formed by two distinct entities that are relevant for the assessment, as one member is the OpFoundationUpgradeSafe
and the other is the SecurityCouncilMultisig
, so both should be listed. One possible solution is to always hide nested multisigs unless explicitly stated otherwise. Eventually, for the risk assessment purpose, we might want to explicitly assign entities to multisigs, so this logic for nested multisigs might eventually be built on top of that.
Each non-EOA permissioned actor should have a description of its code, independent of the connections to other contracts. For example, multisigs should show the threshold, size and eventual modules, while governors should describe their own mechanism and params like quorums and voting periods. All permissioned actors should list the ultimate permissioned actions that they can perform on the system. Actors that produce the exact same description should be grouped together in one entry.
- (3) 0x123, 0x456, 0x789
+ Can interact with Inbox to:
* sequence transactions
Permissioned actions
The "upgrade" permissioned action for each permissioned actor should group contracts based on the set of possible delays that can be used on them.
- **FoochainMultisig**
A multisig with 3/5 threshold.
+ Can upgrade with either 7d or no delay:
* FoochainPortal <via>
* L1StandardBridge <via>
+ Can upgrade with 7d delay:
* L1ERC721Bridge <via>
+ Can upgrade with no delay:
* SystemConfig <via>
Such grouping can be achieved by first grouping individual contracts by delay, and then contracts by set of delays.
[FoochainPortal] with [7d <via1>] delay
[FoochainPortal] with [no <via2>] delay
[L1StandardBridge] with [7d <via3>] delay
[L1StandardBridge] with [no <via4>] delay
[L1ERC721Bridge] with [7d <via5>] delay
[SystemConfig] with [no <via6>] delay
>>>
[FoochainPortal] with [7d <via1>, no <via2>]
[L1StandardBridge] with [7d <via3>, no <via4>]
[L1ERC721Bridge] with [7d <via5>]
[SystemConfig] with [no <via6>]
>>>
[FoochainPortal <via1_or_via2>, L1StandardBridge <via3_or_via4>] with [7d, no] delay
[L1ERC721Bridge <via5>] with [7d] delay
[SystemConfig <via6>] with [no] delay
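The two-pass grouping above can be sketched as a small routine (names are invented; via-paths are carried along in the first pass but the grouping key is the set of delays):

```python
from collections import defaultdict

def group_by_delay_set(permissions):
    """permissions: iterable of (contract, delay, via) tuples."""
    # Pass 1: collect each contract's (delay, via) pairs.
    per_contract = defaultdict(list)
    for contract, delay, via in permissions:
        per_contract[contract].append((delay, via))
    # Pass 2: group contracts that share the same *set* of delays.
    by_delay_set = defaultdict(list)
    for contract, pairs in per_contract.items():
        key = tuple(sorted({delay for delay, _ in pairs}))
        by_delay_set[key].append(contract)
    return dict(by_delay_set)
```

Running this on the Foochain example yields FoochainPortal and L1StandardBridge grouped under the {7d, no} delay set, with L1ERC721Bridge and SystemConfig each in their own group.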
Each <via>
should show the list of intermediate contracts used to perform the ultimate permissioned action, starting with the contract closer to the permissioned actor. If any contract adds a delay, it should be listed as well. The total delay shown with the permissioned action should be the sum of all delays in this chain of contracts.
- **FoochainMultisig**
A multisig with 3/5 threshold.
+ Can upgrade with 7d delay:
* L1ERC721Bridge acting via Timelock1 with 3d delay -> Timelock2 with 4d delay -> ProxyAdmin or via Timelock3 with 7d delay -> ProxyAdmin
Permissioned actions outside of upgrades should group by contract first and then list the actions with the appropriate delays. Where possible, each action should be listed as a separate entry.
- **FoochainMultisig**
A multisig with 3/5 threshold.
+ Can interact with Timelock1 to:
* propose transactions with 3d delay <via>
* update the minimum delay with 7d delay <via>
Grouping actors by entity
As previously discussed, there is a desire to group permissioned actors by entity. While the ultimate mechanism is still to be defined, it is worth first considering grouping multisigs with the same members, threshold and size under the same permissioned actor. Since these are abstracted entities, they should show the immediate underlying multisigs.
At the time of writing, Arbitrum One makes use of three distinct multisigs for the Security Council, with the same set of members and threshold: L1EmergencySecurityCouncil
, L2EmergencySecurityCouncil
and L2ProposerSecurityCouncil
. Without the grouping, the permissions section would look like this:
- **L1EmergencySecurityCouncil**
A multisig with 9/12 threshold.
+ Can upgrade with no delay:
* RollupProxy <via>
* Outbox <via>
* ...
+ Can interact with L1Timelock to:
* update the minimum delay
* manage all access control roles of the timelock
* cancel queued transactions
+ Can interact with RollupProxy to:
* pause and unpause the contract
* update sequencer management delegation
* ...
- **L2EmergencySecurityCouncil**
A multisig with 9/12 threshold.
+ Can upgrade with no delay:
* L2ERC20Gateway <via>
* L2GatewayRouter <via>
* ...
+ Can interact with L2Timelock to:
* update the minimum delay
* manage all access control roles of the timelock
- **L2ProposerSecurityCouncil**
A multisig with 9/12 threshold.
+ Can upgrade with 17d 8h delay:
* RollupProxy <via>
* Outbox <via>
+ Can interact with L2Timelock to:
* propose transactions
+ Can interact with L1Timelock to:
* propose transactions with 14d 8h delay
* update the minimum delay with 17d 8h delay
* manage all access control roles of the timelock with 17d 8h delay
* cancel queued transactions with 17d 8h delay
+ Can interact with RollupProxy to:
* pause and unpause the contract with 17d 8h delay
* update sequencer management delegation with 17d 8h delay
* ...
With the grouping, the permissions section would look like this:
- **SecurityCouncilMultisig**
A multisig with 9/12 threshold. Acts through L1EmergencySecurityCouncil, L2EmergencySecurityCouncil and L2ProposerSecurityCouncil.
+ Can upgrade with either 17d 8h or no delay:
* RollupProxy <via>
* Outbox <via>
* ...
+ Can interact with L1Timelock to:
* propose transactions with 14d 8h <via>
    * update the minimum delay with either 17d 8h or no delay <via>
    * manage all access control roles of the timelock with either 17d 8h or no delay <via>
* cancel queued transactions with either 17d 8h or no delay <via>
+ Can interact with L2Timelock to:
* propose transactions <via>
* update the minimum delay <via>
* manage all access control roles of the timelock <via>
+ Can interact with RollupProxy to:
* pause and unpause the contract with 17d 8h or no delay <via>
* update sequencer management delegation with 17d 8h or no delay <via>
* ...
Possible future developments
While still in the discussion phase, there is an intention to show the immediate permissions granted by each contract. For example, if a contract makes use of access control, each immediate role assignment would be shown, regardless of whether the receiver is an intermediate contract or a permissioned actor. It is likely that these entries will be displayed in the contracts section under each contract rather than in the permissions section.
Contracts section
Table of Contents
Overview
The goal of the contracts section is to list all contracts in the system that are not considered ultimate permissions, as defined by the Permissions section spec. For each contract, the most relevant information should be presented. All information for a single contract should be as local as possible to allow for modularity via the template system.
Single contract view
For each contract, the basic information to be shown is the name and the address. If a contract is a proxy, the implementation contract(s) should also be shown. Optionally, a category can be shown. If a contract has any field with a defined "interact" or "act" permission, the direct permission receiver should be shown. Note that these permissions might not be ultimate permissions. Optionally, if the design allows, the ultimate permission receiver can be shown too. Upgrade permissions should be presented more distinctly, with the ultimate permission receiver shown along with its associated ultimate delay.
Let's pick some of Arbitrum One's contracts to present an implementation proposal that only shows direct permissions for "interact" and "act" permissions:
- **RollupProxy** [0x4DCe…Cfc0] [Implementation #1 (Upgradable)] [Implementation #2 (Upgradable)] [Admin]()
...description...
+ Can be upgraded by: [Outbox with 3d delay] [Arbitrum Security Council with no delay]()
+ <Roles>
* `owner`: [UpgradeExecutor]()
- **UpgradeExecutor** [0x1234…5678] [Admin]()
...description...
+ Can be upgraded by: [Outbox with 3d delay] [Arbitrum Security Council with no delay]()
+ <Roles>
* `executors`: [L1Timelock] [Arbitrum Security Council]()
- **L1Timelock** [0x9abc…def0] [Admin]()
...description...
+ Can be upgraded by: [Outbox with 3d delay] [Arbitrum Security Council with no delay]()
+ <Roles>
* `proposer`: [Bridge]()
* `canceller`: [UpgradeExecutor]()
- **Bridge** [0x1a2b…3c4d] [Admin]()
...description...
+ Can be upgraded by: [Outbox with 3d delay] [Arbitrum Security Council with no delay]()
+ <Roles>
* `proposer`: [L1Timelock]()
* `canceller`: [UpgradeExecutor]()
If the design allows, the description of each "role" can be shown.
Contract categories
TBD
Finality page
Table of Contents
How to calculate time to inclusion
OP stack
The OP stack RPC directly exposes a method, `optimism_syncStatus`, to fetch the latest `unsafe`, `safe` or `finalized` L2 block number. An `unsafe` block is preconfirmed but not yet published on L1, a `safe` block has been published on L1 but is not yet finalized, and a `finalized` block has been finalized on L1.
The method can be called as follows:
cast rpc optimism_syncStatus --rpc-url <rpc-url>
Most RPCs do not support this method, but QuickNode does. An example of the output is as follows:
{
"current_l1": {
"hash": "0x2cd7146cf93bae42f59ec1718034ab2f56a5ef2dcddf576e00b0a2538f63a840",
"number": 22244495,
"parentHash": "0x217b42bbdb4924495699403d2884373d10b24e63f45d2702c6087dee7024a099",
"timestamp": 1744359995
},
"current_l1_finalized": {
"hash": "0xbd1ee29567ddd0eda260b9e87e782dbb8253de95ba8f3802a3cbf3a3cac5ee8e",
"number": 22244412,
"parentHash": "0x13eb164b6245a7dca628ccac2c8a37780df35cc5b764d6fad345c9afac3ec6ec",
"timestamp": 1744358999
},
"head_l1": {
"hash": "0x2cd7146cf93bae42f59ec1718034ab2f56a5ef2dcddf576e00b0a2538f63a840",
"number": 22244495,
"parentHash": "0x217b42bbdb4924495699403d2884373d10b24e63f45d2702c6087dee7024a099",
"timestamp": 1744359995
},
"safe_l1": {
"hash": "0x609b0ca4539ff39a6521ff8ac46fa1fee5e717bd4a5931f2efce27f1d3d6ec70",
"number": 22244444,
"parentHash": "0xdd12087c45f149036c9ad5ce8f4d1880d42ed0c6029124b0a66882a68335647e",
"timestamp": 1744359383
},
"finalized_l1": {
"hash": "0xbd1ee29567ddd0eda260b9e87e782dbb8253de95ba8f3802a3cbf3a3cac5ee8e",
"number": 22244412,
"parentHash": "0x13eb164b6245a7dca628ccac2c8a37780df35cc5b764d6fad345c9afac3ec6ec",
"timestamp": 1744358999
},
"unsafe_l2": {
"hash": "0x9c3fba7839fa336e448407f387d8945e64f363afc48a3eed675728a3f4ff941c",
"number": 134380614,
"parentHash": "0xcb092179b9158964c02761a0511fd33c72b5e75df44ba2be245653940a28a69d",
"timestamp": 1744360005,
"l1origin": {
"hash": "0xb47909b847438d13914c629a49ca9113dd3828abab1a7c01e146c2817e739ae9",
"number": 22244484
},
"sequenceNumber": 2
},
"safe_l2": {
"hash": "0x778aa29f31e03923e2f8dd85aa2570504d4d956dbb1d6c94b00379c282eea5b5",
"number": 134380401,
"parentHash": "0x30977603570f5134cdf98922630095b52d25857bf24a8aa367d8fd01fda8a56b",
"timestamp": 1744359579,
"l1origin": {
"hash": "0x7ff5dde61b11bf0af83c93d8e8a51c519373495fd7cb5b74d74a4894c1e2d9ec",
"number": 22244448
},
"sequenceNumber": 5
},
"finalized_l2": {
"hash": "0x295df0a07e17c35f93422f83c47479f856e9fafde5b9d03c7714381d627035b2",
"number": 134379981,
"parentHash": "0xe110aa8531bcfb0c4c5e529f60a99751b9a9842797ef4d293b2cf514e496a4b7",
"timestamp": 1744358739,
"l1origin": {
"hash": "0xf999ff954c1a823b10ecc679c611bb2bc0c7cc7e5ef344c5468a4529204eab33",
"number": 22244379
},
"sequenceNumber": 0
},
"pending_safe_l2": {
"hash": "0x778aa29f31e03923e2f8dd85aa2570504d4d956dbb1d6c94b00379c282eea5b5",
"number": 134380401,
"parentHash": "0x30977603570f5134cdf98922630095b52d25857bf24a8aa367d8fd01fda8a56b",
"timestamp": 1744359579,
"l1origin": {
"hash": "0x7ff5dde61b11bf0af83c93d8e8a51c519373495fd7cb5b74d74a4894c1e2d9ec",
"number": 22244448
},
"sequenceNumber": 5
},
"cross_unsafe_l2": {
"hash": "0x9c3fba7839fa336e448407f387d8945e64f363afc48a3eed675728a3f4ff941c",
"number": 134380614,
"parentHash": "0xcb092179b9158964c02761a0511fd33c72b5e75df44ba2be245653940a28a69d",
"timestamp": 1744360005,
"l1origin": {
"hash": "0xb47909b847438d13914c629a49ca9113dd3828abab1a7c01e146c2817e739ae9",
"number": 22244484
},
"sequenceNumber": 2
},
"local_safe_l2": {
"hash": "0x778aa29f31e03923e2f8dd85aa2570504d4d956dbb1d6c94b00379c282eea5b5",
"number": 134380401,
"parentHash": "0x30977603570f5134cdf98922630095b52d25857bf24a8aa367d8fd01fda8a56b",
"timestamp": 1744359579,
"l1origin": {
"hash": "0x7ff5dde61b11bf0af83c93d8e8a51c519373495fd7cb5b74d74a4894c1e2d9ec",
"number": 22244448
},
"sequenceNumber": 5
}
}
The time to inclusion of L2 blocks can be calculated by polling the method and checking when the `safe_l2` block number gets updated. The `safe_l2` value refers to the latest L2 block that has been published on L1, where the latest L1 block used by the derivation pipeline is the `current_l1` block. Assuming that all blocks between the previous `safe_l2` value and the current `safe_l2` value are included in the `current_l1` block when `safe_l2` is updated, the time to inclusion of each L2 block from the previous `safe_l2 + 1` to the current `safe_l2` can be calculated by subtracting the L2 block timestamp from the `current_l1` timestamp. This assumption has been lightly tested manually and seems to hold.
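The polling logic can be sketched as a pure function. This is a minimal illustration rather than the production tracker; it assumes the standard OP stack 2-second L2 block time to derive per-block timestamps between two consecutive `safe_l2` values:

```python
L2_BLOCK_TIME = 2  # seconds; standard OP stack block time (assumption)

def inclusion_times(prev_safe: int, safe_l2_number: int,
                    safe_l2_timestamp: int, current_l1_timestamp: int) -> dict[int, int]:
    """Time to inclusion for every L2 block in (prev_safe, safe_l2_number],
    assuming they were all included in the current_l1 block."""
    times = {}
    for n in range(prev_safe + 1, safe_l2_number + 1):
        # Derive each block's timestamp from the fixed L2 block time.
        block_ts = safe_l2_timestamp - (safe_l2_number - n) * L2_BLOCK_TIME
        times[n] = current_l1_timestamp - block_ts
    return times

# safe_l2 jumped from 100 to 103; the newest safe block was produced at t=1000
# and the current_l1 block has timestamp 1200.
t = inclusion_times(100, 103, 1000, 1200)
assert t[103] == 200  # newest block waited 200s
assert t[101] == 204  # older blocks waited longer (2s per block)
```

In practice `prev_safe`, `safe_l2_number`, `safe_l2_timestamp` and `current_l1_timestamp` come straight out of two consecutive `optimism_syncStatus` responses.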
Why this approach?
The first approach considered for calculating the time to inclusion was to decode L2 batches posted to L1, get the list of transactions, and then calculate the difference between the L2 transactions' timestamps and the L2 batch timestamp. This required a lot of maintenance, given that the batch format is generally not stable. There are a few possible alternatives, like using the batch decoder provided by Optimism, but it is not guaranteed that OP stack forks properly maintain the tool, and we might not have access to every project's batch decoder in the first place. A different approach involves using an external API to fetch information like that shown on Blockscout's batches page, but they do not seem to expose an API for it.
Example
Here is the output of a PoC script that tracks the time to inclusion of L2 blocks using the `optimism_syncStatus` method:
------------------------------------------------------
Fetched SyncStatus:
Safe L2 Block:
• Hash : 0x504edd794afe4994f96825c2892e16e1e3cd8d68c303007b054f6ad790635c6b
• Number : 134389152
• Prod. Time: 1744377081
Current L1 Block:
• Hash : 0x45c4ef3f1d79101ba050f6a8af3073ef74917df89f84fe532ed3e8a2979619ef
• Number : 22245928
• Time : 1744377239
------------------------------------------------------
Current Safe L2 Block: 134389152 (Produced at: 1744377081)
Current L1 Head (Inclusion Time Candidate): 1744377239
New safe L2 blocks detected: Blocks 134388973 to 134389152
Using current L1 timestamp as inclusion time: 1744377239
------------------------------------------------------
Batch Statistics for New Safe L2 Blocks:
Minimum Time-to-Inclusion: 2 min 38.00 sec
Maximum Time-to-Inclusion: 8 min 36.00 sec
Average Time-to-Inclusion: 5 min 37.00 sec
------------------------------------------------------
How to calculate withdrawal times (L2 -> L1)
To calculate the withdrawal time from L2 to L1, we need to fetch the time when the withdrawal is initiated on L2 and the time when it is ready to be executed on L1 and calculate the interval.
OP stack (with fraud proofs)
Withdrawals are initiated by calling either the `bridgeETH` or `bridgeETHTo` methods for ETH, or the `bridgeERC20` or `bridgeERC20To` methods for ERC20 tokens. All of these methods emit a `WithdrawalInitiated` event, which is defined as follows:
// 0x73d170910aba9e6d50b102db522b1dbcd796216f5128b445aa2135272886497e
event WithdrawalInitiated(address indexed l1Token, address indexed l2Token, address indexed from, address to, uint256 amount, bytes extraData)
To track the time of these events, the L2 block number in which they are emitted can be used.
On L1, the `AnchorStateRegistry` is the contract used to maintain the latest state root that is ready to be used for withdrawals. The anchor root is updated using the `setAnchorState()` function, which is defined as follows:
function setAnchorState(IDisputeGame _game) public
and emits the following event:
// 0x474f180d74ea8751955ee261c93ff8270411b180408d1014c49f552c92a4d11e
event AnchorUpdated(address indexed game)
Each `game` contract has a function that can be used to retrieve the L2 block number it refers to:
function l2BlockNumber() public pure returns (uint256 l2BlockNumber_)
The time when the withdrawal is ready to be executed can be calculated by tracking the `AnchorUpdated` event, specifically when its respective L2 block number becomes greater than the L2 block number of the `WithdrawalInitiated` event. If the goal is not to track a specific withdrawal but to calculate an average more generally, just tracking the `AnchorStateRegistry` and calculating its corresponding `l2BlockNumber` is good enough.
Why this approach?
Withdrawals are not directly executed based on the information in the `AnchorStateRegistry`, but rather based on games whose status is `GameStatus.DEFENDER_WINS`. Since the `AnchorStateRegistry`'s latest anchor root can be updated under the same condition, it is enough to track it to determine when a withdrawal is ready to be executed on L1. This assumes that the `AnchorStateRegistry` is always updated as soon as possible with the latest root that has been confirmed by the proof system. In practice the assumption holds, since a game terminates with a `closeGame()` call, which also calls `setAnchorState()` on the `AnchorStateRegistry` to update the root if it is newer than the currently saved one.
Another approach consists in tracking finalized withdrawals directly, but this would skew the calculation since not every withdrawal is finalized as soon as it becomes available, and outliers would be introduced into the data set. For completeness, when a withdrawal is finalized, the `WithdrawalFinalized` event is emitted, which is defined as follows:
// 0xdb5c7652857aa163daadd670e116628fb42e869d8ac4251ef8971d9e5727df1b
event WithdrawalFinalized(bytes32 indexed withdrawalHash, bool success)
The `withdrawalHash` can be calculated as follows:
function hashWithdrawal(Types.WithdrawalTransaction memory _tx) internal pure returns (bytes32) {
return keccak256(abi.encode(_tx.nonce, _tx.sender, _tx.target, _tx.value, _tx.gasLimit, _tx.data));
}
where the nonce must be fetched from the `SentMessage` event emitted by the `L2CrossDomainMessenger` when a withdrawal is initiated. The `SentMessage` event is defined as follows:
// 0xcb0f7ffd78f9aee47a248fae8db181db6eee833039123e026dcbff529522e52a
event SentMessage(address indexed target, address sender, bytes message, uint256 messageNonce, uint256 gasLimit)
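As a sketch, the `abi.encode` preimage of the withdrawal hash can be built by hand; the final `withdrawalHash` is keccak256 of this encoding, which requires a keccak library (e.g. `eth-utils` or `pycryptodome`, not shown here since it is not in the standard library). Only the encoding step is illustrated, under the assumption that the tuple layout matches `hashWithdrawal` above:

```python
def _u256(x: int) -> bytes:
    return x.to_bytes(32, "big")

def _addr(a: str) -> bytes:
    # Address left-padded to a full 32-byte ABI slot.
    return bytes.fromhex(a.removeprefix("0x")).rjust(32, b"\x00")

def encode_withdrawal(nonce: int, sender: str, target: str,
                      value: int, gas_limit: int, data: bytes) -> bytes:
    """abi.encode(uint256, address, address, uint256, uint256, bytes).
    keccak256 of the result is the withdrawalHash."""
    # Head: five static slots plus the offset of the dynamic `data` field.
    head = (_u256(nonce) + _addr(sender) + _addr(target)
            + _u256(value) + _u256(gas_limit) + _u256(6 * 32))
    # Tail: length-prefixed `data`, right-padded to a 32-byte boundary.
    pad = (-len(data)) % 32
    return head + _u256(len(data)) + data + b"\x00" * pad

enc = encode_withdrawal(1, "0x" + "11" * 20, "0x" + "22" * 20, 0, 100_000, b"")
assert len(enc) == 7 * 32                          # 6 head slots + length slot
assert enc[160:192] == (192).to_bytes(32, "big")   # offset to the dynamic data
```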
Example
Let's take this withdrawal as an example to show how to calculate the withdrawal time. The transaction emits the `WithdrawalInitiated` event as expected, and the corresponding L2 block number is `134010739`, whose timestamp is `1743620255` (Apr-02-2025 06:57:35 PM +UTC).
This script can be used to find the time at which the `AnchorStateRegistry` was updated with a root past the L2 block number of the withdrawal. This is the output:
╔══════════╤═════════════════════╤═══════════════╤═══════════╤═══════════════╤═══╗
║ Block │ Time │ Game │ L2 Block │ Tx │ ? ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22232676 │ 2025-04-09 16:55:11 │ 0xf944...d159 │ 134006190 │ 0x77ca...19ba │ ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22232975 │ 2025-04-09 17:54:59 │ 0x302d...c419 │ 134008034 │ 0x07e7...8bda │ ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22233284 │ 2025-04-09 18:56:59 │ 0x794B...bf9B │ 134010097 │ 0x147d...357c │ ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22233578 │ 2025-04-09 19:55:47 │ 0xAB7e...35b1 │ 134011549 │ 0x33eb...3693 │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22233879 │ 2025-04-09 20:56:23 │ 0xC593...45BF │ 134013468 │ 0xa687...55fd │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22234179 │ 2025-04-09 21:56:47 │ 0x05fd...9bA3 │ 134015440 │ 0xa941...cee0 │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22234474 │ 2025-04-09 22:56:11 │ 0x91c9...2e8A │ 134017191 │ 0x1011...e9c7 │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22234779 │ 2025-04-09 23:57:11 │ 0xd065...A461 │ 134018922 │ 0x28c1...080c │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22235080 │ 2025-04-10 00:57:35 │ 0x5D8e...d691 │ 134020776 │ 0xb822...1456 │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22235380 │ 2025-04-10 01:57:35 │ 0x34a2...2aA9 │ 134022724 │ 0x0ba7...db52 │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22235682 │ 2025-04-10 02:57:59 │ 0x500e...497C │ 134024336 │ 0x8109...cd76 │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22235987 │ 2025-04-10 03:58:59 │ 0xFF4E...4F10 │ 134026350 │ 0xdaed...def0 │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22236286 │ 2025-04-10 04:58:47 │ 0xce9E...0F17 │ 134028088 │ 0x2c52...9b7a │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22236586 │ 2025-04-10 05:58:47 │ 0x764B...6294 │ 134029727 │ 0x1d03...c14e │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22236891 │ 2025-04-10 06:59:59 │ 0xA066...e3F9 │ 134031693 │ 0x44d0...c506 │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22237189 │ 2025-04-10 07:59:47 │ 0x1528...120C │ 134033337 │ 0x6da3...9bce │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22237494 │ 2025-04-10 09:00:59 │ 0xC40b...7C32 │ 134035313 │ 0xed42...2687 │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22237791 │ 2025-04-10 10:00:35 │ 0xF7BC...29e8 │ 134036917 │ 0x042b...efb5 │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22238086 │ 2025-04-10 10:59:47 │ 0x4E3B...f55A │ 134038845 │ 0xf622...9348 │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22238396 │ 2025-04-10 12:01:47 │ 0xf3D4...97B8 │ 134040625 │ 0x4d74...9fcf │ X ║
╟──────────┼─────────────────────┼───────────────┼───────────┼───────────────┼───╢
║ 22238695 │ 2025-04-10 13:01:35 │ 0x1cF1...824F │ 134042382 │ 0xe3ec...523d │ X ║
╚══════════╧═════════════════════╧═══════════════╧═══════════╧═══════════════╧═══╝
i.e. 7 days, 58 minutes and 12 seconds passed between the withdrawal being initiated and the time it became ready to be executed on L1.
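The interval can be verified with a quick calculation from the two timestamps: the withdrawal was initiated at `1743620255`, and the first anchor update past its L2 block number was mined at 2025-04-09 19:55:47 UTC:

```python
from datetime import datetime, timedelta, timezone

initiated = datetime.fromtimestamp(1743620255, tz=timezone.utc)  # Apr-02 18:57:35 UTC
ready = datetime(2025, 4, 9, 19, 55, 47, tzinfo=timezone.utc)    # first anchor past the withdrawal

assert initiated == datetime(2025, 4, 2, 18, 57, 35, tzinfo=timezone.utc)
assert ready - initiated == timedelta(days=7, minutes=58, seconds=12)
```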
Scroll
Scroll uses a ZK-rollup architecture with an asynchronous message bridge between L2 and L1.
Withdrawals (and every other L2→L1 message) follow the life-cycle illustrated by the events below:
// Emitted on L2 when a message is queued
event AppendMessage(uint256 index, bytes32 messageHash);
// 0x5300000000000000000000000000000000000000 (Scroll: L2 Message Queue)
// Emitted on L1 when a message is executed
event RelayedMessage(bytes32 indexed messageHash);
// 0x6774bcbd5cECEf1336B5300Fb5186a12DDD8B367 (Scroll: L1 Scroll Messenger Proxy)
// Emitted on L1 when a batch proof is verified and the batch becomes final
event FinalizeBatch(
uint256 indexed batchIndex,
bytes32 indexed batchHash,
bytes32 stateRoot,
bytes32 withdrawRoot
);
Time to withdrawal calculation
- **Track initiation on L2**
  Listen for the `AppendMessage` event on the L2 Message Queue and store `messageHash` together with the L2 block timestamp at which the event was emitted.
- **Track execution on L1**
  Listen for `RelayedMessage` on the L1 Scroll Messenger. When a matching `messageHash` is found, fetch the transaction data. The calldata calls `relayMessageWithProof(...)`; decode it and read the `_proof.batchIndex` field.
- **Find the first-available timestamp**
  With the extracted `batchIndex`, search the `ScrollChain` contract for the corresponding `FinalizeBatch` event. The timestamp of the transaction that emitted this event represents the moment the batch became final and every withdrawal in it could have been executed.
- **Compute the intervals**
  + Earliest withdrawal time: `FinalizeBatch.timestamp − AppendMessage.timestamp` (how long users wait until the withdrawal can be executed)
  + Actual withdrawal time (optional): `RelayedMessage.timestamp − AppendMessage.timestamp` (includes any additional delay introduced by the relayer)
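The steps above reduce to simple matching logic once the three events are fetched. A minimal sketch with hypothetical event data (fetching and decoding the actual events and calldata is out of scope here):

```python
def scroll_withdrawal_times(append_ts: int, relayed_ts: int,
                            finalize_ts_by_batch: dict[int, int],
                            batch_index: int) -> tuple[int, int]:
    """Return (earliest, actual) withdrawal times in seconds.
    append_ts: L2 timestamp of AppendMessage
    relayed_ts: L1 timestamp of RelayedMessage
    finalize_ts_by_batch: batchIndex -> FinalizeBatch L1 timestamp
    batch_index: decoded from the relayMessageWithProof calldata"""
    finalize_ts = finalize_ts_by_batch[batch_index]
    earliest = finalize_ts - append_ts  # when the withdrawal could first be executed
    actual = relayed_ts - append_ts     # includes the relayer's extra delay
    return earliest, actual

# Hypothetical message queued at t=1000, batch 42 finalized at t=4600,
# relayer executed it at t=5000.
earliest, actual = scroll_withdrawal_times(1000, 5000, {42: 4600}, 42)
assert (earliest, actual) == (3600, 4000)
```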
Orbit Stack
When users initiate a withdrawal on L2 (for example, calling ArbSys.withdrawEth() or an ERC-20 bridge's withdraw function), the sequence of events is:
- The withdrawal triggers an L2-to-L1 message. Internally this is done via ArbSys.sendTxToL1, which emits an L2 event and adds the message to Arbitrum's outgoing message Merkle tree.
- The L2 transaction (and its outgoing message) get included in a rollup assertion that the validator posts to the L1 rollup contract (SequencerInbox/Bridge) in a batch. This marks the inclusion of the withdrawal request on L1.
- Once the assertion is posted, it enters the dispute window on L1. For Arbitrum One this is roughly 7 days (currently ~6.4 days in seconds). During this period, any validator can challenge the posted state if they detect fraud. In the normal case with no fraud proofs, the assertion simply "ages" for the full challenge period.
- Once the dispute period expires without a successful challenge, at least one honest validator will confirm the rollup assertion on L1. The Arbitrum Rollup contract finalizes the L2 state root and posts the assertion's outgoing message Merkle root to the Outbox contract on L1. At this point the L2 transaction's effects are fully finalized on L1 (equivalent to an L1 transaction's finality, aside from Ethereum's own finalization delay). The withdrawal message is now provably included in the Outbox's Merkle root of pending messages.
Time to withdrawal calculation
- **Track initiation on L2**
  Listen for the `L2ToL1Tx` event from the `ArbSys` precompile (`0x0000000000000000000000000000000000000064`):

  ```solidity
  event L2ToL1Tx(
      address caller,              // sender on L2
      address indexed destination, // receiver on L1
      uint256 indexed hash,        // unique message hash
      uint256 indexed position,    // (level << 192) | leafIndex
      uint256 arbBlockNum,         // L2 block number
      uint256 ethBlockNum,         // 0 at emission
      uint256 timestamp,           // L2 timestamp
      uint256 callvalue,           // ETH value
      bytes data                   // calldata for L1
  );
  ```

  Store:
  + `position` - the global message index
  + `hash` - unique identifier (for quick look-ups)
  + `timestamp` - L2 time of initiation
  + `arbBlockNum` - the L2 block number where the withdrawal was initiated

  From `position` you can extract:
  + `level = position >> 192` (always 0 in Nitro)
  + `leafIndex = position & ((1 << 192) - 1)`
- **Detect when the withdrawal becomes executable**
  After the ≈7-day fraud-proof window a validator confirms the rollup assertion. Assertion confirmation can also incur a "challenge grace period" delay, which allows the Security Council to intervene at the end of a dispute in case of any severe bugs in the OneStepProver contracts. During assertion confirmation the Rollup contract emits:

  ```solidity
  event AssertionConfirmed(
      bytes32 indexed assertionHash,
      bytes32 indexed blockHash, // L2 block hash of the assertion's end
      bytes32 sendRoot           // root of the Outbox tree
  );
  ```

  The confirmation routine calls `Outbox.updateSendRoot(sendRoot, l2ToL1Block)`, which emits:

  ```solidity
  event SendRootUpdated(
      bytes32 indexed outputRoot,  // == sendRoot above
      bytes32 indexed l2BlockHash  // L2 block hash corresponding to this root
  );
  ```

  This `l2BlockHash` signifies that all L2-to-L1 messages initiated in L2 blocks up to and including the block it represents are now covered by the `outputRoot` and are executable. To check whether the specific withdrawal (with `leafIndex` and `arbBlockNum` from Step 1) is executable:
  + Find the `SendRootUpdated` event.
  + Get the L2 block number corresponding to `SendRootUpdated.l2BlockHash`.
  + If `your_withdrawal.arbBlockNum <= L2_block_number_of_SendRootUpdated_event`, then the withdrawal (identified by its `leafIndex`) can be executed. The L1 timestamp of this `SendRootUpdated` event is the earliest time the withdrawal becomes executable.
- **Compute the intervals**
  + Earliest withdrawal time: `SendRootUpdated.timestamp − L2ToL1Tx.timestamp` (where `SendRootUpdated` meets the condition in Step 2)
This PoC script calculates the time to withdrawal for Arbitrum One.
Stages edge cases
Table of Contents
- Introduction
- Liveness failure upper bound
- Forced transaction delay upper bound
- Frontrunning risk
- Based preconfs
Introduction
The goal of this document is to describe certain edge cases that are not explicitly covered by the Stage 1 requirements, either voluntarily or because they were not considered at the time of writing, and start a discussion on how to handle them. The document is not exhaustive, and it is expected that more edge cases will be added in the future.
Liveness failure upper bound
The new Stage 1 requirement announced here is presented as follows:
➡️ The only way (other than bugs) for a rollup to indefinitely block an L2→L1 message (e.g. a withdrawal) or push an invalid L2→L1 message (e.g. an invalid withdrawal) is by compromising ≥75% of the Security Council.
⚠️ Assumption: if the proposer set is open to anyone with enough resources, we assume at least one live proposer at any time (i.e. 1-of-N assumption with unbounded N). We don’t assume it to be non-censoring.
While "indefinitely" corresponds to permanent liveness failures, it is unreasonable to classify chains as Stage 1 if they allow "bounded" liveness failures of a million years, hence the need to define an acceptable upper bound. Bounded liveness failures are allowed in Stage 1 to allow teams to quickly respond to threats by pausing the system and handing over control to the Security Council (SC).
As a reminder, the new Stage 1 principle assumes that at least a quorum-blocking minority of the Security Council is honest and can be trusted to prevent permanent liveness failures. This can be implemented either by allowing a quorum-blocking minority (or fewer members) to unpause, or indirectly by implementing an expiration mechanism for the pause plus a cooldown mechanism to prevent repeated pauses. If the Security Council minority approach is employed, no upper bound needs to be defined. If the second strategy is used, the expiration time defines the upper bound for the liveness failure. A cooldown period of zero would convert any bounded liveness failure into a permanent one, hence the need to also define a minimum cooldown period.
Pause mechanisms can differ, as they can either affect withdrawals, deposits, or both. For this reason, it's worth evaluating whether different upper bounds are needed for each of them.
Case study 1: Optimism
Optimism describes how a standard OP stack is supposed to satisfy the new Stage 1 requirements in their specs. A "guardian" (i.e. the Security Council in the Superchain) can trigger a pause. The guardian can delegate this role to another actor called the "pause deputy" via a "deputy pause" Safe module. The pause automatically expires and cannot be triggered again unless the mechanism is explicitly reset by the guardian, meaning that the cooldown period is infinite. The pause can either be activated globally (e.g. a Superchain-wide pause) or locally for a single chain. The expiration time of a standard OP stack is defined to be 3 months. Since both pauses can be chained, the liveness failure bound is actually 6 months. The guardian can explicitly unpause the system, and if so, the pause mechanism can be reused immediately. In addition to this mechanism, the guardian can always unilaterally revoke the deputy guardian role (and with it the pause deputy role).
It's important to note that the pause only affects withdrawals, and not deposits or forced transactions. If a user needs to perform any action on the L2, for example to save an open position, they are still allowed to do so after the usual forced transaction delay in the worst case.
Case study 2: Scroll
At the time of writing, Scroll allows a non-SC actor to pause the system. Scroll is assessed as a Stage 1 system under the old requirements because the Security Council majority can always recover from a malicious pause by revoking the non-SC role that allows pausing the system. With the new requirements, either a minority of the SC should be allowed to unpause and revoke, or the pause should expire. More importantly, the pause mechanism also affects forced transactions, as it would not be possible to call the `depositTransaction` function in the `EnforcedTxGateway` contract. In such a case, users would not be able to perform any action on the L2.
Let's assume for a moment that these issues with the pause mechanism are fixed. An explicit pause mechanism is not the only way to cause a liveness failure. The protocol currently employs a permissioned sequencer that can ignore forced transactions, up until the "enforced liveness mechanism" gets activated and everyone can then submit and prove blocks. If the delay is 7d, then the mechanism effectively acts as a pause that lasts 7 days and should be handled accordingly.
Forced transaction delay upper bound
The OP stack and Orbit stack notably have 12h and 24h max forced transaction delays, respectively. Given that years-long delays are unreasonable, an upper bound must be defined. While the forced transaction delay is taken into consideration when calculating exit windows, if a project exclusively relies on a Security Council for upgrades, such delays are not accounted for anywhere. Moreover, some protocols don't have well-defined boundaries on the delay, like the recent forced transaction mechanism introduced by Taiko, where in the worst case a forced transaction is processed from the queue every 255 batches, but if the queue is too long, the delay for a specific forced transaction can be much longer.
Given that forced transaction delays effectively act as a temporary pause from the perspective of censored users, it should be considered whether the upper bound value should be aligned with the liveness failure upper bound for sequencing (e.g. 7d).
Frontrunning risk
Intuition: if there are two permissioned provers, one being an SC minority and one being a non-SC actor, the non-SC actor might frontrun the SC and prevent it from, for example, enforcing censorship resistance if no other mechanism is present. The threat model should be made more explicit on whether malicious actors have the ability to continuously frontrun other actors. For reference, Arbitrum's BoLD paper is designed around this threat model.
Based preconfs
Pre-Pacaya Taiko uses free-for-all based sequencing, and therefore does not need a forced transaction mechanism. After Pacaya, a "preconf router" can be added with the intent of restricting sequencing rights to a staked whitelist of opted-in L1 proposers. It's important to remember that most censorship resistance guarantees today come from the long tail of self-building proposers that do not use mev-boost, or in the future from FOCIL. Arguably, proposers that do not opt in to mev-boost will also not opt in to stake in the preconf router, and will therefore not be able to provide censorship resistance; neither would transactions originated by users through FOCIL, as the sequencing call is gated. Because of this, the addition of a preconf router suggests the need for an additional forced transaction mechanism.