Celo L2 Node Operator Guide
This guide is designed to help node operators run Celo L2 nodes and to explain the process of switching from running a Celo L1 node to a Celo L2 node.
Migration overview
During the Celo L1 to L2 transition, all historical Celo data is migrated into the Celo L2 node, ensuring that blocks, transactions, logs, and receipts remain fully accessible within the Celo L2 environment.
This process involves 4 steps:
- Upgrading the L1 node to the latest client release so it stops producing blocks at the hardfork
- Waiting for the hardfork
- Pulling the new network configuration files once they are known (e.g. hardfork block time)
- Launching the L2 node with snap sync
Node operators who want to verify the entire chain history through the upgrade may instead manually migrate their L1 node's chain data into a format compatible with the L2 node. This is required for all nodes looking to full sync and for anyone looking to run an archive node. Doing so involves these steps:
- Upgrading the L1 node to the latest client release so it stops producing blocks at the hardfork
- Optionally running pre-migrations of the L1 datadir (see note below)
- Waiting for the hardfork
- Shutting down the L1 node
- Pulling the new network configuration files once they are known (e.g. hardfork block time)
- Running the migration tool to migrate the L1 datadir and produce the hardfork block
- Launching the L2 node with the migrated datadir
If a node operator is interested in minimizing downtime during the hardfork, they can run the migration tool ahead of time to migrate the majority of the data before the hardfork. The migration tool can be run multiple times as the L1 chain data grows and will continue migrating from where it last left off. Please note that the node must be stopped before the migration tool is run.
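As a rough sketch of that workflow (using the `./migrate` command from the `celo-l2-node-docker-compose` repo described below; the container name, datadir paths, and network are illustrative assumptions, not prescribed values):

```bash
# Illustrative only: re-run the pre-migration periodically as the L1 chain grows.
# The tool resumes from where it last left off, so each run is incremental.
# Stop the L1 node before each run, then restart it afterwards.

docker stop celo-l1-node                 # hypothetical container name
./migrate pre alfajores /data/celo-l1    # hypothetical L1 datadir path
docker start celo-l1-node

# After the hardfork, a final run migrates the remaining data and
# produces the hardfork block:
#   ./migrate full alfajores /data/celo-l1
```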
Detailed instructions
To simplify running L2 nodes, cLabs has created a celo-l2-node-docker-compose repo with all the necessary configuration files and docker compose templates, which make it easy to pull network configuration files and launch all the services needed to run an L2 node.
For node operators interested in using Kubernetes, we recommend using Kompose to convert the docker compose template to Kubernetes helm charts.
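For example, a minimal Kompose invocation might look like the following (a sketch only, assuming Kompose is installed and the repo's compose file is named `docker-compose.yml`; verify the flags against your Kompose version):

```bash
# Convert the docker compose template into Kubernetes manifests.
kompose convert -f docker-compose.yml -o k8s/

# Some Kompose versions can emit a Helm chart instead of raw manifests.
kompose convert -f docker-compose.yml --chart -o celo-l2-chart/
```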
Prior to the hardfork, node operators must upgrade their existing L1 nodes to the respective release below. These releases will have a hardfork block configured. L1 nodes with a hardfork block will cease producing or accepting blocks at the block immediately preceding the hardfork block.
- Alfajores: 26384000
  - Celo L1 client: celo-blockchain v1.8.7. Docker image: `us-docker.pkg.dev/celo-org/us.gcr.io/geth-all:1.8.7`
- Baklava: not yet decided
  - Release not yet created
- Mainnet: not yet decided
  - Release not yet created
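For example, to pre-fetch the Alfajores L1 client image listed above ahead of the hardfork (a simple sketch; use the image and tag that match your network's release):

```bash
# Pull the hardfork-aware Celo L1 client image for Alfajores.
docker pull us-docker.pkg.dev/celo-org/us.gcr.io/geth-all:1.8.7

# Confirm the image is available locally before the hardfork window.
docker images | grep geth-all
```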
Once this block number is reached, node operators can launch the L2 node. Instructions are provided for both snap syncing and full syncing, as the two processes differ significantly.
Snap sync
- Pull the latest version of `celo-l2-node-docker-compose` and `cd` into the root of the project.
- Run `cp <network>.env .env` where `<network>` is one of `alfajores`, `baklava`, or `mainnet`.
- Open `.env` and optionally configure any setting you may wish to change, such as setting `NODE_TYPE=archive` to enable archive mode.
- Run `docker-compose up -d --build`.
- To check the progress of the node you can run `docker-compose logs -n 50 -f op-geth`. This will display the last 50 lines of the logs and follow the logs as they are written. In a syncing node, you would expect to see lines of the form `Syncing beacon headers downloaded=...` where the downloaded number is increasing, and later lines such as `"Syncing: chain download in progress","synced":"21.07%"` where the percentage is increasing. Once the percentage reaches 100%, the node should be synced.
- At this point, you should be able to validate the progression of the node by fetching the current block number via the RPC API and seeing that it is increasing (see the example request after this list). Note that until fully synced, the RPC API will return 0 for the head block number.
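As an example of that final check, you can query the node's JSON-RPC endpoint with a standard `eth_blockNumber` request (this sketch assumes the HTTP RPC is exposed on localhost port 8545; adjust host and port to match your configuration):

```bash
# Fetch the current head block number (hex-encoded) from the L2 node.
curl -s -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'

# Repeat after a short wait; the returned value should increase once the
# node has finished snap syncing (it returns 0 before that).
```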
Full sync
- Stop your L1 node.
- Pull the latest version of `celo-l2-node-docker-compose` and `cd` into the root of the project.
- Run `./migrate full <network> <path-to-your-L1-datadir> [<l2_destination_datadir>]` where `<network>` is one of `alfajores`, `baklava`, or `mainnet`. If a destination datadir is specified, ensure that the value of `DATADIR_PATH` inside `.env` is updated to match. The migration process will take at least several minutes to complete. Note that in all cases the migration does not modify the original datadir; the migrated data is written to a new directory.
- Verify that the migration was successful by looking for the `Migration successful` message at the end of the output.
- Run `cp <network>.env .env` where `<network>` is one of `alfajores`, `baklava`, or `mainnet`.
- Open `.env`, set `OP_GETH__SYNCMODE=full`, and optionally configure any other setting you may wish to change, such as setting `NODE_TYPE=archive` to enable archive mode.
- Run `docker-compose up -d --build`.
- To inspect the progress of the node you can run `docker-compose logs -n 50 -f op-geth`. This will display the last 50 lines of the logs and follow the logs as they are written. In a syncing node, you would expect to see lines of the form `Syncing beacon headers downloaded=...` where the downloaded number is increasing. Once syncing of the beacon headers is complete, full sync will begin by applying blocks on top of the hardfork block.
- At this point, you should be able to validate the progression of the node by fetching the current block number via the RPC API and seeing that it is increasing (see the condensed command sequence after this list).
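Putting the full-sync steps together, a condensed command sequence might look like this (a sketch only: the repository URL, network, and datadir paths are assumptions you should replace with your own values):

```bash
# 1. Stop the L1 node (however it is managed in your setup), then get the repo.
git clone https://github.com/celo-org/celo-l2-node-docker-compose.git  # assumed URL
cd celo-l2-node-docker-compose

# 2. Migrate the L1 datadir and produce the hardfork block.
./migrate full alfajores /data/celo-l1 /data/celo-l2   # hypothetical paths

# 3. Configure the environment for full sync.
cp alfajores.env .env
# In .env: set OP_GETH__SYNCMODE=full and DATADIR_PATH=/data/celo-l2

# 4. Launch the L2 node and watch it sync.
docker-compose up -d --build
docker-compose logs -n 50 -f op-geth
```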
Pre-migration
Node operators who wish to minimize the migration downtime during the hardfork can perform pre-migrations with the following steps.
- Stop your L1 node.
- Pull the latest version of `celo-l2-node-docker-compose` and `cd` into the root of the project.
- Run `./migrate pre <network> <path-to-your-L1-datadir>` where `<network>` is one of `alfajores`, `baklava`, or `mainnet`. This will likely take some minutes to complete.
- Once the pre-migration is complete, you can start your L1 node again.
Pre-hardfork archive state access and execution
Node operators who were running archive nodes before the migration and wish to maintain execution
and state access functionality for pre-hardfork blocks will need to continue to run their L1 node
and configure their L2 node to proxy pre-hardfork execution and state access requests to the L1 node
by setting `OP_GETH__HISTORICAL_RPC` in `.env` to the RPC address of their L1 node.
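A minimal sketch of that configuration, assuming the L1 node's HTTP RPC is served on port 8545 of a host named `l1-archive-node` (both are illustrative values):

```bash
# .env excerpt for the L2 node: proxy pre-hardfork execution and state
# access requests to the still-running L1 archive node.
OP_GETH__HISTORICAL_RPC=http://l1-archive-node:8545
NODE_TYPE=archive
```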