Celo L1 → Celo L2 Operator Guide

Overview

warning

The information in this guide is subject to change.

In the Celo L2 transition we are planning to migrate the historical Celo data into the Celo L2 node so that all blocks, transactions, logs and receipts will be accessible within the Celo L2 node.

At the transition point, operators will need to shut down their Celo nodes and run a migration script on the datadir to convert it to a form that can be used by a Celo L2 node. The Celo L2 node can then be run with the converted datadir. Operators that wish to full sync or run an archive node will need to start from a migrated datadir, because archive and full sync nodes require the full state at the transition point in order to continue applying transactions. Operators that don't need full sync or archive data will be able to start a Celo L2 node and snap sync without a migrated datadir.

In order to simplify maintenance of the Celo L2 node, we are planning not to include any old execution logic in Celo L2. RPC calls that invoke execution for pre-transition blocks will still be supported by proxying them from the Celo L2 node to an archive Celo node. Operators that wish to run full archive nodes will therefore need to run both a Celo node and a Celo L2 node, and since the Celo L2 node does include the full chain history, those operators will need to start with roughly double the storage they currently require for an archive Celo node.

Operation instructions

Running a fullnode on the Dango testnet (migration block: 24940101)

The new node consists of three services: op-geth (the execution client), op-node (the consensus client) and eigenda-proxy (which provides offchain data availability). The op-node service connects to a p2p network of other op-node instances, through which the sequencer distributes new blocks. It also has a direct connection to its op-geth instance, to which it forwards received blocks for execution, and finally it connects to the eigenda-proxy, which acts as a gateway to the data availability layer.

note

Note the instructions below bind services to all interfaces (0.0.0.0) and op-geth uses a vhosts value of '*', which is useful in a test/debug environment; however, you may want to change that when running in production. The instructions also assume that all services are run on the same machine, such that op-node can access op-geth and eigenda-proxy via localhost. If that is not the case, you will need to configure the following op-node flags appropriately. You will also need to copy the jwtsecret generated by op-geth at <path-to-your-datadir>/geth/jwtsecret to a location where the op-node process can access it.

op-node updated networking flags:

--l2=<your-public-op-geth-auth-rpc-addr>
--l2.jwt-secret=<location-of-copied-jwt-secret>
--plasma.da-server=<your-public-eigenda-proxy-address>
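
For example, if op-node runs on a different machine than op-geth, the JWT secret could be copied over with scp (a sketch only; the user, host and destination path are placeholders for your own setup):

scp <path-to-your-datadir>/geth/jwtsecret <user>@<op-node-host>:<location-of-copied-jwt-secret>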

Running op-geth

The op-geth service should be started before the op-node service, because the op-node service will shutdown after a few seconds if it cannot connect to its corresponding op-geth instance.

Download and extract the migrated chaindata:

note

Note that this chaindata is from a fullnode, so it lacks historical states, and therefore nodes using it will not be able to perform RPC actions that rely on state for blocks prior to the migration block.

wget https://storage.googleapis.com/cel2-rollup-files/dango/dango-migrated-datadir.tar.zst
tar -xvf dango-migrated-datadir.tar.zst
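
If your version of tar does not auto-detect zstd compression, the archive can also be decompressed explicitly (this assumes the zstd tool is installed):

zstd -d --stdout dango-migrated-datadir.tar.zst | tar -xvf -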

Position it correctly under a datadir:

mkdir -p <your-datadir-name>/geth
mv chaindata <your-datadir-name>/geth

Clone the celo op-geth repo and build op-geth:

git clone https://github.com/celo-org/op-geth.git
cd op-geth
git checkout badaf7f297762fbda117bc654b744e74a0ad6fe1
make geth

Run op-geth (note that the --datadir flag needs replacing with the actual path to your datadir):

./build/bin/geth \
--datadir=<your-datadir-path> \
--gcmode=archive \
--syncmode=full \
--authrpc.port=8551 \
--authrpc.addr=0.0.0.0 \
--authrpc.vhosts='*' \
--http \
--http.addr=0.0.0.0 \
--http.port=8545 \
--http.vhosts='*' \
--http.api=eth,net,web3,debug,txpool,engine,admin \
--ws \
--ws.port=8545 \
--ws.addr=0.0.0.0 \
--ws.api=eth,net,web3,debug,txpool,engine \
--rollup.sequencerhttp=https://forno.dango.celo-testnet.org \
--rollup.disabletxpoolgossip=true \
--rollup.halt=major \
--bootnodes="" \
--nodiscover
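
Once op-geth is running, a quick sanity check is to query the HTTP RPC for the current block number (this assumes the HTTP address and port configured above):

curl -s -X POST -H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
http://localhost:8545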

Running eigenda-proxy

Clone the repo and build the proxy:

git clone https://github.com/Layr-Labs/eigenda-proxy.git
cd eigenda-proxy
git checkout v1.2.0
make

Download and validate the required g1 and g2 point files (these files are part of the EigenDA KZG trusted setup), see https://github.com/Layr-Labs/rust-kzg-bn254:

mkdir eigenda-resources
( cd eigenda-resources
wget \
https://srs-mainnet.s3.amazonaws.com/kzg/g1.point \
https://srs-mainnet.s3.amazonaws.com/kzg/g2.point.powerOf2 \
https://raw.githubusercontent.com/Layr-Labs/eigenda-operator-setup/master/resources/srssha256sums.txt
sha256sum -c srssha256sums.txt )

Generate a private key for your eigenda proxy instance; you will need cast (part of foundry) and jq for this, see Install Foundry & Download jq:

cast wallet new -j | jq -r '.[0].private_key' | tail -c +3 > eigenda-priv-key.txt

Run the proxy:

./bin/eigenda-proxy \
--addr=0.0.0.0 \
--port=4242 \
--eigenda-disperser-rpc=disperser-holesky.eigenda.xyz:443 \
--eigenda-g1-path=eigenda-resources/g1.point \
--eigenda-g2-tau-path=eigenda-resources/g2.point.powerOf2 \
--eigenda-disable-tls=false \
--eigenda-signer-private-key-hex=$(cat eigenda-priv-key.txt)
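
To confirm the proxy is up, you can check that it is listening on the configured port (a basic check that assumes the address and port set above):

ss -ltn | grep 4242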

Running op-node

Clone the celo optimism repo and build op-node:

git clone https://github.com/celo-org/optimism.git
cd optimism/op-node
git checkout 42f2a5bbb7218c0828a996c48ad6bceb1e5f561a
make op-node

Download the rollup config file:

wget https://storage.googleapis.com/cel2-rollup-files/dango/rollup.json

Run op-node (note --l2.jwt-secret flag needs updating with the path to your op-geth datadir):

./bin/op-node \
--l1=https://ethereum-holesky-rpc.publicnode.com \
--l1.beacon=https://ethereum-holesky-beacon-api.publicnode.com \
--l1.trustrpc=true \
--l2=http://localhost:8551 \
--l2.jwt-secret=<your-datadir-path>/geth/jwtsecret \
--rollup.load-protocol-versions=true \
--rollup.config=rollup.json \
--verifier.l1-confs=4 \
--p2p.advertise.ip=127.0.0.1 \
--p2p.listen.tcp=9222 \
--p2p.listen.udp=9222 \
--p2p.priv.path=op-node_p2p_priv.txt \
--p2p.static=\
/ip4/34.19.9.48/tcp/9222/p2p/16Uiu2HAmM4Waw3Qmw9eFjvLbPzNUSaNvdD91RzodwxDzTXCM3Rp1,\
/ip4/34.83.14.89/tcp/9222/p2p/16Uiu2HAkwGzuYEzVY7CXoN44BnfUiZWe92TMU1dJesbAi4CYGQFS,\
/ip4/34.19.27.0/tcp/9222/p2p/16Uiu2HAmSf7FE4FXCy6ks5VPjX2Vdo21N9H3PQk7H7T8HbDMqEB8,\
/ip4/34.82.212.175/tcp/9222/p2p/16Uiu2HAm2xo9mPhbMW9eAzjLMjp6JFEa1gijWu2CsBpWEqVWh7Kg \
--plasma.enabled=true \
--plasma.da-server=http://127.0.0.1:4242 \
--plasma.da-service=true \
--plasma.verify-on-read=false
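
Once op-node is running it should begin deriving blocks and forwarding them to op-geth for execution. If you have left op-node's own RPC at its defaults (an assumption; it can be changed with the --rpc.addr and --rpc.port flags), sync progress can be inspected with:

curl -s -X POST -H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"optimism_syncStatus","params":[],"id":1}' \
http://localhost:9545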

Migrating the L1 data (for illustrative and testing purposes)

note

Note that the instructions that follow allow you to perform a migration for testing purposes. For the Dango testnet you must use the migrated datadir that we provide. For future testnets we will support migrating your own datadir.

Check out the celo optimism repo and run make build; this will take a few minutes.

git clone https://github.com/celo-org/optimism.git
cd optimism
make build

Generate the l2 allocs file

This step generates the allocs file which defines the state modifications that occur at the transition point from L1 to L2. It is a required input for the migration process.

(cd packages/contracts-bedrock &&
CONTRACT_ADDRESSES_PATH=../../op-chain-ops/cmd/celo-migrate/testdata/deployment-l1-holesky.json \
DEPLOY_CONFIG_PATH=../../op-chain-ops/cmd/celo-migrate/testdata/deploy-config-holesky-alfajores.json \
STATE_DUMP_PATH=../../op-chain-ops/cmd/celo-migrate/testdata/l2-allocs-alfajores.json \
forge script ./scripts/L2Genesis.s.sol:L2Genesis \
--sig 'runWithStateDump()')

Run the migration

You will be able to run an instance of op-geth with the migrated datadir, but we don't have a sequencer deployed for this config and there is no agreed-upon block to migrate at, so there is no scope for receiving new blocks or being part of a network with this setup. However, you will be able to test the RPC API.

Build the tool:

cd op-chain-ops
make celo-migrate

Then run the migration with the command below; you will need to substitute the following two fields:

  • <old-datadir> refers to the datadir of the node you're migrating; note that the node must be stopped in order for the migration to work correctly.
  • <new-datadir> refers to the datadir that you will use for the op-geth node.
note

Note that migration will take about 15 minutes to complete.

./bin/celo-migrate full \
--old-db=<old-datadir>/celo/chaindata \
--new-db=<new-datadir>/geth/chaindata \
--deploy-config=cmd/celo-migrate/testdata/deploy-config-holesky-alfajores.json \
--l1-deployments=cmd/celo-migrate/testdata/deployment-l1-holesky.json \
--l1-rpc=https://ethereum-holesky-rpc.publicnode.com \
--l2-allocs=cmd/celo-migrate/testdata/l2-allocs-alfajores.json \
--outfile.rollup-config=rollup-config.json
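
If the migration completes successfully, the migrated chaindata and the generated rollup config should both be present (a quick check using the paths passed above):

ls <new-datadir>/geth/chaindata
ls -lh rollup-config.json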

Run op-geth on migrated datadir

Check out the celo op-geth repo and build geth:

git clone https://github.com/celo-org/op-geth.git
cd op-geth
make geth

Then run the node on the migrated datadir with your choice of flags; for example, an indexer could use:

build/bin/geth \
--datadir=<new-datadir> \
--syncmode=full \
--gcmode=archive \
--snapshot=true \
--http \
--http.api=eth,net,web3,debug,txpool,engine

Note that for the actual deployment you will need to run an instance of op-node, which will communicate with the sequencer and your op-geth instance, but since we currently have no publicly available sequencer there is no reason to run it now.

Fullnode with full sync

Start the Celo L2 execution client with a fully synced and migrated Celo datadir.

Fullnode with snap sync

Start the Celo L2 execution client with an empty datadir.
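
A minimal sketch of this setup (assuming the flags shown in the Dango fullnode instructions above and that you have the genesis.json file produced by the migration; a production deployment may need different flags) initialises the empty datadir from the genesis file and then starts op-geth with snap sync:

./build/bin/geth init --datadir=<empty-datadir> genesis.json
./build/bin/geth \
--datadir=<empty-datadir> \
--syncmode=snap \
<remaining flags as in the fullnode example above>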

Archive node with support for execution (tracing, calls, gas estimation …)

  • Start Celo node with fully synced datadir in archive mode.
  • Start the Celo L2 execution client with a fully synced and migrated Celo datadir and additionally provide a flag with the RPC API address of the Celo node (this flag will be used to proxy any requests needing execution to the Celo node)
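
As a rough sketch (the proxy flag below follows upstream op-geth's --rollup.historicalrpc option and is an assumption for Celo; the final flag name and endpoint may differ):

./build/bin/geth \
--datadir=<migrated-datadir> \
--syncmode=full \
--gcmode=archive \
--rollup.historicalrpc=http://<celo-archive-node-rpc-address> \
<remaining flags as in the fullnode example above>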

Mainnet Migration Process Overview (WIP)

Note this section is subject to change and intended for illustrative purposes only.

Stopping the L1 Celo network

A new version of celo-blockchain will be released that allows the flag --l2migrationblock to be passed, specifying the block number of the first L2 block (i.e. the block immediately after the last block of the Celo network as an L1). Nodes run with this flag will stop producing, inserting and sharing blocks once the block immediately before --l2migrationblock has been inserted.

When this version of celo-blockchain is released and a migration block number has been chosen, node operators will be asked to upgrade and pass in the correct block number when restarting their node.
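
For illustration, restarting a node with the migration block set might look like the following (a sketch only; the block number is a placeholder and the remaining flags are whatever you already use to run celo-blockchain):

geth \
--datadir=<your-celo-datadir> \
--l2migrationblock=<migration-block-number> \
<your existing flags>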

Starting an L2 node

As is detailed in other parts of this guide, node operators who wish to run an L2 node will have two options.

  1. Perform a migration of the l1 chaindata. This option is required for nodes running in full or archive mode and is a more involved process.
  2. Start an L2 node with snap sync. This option does not require running the migration script.

The rest of this section applies only to option 1.

Pre-migration

~48 hours before the migration block, node operators will need to run a pre-migration script. This script migrates the majority of the chaindata in advance, so that the full migration only needs to process the most recent data once the migration block is reached. (The method of distributing the migration script is TBD; right now we build and run it from source.)

  1. Create a snapshot of the node's chaindata directory (a subdirectory of the node's datadir); see the example sketch after this list.
    • Do not run the migration script on a datadir that is actively being used by a node.
    • In order to ensure liveness of the network, please do not stop your node to perform the migration.
    • Make sure you have at least enough disk space to store double the size of the snapshot, as you will be storing both the old and new (migrated) chaindata.
  2. Run the pre-migration script.
celo-migrate pre --old-db /path/to/old/datadir/chaindata --new-db /path/to/new/datadir/chaindata
  • old-db should be the path to the chaindata snapshot.
  • new-db should be the path where you want the l2 chaindata to be written.
  • Please run the pre-migration script at least 24 hours before the migration block so that there is time to troubleshoot any issues.
  • Ensure the pre-migration script completes successfully (this should be clear from the logs). If it does not, please reach out for assistance.
  • Keep the new-db as is until the full migration script is run. If this data is corrupted or lost, the full migration may fail or take a very long time to complete.
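
One possible way to take the chaindata snapshot referenced in step 1, assuming rsync is available and your L1 datadir follows the default layout (<datadir>/celo/chaindata), is shown below; a filesystem-level snapshot (e.g. LVM or ZFS) also works:

rsync -a /path/to/old/datadir/celo/chaindata/ /path/to/chaindata-snapshot/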

Full migration

Your node will stop adding blocks immediately before the block passed as --l2migrationblock. That is, the last block added will be l2migrationblock - 1, and the first block of the L2 will be l2migrationblock. When your node stops adding blocks, you may shut it down.

NOTE: Do not run the migration script on a datadir that is actively being used by a node even if it has stopped adding blocks.

  1. Run the full migration script
celo-migrate full \
--old-db /path/to/old/datadir/chaindata \
--new-db /path/to/new/datadir/chaindata \
--deploy-config /path/to/deploy-config.json \
--l1-deployments /path/to/l1-deployments.json \
--l1-rpc <ETHEREUM_RPC_URL> \
--l2-allocs /path/to/l2-allocs.json \
--outfile.rollup-config /path/to/rollup-config.json \
--outfile.genesis /path/to/genesis.json
  • old-db should be the path to the chaindata snapshot or the chaindata of your stopped node.

  • new-db should be the path where you want the l2 chaindata to be written. This should be the same path as in the pre-migration script; otherwise all the work done in the pre-migration will be lost.

  • deploy-config should be the path to the JSON file that was used for the l1 contracts deployment. This will be distributed by cLabs.

  • l1-deployments should be the path to the L1 deployments JSON file, which is the output of running the bedrock contracts deployment for the given 'deploy-config'. This will be distributed by cLabs.

  • l1-rpc should be the rpc url of an l1 (i.e. Ethereum) node.

  • l2-allocs should be the path to the JSON file defining necessary state modifications that will be made during the full migration. This will be distributed by cLabs.

  • outfile.rollup-config should be the path where you want the rollup-config.json file to be written by the migration script. You will need to pass this file when starting the l2 node.

  • outfile.genesis should be the path where you want the genesis.json file to be written by the migration script. Any node wishing to snap sync on the L2 chain will need this file.

  • Ensure the full migration script completes successfully (this should be clear from the logs). If it does not, please reach out for assistance.

Start L2 Node

  • Start the l2 node using the migrated chaindata written to the new-db path.
  • Exact command is TBD. Right now we run both op-node and op-geth from source separately and pass a number of flags to each. We will provide a more user-friendly way to start the entire node.