P2P Networking Stack

Base nodes run two independent peer-to-peer networks — one for the consensus layer (CL) and one for the execution layer (EL) — and bridge them locally through the Engine API. This guide walks through both stacks, why they exist, and where the relevant code lives in base/base.

A blockchain is a network of computers that must coordinate without a central server. Peer-to-peer networking provides that coordination. Each node connects to a handful of peers, and those peers connect to others, forming a mesh — a web of overlapping links where every node can reach every other node by some path. When a node sees a new block or transaction, it relays it to its neighbors, and within seconds the entire network has converged. This propagation pattern is gossip.

P2P networking solves three problems:

  1. Discovery — how a fresh node finds peers to connect to.
  2. Transport — how two nodes establish a secure session and exchange data.
  3. Gossip — how connected nodes broadcast new information efficiently without flooding the network.

Before The Merge in September 2022, Ethereum ran a single P2P network. Every node ran one execution client (such as Geth) that handled discovery, transaction gossip, block propagation, and proof-of-work consensus. The networking stack was DevP2P, a custom protocol suite designed for Ethereum starting in 2014.

The Merge introduced a second binary: the consensus client. Consensus clients follow the beacon chain — a separate chain coordinating proof-of-stake — and decide which blocks are canonical. The beacon chain was designed from scratch on libp2p rather than DevP2P because libp2p is more modular and better suited to the publish/subscribe patterns required for attestation and block gossip in proof-of-stake. Merging the stacks would have been a massive undertaking with little practical benefit, so Ethereum kept them separate and bridged them with a local API.

After The Merge, every Ethereum node runs two processes with two separate P2P networks. The execution layer continues to use DevP2P over RLPx (an encrypted transport described later) for transaction and historical block exchange. The consensus layer uses libp2p with gossipsub (a broadcast protocol described later) for beacon blocks and attestations. The two layers communicate locally through the Engine API — JSON-RPC over HTTP or WebSocket on localhost.

A rollup like Base inherits the same two-layer architecture but ships different payloads. Where Ethereum L1 gossips beacon blocks and attestations from hundreds of thousands of validators, Base’s CL gossips L2 execution payloads — the new blocks produced by the sequencer (the single designated node that orders and produces L2 blocks).

The sequencer signs each block with its private key so other nodes can verify it came from the authorized producer rather than from an attacker. It then publishes the signed block to the CL gossip network, and every Base node picks it up from there.

This is a latency optimization, not a security mechanism. Rollup security comes entirely from L1: batch data posted to Ethereum L1 is the canonical source of truth, and the derivation pipeline (software that reconstructs L2 state from L1 batch data) can rebuild the entire L2 chain from L1 alone. P2P gossip lets nodes see new L2 blocks several minutes before the corresponding batch lands on L1, reducing user-visible latency. The EL still runs a DevP2P network for transaction pool gossip and historical sync, though in practice the sequencer often receives transactions directly rather than through EL gossip.

How the CL and EL communicate: the Engine API

Before walking through each stack, it helps to know how they connect. The consensus and execution clients do not share a P2P network — they share a local RPC bridge called the Engine API: a JSON-RPC interface exposed by the execution client (typically on port 8551) and secured with JWT authentication, a shared-secret mechanism that proves the caller is authorized. Only the co-located consensus client can issue these calls.

The Engine API has three core methods. Think of it as a three-step exchange: “start building a block,” “give me what you built,” and “here’s a block someone else built — check it.”

engine_forkchoiceUpdated tells the execution client which block is the current head, the latest safe block, and the latest finalized block. When the consensus client also passes optional payload attributes (timestamp, fee recipient, and other parameters describing the block to build), the call additionally instructs the execution client to begin assembling a new block.

engine_getPayload retrieves the block the execution client has been assembling from its mempool.

engine_newPayload sends an execution payload received from gossip to the execution client for validation and execution. The execution client re-executes all transactions, verifies the state root (a cryptographic fingerprint of the entire blockchain state after executing the block’s transactions), and responds with VALID, INVALID, or SYNCING (meaning the node has not caught up to the chain tip yet and cannot validate the block).
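
To make these shapes concrete, here is a minimal sketch of the three requests, written with serde_json. The field lists are abridged and the hex strings are placeholders; the authoritative schemas are the Engine API specification and the OP Stack extensions to it.

use serde_json::json;

fn main() {
    // 1. Point the EL at the current head and, by including payload attributes,
    //    ask it to start building a block on top of it.
    let forkchoice_updated = json!({
        "jsonrpc": "2.0", "id": 1,
        "method": "engine_forkchoiceUpdatedV3",
        "params": [
            { "headBlockHash": "0x...", "safeBlockHash": "0x...", "finalizedBlockHash": "0x..." },
            // Optional payload attributes; the OP Stack adds fields such as
            // transactions, noTxPool, and gasLimit on top of these.
            { "timestamp": "0x...", "prevRandao": "0x...", "suggestedFeeRecipient": "0x...",
              "withdrawals": [], "parentBeaconBlockRoot": "0x..." }
        ]
    });

    // 2. Fetch the block the EL has been assembling; the payloadId comes from
    //    the response to step 1.
    let get_payload = json!({
        "jsonrpc": "2.0", "id": 2,
        "method": "engine_getPayloadV3",
        "params": ["0x0000000000000001"]
    });

    // 3. Hand a payload received from gossip to the EL for validation; it
    //    replies VALID, INVALID, or SYNCING. (V3 also takes blob versioned
    //    hashes and the parent beacon block root; abridged here.)
    let new_payload = json!({
        "jsonrpc": "2.0", "id": 3,
        "method": "engine_newPayloadV3",
        "params": [{ "blockHash": "0x...", "transactions": [] }]
    });

    println!("{forkchoice_updated}\n{get_payload}\n{new_payload}");
}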

For block production on Base, the flow is: the sequencer’s CL calls engine_forkchoiceUpdated with payload attributes to start building, waits, calls engine_getPayload to retrieve the built payload, signs it, and publishes it to the CL gossip network. For block validation, the flow is reversed: the CL receives a signed block from gossip, validates the signature, sends the execution payload to the EL via engine_newPayload, and uses the validity response to accept or reject the block.

This separation lets each layer evolve its networking independently. It also means implementations are swappable — any CL and EL that speak the Engine API can be paired.

The Base consensus P2P stack lives under crates/consensus/ and is composed of four crates that layer on top of one another: peers, disc, gossip, and the network actor in the service crate.

The base-consensus-peers crate provides the fundamental types for identifying and managing peers on the consensus network.

The central concept is the ENR (Ethereum Node Record), defined in EIP-778. An ENR is a compact, self-describing identity document a node uses to advertise itself. It is encoded with RLP (Recursive Length Prefix, Ethereum’s standard binary serialization format) and contains a cryptographic signature, a monotonically increasing sequence number (so newer records always rank higher), and a sorted set of key-value pairs. Predefined keys include id (the identity scheme — currently “v4” for secp256k1, the elliptic curve Ethereum uses for public-key cryptography), secp256k1 (the node’s compressed 33-byte public key), ip (IPv4 address), and tcp/udp (port numbers). When a node’s information changes (for example, its IP address rotates), it bumps the sequence number and re-signs the record; the signature ensures records cannot be tampered with. ENRs are deliberately small (300 bytes maximum) because they propagate frequently and may need to fit in constrained transports such as DNS TXT records.

Different layers of the Ethereum stack use ENR extension keys for chain-specific metadata. Ethereum L1’s execution layer carries an eth key with fork ID information. The L1 consensus layer carries an eth2 key with the fork digest and attestation subnet bitfield. On Base — and OP Stack chains in general — every ENR includes an opstack key encoding the L2 chain ID and a version number. This is how nodes on different chains (Base Mainnet versus Base Sepolia) tell each other apart during discovery. ENRs are typically displayed as a base64-encoded string prefixed with enr:.

The BaseEnr struct handles this encoding:

/// The unique L2 network identifier
pub struct BaseEnr {
    /// Chain ID
    pub chain_id: u64,
    /// The version. Always set to 0.
    pub version: u64,
}

impl BaseEnr {
    /// The ENR key literal string for the consensus layer.
    pub const OP_CL_KEY: &str = "opstack";

    /// Constructs a BaseEnr from a chain id.
    pub const fn from_chain_id(chain_id: u64) -> Self {
        Self { chain_id, version: 0 }
    }
}

When a node discovers another node’s ENR, EnrValidation checks that the opstack key is present, that it decodes correctly, and that the chain ID matches. If a Base Mainnet node (chain ID 8453) sees an ENR for a different chain, it ignores it.

The peers crate also provides a BootStore, a JSON file that persists discovered ENRs to disk so the node does not need to bootstrap discovery from scratch on every restart. The store caps at 2048 entries and prunes the oldest when full.

The base-consensus-disc crate implements peer discovery using discv5. Discv5 is a UDP-based protocol that maintains a distributed hash table (DHT) of node records. It succeeds discv4 (used by the EL) and was designed for the consensus layer’s needs.

Discv5 is inspired by Kademlia, an academic DHT protocol from 2002. A DHT is a system where many computers collectively maintain a lookup directory without a central coordinator. The Kademlia insight is using XOR as a distance metric. Every node has a 256-bit ID derived from its public key; the “distance” between two nodes is the XOR of their IDs interpreted as a number. The metric is symmetric (distance A→B equals B→A) and unrelated to physical or network distance. Two nodes with similar IDs can sit on opposite sides of the planet. Think of comparing two phone numbers digit by digit: numbers sharing a long prefix are “close” in Kademlia space even if the people holding them live in different countries.

Each node maintains a routing table organized into 256 “k-buckets” — one per possible bit-length of XOR distance. Bucket 0 holds nodes whose IDs differ from the local ID only in the least significant bit; bucket 255 holds nodes that differ in the most significant bit. Each bucket holds up to k entries (typically k=16). Imagine sorting your contacts into folders by how their IDs differ from yours: the “far” folders each cover a huge slice of the ID space and fill their k slots almost immediately, while the “close” folders cover tiny slices and stay sparse, so a node knows its own neighborhood in fine detail and only a sample of everything far away. That is exactly the shape an efficient lookup needs.
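
A toy worked example makes the metric and the bucket index concrete. The sketch below uses 8-bit IDs purely for illustration (real discv5 node IDs are 256 bits):

fn main() {
    // Two toy node IDs that share their top four bits.
    let local_id: u8 = 0b1100_1010;
    let peer_id: u8 = 0b1100_0111;

    // XOR distance: identical high-order bits cancel out, so a long shared
    // prefix means a small distance.
    let distance = local_id ^ peer_id; // 0b0000_1101 = 13

    // The k-bucket index is the position of the highest differing bit.
    // (Undefined for identical IDs, which would give distance 0.)
    let bucket = 7 - distance.leading_zeros(); // highest set bit of 13 is bit 3

    println!("distance = {distance}, bucket = {bucket}"); // distance = 13, bucket = 3
}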

To discover new peers, a node performs an iterative lookup: it picks a target ID (often random, to populate its routing table), finds the closest nodes it already knows, and sends them FINDNODE requests. Unlike discv4, which asks for nodes “near” a specific ID, discv5 FINDNODE requests specify a logarithmic distance — a number from 0 to 256 representing the bit position where IDs first differ (“give me all nodes at XOR distance 2^n from you”). It is more efficient and harder to abuse. The node receives lists of nodes at that distance, queries those newly learned nodes, and gets progressively closer to the target with each round.

Fresh nodes bootstrap by connecting to a small set of well-known bootnodes whose addresses are hardcoded into the client. The bootnodes respond to FINDNODE requests, giving the new node its first set of peers. From there the new node performs a few random lookups to fill its routing table and within minutes has a healthy set of diverse peers.

The Discv5Driver orchestrates discovery. On startup it initializes the discv5 UDP service with exponential backoff retries (waiting progressively longer between attempts — for example 1s, 2s, 4s, 8s). Once that succeeds, it starts the event stream, retrying up to 10 times with 2-second delays if initialization fails. Then it bootstraps by loading the boot store from disk and adding hardcoded boot nodes. Each ENR is validated against the expected chain ID before being added to the routing table:

let validation = EnrValidation::validate(&enr, chain_id);
if validation.is_invalid() {
    trace!(target: "discovery::bootstrap", enr = ?enr, validation = ?validation,
        "Ignoring Invalid Bootnode ENR");
    continue;
}
if let Err(e) = disc.add_enr(enr.clone()) {
    debug!(target: "discovery::bootstrap", error = ?e, "Failed to add enr");
    continue;
}
store.add_enr(enr);

After bootstrapping, the driver enters its main loop, doing several things concurrently via tokio::select! (a Rust macro that waits on multiple async operations and runs the code for whichever completes first). It runs periodic random node queries (every 5 seconds by default) to keep discovering peers. It listens for discv5 events such as Discovered, SessionEstablished, and UnverifiableEnr, and when it sees a valid ENR it forwards it through an mpsc (multi-producer, single-consumer) channel to the gossip layer so it can dial the peer. An mpsc channel is a thread-safe queue where multiple senders push to a single receiver — the primary way components communicate in async Rust. The driver also persists the current set of known ENRs to the boot store every 60 seconds.
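
The shape of that loop is easier to see stripped down. The sketch below is not the Discv5Driver itself, just an illustration of the select-over-multiple-sources pattern it uses; the event and channel types are stand-ins:

use tokio::sync::mpsc;
use tokio::time::{interval, Duration};

// Stand-in event type; the real driver consumes discv5's own event enum.
enum DiscoveryEvent {
    Discovered(String),         // a new ENR, already chain-validated
    SessionEstablished(String), // other events the driver also observes
}

async fn run_discovery(mut events: mpsc::Receiver<DiscoveryEvent>, enr_tx: mpsc::Sender<String>) {
    let mut lookup_timer = interval(Duration::from_secs(5));   // periodic random lookups
    let mut persist_timer = interval(Duration::from_secs(60)); // periodic boot-store flush

    loop {
        tokio::select! {
            // Kick off another random-target query to keep the routing table fresh.
            _ = lookup_timer.tick() => { /* start a random FINDNODE lookup */ }

            // Forward discovered ENRs to the gossip layer so it can dial them.
            Some(event) = events.recv() => match event {
                DiscoveryEvent::Discovered(enr) => { let _ = enr_tx.send(enr).await; }
                DiscoveryEvent::SessionEstablished(_) => { /* bookkeeping, metrics */ }
            },

            // Persist known ENRs so restarts do not bootstrap from scratch.
            _ = persist_timer.tick() => { /* write the boot store to disk */ }
        }
    }
}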

The driver communicates with the rest of the system through a Discv5Handler, a thin wrapper around an mpsc::Sender. Other parts of the system can request metrics, peer lists, the local ENR, or ask discovery to ban specific addresses. This channel-based design avoids shared mutable state across async boundaries.

Gossip — broadcasting blocks with libp2p and gossipsub

The base-consensus-gossip crate is where the action happens — this is the layer that actually receives and broadcasts L2 blocks across the network.

Gossipsub is a publish/subscribe protocol on libp2p that solves the problem of distributing messages efficiently in a decentralized network where no node can be trusted. The naive approach (floodsub) forwards every message to every peer — it works but consumes huge bandwidth because every node receives each message from multiple sources. Gossipsub instead builds a sparse “mesh” overlay per topic.

When a node subscribes to a topic, it announces the subscription to its peers. It then selects D peers (the “desired mesh degree”) that are also subscribed to the topic and establishes mesh links by sending GRAFT control messages. Mesh links are symmetric: if A grafts B, B also considers A part of its mesh. When a message is published, it flows eagerly through these links — each receiving node forwards the message to all of its mesh peers for that topic (except the sender). The result is a connected overlay where messages propagate in O(log N) hops with bounded bandwidth per node.

The protocol layers reliability on top of the mesh. Every heartbeat interval, each node sends IHAVE control messages to a random subset of non-mesh peers, listing recently seen message IDs. If a peer has not seen one of those messages (perhaps a mesh path failed or was slow), it replies with IWANT and the original node sends the full message. This lazy repair ensures messages reach everyone within a few heartbeat rounds even if the mesh is partitioned or a peer drops out. PRUNE messages are the inverse of GRAFT: a node sends PRUNE to remove a peer from its mesh, either because the mesh is too large or the peer’s score is poor. Pruned peers are not disconnected — they simply move from the eager-push mesh to the lazy-gossip pool.

Gossipsub v1.1, which Base uses, adds a peer scoring system. Each node evaluates its peers based on behavior (whether they deliver valid messages promptly, whether they spam duplicates) and preferentially keeps well-scored peers in the mesh while pruning poorly scored ones. A peer that repeatedly sends invalid messages accumulates a negative score, and once it crosses a configurable threshold it is pruned and eventually disconnected. v1.1 also added flood publishing (where the original publisher sends to all connected peers, not just mesh peers), though Base disables this by default to conserve bandwidth.

The gossipsub configuration in Base lives in gossip/src/config.rs:

pub const DEFAULT_MESH_D: usize = 8; // target peers in mesh
pub const DEFAULT_MESH_DLO: usize = 6; // minimum before mesh repair
pub const DEFAULT_MESH_DHI: usize = 12; // maximum before pruning
pub const DEFAULT_MESH_DLAZY: usize = 6; // peers for lazy gossip
pub const GOSSIP_HEARTBEAT: Duration = Duration::from_millis(500);
pub const MAX_GOSSIP_SIZE: usize = 10 * (1 << 20); // 10 MB

In practice, each node aims for 8 peers in its mesh per topic. If it falls below 6 it grafts new peers; above 12 it prunes some. Every 500ms the heartbeat fires and the protocol checks mesh health. The lazy gossip parameter means that for messages a node has seen but did not forward via the mesh, it still tells 6 additional peers via lightweight metadata so they can request them if needed.
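
Wiring those constants into libp2p looks roughly like this. It is a minimal sketch using rust-libp2p's gossipsub ConfigBuilder, not the crate's actual configuration code, which also has to account for the Snappy compression and message-ID scheme described next:

use std::time::Duration;
use libp2p::gossipsub;

fn build_gossip_config() -> Result<gossipsub::Config, gossipsub::ConfigBuilderError> {
    gossipsub::ConfigBuilder::default()
        .mesh_n(8)                                      // target mesh degree (D)
        .mesh_n_low(6)                                  // graft new peers below this
        .mesh_n_high(12)                                // prune peers above this
        .gossip_lazy(6)                                 // IHAVE metadata to this many non-mesh peers
        .heartbeat_interval(Duration::from_millis(500)) // mesh maintenance cadence
        .max_transmit_size(10 * (1 << 20))              // 10 MB ceiling per message
        .build()
}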

Messages are compressed with Snappy (a fast compression algorithm prioritizing speed over ratio) before being sent, and each message ID is computed by SHA-256 hashing (a cryptographic hash function that produces a unique fixed-size fingerprint of arbitrary data) the decompressed content with a domain prefix (a few extra bytes prepended before hashing to distinguish valid from invalid encodings). This is how the network deduplicates messages.
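
In code, the deduplication ID looks roughly like the following. This is a minimal sketch assuming the sha2 and snap crates and the domain-prefix and 20-byte-truncation convention from Ethereum's gossip specifications; the crate's actual implementation may differ in detail.

use sha2::{Digest, Sha256};

// Domain prefixes distinguishing valid from invalid snappy framing.
const DOMAIN_VALID_SNAPPY: [u8; 4] = [0x01, 0x00, 0x00, 0x00];
const DOMAIN_INVALID_SNAPPY: [u8; 4] = [0x00, 0x00, 0x00, 0x00];

/// Compute a gossip message ID from the compressed wire bytes.
fn message_id(compressed: &[u8]) -> [u8; 20] {
    let mut hasher = Sha256::new();
    match snap::raw::Decoder::new().decompress_vec(compressed) {
        // Valid snappy: hash the decompressed payload under the "valid" domain.
        Ok(decompressed) => {
            hasher.update(DOMAIN_VALID_SNAPPY);
            hasher.update(&decompressed);
        }
        // Invalid snappy: hash the raw bytes under the "invalid" domain so the
        // malformed message still deduplicates consistently across peers.
        Err(_) => {
            hasher.update(DOMAIN_INVALID_SNAPPY);
            hasher.update(compressed);
        }
    }
    let digest = hasher.finalize();
    let mut id = [0u8; 20];
    id.copy_from_slice(&digest[..20]); // first 20 bytes of the SHA-256 digest
    id
}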

The gossip topics are where Base’s L2-specific design becomes visible. The BlockHandler manages four versioned topics, one per protocol version:

blocks_v1_topic: IdentTopic::new(format!("/optimism/{chain_id}/0/blocks")),
blocks_v2_topic: IdentTopic::new(format!("/optimism/{chain_id}/1/blocks")),
blocks_v3_topic: IdentTopic::new(format!("/optimism/{chain_id}/2/blocks")),
blocks_v4_topic: IdentTopic::new(format!("/optimism/{chain_id}/3/blocks")),

For Base Mainnet (chain ID 8453), these resolve to /optimism/8453/0/blocks through /optimism/8453/3/blocks. The version is selected based on the active hardfork (a protocol upgrade that changes network rules, activated at a specific timestamp) at the block’s timestamp. V1 covers pre-Canyon blocks, V2 Canyon/Delta, V3 Ecotone, and V4 Isthmus. Each version uses a slightly different encoding for the execution payload envelope. When a node subscribes to gossip, it subscribes to all four topics simultaneously so it can handle blocks from any active version.
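
The selection itself is a timestamp comparison against the hardfork activation times from the rollup configuration. A schematic sketch (parameter names are illustrative, not the crate's):

/// Pick the gossip topic version for a block, given hardfork activation timestamps.
fn topic_version(block_ts: u64, canyon_ts: u64, ecotone_ts: u64, isthmus_ts: u64) -> u8 {
    if block_ts >= isthmus_ts {
        4 // /optimism/<chain_id>/3/blocks
    } else if block_ts >= ecotone_ts {
        3 // /optimism/<chain_id>/2/blocks
    } else if block_ts >= canyon_ts {
        2 // /optimism/<chain_id>/1/blocks
    } else {
        1 // /optimism/<chain_id>/0/blocks
    }
}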

The BlockHandler implements the Handler trait, with two methods: handle() for processing incoming messages and topics() for declaring which topics it cares about. When a gossip message arrives, the handler first checks the topic to choose the decoder, then decodes the payload, then validates it:

fn handle(&mut self, msg: Message) -> (MessageAcceptance, Option<NetworkPayloadEnvelope>) {
    let decoded = if msg.topic == self.blocks_v1_topic.hash() {
        NetworkPayloadEnvelope::decode_v1(&msg.data)
    } else if msg.topic == self.blocks_v2_topic.hash() {
        NetworkPayloadEnvelope::decode_v2(&msg.data)
    } else if msg.topic == self.blocks_v3_topic.hash() {
        NetworkPayloadEnvelope::decode_v3(&msg.data)
    } else if msg.topic == self.blocks_v4_topic.hash() {
        NetworkPayloadEnvelope::decode_v4(&msg.data)
    } else {
        return (MessageAcceptance::Reject, None);
    };
    match decoded {
        Ok(envelope) => match self.block_valid(&envelope) {
            Ok(()) => (MessageAcceptance::Accept, Some(envelope)),
            Err(err) => (err.into(), None),
        },
        Err(_) => (MessageAcceptance::Reject, None),
    }
}

The MessageAcceptance result feeds back into gossipsub’s peer scoring. Accept means the message was valid and the peer earns credit. Reject means the message was invalid and the peer’s score takes a hit. Ignore is used for already-seen blocks and does not penalize the peer.

Block validation — keeping gossip honest

Block validation in block_validity.rs is one of the most important pieces of the P2P stack because it determines what the node accepts. The checks run in sequence and the order matters.

First, the timestamp must fall within an acceptable window — no more than 5 seconds in the future or 60 seconds in the past. This rejects stale blocks and prevents replay attacks (where an attacker re-broadcasts old, legitimate messages to confuse the network).

Second, the block hash is recomputed from the payload and compared against the hash in the envelope. A mismatch indicates tampering.

Third, version-specific payload constraints are enforced. V3 (Ecotone) and later payloads must include a non-empty parent beacon block root and zero blob gas usage. V4 (Isthmus) payloads must include a withdrawals root. This step catches blocks that are structurally invalid for their version.

Fourth, the handler consults its seen-hashes tracking. It maintains a BTreeMap keyed by block height with a 1,000-entry cache. If more than 5 different blocks already exist for a given height, the new block is rejected (so a sixth unique block at a height is still accepted, but a seventh is not). If this exact block hash has been seen before, it is ignored without penalty. Multiple valid blocks at the same height can occur even with a single sequencer when it produces a replacement block — for example, after a reorg (chain reorganization) triggered from L1.
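
A simplified version of that bookkeeping, to make the accept/ignore/reject outcomes concrete; the types and thresholds follow the description above rather than the crate's exact code:

use std::collections::{BTreeMap, HashSet};

const MAX_BLOCKS_PER_HEIGHT: usize = 5;

enum SeenOutcome {
    New,             // proceed to the remaining checks
    AlreadySeen,     // ignore without penalizing the peer
    TooManyAtHeight, // reject
}

/// Tracks which block hashes have been observed at each block height.
struct SeenHashes {
    by_height: BTreeMap<u64, HashSet<[u8; 32]>>,
}

impl SeenHashes {
    fn check(&self, height: u64, hash: &[u8; 32]) -> SeenOutcome {
        match self.by_height.get(&height) {
            Some(seen) if seen.len() > MAX_BLOCKS_PER_HEIGHT => SeenOutcome::TooManyAtHeight,
            Some(seen) if seen.contains(hash) => SeenOutcome::AlreadySeen,
            _ => SeenOutcome::New,
        }
    }

    /// Called only after the block passes signature verification.
    fn mark_seen(&mut self, height: u64, hash: [u8; 32]) {
        self.by_height.entry(height).or_default().insert(hash);
        // The real handler also bounds the map to roughly 1,000 heights,
        // pruning the oldest entries.
    }
}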

Fifth, and most important for rollup security, the signature is verified. The sequencer signs each block with its private key, and every node knows the expected signer’s address. The expected signer is read first, then the signature is recovered from the payload hash via ECDSA recovery (a property of elliptic curve signatures that lets you compute the signer’s public key from just the signature and the signed message). If the recovered address does not match the expected signer, the block is rejected:

let msg = envelope.payload_hash.signature_message(self.rollup_config.l2_chain_id.id());
let block_signer = *self.signer_recv.borrow();
let Ok(msg_signer) = envelope.signature.recover_address_from_prehash(&msg) else {
    return Err(BlockInvalidError::Signature);
};
if msg_signer != block_signer {
    return Err(BlockInvalidError::Signer { expected: block_signer, received: msg_signer });
}

Only after the signature passes does the handler insert the block hash into the seen-hashes map, marking it as processed. This prevents spam without rejecting legitimate competing blocks.

Connection gating — controlling who connects

The ConnectionGater is a rate-limiting layer that controls peer connections. It tracks dial attempts per peer address and enforces a configurable dial period (default: 1 hour). By default, redialing is disabled entirely — a peer can only be dialed once per period. The CLI overrides this to allow up to 500 redials per period via --p2p.redial. The gater also supports explicitly blocking peers by ID, IP address, or subnet, and protecting specific peers from disconnection regardless of score.

Without connection gating, a misbehaving or misconfigured node could repeatedly attempt to connect, wasting resources. The gater bounds connection attempts and lets known-bad actors be blocked at the connection level rather than just at the gossip level.
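
A stripped-down sketch of the dial-gating check, with simplified types; the real gater also tracks blocked peer IDs, blocked subnets, and protected peers:

use std::collections::HashMap;
use std::net::IpAddr;
use std::time::{Duration, Instant};

struct DialGate {
    dial_period: Duration,                     // e.g. one hour
    max_dials_per_period: u32,                 // 1 by default, 500 with --p2p.redial
    attempts: HashMap<IpAddr, (Instant, u32)>, // (window start, dials in window)
}

impl DialGate {
    /// Returns true if this address may be dialed right now.
    fn can_dial(&mut self, addr: IpAddr) -> bool {
        let now = Instant::now();
        let entry = self.attempts.entry(addr).or_insert((now, 0));
        // Roll the window over once the dial period has elapsed.
        if now.duration_since(entry.0) >= self.dial_period {
            *entry = (now, 0);
        }
        if entry.1 >= self.max_dials_per_period {
            return false; // rate-limited until the window resets
        }
        entry.1 += 1;
        true
    }
}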

The libp2p Behaviour — combining protocols

The Behaviour struct is a libp2p NetworkBehaviour that combines several sub-protocols into a single swarm (libp2p’s term for the combination of transport, protocol behaviors, and connection management — essentially the “networking engine”):

#[derive(NetworkBehaviour)]
pub struct Behaviour {
    pub ping: libp2p::ping::Behaviour,
    pub gossipsub: libp2p::gossipsub::Behaviour,
    pub identify: libp2p::identify::Behaviour,
    pub sync_req_resp: libp2p_stream::Behaviour,
}

ping sends periodic keepalive pings and measures round-trip times. gossipsub handles block gossip. identify exchanges capability information when peers first connect (Base advertises its agent version as "base"). sync_req_resp supports a legacy request-response protocol called payload_by_number that is part of the OP Stack spec. It is being deprecated and the Base implementation responds “not found” to all requests, but it is still present so op-nodes do not penalize Base nodes for missing it.

The GossipDriver (gossip/src/driver.rs) wraps the swarm and provides higher-level operations. Its start() method binds the swarm to a TCP address (default 0.0.0.0:9222), waits for the NewListenAddr event confirming the listener is up, and returns. Its publish() method takes an execution payload envelope (the signed wrapper around a block’s contents — transactions, state root, gas used, and so on), selects the appropriate topic based on the block’s timestamp and active hardfork, encodes it with version-appropriate serialization, and publishes it to gossipsub. Its dial() method takes an ENR from discovery, validates the chain ID, extracts the TCP multiaddr (a self-describing network address format used by libp2p, e.g. /ip4/192.168.1.1/tcp/9222), checks the connection gate, and initiates the connection.
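
Converting a discovered ENR into something libp2p can dial mostly means pulling the IP and TCP port out of the record and assembling a multiaddr. A sketch assuming the discv5 and libp2p crates, with error handling trimmed:

use discv5::Enr;
use libp2p::{multiaddr::Protocol, Multiaddr};

/// Build a dialable multiaddr such as /ip4/1.2.3.4/tcp/9222 from an ENR,
/// provided it advertises an IPv4 address and a TCP port.
fn multiaddr_from_enr(enr: &Enr) -> Option<Multiaddr> {
    let ip = enr.ip4()?;    // from the "ip" key
    let port = enr.tcp4()?; // from the "tcp" key
    Some(Multiaddr::empty()
        .with(Protocol::Ip4(ip))
        .with(Protocol::Tcp(port)))
}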

The NetworkActor in the service crate ties everything together. It follows the actor pattern, a concurrency design where each component runs as an independent task that communicates with others exclusively through message channels, avoiding shared mutable state. The actor is the top-level component the consensus node’s main loop interacts with. It is generic over a GossipTransport trait, which lets the real networking stack be swapped out for an in-process test transport:

#[async_trait]
pub trait GossipTransport: Send + 'static {
    type Error: std::fmt::Debug + Send + 'static;

    async fn publish(&mut self, payload: BaseExecutionPayloadEnvelope) -> Result<(), Self::Error>;
    async fn next_unsafe_block(&mut self) -> Option<NetworkPayloadEnvelope>;
    fn set_block_signer(&mut self, address: Address);
    fn handle_p2p_rpc(&mut self, request: P2pRpcRequest);
}

The production implementation, NetworkHandler, composes the GossipDriver and Discv5Handler. It runs a tokio::select! loop that simultaneously handles several things: receiving ENRs from discovery and dialing them as new gossip peers, receiving blocks from gossip and forwarding them to the consensus engine, publishing blocks produced locally if the node is the sequencer, inspecting peer scores every 15 seconds and banning low-scoring peers (disconnecting them from gossip and banning their addresses in discovery), and handling administrative RPC requests.

The NetworkDriver handles the startup sequence. It starts the gossip swarm first, gets back the actual listen address, optionally updates the local ENR with that address (so other nodes discover the correct port), and then starts discovery. Ordering matters because the ENR must contain the real TCP port that gossip is listening on.

The NetworkActor communicates with the rest of the consensus node through mpsc channels bundled in a NetworkInboundData struct:

pub struct NetworkInboundData {
    pub signer: mpsc::Sender<Address>,
    pub p2p_rpc: mpsc::Sender<P2pRpcRequest>,
    pub admin_rpc: mpsc::Sender<NetworkAdminQuery>,
    pub gossip_payload_tx: mpsc::Sender<BaseExecutionPayloadEnvelope>,
}

Other actors in the node (such as the sequencer or the admin RPC server) push data to the network actor through these senders. The signer channel updates the expected unsafe block signer address. The gossip_payload_tx channel lets the sequencer publish newly produced blocks. This message-passing architecture isolates all networking concerns in one actor without shared mutable state.

When you launch the Base consensus binary, P2P configuration comes from CLI flags defined in base-client-cli. Key flags include --p2p.listen.tcp (default 9222) and --p2p.listen.udp (default 9223) for local bind addresses, --p2p.advertise.ip for NAT (Network Address Translation) scenarios where the node sits behind a router and its public IP differs from its local IP, --p2p.priv.path for the node’s secp256k1 private key, and the mesh parameters such as --p2p.gossip.mesh.d for the target mesh size.

The startup flow in the CLI’s exec() function loads the rollup configuration, parses the P2P arguments into a NetworkConfig, constructs a NetworkBuilder with both the gossip and discovery builders, and passes it to the RollupNodeBuilder which starts the NetworkActor. From that point on, the node is live on the consensus P2P network.

The execution layer P2P stack is built on Reth, a high-performance Ethereum execution client written in Rust. The Base-specific customizations live under crates/execution/, and the node definition is in crates/execution/node/. For an architectural overview of how this fits the rest of the system, see the architecture overview.

Reth implements the standard Ethereum execution layer networking stack. At the transport level it uses DevP2P with RLPx, a cryptographic transport where peers handshake to establish an encrypted, authenticated session. The mechanics resemble HTTPS: both nodes exchange temporary keys and derive a shared secret, after which all communication is encrypted. (For the curious, the handshake uses ECIES — Elliptic Curve Integrated Encryption Scheme — with ephemeral key shares and Diffie-Hellman key agreement. The session is encrypted with AES-256 and authenticated with keccak256-based MACs. Understanding these details is not necessary to work with the codebase.) Once the encrypted channel is up, the two nodes exchange “Hello” messages negotiating which sub-protocols they both support.

On top of this transport, Reth speaks the eth/68 wire protocol (version 68 of the Ethereum sub-protocol for exchanging chain data between execution clients). It handles block headers, block bodies, receipts, and transaction announcements. For transaction gossip specifically, eth/68 uses a two-phase announcement system: when a node receives a new transaction, it sends the full transaction to a small random fraction of its peers and sends lightweight hash announcements (with type and size metadata) to all other peers, who can then request the full transaction if they do not already have it. Reth also supports the snap/1 protocol for snapshot-based state synchronization, letting new nodes download state much faster than replaying chain history.

For peer discovery, Reth supports both discv4 (the older UDP-based Kademlia protocol) and discv5 (the newer protocol also used by the consensus layer). Nodes can additionally be configured with static boot nodes and DNS-based discovery.

The BaseNetworkBuilder configures Reth’s network for Base. It exposes two flags:

pub struct BaseNetworkBuilder {
    pub disable_txpool_gossip: bool,
    pub disable_discovery_v4: bool,
}

The disable_txpool_gossip flag is particularly important for rollup nodes. When a node is configured with a sequencer endpoint (so it should forward transactions to the sequencer rather than including them itself), transaction pool gossip is disabled. This stops the node from broadcasting pending transactions to the rest of the network, because in a rollup the sequencer is the only entity that orders transactions.

The network_config method assembles Reth’s NetworkConfig by applying the discovery settings. The code below builds the configuration step by step: it sets the RLPx socket address, conditionally disables discv4, enables discv5 with boot nodes, and finally sets the transaction gossip flag:

let network_builder = ctx
    .network_config_builder()?
    .apply(|mut builder| {
        let rlpx_socket = (args.addr, args.port).into();
        if disable_discovery_v4 || args.discovery.disable_discovery {
            builder = builder.disable_discv4_discovery();
        }
        if !args.discovery.disable_discovery {
            builder = builder.discovery_v5(
                args.discovery.discovery_v5_builder(
                    rlpx_socket,
                    ctx.config()
                        .network
                        .resolved_bootnodes()
                        .or_else(|| ctx.chain_spec().bootnodes())
                        .unwrap_or_default(),
                ),
            );
        }
        builder
    });

let mut network_config = ctx.build_network_config(network_builder);
network_config.tx_gossip_disabled = disable_txpool_gossip;

Discv4 is disabled by default for Base (--rollup.discovery.v4 defaults to false) while discv5 is enabled. Boot nodes resolve from CLI arguments first and fall back to the chain specification. Once the config is built, build_network creates a NetworkManager, starts it, and logs the local enode record (the DevP2P equivalent of an ENR — a URL-formatted node identifier such as enode://<pubkey>@<ip>:<port>).

The Base transaction pool lives in crates/execution/txpool/. It extends Reth’s standard transaction pool with rollup-specific validation and ordering.

The OpTransactionValidator wraps Reth’s EthTransactionValidator and adds L1 data gas fee checks. Every transaction on Base pays both an L2 execution gas cost and an L1 data fee (the cost of posting transaction data to Ethereum L1). The validator confirms the sender’s balance covers both. It also rejects EIP-4844 blob transactions (a special transaction type used on L1 to carry large data blobs for rollups, which are not meaningful on the L2 itself).
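
The balance requirement itself is simple arithmetic. A schematic check, not reth's validator code; in practice the L1 data fee is computed from the current L1 fee parameters stored on L2:

/// Schematic rollup balance check: the sender must cover the transferred value,
/// the worst-case L2 execution fee, and the L1 data fee for posting the data.
fn covers_fees(balance: u128, value: u128, gas_limit: u64, max_fee_per_gas: u128, l1_data_fee: u128) -> bool {
    let max_l2_cost = (gas_limit as u128).saturating_mul(max_fee_per_gas);
    balance >= value.saturating_add(max_l2_cost).saturating_add(l1_data_fee)
}

fn main() {
    // 0.01 ETH balance, 0.001 ETH transfer, 21_000 gas at 0.05 gwei, plus an
    // illustrative L1 data fee of 0.00003 ETH.
    let ok = covers_fees(
        10_000_000_000_000_000, // balance (wei)
        1_000_000_000_000_000,  // value (wei)
        21_000,                 // gas limit
        50_000_000,             // max fee per gas (wei)
        30_000_000_000_000,     // L1 data fee (wei)
    );
    assert!(ok);
}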

The ordering strategy is configurable via --rollup.txpool-ordering and defined in ordering.rs:

pub enum BaseOrdering<T> {
    CoinbaseTip(CoinbaseTipOrdering<T>),
    Timestamp(TimestampOrdering<T>),
}

CoinbaseTip is the standard Ethereum priority-fee ordering — higher fees first. Timestamp is a rollup-specific FIFO ordering that prioritizes by arrival time regardless of fee, which can produce fairer sequencing.

For non-sequencer nodes, transactions accepted into the mempool need to be forwarded to the sequencer for inclusion. This is handled by the consumer/forwarder pipeline in the txpool crate.

The SpawnedConsumer polls the transaction pool for new pending transactions and broadcasts them through a tokio::broadcast channel. The SpawnedForwarder subscribes to that channel and forwards each transaction via a custom JSON-RPC method (base_insertValidatedTransactions) to configured builder endpoints. One forwarder task is spawned per builder URL, so multiple downstream builders can receive transactions simultaneously. The pipeline runs as background tasks on the node’s task executor.
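
The fan-out shape of that pipeline, one broadcast sender with one subscriber task per builder URL, can be sketched with tokio primitives. The type names and endpoints here are stand-ins, and the real forwarder makes a JSON-RPC call rather than printing:

use tokio::sync::broadcast;
use tokio::time::{sleep, Duration};

#[derive(Clone, Debug)]
struct PendingTx(String); // stand-in for a validated pooled transaction

#[tokio::main]
async fn main() {
    let (tx, _rx) = broadcast::channel::<PendingTx>(1024);

    // One forwarder task per configured builder endpoint.
    for url in ["http://builder-a.example", "http://builder-b.example"] {
        let mut rx = tx.subscribe();
        tokio::spawn(async move {
            while let Ok(pending) = rx.recv().await {
                // Real forwarder: base_insertValidatedTransactions via JSON-RPC.
                println!("forwarding {pending:?} to {url}");
            }
        });
    }

    // Consumer side: poll the pool and broadcast anything newly pending.
    let _ = tx.send(PendingTx("0xdeadbeef".into()));

    sleep(Duration::from_millis(50)).await; // let the forwarders drain (demo only)
}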

The execution layer P2P is configured through Reth’s standard network flags plus Base-specific rollup flags defined in args.rs. Key flags:

--rollup.sequencer sets the sequencer endpoint for transaction forwarding. --rollup.disable-tx-pool-gossip disables transaction gossip on the DevP2P network (this is a separate flag — setting the sequencer endpoint does not automatically disable gossip, so operators typically set both). --rollup.discovery.v4 enables the legacy discv4 discovery protocol (disabled by default since Base uses discv5). --rollup.txpool-ordering selects between coinbase-tip and timestamp ordering strategies.

Standard Reth network flags still apply: --network.addr and --network.port control the RLPx bind address (default port 30303), --network.discovery.disable-discovery turns off all peer discovery, and --network.discovery.bootnodes provides custom boot node addresses.

When a Base node is running, the consensus and execution processes work in tandem but network independently. Block reception flows like this:

The sequencer produces a new L2 block and signs it. The signed block is published as a NetworkPayloadEnvelope to the CL gossip network on the appropriate versioned topic (for example, /optimism/8453/3/blocks for Isthmus). Every consensus node’s BlockHandler receives the message, validates it (timestamp, hash, signature, deduplication), and if valid marks it Accept and passes it to the consensus engine. The engine sends the execution payload to the local execution client via engine_newPayloadV*. The execution client validates the block against the EVM, computes the new state root, and reports the result back.

Transaction submission flows in the opposite direction. A user submits a transaction to an EL node’s RPC endpoint. If transaction pool gossip is enabled, the transaction is broadcast to other EL peers via the DevP2P eth/68 protocol. If a sequencer endpoint is configured, the transaction forwarder sends it directly to the sequencer via JSON-RPC. The sequencer includes it in the next block, which then propagates through the CL gossip network as described above.

Discovery on each layer is independent. The CL uses discv5 on a UDP port (default 9223) to find other CL peers, validating ENRs by chain ID so it only connects to Base nodes. The EL uses discv5 (and optionally discv4) on its own UDP port to find other EL peers. The two discovery networks are entirely separate and serve different purposes.

Where the relevant code lives: consensus layer peers and ENR management (base-consensus-peers), discovery (base-consensus-disc), gossip (base-consensus-gossip), and orchestration (the service crate) under crates/consensus/; the execution layer node and networking under crates/execution/node/, and the transaction pool under crates/execution/txpool/.

Attestation — A validator’s vote that a particular block is valid.

Beacon chain — The chain introduced with Ethereum’s proof-of-stake upgrade that coordinates consensus.

Bootnode — A well-known node whose address is hardcoded into client software, used to bootstrap new nodes into the network.

CL (Consensus Layer) — The part of an Ethereum node responsible for consensus (deciding which blocks are canonical).

DevP2P — The execution layer’s P2P protocol suite, including RLPx transport and the eth wire protocol.

DHT (Distributed Hash Table) — A system where many computers collectively maintain a lookup directory without any central coordinator.

Discv4 / Discv5 — Node Discovery Protocol versions 4 and 5. UDP-based protocols for finding peers.

EL (Execution Layer) — The part of an Ethereum node responsible for executing transactions and maintaining state.

Engine API — The JSON-RPC interface that the consensus and execution layers use to communicate locally.

ENR (Ethereum Node Record) — A signed, self-describing identity document that a node uses to advertise itself (IP, ports, public key, chain metadata).

EVM (Ethereum Virtual Machine) — The runtime environment that executes smart contract bytecode.

Gossipsub — A publish/subscribe protocol built on libp2p that uses a mesh overlay for efficient message distribution.

GRAFT / PRUNE — Gossipsub control messages for adding or removing a peer from the mesh.

Hardfork — A protocol upgrade that changes the rules of the network, activated at a specific timestamp.

IHAVE / IWANT — Gossipsub control messages for the lazy repair mechanism (advertising and requesting missed messages).

k-bucket — A fixed-size list of peers at a particular XOR distance in a Kademlia routing table.

L1 (Layer 1) — The base Ethereum chain.

L2 (Layer 2) — A chain that runs on top of L1 for scalability (e.g. Base).

libp2p — A modular networking framework used by the consensus layer, originally developed for IPFS.

Mempool — The pool of pending transactions waiting to be included in a block.

mpsc — Multi-producer, single-consumer channel. A thread-safe queue used for async communication in Rust.

Multiaddr — A self-describing network address format used by libp2p (e.g. /ip4/192.168.1.1/tcp/9222).

NAT (Network Address Translation) — When a node is behind a router and its public IP differs from its local IP.

Reorg — A chain reorganization where the network switches to a different sequence of blocks.

RLP (Recursive Length Prefix) — Ethereum’s standard binary serialization format.

RLPx — The execution layer’s encrypted TCP transport protocol.

Rollup — A type of blockchain that executes transactions on its own chain but posts data back to Ethereum for security.

Sequencer — The single designated node that orders and produces L2 blocks.

Snappy — A fast compression algorithm used for gossip message compression.

State root — A cryptographic fingerprint of the entire blockchain state after executing a block’s transactions.

Swarm — libp2p’s term for the combination of transport, protocol behaviors, and connection management.

XOR distance — The distance metric used in Kademlia, computed by XOR-ing two node IDs. Has nothing to do with geographic distance.