Hybrid transaction ordering policy

Inspired by some discussions and posts about transaction ordering (like this one), we’re thinking about how we might maintain the desirable properties of our transaction ordering policy (such as low latency, transparency, and ability to decentralize without affecting policy) with measures to reduce the prevalence of “latency racing” among transaction submitters.

Toward that end, we’re looking at potential policies that have the following properties:

  1. Dark mempool: Submitted transactions are not visible to anyone other than the sequencer, until they are included in the published sequence. This prevents parties from front-running or sandwiching others’ transactions. (The sequencer is trusted not to engage in such tactics.)
  2. Low latency: Every transaction that arrives at the sequencer is emitted into the sequence within some time bound, perhaps 1/2 second.
  3. Over short time intervals, transactions with higher “tip” (per-gas priority fee) are ordered first. This is intended to induce parties who are contending for fast placement to do so by increasing their tip rather than expending resources to reduce their latency in delivering transactions to the sequencer.
  4. Stable after decentralization: The goals of the policy will still be satisfied, after the sequencer is decentralized (assuming a suitable supermajority of sequencers apply the policy honestly).

One policy that might work would divide time into “chunks” of 1/2 second each, and sort the transactions arriving within the same chunk into decreasing tip order, breaking ties by earliest arrival. The sequencer would buffer incoming transactions for up to 1/2 second, then sort the buffer and emit into the sequence in sorted order, before repeating the process for the next chunk.
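The chunked policy above can be sketched in a few lines. This is a minimal illustration, not an actual sequencer implementation; the `Tx` fields and the 1/2-second chunk length are assumptions taken from the description above.

```python
from dataclasses import dataclass

CHUNK_SECONDS = 0.5  # assumed chunk length from the proposal

@dataclass
class Tx:
    tip: int        # per-gas priority fee
    arrival: float  # arrival time at the sequencer, in seconds

def order_chunk(buffered):
    """Order one chunk's buffered transactions: highest tip first,
    breaking ties by earliest arrival at the sequencer."""
    return sorted(buffered, key=lambda tx: (-tx.tip, tx.arrival))
```

For example, two transactions with the same tip are emitted in arrival order, while a higher-tipped transaction arriving later in the same chunk is still placed first.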

There are also more sophisticated algorithms that provide similar guarantees without having sharp boundaries between chunks.

We’re interested in the community’s feedback on transaction ordering policies along those general lines, as we consider whether to pursue a change like this.


We’re also interested in talking to anyone who might be interested in a bidding-based transaction ordering system like the one described in this thread.


So basically a model with 0.5-second blocks and no mempool?
From an end consumer's PoV, it's somewhat like a faster and sandwich-proof version of Polygon? If I got it right, big yes!

I’ve always liked the idea of a dark mempool; it protects users from malicious actors that listen in on the system. But it means we have to trust the sequencers, which raises a question once they are decentralised: what stops someone from running a malicious sequencer?

Low latency is always a must-have!


With decentralized sequencing, the membership on the sequencer committee will still be permissioned. And there are ways to incorporate threshold decryption into the distributed sequencing protocol, so that individual sequencers (or collusions of a few sequencers) can’t see transactions before they’re sequenced.
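The k-of-n idea behind threshold decryption can be illustrated with a toy Shamir secret-sharing scheme: no single share reveals anything, but any k shares together reconstruct the secret. This is a simplified sketch of the underlying math only, not the actual distributed sequencing protocol; the field modulus and function names are illustrative.

```python
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus in this toy example

def share_secret(secret, k, n):
    """Split `secret` into n shares such that any k can reconstruct it,
    by evaluating a random degree-(k-1) polynomial at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's method
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret via Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

With 3-of-5 sharing, any three committee members can jointly decrypt, while one or two colluding members learn nothing — which is the property that keeps transactions dark before sequencing.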


Firstly, I do not see why sequencers always need to be permissioned. Is there a process for how this permissioning takes place? It really defeats the purpose of decentralised systems if they are gated…

Secondly, threshold decryption is great, but it becomes slower as the number of sequencers needed to provide a share for the decryption grows, making the process ever slower as more nodes join the system, unless there’s a way to have groups of sequencers.

For practical reasons, the number of sequencers participating at any given time needs to be limited, although that set can be changed over time, ideally in a way consistent with the community’s sentiment.

Because the number will be limited, threshold decryption shouldn’t be a bottleneck.

We’ll be talking more about decentralized sequencing architectures in the future.
