Shared Sequencing Economics

We study shared sequencing from an economic efficiency perspective and its effects on arbitrage searcher (bidder) behavior. We assume that the only source of arbitrage is back-running or price adjustment between decentralized exchanges on different chains. To win the race, an arbitrage searcher must win the race on every chain simultaneously; that is, its transaction must be scheduled (and therefore executed) earlier than the transactions of all other players on all chains.
The main reason for having a shared sequencer in our model is uncertain latency. That is, the time it takes a transaction from each user to reach each sequencer is a random variable, possibly with low variance. Under this assumption, users always benefit from the shared sequencer, as it guarantees that one of them wins the race. With separate sequencers, different users may win the races on different chains, in which case none of them wins the global race.
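To make this concrete, here is a minimal Monte Carlo sketch in Python of the race under separate sequencers. The Gaussian latency model, the symmetric mean latencies, and the parameter values are illustrative assumptions, not part of the model itself.

```python
import random

def p_global_winner_separate(num_trials=100_000, num_users=2, num_chains=2,
                             mean=10.0, sigma=5.0):
    """Probability that *some* searcher arrives first on every chain when
    each chain has its own sequencer.

    Illustrative assumptions (not from the model above): i.i.d. Gaussian
    delivery delays per (user, chain); all users share the same mean latency.
    With a shared sequencer this probability is 1 by construction, since a
    single ordering decides the race on all chains at once.
    """
    wins = 0
    for _ in range(num_trials):
        arrivals = [[random.gauss(mean, sigma) for _ in range(num_chains)]
                    for _ in range(num_users)]
        # For each chain, find which user arrives first.
        winners = {min(range(num_users), key=lambda u: arrivals[u][c])
                   for c in range(num_chains)}
        wins += (len(winners) == 1)  # same user is first on every chain
    return wins / num_trials

print(p_global_winner_separate())  # ~0.5 for 2 symmetric users and 2 chains
```

With two symmetric users and two chains, the global race has a winner only about half the time under separate sequencers, while a shared sequencer always produces one.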
First, we consider a first-come-first-serve transaction ordering policy. We show that in a simple latency competition, in which users try to reduce their average transaction delivery times, the total expenditure with a shared sequencer is lower than the sum of expenditures with separate sequencers. This speaks in favor of shared sequencing.
With transaction ordering policies that try to extract some value from arbitrage searchers, the quantity of interest is total revenue. Surprisingly, in a simple example with 2 bidders and 2 chains, we show that the total revenue with a shared sequencer is not always higher than the sum of revenues with separate sequencers. The general question of whether the revenue with a shared sequencer exceeds the sum of revenues of separate sequencers depends heavily on the transaction ordering policy and the bidding scheme associated with it. The minimal formal model and examples are work in progress, and we will follow up with a short paper soon.
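As a toy illustration of this dependence, here is a minimal Python sketch comparing the two regimes under a first-price payment rule. The payment rule, the specific bids, and the way bidders allocate value across chains are purely hypothetical assumptions and are not the example referred to above; changing them can flip the comparison.

```python
def revenue_separate(bids_per_chain):
    """Sum of per-chain revenues when each chain runs its own first-price
    race: on every chain the highest bidder pays its bid.
    bids_per_chain: list (one entry per chain) of {bidder: bid}."""
    return sum(max(bids.values()) for bids in bids_per_chain)

def revenue_shared(bundle_bids):
    """Revenue of a single shared-sequencer race: bidders bid once for
    priority on all chains, and the winner pays its bid (first price).
    bundle_bids: {bidder: bid}."""
    return max(bundle_bids.values())

# Hypothetical numbers for 2 bidders and 2 chains: suppose that with
# separate sequencers each bidder hedges by bidding on both chains, while
# with a shared sequencer each bids a fraction of its bundle value.
separate = revenue_separate([{"A": 3.0, "B": 2.0}, {"A": 2.0, "B": 3.0}])
shared = revenue_shared({"A": 4.0, "B": 4.0})
print(separate, shared)  # 6.0 vs 4.0: with these bids, separate raises more
```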

Apart from the obvious economies of scale of having a shared sequencer and the friction we introduce through uncertain latencies, what else can drive the economic efficiency of a shared sequencer?


Very interesting overview! Extremely eager to see the paper and model.


Hey,

I will assume that you’re referring to shared sequencing models that focus exclusively on transaction ordering, such as the Espresso model.

I’d like to draw a parallel to the familiar PBS setup from Ethereum. In this scenario, shared sequencers take on the role of block proposers, implying they’d need to be in sync with block builders (potentially SUAVE in future developments).

Within the PBS framework, the builder can get a blind commitment from the proposer (in this context, the shared sequencer) to propose a specific block. Interestingly, the proposer is aware only of the total utility (i.e., the builder’s bid) they stand to gain from endorsing the block, while remaining in the dark about its actual content.

To piece this together:

Imagine a searcher aiming for cross-chain arbitrage. Independently, SUAVE could construct and dispatch two blocks to two distinct rollups:

  • Block 1: Contains Trade 1 - Purchasing ETH at a reduced price on Rollup 1.
  • Block 2: Encompasses Trade 2 - Offloading ETH at a premium on Rollup 2.

As you mentioned, there’s a very real possibility that Block 1 is successful in its respective auction while Block 2 doesn’t make it, and vice versa. Now let’s envisage a scenario where both rollups share the same sequencer.

Considering that shared sequencer nodes only do transaction ordering, they remain unaware of the rollups’ current states and, by extension, of the underlying function of the transactions. That’s why they rely on external entities (like SUAVE or other MEV-aware builders) to generate a comprehensive block on their behalf to maximize efficiency. In such an environment, SUAVE executors could forward both B1 and B2 to the shared sequencer under an all-or-none directive (i.e., both are executed atomically or neither is processed).
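Here is a minimal sketch of how such an all-or-none directive could be enforced on the shared sequencer’s side. The message format, field names, and `schedule` logic are hypothetical and not part of any existing Espresso or SUAVE interface.

```python
from dataclasses import dataclass

@dataclass
class BlockCommitment:
    rollup_id: str      # which rollup the block is destined for
    block_hash: str     # blind commitment to the block contents
    bid: float          # utility offered to the sequencer for inclusion

@dataclass
class AtomicBundle:
    """Hypothetical all-or-none directive: either every block in the bundle
    is scheduled by the shared sequencer, or none of them is."""
    blocks: list[BlockCommitment]

def schedule(bundle: AtomicBundle, has_slot: dict[str, bool]) -> list[BlockCommitment]:
    """Include the bundle only if a slot can be committed on every target
    rollup in the same shared-sequencer decision; otherwise drop it.
    `has_slot` maps rollup_id -> whether a slot is available this round."""
    if all(has_slot.get(b.rollup_id, False) for b in bundle.blocks):
        return bundle.blocks   # B1 and B2 are scheduled atomically
    return []                  # neither is processed

# The searcher's two blocks from the scenario above.
bundle = AtomicBundle([
    BlockCommitment("rollup-1", "0xB1", bid=1.0),   # buy ETH cheaply
    BlockCommitment("rollup-2", "0xB2", bid=1.0),   # sell ETH at a premium
])
print(schedule(bundle, {"rollup-1": True, "rollup-2": True}))   # both included
print(schedule(bundle, {"rollup-1": True, "rollup-2": False}))  # neither included
```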

The point here is the assurance of robust economic benefits throughout:

  • SUAVE: Guarantees the resultant state, dependent on both B1 and B2 being included and executed atomically.
  • Shared Sequencer: Ensures both B1 and B2 undergo atomic inclusion and execution.

Hope that makes sense!


Thank you for your engagement. Yes, you’re right that our modeling is closer to what the Espresso system proposes. The competition is only about getting transaction(s) scheduled earlier, as we only look at arbitrage opportunities that require back-running. In this model, Espresso and SUAVE would be equivalent. I also agree that SUAVE may offer extra opportunities, but all of them would involve front-running in some form.


The paper containing the model and results is available online: 2310.02390.pdf (arxiv.org). It is joint work with @Christoph.