Efficient L2 batch posting strategy on L1

Posting the data batches of layer-two (L2) rollup chains on layer one (L1) accounts for most of the costs these chains incur. We look for an optimal strategy for deciding when to post transaction batches as the L1 price of posting data fluctuates. The tradeoff is clear: we want to avoid delaying the posting of batches, but at the same time we want to avoid posting when the price is high. The question is motivated by historical experience with the Ethereum base fee, which fluctuates in a partially predictable way and occasionally has intervals of very high base fee.

We decompose the cost into two parts: a posting cost that can be directly observed when posting batches, and a delay cost that is not directly measurable. The delay cost has several components. The first is psychological: users do not like it when batches are not posted for a long period, as it may suggest to them that components of the system are down or unavailable. The second is related to delayed finality: L2 transactions are not fully final until they are posted as part of a batch and that batch posting has finality on L1. Until that time, users must either wait for finality or trust the L2 system’s sequencer to be honest and non-faulty. Delayed finality imposes costs on some applications. The third is related to the specific way transaction fees are computed on L2 rollups: the transaction fee is calculated when a transaction is created, not when it is posted on L1. More delay therefore makes the L1 cost attributed to L2 transactions harder to estimate, increasing the risk of unfair or inefficient pricing of L2 transactions.
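As a concrete illustration of this decomposition, one could write the per-round cost as a posting term plus a delay term. The linear posting term and quadratic delay term below are illustrative assumptions for discussion, not the functional form used in the paper.

```python
# Illustrative per-round cost decomposition (assumed functional forms,
# not the paper's exact model).

def round_cost(price, queue, posted, delay_weight=0.1):
    """Cost incurred in one round.

    price        -- current L1 base fee per batch
    queue        -- number of batches waiting
    posted       -- True if the batch poster posts this round
    delay_weight -- scales the delay penalty (assumed constant)
    """
    posting_cost = price * queue if posted else 0.0
    # Quadratic in queue size: long backlogs hurt more than linearly,
    # loosely capturing the psychological, finality, and pricing-risk
    # components described above.
    delay_cost = delay_weight * queue ** 2
    return posting_cost + delay_cost
```

A convex delay term like this is one answer to the functional-form question raised below; the right exponent and weight would have to come from data.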

We model the problem as a Markov decision process. In each round, we calculate total costs independently of past rounds. Each round is characterized by the current queue size and price, which together give a state. The price in the next round is a random variable that depends on the current price. Depending on the strategy in the current round and the realization of the next round’s price, we move to the next state. To solve the optimization problem of finding the optimal strategy in each round, we use tools from dynamic programming, in particular Q-learning. The structure of the solution allows us to design a practical algorithm, which we test against benchmarks on the previous year’s Ethereum base fee data.
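To make the setup concrete, here is a minimal, self-contained sketch of this kind of MDP, solved by value iteration (a close relative of the Q-learning approach). The two-level price chain, the queue dynamics, and all numeric constants are illustrative assumptions, not the paper's calibrated model.

```python
# Sketch of a batch-posting MDP: state = (queue size, price regime).
# The L1 price follows a two-state Markov chain (low / high base fee).
# Each round the batch poster either posts (paying price * queue, after
# which the queue resets to the one newly arrived batch) or waits
# (paying a per-batch delay cost while the queue grows by one).

MAX_QUEUE = 5
PRICES = [1.0, 10.0]                 # low / high L1 base fee (assumed)
P_TRANS = [[0.8, 0.2],               # price transition matrix (assumed)
           [0.5, 0.5]]
DELAY_COST = 1.0                     # delay cost per queued batch per round (assumed)
GAMMA = 0.95                         # discount factor

def solve(iters=1000):
    """Value iteration over (queue, price) states; returns (V, policy)."""
    states = [(q, i) for q in range(1, MAX_QUEUE + 1)
              for i in range(len(PRICES))]
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        newV = {}
        for q, i in states:
            expect = lambda q2: sum(P_TRANS[i][j] * V[(q2, j)]
                                    for j in range(len(PRICES)))
            # "post": pay the current price for every queued batch;
            # one new batch arrives, so the queue resets to 1.
            post = PRICES[i] * q + GAMMA * expect(1)
            # "wait": pay the delay cost on the queue; queue grows by one.
            wait = DELAY_COST * q + GAMMA * expect(min(q + 1, MAX_QUEUE))
            newV[(q, i)] = min(post, wait)
        V = newV
    policy = {}
    for q, i in states:
        expect = lambda q2: sum(P_TRANS[i][j] * V[(q2, j)]
                                for j in range(len(PRICES)))
        post = PRICES[i] * q + GAMMA * expect(1)
        wait = DELAY_COST * q + GAMMA * expect(min(q + 1, MAX_QUEUE))
        policy[(q, i)] = "post" if post <= wait else "wait"
    return V, policy
```

Under these toy numbers the optimal policy is a threshold rule: post when the price is low or the queue is large, and wait through short high-price spells.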

See details in the paper: https://arxiv.org/pdf/2212.10337.pdf

Any feedback on modelling assumptions? Any suggestions on functional form of delay costs or how else can we optimize it together with posting costs instead of simply adding them? Suggestions for related literature?


Impressive work here! :clap: Some of this was over my head but I appreciate the detailed research and followed most of it I believe.

Are there any updates to this in the past couple weeks since this was originally published?

I was wondering if someone could confirm one of the takeaways I had… if posting batch data to L1 is a primary driver of cost, and using an algorithm to time the execution of this is a potential optimization, couldn’t a resolver contract that queries the cost of gas be used here?

And when the conditions are satisfactory…the resolver returns a boolean indicating that it is the optimal time to execute the Tx.
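In pseudocode terms, such a resolver would be a pure "should we post now?" predicate. Here is a hedged sketch of one possible heuristic (post when the base fee is below a trailing average, or when batches have waited too long); the thresholds are illustrative assumptions, not the paper's optimal policy.

```python
# Illustrative resolver-style check a keeper could run off-chain or
# encode in a resolver contract. All parameters are assumed constants.

def should_post(current_fee, recent_fees, oldest_batch_age,
                fee_discount=0.9, max_age=20):
    """Return True when conditions look favorable for posting.

    current_fee      -- current L1 base fee
    recent_fees      -- trailing window of observed base fees
    oldest_batch_age -- rounds the oldest queued batch has waited
    fee_discount     -- post when fee <= discount * trailing average (assumed)
    max_age          -- hard cap on delay regardless of price (assumed)
    """
    if oldest_batch_age >= max_age:           # never delay indefinitely
        return True
    avg = sum(recent_fees) / len(recent_fees)
    return current_fee <= fee_discount * avg  # the fee looks cheap right now
```

A real deployment would replace this threshold with the policy computed by the paper's algorithm, but the boolean-resolver shape is the same.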




Hello Weston. Thank you for your feedback.
There is no follow-up work published anywhere. We are working on the implementation details, namely, how the L2 protocol should interpret this algorithm to do the L1 pricing part of transactions.
Batch posting is generically delegated to a “batch poster”, and currently the sequencer plays that role. Therefore, querying gas costs and checking the conditions for posting is done by the sequencer. The L2 protocol pays the batch poster. Another research question is to come up with a simple incentive scheme so that the batch poster has the same objective function. We are also working on that.


Got it - thanks for the info! It is super interesting. So for the last piece you mentioned…you are looking for a way to basically align the economic incentives of the batch poster, which could then be outsourced to another party besides the sequencer? As I imagine this is not the ideal goal but just the current best capability until a better solution is found.

Will think on this…thank you!


Yes, you are right.
Right now the paper takes the protocol’s perspective, i.e., it minimizes L1 publishing costs while trying to minimize delay at the same time. The sequencer (who currently posts batches), or any other batch poster in the future, may not have the same goal, for whatever reason. So it would be ideal if the incentives of the protocol and whoever does the batch posting were aligned. By design, the sequencer is not an internal part of the protocol and can have its own rationale.