Multi-constraint pricing

This post introduces some new concepts in L2 pricing, which are being proposed for adoption by the Arbitrum DAO.

I’ll start by reviewing how L2 pricing works on Arbitrum today. This is fairly typical of how blockchains price their gas.

Then I’ll move on to the new ideas.

This post covers only the pricing of L2 gas, and won’t address the fees charged to cover posting data on L1. I’ll also set aside questions of multi-dimensional pricing–those can be combined with the ideas from this post, but covering them would make the post longer than it already is.

L2 gas pricing on Arbitrum today

Today, Arbitrum prices L2 gas using an exponential mechanism, which is characterized by three parameters: a gas target T (currently 7 Mgas/sec on Arbitrum One), an adjustment time A (currently 102 seconds on Arbitrum One), and a minimum price P_\mathrm{min} (currently 0.01 gwei).

(There is also a gas limit, which is the maximum amount of gas that the chain is ever allowed to use. This is typically much higher than the gas target, so it won’t be relevant until later in this post.)

The system tracks a “gas backlog” B which is updated as follows:

  • If a transaction uses g gas, set B \gets B+g

  • If s seconds elapse on the clock, set B \gets \max(0, B-sT)

The gas price is

P = P_\mathrm{min}\cdot e^\frac{B}{AT}

Intuitively, the mechanism is trying to enforce a constraint that the gas usage does not go above the target T. If usage is above the target, the price will go up; when below the target, the price will go down unless it’s already at the minimum. The adjustment time controls how patient the pricer is–if usage is twice the target for a full adjustment period, the price will go up by a factor of e \approx 2.718. The default adjustment time of 102 seconds is chosen to closely match the Ethereum pricer, which will roughly multiply the price by e per 102 seconds if usage is twice Ethereum’s target.
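The mechanism above can be sketched in a few lines of Python. This is a minimal illustration, not the production implementation; the parameter values are the Arbitrum One values quoted above.

```python
import math

# Minimal sketch of the current single-constraint pricer.
TARGET = 7_000_000       # T: gas target, gas/sec
ADJUSTMENT_TIME = 102    # A: adjustment time, seconds
MIN_PRICE = 0.01         # P_min, gwei

class GasPricer:
    def __init__(self):
        self.backlog = 0.0   # B, the gas backlog

    def add_gas(self, g):
        # A transaction used g gas: B <- B + g
        self.backlog += g

    def tick(self, s):
        # s seconds elapsed: B <- max(0, B - s*T)
        self.backlog = max(0.0, self.backlog - s * TARGET)

    def price(self):
        # P = P_min * exp(B / (A*T))
        return MIN_PRICE * math.exp(self.backlog / (ADJUSTMENT_TIME * TARGET))
```

Running at twice the target for one full adjustment period leaves a backlog of A·T, so the price multiplies by exactly e, matching the intuition above.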

Why does the chain limit gas usage?

Ultimately, we need to limit gas usage to make sure that the chain’s infrastructure doesn’t get overwhelmed and run out of a resource. In practice, execution nodes are the limiting factor, so we’ll focus on them.

There are three main things we need to protect:

  • Execution: make sure that a node that is executing the tip of the chain can keep up with the blocks being issued by the sequencer

  • Sync speed: make sure that a node that is behind the tip and syncing can catch up quickly enough

  • State growth: make sure that the state of the chain doesn’t exceed the size of practical, affordable storage devices on a node

Each one of these implies an upper bound on the gas used by the chain, averaged over some period. But the periods are very different! For execution we care about a period of seconds; for sync speed a period of hours to days; and for state growth a period of weeks to months.

These are effectively three different constraints on gas usage, operating over different periods of time. The current pricer mashes them together into a single constraint that is supposed to protect all three resources, at different levels over different periods of time.

As an example, suppose that for execution we need to limit usage to 50 Mgas/sec over an adjustment time of 102 seconds, and that to control state growth, we need to limit usage to 25 Mgas/sec over an adjustment time of one day. If we insist on using a single constraint to protect both resources, as the current pricer requires, we’ll probably be stuck with a target of 25 Mgas/sec and adjustment time of 102 seconds, because safety seems to require us to use the minimum of the two gas targets with the minimum of the two adjustment times. (We can be a little more aggressive than that, but not much.)

To see why that “Frankenstein constraint” isn’t ideal, consider what happens with that single constraint if usage is 30 Mgas/sec for ten minutes. This shouldn’t be a big deal for the chain: it’s far below the execution target, and it violates the storage growth target only for a tiny fraction of the one-day period that constraint cares about–we could get back to a daily average of 25 Mgas/sec by using 24.97 Mgas/sec for the rest of the day. Yet if we’re stuck with the Frankenstein constraint (25 Mgas/sec, 102 seconds), this event multiplies the price by more than 3.2 (that is, e^\frac{(30-25)\cdot 600}{25\cdot 102}).
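A quick check of the arithmetic above:

```python
import math

# The Frankenstein constraint: 25 Mgas/sec target, 102 sec adjustment time.
# Usage of 30 Mgas/sec for 600 sec leaves a backlog of (30 - 25) * 600 Mgas.
backlog_mgas = (30.0 - 25.0) * 600.0                 # 3000 Mgas
multiplier = math.exp(backlog_mgas / (25.0 * 102.0))
print(f"price multiplier: {multiplier:.2f}")         # about 3.24
```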

Multi-constraint pricing

If we know the gas target and adjustment time we want for each resource, we can build a pricer that tracks all of our constraints, and raises the price only in response to the actual constraints (if any) that are being exceeded.

We’ll have a set of k constraints, each with a target T_i and adjustment time A_i. We’ll track k backlogs, B_i, as follows:

  • If a transaction uses g gas, then for all i set B_i \gets B_i + g
  • If s seconds elapse on the clock, then for all i set B_i \gets \max(0, B_i-sT_i)

The price is then

P = P_\mathrm{min}\cdot e^{\sum_{i=0}^{k-1} \frac{B_i}{A_iT_i}}

This approach allows each constraint to have its own target and adjustment time. Where the current approach has a single backlog with its coefficient in the exponent, the multi-constraint one has a linear combination of the per-constraint backlogs.
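As a sketch (again illustrative, not production code), the multi-constraint pricer only needs a list of (T_i, A_i) pairs and one backlog per constraint:

```python
import math

# Illustrative multi-constraint pricer: one backlog per constraint;
# the exponent is the sum over i of B_i / (A_i * T_i).
class MultiConstraintPricer:
    def __init__(self, constraints, min_price=0.01):
        # constraints: list of (target_gas_per_sec, adjustment_time_sec)
        self.constraints = constraints
        self.backlogs = [0.0] * len(constraints)
        self.min_price = min_price

    def add_gas(self, g):
        # For all i: B_i <- B_i + g
        for i in range(len(self.backlogs)):
            self.backlogs[i] += g

    def tick(self, s):
        # For all i: B_i <- max(0, B_i - s*T_i)
        for i, (target, _adj) in enumerate(self.constraints):
            self.backlogs[i] = max(0.0, self.backlogs[i] - s * target)

    def price(self):
        # P = P_min * exp(sum_i B_i / (A_i * T_i))
        exponent = sum(b / (adj * target)
                       for b, (target, adj) in zip(self.backlogs,
                                                   self.constraints))
        return self.min_price * math.exp(exponent)
```

With the two constraints from the example, (50 Mgas/sec, 102 sec) and (25 Mgas/sec, one day), running 30 Mgas/sec for ten minutes leaves the execution backlog at zero and multiplies the price by only about 1.001.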

To see why this is good, consider the example from above. With 30 Mgas/sec used over 10 minutes, the execution constraint will keep its backlog at zero, and the state growth constraint will build up a backlog of 5 Mgas/sec times 600 seconds = 3000 Mgas. The price will multiply by a factor of

e^\frac{3000}{25\cdot 86400} \approx 1.001

which is much better.

If the usage persists at the 30 Mgas/sec rate, the gas price will eventually rise more, but with a one-day adjustment window the pricer can afford to be very patient.

That said, if usage goes above 50 Mgas/sec, the execution constraint will kick in, and it is much less patient–which makes sense because if a node falls behind the tip of the chain, we don’t want to wait hours before it can catch up. In that case the price would increase with more urgency, allowing the node to catch up quickly.

This is the beauty of multi-constraint pricing. The pricer can be patient when addressing longer-term needs, and more aggressive when protecting shorter-term requirements.

Constraint ladders

Multiple constraints are an improvement, but simulations suggest that we can make one more. When two constraints are far apart, like our (25 Mgas/sec, 1 day) and (50 Mgas/sec, 102 sec) constraints, it seems to be beneficial to introduce a set of intermediate constraints between the two. We call this a constraint ladder.

For example, we might decide to include four more intermediate constraints between the original two. We could space them so that neighboring rungs of the ladder are at a constant ratio in gas target, and a constant ratio in adjustment time. If we tag the 25M constraint as constraint number zero, and the 50M constraint as constraint number 5, then we have for i \in \{0, 1, \ldots, 5\},
A_i = A_0\cdot(\frac{A_5}{A_0})^\frac{i}{5}
and
T_i = T_0\cdot(\frac{T_5}{T_0})^\frac{i}{5}

The effect of this is to create a more evenly spaced set of constraints, causing the pricer to more gradually lose patience, and increase urgency, as the usage gets farther from the long-term constraint and closer to the short-term one.
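The geometric spacing above can be computed directly; here is a sketch using the example’s endpoint constraints (25 Mgas/sec over one day, 50 Mgas/sec over 102 sec) with four intermediate rungs:

```python
def constraint_ladder(t0, a0, t_end, a_end, steps):
    # Rung i: T_i = T_0 * (T_end/T_0)^(i/steps),
    #         A_i = A_0 * (A_end/A_0)^(i/steps).
    # Returns steps+1 (target, adjustment_time) pairs.
    return [(t0 * (t_end / t0) ** (i / steps),
             a0 * (a_end / a0) ** (i / steps))
            for i in range(steps + 1)]

# Constraint 0 = (25 Mgas/sec, 1 day); constraint 5 = (50 Mgas/sec, 102 sec).
ladder = constraint_ladder(25e6, 86400.0, 50e6, 102.0, 5)
```

Neighboring rungs then sit at a constant ratio of 2^{1/5} ≈ 1.149 in gas target.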

I’m planning to write more about constraint ladders in a later post.


Thanks for the proposal, super helpful framing.

I recast this proposal as a discrete-time controller, derive the stability condition, and show why a single constraint overreacts to short bursts. Then I analyze a constraint ladder (rung spacing, effective gain, executor-slack for the fast rung, small proportional damping) with implementation and migration notes. Full write-up with simulations: A Control-Theoretic View of Arbitrum’s Constraint Ladder Gas Pricer - Dare to Know


Thanks for your post, @0xnagu. Highly recommended for those who want to deep dive on the theory.
