Time boost: a new transaction ordering policy proposal

[cross-posted from Medium]

tl;dr: We’re proposing a modified transaction ordering policy for the Arbitrum sequencer, adding a “time boost” to the current first-come, first-served policy, where a transaction could pay a priority fee to get a small advantage, or “time boost,” in the ordering. This shouldn’t affect most users, but it can provide a better way to manage “latency racing” behavior.

The Arbitrum sequencer receives transactions from users and publishes an ordered sequence, which serves as input to the execution stage of Arbitrum. Currently, the sequencer follows a first-come, first-served (FCFS) sequencing policy.

There’s a lot to like about FCFS. It’s simple and easy to explain. It seems intuitively fair. It minimizes latency because the policy allows each transaction to be appended to the sequence immediately on arrival.

Other rollup protocols use FCFS as well. Optimism does, and based on public descriptions, other systems seem to as well.

But FCFS has some disadvantages, mainly that it can induce “latency racing” behavior, where sophisticated actors spend money and effort in a wasteful arms race to get closer to the sequencer, so they can get their transactions in slightly ahead of their competitors.

We think there is a way to do better, by adopting a modified version of FCFS, which we’ll describe below.

Goals

We want a transaction ordering policy to have these properties:

  • secret mempool: Submitted transactions are not visible to anyone other than the sequencer, until they are included in the published sequence. This prevents parties from front-running or sandwiching others’ transactions. (The sequencer is trusted not to engage in such tactics, although this trust requirement would be removed with the move to decentralized sequencing.)
  • low latency: Every transaction that arrives at the sequencer is included in the sequence within a short time limit, perhaps 0.5 seconds.
  • short-term bidding: Over short time intervals, transactions can gain an advantage in the ordering by bidding for position. This is meant to induce parties who are racing for early placement to do so by bidding, rather than spending resources (e.g., for latency reduction) to deliver their transactions to the sequencer faster.
  • compatible with decentralization: The policy can be adapted to work with a decentralized sequencer, so we don’t have to abandon the policy when the sequencer is decentralized.

Time boost ordering policy

The new policy is a modified first-come, first-served. The mempool is still secret, to prevent front-running.

Every transaction is timestamped when it arrives at the sequencer. A transaction can choose to pay an extra priority fee, which will give it a time boost — making its timestamp slightly earlier, by up to 0.5 seconds.

Why 0.5 seconds? This parameter reflects a tradeoff. We want transaction senders to be able to buy a large enough boost that they have an incentive to buy a boost rather than trying to engineer lower latency, which argues for a larger maximum boost. But we also want to minimize the impact on the latency of non-boosted transactions, which will potentially have to wait out this period in case a boosted transaction arrives after them, which argues for a smaller maximum boost. We think 0.5 seconds balances the tradeoff reasonably, although others might have good arguments for a slightly smaller or larger value.

We expect that most transactions won’t buy a time boost, so no changes are needed in wallets or application UX. Sophisticated parties who are already constructing Arbitrum transactions programmatically can decide whether to buy a time boost (and how much to buy), based on whether they’re competing with others for early position and how much they value being first.

If a transaction’s priority fee is F, it will get a time boost (i.e., a subtraction from its timestamp) computed by this formula:

boost(F) = g·F / (F + c)

where g is the maximum time boost available (planned for 0.5 seconds in production), and c is a constant to be determined. In exchange for this time boost, the L2 gas price paid by the transaction will be increased by F.

Here’s a graph of the formula, assuming the maximum boost g is 0.5 seconds. By design, a modest fee buys a good chunk of the available boost, with diminishing returns as the fee goes up. The boost approaches 0.5 seconds as the fee gets very large.
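For concreteness, here is a minimal sketch of the formula in Python. The value of c below is a placeholder chosen only to make the numbers concrete; the proposal leaves c to be determined.

```python
# Sketch of the time boost formula: boost(F) = g * F / (F + c).
# The value of c here is a placeholder; the proposal leaves it TBD.

G = 0.5  # maximum time boost, in seconds
C = 1.0  # tuning constant (hypothetical placeholder)

def time_boost(fee: float, g: float = G, c: float = C) -> float:
    """Timestamp reduction (in seconds) bought by a priority fee."""
    return g * fee / (fee + c)

for fee in [0.0, 0.5, 1.0, 5.0, 50.0]:
    print(f"fee={fee:>4}: boost={time_boost(fee):.3f}s")
# Shows the diminishing returns: the boost rises quickly at first and
# approaches (but never reaches) the 0.5-second cap as the fee grows.
```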

Implementation

Here’s an approach to implementing the policy. Legacy transaction types should continue to work as before, so that no changes are necessary for most users and developers. As always, Arbitrum would charge gas fees on L2 but would ignore the “priority fee” fields in legacy transaction formats. (Many existing transactions send a non-zero priority fee to Arbitrum. Arbitrum does not collect that priority fee, and that shouldn’t change.)

Transactions that want a time boost would use a new L2 transaction type, which would be the same format as a legacy transaction (other than having a different transaction type label). For the new transaction type, the Ethereum priority fee field would be interpreted as a time boost fee, and would be collected by the Arbitrum chain.

The Arbitrum sequencer would apply the time boost formula, adjust timestamps accordingly, and sequence transactions in increasing order of the adjusted timestamp.

One consequence of the policy would be that transactions that didn’t pay for time boost would experience an extra 0.5 seconds of latency, because the sequencer would need to wait to see if any transactions coming soon after had a boost. But no transaction would ever need to be held for longer than 0.5 seconds.
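To make that release rule concrete, here is a minimal sketch of the sequencing loop, assuming the boost formula above; names and structure are illustrative, not the actual sequencer code. A transaction with arrival time t and boost b gets adjusted timestamp t - b, and it is safe to emit it once the clock passes t - b + g, because no later arrival can receive an earlier adjusted timestamp.

```python
import heapq
import itertools
import time

G = 0.5  # maximum boost, in seconds

class TimeBoostSequencer:
    """Illustrative sketch: hold each transaction at most G seconds and
    emit in increasing order of adjusted timestamp (arrival - boost)."""

    def __init__(self):
        self._pending = []             # min-heap of (adjusted_ts, seq, tx)
        self._seq = itertools.count()  # FCFS tiebreaker for equal timestamps

    def receive(self, tx, boost: float) -> None:
        arrival = time.monotonic()
        adjusted = arrival - boost     # boosted txs sort as if they arrived earlier
        heapq.heappush(self._pending, (adjusted, next(self._seq), tx))

    def release_ready(self) -> list:
        # A tx is safe to emit once its adjusted timestamp is at least G
        # seconds in the past: any future arrival at time t' > now would get
        # an adjusted timestamp > now - G, so it cannot sort before it.
        now = time.monotonic()
        out = []
        while self._pending and self._pending[0][0] <= now - G:
            out.append(heapq.heappop(self._pending)[2])
        return out
```

This reproduces the latency bound described above: a non-boosted transaction (adjusted timestamp equal to its arrival time) waits exactly g, and no transaction is held longer.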

46 Likes

Pretty interesting! A couple questions:

  1. This policy doesn’t necessarily get rid of the incentive to colocate with the sequencer. So this reply doesn’t get too long, I’ll write b(f) for the time boost bought by fee f and f(b) for the fee that buys boost b. For example, say Alice has achieved a 0.05s latency decrease through colocating. If Alice and Bob are competing for the same MEV opportunity worth k, then to break even, Bob would have to get the opportunity (before Alice) and pay f(b(k)). To frontrun Bob (using her latency advantage plus a time boost), Alice would only have to pay f(b(k) - 0.049), which could be much less than what Bob pays, given the concavity of the boost-vs-fee curve (see the sketch after this list). As long as the cost of colocation, amortized across transactions and added to the fee paid for the slightly higher boost Alice gets, is lower than Bob’s expected fees, the incentive to colocate is still present. Is this realistic? Does the team have any thoughts on it? Searchers likely care less about being first with respect to time and more about being first with respect to other searchers, and it seems like a “continuously” growing ledger like Arbitrum’s doesn’t get some of the nice levers in this respect that a “discontinuously” growing ledger like Ethereum does (for example, running auctions between blocks at a delay beyond which colocation wouldn’t matter).
  2. Any fun plans for what to do with the time boost fees? Seems like an interesting place to introduce redistribution to the protocol…
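To put rough numbers on the concavity point in item 1: with the proposal’s boost curve b(F) = gF/(F + c), the fee needed for a given boost is f(b) = cb/(g - b), which steepens sharply near the cap, so a small latency edge translates into a large fee discount. The value of c and the 0.05s latency edge below are assumptions for illustration only.

```python
# Rough numerical illustration of item 1. Boost curve from the proposal:
# b(F) = g*F / (F + c), so the fee needed for boost b is f(b) = c*b / (g - b).
# The constant c and Alice's 0.05s colocation edge are assumed values.

G = 0.5  # maximum boost, in seconds
C = 1.0  # hypothetical constant

def fee_for_boost(b: float) -> float:
    assert 0.0 <= b < G
    return C * b / (G - b)

EDGE = 0.05  # Alice's latency advantage, in seconds
for b_bob in [0.30, 0.40, 0.45, 0.49]:
    f_bob = fee_for_boost(b_bob)
    f_alice = fee_for_boost(b_bob - EDGE)  # Alice needs ~0.05s less boost
    print(f"Bob boost {b_bob:.2f}s -> fee {f_bob:6.2f}; "
          f"Alice pays {f_alice:6.2f} ({f_alice / f_bob:.0%} of Bob's fee)")
# Near the 0.5s cap the ratio collapses (0.49s of boost costs 49.0 while
# 0.44s costs ~7.3), so the colocation edge retains real monetary value.
```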

Overall, seems like an interesting, useful addition to the protocol. Thanks for sharing.

12 Likes

This doesn’t remove the latency game, just shifts a portion of the cost from hardware/engineering into tx fee bidding. Though because all bids pay in this auction, you will probably find it doesn’t maximize revenue to the sequencer as much as it could.

Have you considered applying the latency to block dissemination rather than transaction inclusion? This would:

  1. Not burden regular users with any additional latency (sequencer could still give out confirmations to users immediately if we assume no private order flow selling).
  2. Reduce reverts for searchers if you sell a single slot at each level of a latency hierarchy, e.g.:
bidder 1: 0ms delay
bidder 2: 50ms delay
bidder 3: 100ms delay
bidder 4: 150ms delay
bidder 5: 200ms delay
bidder 6-25: 250ms delay
9 Likes

We have a longer paper in the works that includes an analysis of how time boost affects spending on latency. The bottom line is that it reduces but doesn’t eliminate the incentive to engineer for low latency. The details depend on what assumptions you make about things like the cost curve of latency.

Regarding what to do with the revenue, that would need to be discussed with the community before any mechanism like this is deployed. We wanted to have a conversation at this point, here in the research forum, about the pros and cons of this as a mechanism for transaction ordering.

5 Likes

Applying this on the dissemination side is an interesting suggestion. That would be effective in use cases involving backrunning of transactions on the same chain. It wouldn’t impact activity that responds to events in the outside world or another chain.

7 Likes

What’s the rationale for this time boost formula as opposed to just running an explicit auction for the top of the block? Could still keep txs private to avoid front-running. This seems overly complicated, and as noted doesn’t fully remove latency incentives anyway.

Separately, as you note, the privacy will be lost if you try to decentralize the sequencer, so what’s the plan there? Layer on threshold encryption (personally skeptical of this), or something else?

6 Likes

Batching orders into blocks and auctioning positions makes sense if your mental model is one where you have already discretized time into fixed intervals. But de facto, Arbitrum receives a continuous order flow, and batching would introduce a discontinuity between the end of an old batch and the start of the new one. With the boost formula you can operate in a world with a moving time window, so you don’t have the situation where a low bidder gets in front of a high bidder simply because they are lucky enough to land in the earlier batch.

6 Likes

Understood that the de facto for Arbitrum is continuous time rather than discretized buckets; I’m just quite unclear as to why that is viewed as so preferable vs. something like @sxysun’s FBA-FCFS proposal?

Agreed with @bbuddha as noted above as well:

Searchers likely care less about being first with respect to time and more about being first with respect to other searchers, and it seems like a “continuously” growing ledger like Arbitrum’s doesn’t get some of the nice levers in this respect that a “discontinuously” growing ledger like Ethereum does (for example, running auctions between blocks at a delay beyond which colocation wouldn’t matter).

Discretizing time into small buckets with privacy maintained seems to still achieve the strong notion of fairness you’re after while providing the flexibility to better express preferences and minimize racing.

5 Likes

Could we have the reverting txs discarded like with Flashbots?

Otherwise this is just a cash grab and brings no real benefit to searchers.

4 Likes

I’ll admit I’m unsure, but one idea is:

It’s difficult (maybe impossible) to determine the time a competing tx was received => it’s difficult/impossible to calculate the time boost fee needed for position x (estimates are still viable) => PGAs are minimized

Was this part of the rationale?

6 Likes

The proposed mechanism actually takes care of different races interacting with each other, as it generates a global order according to which transactions are sorted. Looked at from this angle, it is certainly not “overly complicated,” but rather easy: one does not need to win first place in the ordering, one just needs to defeat one’s direct competitors.
It was not designed with any particular auction format in mind, but rather to achieve the set of goals the mechanism needed to satisfy.

6 Likes

I’m not clear on why it would be better to divide time into 0.5 second slots and sort by bids within a slot, as opposed to what we proposed. With rigid slot boundaries like that, whether you are ahead of or behind some other party depends on the accident of slot boundaries, and not just on the relative arrival time and bid.

In terms of complexity, there’s not much difference between the two schemes. In either case you need to know when transactions arrived and what they bid, then release them in sorted order based on a simple combination of those two factors.
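As a toy illustration of the boundary-accident point, here is a comparison of the two schemes under identical arrivals and bids. All numbers, the slot length, and the bid sizes are made up; the continuous side uses the boost formula from the proposal.

```python
# Toy comparison of slotted vs. continuous ordering on the same inputs.
# Numbers are made up for illustration.

G, C, SLOT = 0.5, 1.0, 0.5

def boost(fee: float) -> float:
    return G * fee / (fee + C)  # continuous scheme's boost formula

txs = [
    # (name, arrival_time_s, priority_fee)
    ("low_bidder",  0.499, 0.1),   # arrives just before the slot boundary
    ("high_bidder", 0.501, 10.0),  # arrives just after, with a large bid
]

# Slotted: order by (slot index, bid descending).
slotted = sorted(txs, key=lambda t: (int(t[1] // SLOT), -t[2]))
# Continuous: order by adjusted timestamp (arrival - boost).
continuous = sorted(txs, key=lambda t: t[1] - boost(t[2]))

print("slotted:   ", [t[0] for t in slotted])     # low bidder first: it
                                                  # landed in the earlier slot
print("continuous:", [t[0] for t in continuous])  # high bidder first: its
                                                  # boost outweighs the 2ms gap
```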

6 Likes

Overall I think this is clearly a directionally positive proposal, but discretizing time + more explicit auctions seems to allow for better expressivity in bids and further reduce the latency advantage. As noted by @bbuddha:

This policy doesn’t necessarily get rid of the incentive to colocate with the sequencer.

Searchers likely care less about being first with respect to time and more about being first with respect to other searchers, and it seems like a “continuously” growing ledger like Arbitrum’s doesn’t get some of the nice levers in this respect that a “discontinuously” growing ledger like Ethereum does (for example, running auctions between blocks at a delay beyond which colocation wouldn’t matter).

There’s still an explicit advantage here to faster transaction submission, encouraging centralization, co-location, latency spending, etc. (albeit reduced). That would be further removed if the bid’s position were entirely unlinked from submission/receipt time (i.e., based only on fee within the bucket). Seems helpful to have more expressive auctions as noted in @sxysun’s follow-up comments to the FBA-FCFS post: conditional payments like coinbase.transfer, reverts, etc.

Regarding your plans to make this forward compatible with decentralized sequencing:

  • How would you then plan to cycle sequencers if time is treated as continuous? Cycle the leader after X # of transactions sequenced, with every node reporting its perceived score for every transaction, and the leader sequencing based on the average of the scores and times submitted to it? Seems a bit tougher; I’m unclear on the plan here.

  • It seems easier in the FBA-FCFS style, wouldn’t it? Individual nodes report a partial ordering to the leader, who aggregates these into a weak ordering with unordered batches in it, and then the leader just resolves the intra-batch ordering via auctions.

I’m also not tied to the 500ms fwiw, probably fine to be longer.

8 Likes

I don’t see how having discrete slot boundaries makes latency irrelevant. Reducing latency could still cause a transaction to be in an earlier slot, if its arrival time would have been near a slot boundary. Having a continuous model doesn’t change the dependence, it just smooths it out, which seems like a better property.

7 Likes

Yea, the remaining latency advantage in batching isn’t 100% gone; it’s just no longer explicitly part of the bid. It’s still relevant for the implicit advantage, as you note, in the scenario where you get a little more time at last look. Increasing batch times further decreases the relevance of that latency edge and reduces centralization pressure, but it can also have other effects on MEV, staler prices, etc.

Would be helpful to see the deeper analysis of the expected latency benefits you mentioned for time boost, and compare that to the expected latency advantage in different discrete-time scenarios (such as with longer slots).

We have a longer paper in the works that includes an analysis of how time boost affects spending on latency. The bottom line is that it reduces but doesn’t eliminate the incentive to engineer for low latency. The details depend on what assumptions you make about things like the cost curve of latency.

  • Time boost - Lower latency is always advantageous (lower latency means you can always get away with a lower bid on average).

  • Batching - Lower latency is sometimes advantageous, for when that small delta between “slow” and “fast” searchers reveals relevant information (and it’s decreasingly relevant as a % as batch times increase). Less frequent occurrences, but it could reveal meaningful opportunities at times.

Do you have any loose estimates? My guess is the benefit would be lower in the latter, but I haven’t run the numbers.

How would you think about decentralizing the time boost proposal? Seems trickier as above.

The auction expressiveness is the other thing; it seems easier in the discrete case but could also be tuned somewhat even in the time boost case. Would you consider making it more expressive with conditional payments/reverts etc., as opposed to the simple boost? At least reverts on the priority fees would of course help ensure fuller bidding.

7 Likes

The 0.5 second time limit is important because the scheme would be adding that much latency for ordinary users’ transactions, so making it longer has UX implications for most users. If anything, the general user community might want it to be shorter.

It’s important to remember that this design is intended for the benefit of the chain’s users generally and isn’t aiming to privilege one category, like MEV searchers, over others.

One design principle is that parties who use more of the sequencer’s resources should pay for that, because the resources are limited and others want to use them too. Unlike in Ethereum, which has much longer block times and doesn’t run the chain at anything close to the speed of a node’s computation, Arbitrum scales to higher throughput today, and we expect to widen the throughput gap going forward, so sequencer capacity is a resource that needs to be conserved. That’s not to say that it never makes sense to add new functionalities, just that there is much less slack available for supporting new features than on Ethereum.

5 Likes

The 0.5 second time limit is important because the scheme would be adding that much latency for ordinary users’ transactions, so making it longer has UX implications for most users. If anything, the general user community might want it to be shorter.

Agreed, and just to clarify, I’m not talking about something like 12s block times here. But something like 500ms → 1s doesn’t, I think, make much of a practical UX difference. If it turned out to have significantly better incentives when analyzed closely, I’m just saying it’s something to consider (I haven’t crunched the numbers here specifically; not a strong opinion).

It’s important to remember that this design is intended for the benefit of the chain’s users generally and isn’t aiming to privilege one category, like MEV searchers, over others.

Fully agreed as well; the chain isn’t made for searchers. Just noting that misaligned incentives for parties like searchers can of course indirectly end up hurting the product offered to users. The incentives of those parties should just be managed and aligned in the way that maximizes the product offered to users.

One design principle is that parties who use more of the sequencer’s resources should pay for that, because the resources are limited and others want to use them too. Unlike in Ethereum, which has much longer block times and doesn’t run the chain at anything close to the speed of a node’s computation, Arbitrum scales to higher throughput today, and we expect to widen the throughput gap going forward, so sequencer capacity is a resource that needs to be conserved. That’s not to say that it never makes sense to add new functionalities, just that there is much less slack available for supporting new features than on Ethereum.

Agreed again! It’s just about structuring the best way for people to pay for those resources in the most incentive-aligned way that benefits the entire ecosystem. (Insert John Adler’s “Wait, It’s All Resource Pricing”.)

Glad to see you guys are being thoughtful about all of this and welcoming feedback, appreciate your thoughts!

4 Likes

This is a good question, and it shows the main economic difference between transaction ordering with blocks (sometimes called batch auctions) and continuous transaction ordering like the proposal at hand.
In the first, if the transaction sender does not like its position in the block, it can drop out and pay zero. This sounds reasonable. On the other hand, if the arbitrage value is x ETH, a bidder will bid any number below x to be first among the opportunity takers. In a (nearly) perfect market, the winning bid will be very close to the value of the opportunity, say 0.99x. In continuous ordering with a time boost fee, strategic players pay for the time boost no matter what the outcome. If it is a race to exploit an opportunity as described above, players will bid much less, say 0.4x if they expect few players (and less if they expect more), to compensate for the case in which they lose. In equilibrium, they will bid so that in expectation they are not worse off than staying out of the competition (payoff 0). Arbitrage opportunity taking is usually not a single-shot game; therefore, analysis in expectation makes sense.
The rationale of the time boost fee mechanism is that it moves expenditure spent on improving latency to time boost fees. In the latency race, if a player spends resources on low latency and still loses to competitors (maybe because they spent more), that is also a waste.
For more details, there is a paper that analyzes all the concerns in this thread (and more). It will be available online soon.
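A back-of-the-envelope check of the all-pay intuition above, under the simplifying assumptions that the n players are symmetric and risk-neutral and each wins with probability 1/n (illustrative only, not the paper’s analysis):

```python
# Back-of-envelope check of the all-pay intuition: if n symmetric players
# each pay a boost fee beta up front and each wins the opportunity (worth x)
# with probability 1/n, the zero-expected-payoff condition x/n - beta = 0
# gives beta = x/n. Illustrative only, not a full equilibrium analysis.

x = 1.0  # value of the arbitrage opportunity (normalized)
for n in [2, 3, 5, 10]:
    print(f"{n} players: break-even boost fee ~ {x / n:.2f}x each")
# Two players bid ~0.5x each (the post's "say 0.4x" is of this order),
# versus ~0.99x paid by the single winner of a winner-pays auction.
```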

9 Likes

My initial reaction… I love this proposal!

I can imagine 95% of arbers’ profits going on priority fees.
Anything over 50% mitigates spam and sybil attacks completely, because even a consistently winning arber cannot afford to send more than 1 tx per opportunity.
Intuitively, it won’t get rid of all hardware/networking spend, and it doesn’t need to; it just needs to cap excessive spend, which I think it will (depending on the parameters chosen).
Latency spend is a greatly diminishing return, and when you are engaged in a priority fee battle, the amount worth putting into these wasteful activities will be severely capped.

The biggest issue I see with it is that the additional 0.5 sec of latency will come at a cost to users, because it will create arbitrage opportunities against CEXs that would not otherwise exist.
As a result, this must be kept to a minimum. 500ms seems like a reasonable starting point, but it may need to go lower.
It does help that ordinary traders can also use priority fees (unlike with co-location advantages), so I don’t want to overplay this.
Also, I read some research somewhere suggesting that manual traders react with a latency of around 2 secs, so again, 0.5 secs is not such a problem for most users.

In terms of what to do with the income from priority fees, please consider using them to subsidize base user fees as directly as possible.

This will:

  • compensate users for the value they will lose to the additional latency
  • may reduce regulatory risk if fees are kept in the user pool in this way (I’m no lawyer; not legal advice!)
  • will also make Arbitrum more competitive in its pricing, which is good for the network

Anyway, great work. Can’t wait to hear more.

6 Likes

That you, @Pmcgoohan? I like the discussion of batch auctions :slight_smile:

5 Likes