Solutions to Delay Attacks on Rollups

[This is cross-posted from Solutions to Delay Attacks on Rollups | by Offchain Labs | Dec, 2022 | Medium, to enable discussion here.]

One of the subtle issues for designers of rollup protocols is how to deal with delay attacks. In this post I’ll talk about what they are and how Arbitrum protects against them, leading up to some exciting new developments.

Delay attacks are malicious actions that try to prevent a rollup protocol from making progress. They don’t attack the safety of the protocol, that is, they don’t try to force confirmation of an incorrect result. Instead, they attack the liveness of the protocol by trying to prevent or delay confirmation of any results at all.

These issues can be subtle, and to be honest, protocol designers often don’t like to talk about them. But every Layer 2 system, whether optimistic, ZK, or other, needs to cope with questions about delay and protocol progress.

This article digs into the delay attack problem, and discusses how it was handled in various versions of the Arbitrum rollup protocol.

What are delay attacks?

In a delay attack, a malicious party (or group of parties) acts within the rollup protocol, following a strategy designed to prevent or delay the confirmation of results back to the L1 chain.

This differs from a denial of service attack, in which the attacker tries to prevent any action from being taken in the protocol. By contrast, in a delay attack actions continue to occur, but the attacker acts in ways that prevent or delay the confirmation of results (that is, delay withdrawals of assets to L1) and force honest validators to burn gas.

Any plausible rollup protocol will require actors to stake, so that a delay attacker will necessarily lose one or more stakes. We will assume here that the attacker is willing to sacrifice stakes, within some limits, in pursuit of their attack.

We will also assume, conservatively, that the attacker has an advantage in getting their transactions on-chain, so that whenever the attacker is racing against honest parties to get a transaction on-chain first, the attacker will always win.

Finally, we will assume that an attacker can censor access to the underlying L1 chain to exclude rollup transactions, but only for a limited period, which we call the “challenge period.” In particular, the attacker can enable and disable a notional “censorship mode.” When censorship mode is enabled, the attacker has complete control over which transactions can reach the L1. However, the attacker can only have censorship mode enabled for a total of one challenge period. (We assume that any set of censorship-mode periods adding up to more than one challenge period will trigger a social response from the L1 community to thwart the censorship attempt.)
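To make this threat model concrete, here is a minimal sketch (illustrative Python; the one-week figure and all names are assumptions for illustration, not protocol constants) of the attacker's censorship budget: censorship mode can be toggled at will, but the cumulative censored time is capped at one challenge period, after which we assume the L1 community intervenes.

```python
CHALLENGE_PERIOD = 7 * 24 * 3600  # hypothetical challenge period: one week, in seconds

class CensorshipBudget:
    def __init__(self, budget: int = CHALLENGE_PERIOD):
        self.remaining = budget    # censorship time the attacker has left
        self.enabled_at = None     # timestamp when censorship mode was enabled

    def enable(self, now: int) -> None:
        if self.remaining <= 0:
            raise RuntimeError("budget exhausted: censorship no longer possible")
        self.enabled_at = now

    def disable(self, now: int) -> None:
        self.remaining -= now - self.enabled_at  # spend part of the budget
        self.enabled_at = None

    def is_censored(self, now: int) -> bool:
        # transactions are blocked only while censorship mode is on and
        # the attacker still has budget left
        return self.enabled_at is not None and (now - self.enabled_at) < self.remaining
```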

Evaluating protocols against delay attack

In evaluating protocols, we can ask six questions:

  1. Does the protocol have a fraud proof mechanism at all? (If not, delay attacks are moot because participants can’t delay confirmation of any results — not even fraudulent ones.)
  2. Is there a central operator or prover who can prevent progress by simply stopping, or by withholding data? If so, that party can cause an infinite delay.
  3. Does the protocol provide a trustless guarantee of eventual progress? In other words, is it the case that a single honest participant can force eventual progress, regardless of what attackers do?
  4. If the protocol guarantees trustless progress, what is the upper bound on delay that an attacker can cause?
  5. How does the attacker’s cost scale with the length of delay caused?
  6. How does the total cost of response by honest parties scale?

With those criteria established, let’s evaluate two historical versions of the Arbitrum rollup protocol.

Protocol 1: Academic paper protocol

The 2018 academic paper about Arbitrum used roughly the following protocol (ignoring a unanimous mode that isn’t relevant here). Any staker could assert a proposed result, which we’ll call an assertion. During a time window, any subset of the other stakers could challenge the assertion, and the asserter would have to defend their assertion against each challenger, one challenger at a time. At the end of each challenge, the losing party would forfeit their stake.

(Note that it is necessary to allow multiple stakers to oppose the assertion, and to give every challenger its own opportunity to defeat it, because a malicious party can deliberately lose a challenge that they “should” have won. Giving every challenger a separate challenge ensures that one honest challenger can defeat an incorrect assertion, no matter how many malicious parties deliberately lose challenges.)

If there was no challenge, or if the asserter won every challenge, then the assertion would be confirmed and the protocol would move forward. But if the asserter lost any of the challenges, its assertion would be rejected and the protocol state would roll back to the state before the assertion was made.

Evaluation

This protocol has working fraud proofs. However, it does not guarantee progress: a malicious participant can endlessly make incorrect assertions, sacrificing a stake each time, producing an endless cycle in which the same assertion is made and rejected, with continuous rollbacks and no progress.
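To make the failure mode concrete, here is a tiny simulation sketch (illustrative Python; the names and the one-period-per-challenge timing are assumptions for illustration, not protocol constants):

```python
# Illustrative sketch of the Protocol 1 delay attack: each cycle the attacker
# posts an incorrect assertion, loses the challenge, forfeits a stake, and the
# protocol rolls back, so confirmed progress stays at zero.

CHALLENGE_PERIOD = 1  # abstract time unit: one challenge takes one period

def simulate_attack(attacker_stakes: int) -> tuple[int, int]:
    confirmed_results = 0
    elapsed = 0
    while attacker_stakes > 0:
        attacker_stakes -= 1         # attacker stakes on an incorrect assertion
        elapsed += CHALLENGE_PERIOD  # an honest challenger wins the challenge
        # the assertion is rejected and the state rolls back; nothing confirms
    return confirmed_results, elapsed

print(simulate_attack(20))  # -> (0, 20): 20 periods of delay for 20 stakes
```

The attacker's cost is linear in the delay caused, and the delay is unbounded for as long as the attacker keeps funding stakes.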

Protocol 2: fork-and-prune

The current Arbitrum protocol — which has been deployed on every version of Arbitrum since 2020 — improves the previous protocol by introducing branching. The idea is to allow multiple stakers to make competing assertions, and to treat competing assertions as forking the chain. A series of challenges then pits the branches against each other and eventually prunes off all but one branch, allowing the one remaining branch to be confirmed.

Timekeeping works as follows. Each rollup block in the chain tracks the timestamp when its first child (i.e., successor) was created by a staker. Other stakers can create additional children. Each child assertion implicitly claims that all of its older siblings are incorrect.

The staker who creates an assertion is required to stake on it, and other stakers have the option of also staking on it.

If two stakers are staked on assertions that are siblings, and neither of the two stakers is already in a challenge, then a challenge can be initiated between the two stakers, where the staker on the older of the two siblings is defending the correctness of that older sibling, while the other staker is challenging that correctness. The staker who loses the challenge forfeits their stake and is removed from the set of stakers.

This protocol includes deadlines for action. First, the deadline for creating child assertions of a parent is one challenge period after the first child has been created. Second, the deadline for staking on an assertion is one challenge period after that assertion is created.

If the deadline for staking on an assertion has passed, and there are no stakers staked on that assertion, the assertion is pruned off. (One or more stakers must have been staked there in the past, but all of them must have lost challenges.) Any children, grandchildren, or other descendants of the pruned assertion are also pruned away at the same time.

If an assertion is older than one challenge period and has no unpruned siblings, that assertion can be confirmed, representing progress in the protocol.
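The rules above can be condensed into a small data-structure sketch (illustrative Python; the field and function names are mine, not the implementation's, and challenge resolution is omitted):

```python
from __future__ import annotations
from dataclasses import dataclass, field

CHALLENGE_PERIOD = 1  # abstract time units

@dataclass
class Assertion:
    created_at: float
    parent: Assertion | None = None
    children: list[Assertion] = field(default_factory=list)
    stakers: set[str] = field(default_factory=set)
    first_child_at: float | None = None  # when this assertion's first child appeared

    def add_child(self, child: Assertion, now: float) -> None:
        # siblings may only be created within one challenge period of the first child
        if self.first_child_at is not None and now > self.first_child_at + CHALLENGE_PERIOD:
            raise ValueError("deadline for creating sibling assertions has passed")
        if self.first_child_at is None:
            self.first_child_at = now
        child.parent = self
        self.children.append(child)

def prunable(a: Assertion, now: float) -> bool:
    # staking deadline has passed and nobody is (still) staked here;
    # descendants of a pruned assertion are pruned with it (omitted here)
    return now > a.created_at + CHALLENGE_PERIOD and not a.stakers

def confirmable(a: Assertion, now: float) -> bool:
    # older than one challenge period, with no unpruned siblings
    siblings = [] if a.parent is None else [
        s for s in a.parent.children if s is not a and not prunable(s, now)
    ]
    return now > a.created_at + CHALLENGE_PERIOD and not siblings
```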

Evaluation

This protocol has working fraud proofs. There is no central operator or prover who can prevent progress, and a single honest participant can force progress. To force progress, an honest staker can post a correct child, if one does not already exist. After that, a limited time will pass before a deadline ensures that no more siblings can be created and no more stakers can stake on the siblings that exist. From that point on, the honest staker will engage in a series of challenges, defeating and removing parties staked incorrectly, one party at a time. (If multiple honest parties are staked, they can defeat adversaries in parallel.) Once all such parties are removed, the correct child assertion can be confirmed.

The most effective delay attack against this protocol has one malicious staker stake on an incorrect sibling, and N-1 malicious stakers stake on the correct sibling. No matter how many honest stakers are staked on the correct sibling, the attacker will be able to arrange that the incorrectly staked staker engages in challenges against its confederates before engaging in challenges against honest parties. (This assumes, pessimistically, that the attacker can always get in transactions before the honest parties can.) The confederates will also deliberately lose their challenges, taking as long as possible to do so. (Due to the delay rules, this could take as long as one or two challenge periods per confederate.) Only after all N-1 confederates have sacrificed themselves will the incorrectly staked staker be required to fight a challenge against an honest party, which the honest party will win, finally eliminating the incorrectly staked staker.

This last attack achieves a delay of roughly N challenge periods, at a cost to the attacker of N stakes. For example, with a one-week challenge period, an attacker willing to burn 20 stakes could delay confirmation by roughly 20 weeks.

The cost to honest parties, against this attack, is linear in the number of honest parties, because every honest party would need to stake before the staking deadline.

Summary:

  • guarantees progress
  • cost of attack is linear in delay caused
  • cost to honest parties is linear in number of honest parties

Permissioned validation

The delay attack characteristics of Protocol 2, which is currently deployed on Arbitrum, are the reason we have chosen to temporarily limit the validator role to a permissioned set of parties rather than making validation fully permissionless. Taking the last step to permissionless validation requires a protocol version that is maximally resistant to delay attacks. And you’ve probably guessed already that such a protocol version exists and we’re currently working on implementing it.

Coming soon: Protocol 3

This post is already too long, so I’m not going to give the full download on the new protocol — that will be a future post.

But here’s the gist. The next version of the Arbitrum protocol makes some small but (I think) elegant changes to how assertions and challenges work, so that the worst-case delay attack can cause only one challenge period of delay, no matter how many stakes the attackers are willing to lose.

This is based on a technical breakthrough from the Arbitrum research team that makes all-against-all challenges feasible and efficient. This allows a single honest staker to efficiently defeat an army of attackers who have posted a forest of malicious branching assertions.

The new protocol is fully specified and is being implemented now. Of course we won’t roll it out on mainnet until it’s thoroughly tested and the code is audited.

In a future post I’ll deep-dive into the new protocol.

147 Likes

Will protocol 3 rely on a centralized sequencer design? Or will it support a network of decentralized sequencers?

42 Likes

The sequencer design is an orthogonal issue; it's a separate part of the overall protocol, and any of these assertion protocols could work with a centralized sequencer or a decentralized one. Our work to decentralize the sequencer is happening on a separate track.

In the overall decentralization roadmap, we think of making validation permissionless as more urgent than decentralizing the sequencer, because the sequencer is only trusted for transaction ordering, and nothing else.

33 Likes

That was a very nice write-up on the problem, as always. I’m excited to see the solution you designed.

We’ve (at Cartesi) been exploring different ways to deal with this on our end as well and recently published a proposal. You might find it interesting:

Permissionless Refereed Tournaments

In sum, the solution allows a single honest party with log(x) funds to defend themselves against a team of dishonest agents with x funds.

We’d love to collaborate more on this (and other) topics!

26 Likes

Thanks for posting your paper. Do you have an evaluation of the worst-case delay that an adversary could cause, if your proposal were implemented with a challenge delay period of C?

19 Likes

No worries 🙂
Yeah, in the single-stage variant, dispute time grows logarithmically in the number of dishonest claims. An attacker that can afford to make x claims can delay finality by C * log(x). The base of this logarithm can be made higher by adding parallelization, at the cost of increased computational requirements for the honest validator.

18 Likes

It seems to me that if you’re using two-level challenges, then the adversary could have sqrt(x) teams at the top level, and each team would include sqrt(x) instances. So the top-level tournament would have log(sqrt(x)) = log(x) / 2 rounds, and each challenge in that tournament would contain within it another tournament at the second level with sqrt(x) competitors, requiring the same number of rounds.

It would follow that an adversary using that strategy could get C * log^2(x) / 4 delay. Or am I missing something?
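A quick numeric check of that counting, with a hypothetical x = 2^10:

```python
import math

x = 2 ** 10                      # adversary instances (hypothetical)
teams = math.isqrt(x)            # sqrt(x) teams at the top level -> 32
top_rounds = math.log2(teams)    # log(sqrt(x)) = log(x)/2 = 5 rounds
inner_rounds = math.log2(teams)  # each challenge hides a sqrt(x)-entrant tournament

print(top_rounds * inner_rounds)  # 25.0 periods of delay (times C)
print(math.log2(x) ** 2 / 4)      # 25.0, matching C * log^2(x) / 4
```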

21 Likes

Thanks for the comment! You are right in your analysis, but let me take the liberty to give an extended answer.

First of all, in the single-level setup, the number of transactions that the honest party has to send grows with log(x), where x is the number of transactions required by the dishonest team. So from a theoretical perspective, the maximum delay that can be introduced by an attacker in this scenario is only log(x).

But for very intense computations, it can be too time-consuming to build a computation-hash that includes the state of every single step of the processing. This is why we included the possibility of performing the dispute in layers.

First let us agree on some notation for the two-level setup. We write the total size of the computation as T = T1 * T2, where T2 is the number of steps inside a frame and T1 is the number of frames. Using our algorithm, the honest party is forced to send

log(x) * [ log(T1) + log(T2) log(x) ]

many transactions, if the dishonest party is willing to send x^2 of those. So indeed, there is a term in log^2(x). Therefore, anyone designing a dispute resolution mechanism using our approach would be advised to take that into consideration, comparing the computational gains obtained from a sparse computation-hash against the potential for an extended delay in finality of order

log(T1) + log(T2)log(x)
-----------------------
   log(T1) + log(T2)

which can be relevant or not, depending on the choices of these parameters.
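For a concrete sense of that ratio (illustrative numbers only), with T1 = T2 = 2^20 and x = 2^10:

```python
log_T1 = log_T2 = 20  # hypothetical: 2**20 frames of 2**20 steps each
log_x = 10            # adversary can afford x = 2**10 claims

ratio = (log_T1 + log_T2 * log_x) / (log_T1 + log_T2)
print(ratio)  # 5.5 -> worst-case delay ~5.5x that of the single-level setup
```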

Recall also that another alternative to speed up the calculation of the computation-hash is to use parallelism instead of a second layer. But that is a story for another time.

Sorry for the long text, I hope this helps 🙂

24 Likes

I would expect the adversary to vary their tactics in different pairwise challenges, to maximize the cost when the opponent is the honest team, and minimize the cost when the opponent is another adversary. This brings the adversary’s cost down because the majority of challenges involve two adversary teams.

If the adversary has some control over the teams’ placement in the tournament, they can also have more members in teams that will face the honest team, and only one member in teams that will only face other malicious teams.

In general there is a fairly large strategy space for the adversary that needs to be explored.

10 Likes

Our central proposal is the computation hash, which is a new primitive that enables a range of permissionless dispute-settlement algorithms.

In its purest form (using a single level), an adversary with x funds can delay finality by at most log(x). This is the key contribution, and it has a variety of cool properties we could discuss.

Now, when the number of steps in the computation is extremely large, it may be useful to explore alternative strategies to speed up the generation of the computation hash. The two-layer proposal in our draft trades a delay in finality for faster generation of the computation hash.

If that tradeoff is not appealing to one’s use case, a trivial alternative would be to use parallelism. Without any impact on finality, one can obtain a speedup in the generation of computation hashes that is linear in the number of available processors.

As happened with the work of Nakamoto (or with the work of Feige and Kilian on interactive verification games), we expect that others will embrace, stress-test, improve, and adapt these ideas to their own use cases.

Thanks for your comments, I am looking forward to seeing Arbitrum’s solution as well!

13 Likes

The idea of Merkle-hashing the sequence of state roots, then bisecting over the result, goes back to the work of Canetti et al. in 2011 (https://www.cs.tau.ac.il/~canetti/CRR11.pdf). Their proposal works in theory but is too inefficient in practice because of the cost of hashing the machine state after every instruction.

In practice you need to do something else to avoid all of that hashing: either bisect over the sequence of state hashes (rather than a tree of them) as in the original Arbitrum protocol, use a multi-level approach like the one you suggest, or apply some other trick.
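For concreteness, here is a minimal sketch (illustrative Python, not the actual Arbitrum implementation) of bisection over a sequence of state hashes: the referee narrows in on the first step where the two parties disagree, and only that single step ever needs to be checked on-chain. For brevity the two traces are compared directly here; in the real protocol only the hashes at requested indices are posted.

```python
import hashlib

def step(state: bytes) -> bytes:
    # stand-in for one VM step; any deterministic transition function works
    return hashlib.sha256(state).digest()

def bisect(honest: list[bytes], claimed: list[bytes]) -> int:
    # invariant: the parties agree at index lo and disagree at index hi
    lo, hi = 0, len(honest) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if honest[mid] == claimed[mid]:
            lo = mid
        else:
            hi = mid
    # one-step proof: the referee checks step(trace[hi - 1]) == trace[hi]
    return hi

# usage: a 1000-step trace, corrupted from step 600 onward
honest = [b"genesis"]
for _ in range(1000):
    honest.append(step(honest[-1]))
claimed = honest[:600] + [b"lie"] * 401
print(bisect(honest, claimed))  # -> 600, the first incorrect step
```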

9 Likes

We have read and cited both articles by Canetti, Riva and Rothblum, including the one that you’ve mentioned. In fact, their terminology of “Refereed Delegation of Computation” was part of the inspiration for the title of our article.

Canetti’s paper is inherently permissioned, being a proposal for an any-trust “pay-per-use Cloud Computing” system, as explained in more detail below.

First of all, the article you’ve mentioned proposes two very distinct solutions. The first one, called Protocol I, is based on a circuit representation of the computation. Therefore, this protocol is much closer in spirit to Zero Knowledge solutions (although still interactive). Moreover, its computational complexity depends on the depth of the circuit, and verification grows poly-logarithmically in its size; see the end of subsection 3.1.

Canetti et al also introduce Protocol II, which is much closer to what we present in our article.

Protocol II greatly improves on the original algorithm by Feige and Kilian by introducing a Merkle Hash Tree of the state of the Turing Machine; see the beginning of subsection 4.1. This is perfectly analogous to the STATE-HASH(S) that appears in Section A of our article. With this simple change, they managed to reduce the computational complexity required of the referee to a logarithm of the maximum size of the tape (which they bound by the running time of the computation). This is done, of course, under a non-collision hypothesis.

But even though this is a great improvement (and indeed it is achieved by Merkleizing the state of the machine), two very important observations are in order:

First of all, they do not introduce a Merkleized version of the whole computation, where each leaf is the machine hash at a given time, as we have introduced in Section E of our article. Please correct us if we are missing something.

Secondly, and more crucially, if one inadvertently uses Protocol II of Canetti et al in a permissionless environment, an adversary with x funds will be able to force honest parties to spend x funds to defend the correct claim, while also introducing a delay of order x in the finality of the protocol. See their Subsection 2.2 and their last remark concerning “More than two servers”.

We will detail our citations to make the above more explicit.

22 Likes

Big tease, any more details on “protocol 3”?


So, as claimed, all-against-all challenges are solved. What is the budget requirement for honest actors? Suppose I must be one of the n honest parties: how much liquidity must an honest validator maintain to force correct execution and challenge malicious assertions?

According to the current protocol docs,

However, if the chain has been slow to make progress in confirming RBlocks, the stake requirement will escalate temporarily. Specifically, the base stake amount is multiplied by a factor that is exponential in the time since the deadline of the first unresolved RBlock passed.

This seems like a bad solution; it forces a larger liquidity requirement on honest actors.
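To make the concern concrete, here's how I read the escalation rule (a sketch with my own parameter names; the real base stake and growth rate come from the protocol's configuration, not from here):

```python
def required_stake(base_stake: float, periods_past_deadline: float, growth: float = 2.0) -> float:
    # stake requirement grows exponentially in how long the first unresolved
    # RBlock has been past its deadline (growth rate is hypothetical)
    return base_stake * growth ** periods_past_deadline

print(required_stake(1.0, 5))  # -> 32.0: a new staker needs 32x the base stake
```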

5 Likes

In the context of the current protocol, we expect that honest parties will stake earlier, and late stakes are more likely to be posted by malicious parties. Increasing the stake after non-progress tends to hurt attackers more than it hurts honest parties.

That said, the new protocol – and yes, we’ll be talking a lot more about it soon – achieves nice delay properties without needing a stake escalation system.

15 Likes

@felipeargento @augusto We at the Kleros Cooperative have also had some thoughts on these topics that conceivably could be of interest to you: research-docs/interactivecomputationgamesnotes.pdf at master · kleros/research-docs · GitHub (We are not rollup-builders; Kleros is an application layer project, so these questions aren’t our main research focuses. Correspondingly the linked notes are pretty drafty.)

If the attacker has x resources, then the total number of periods required for the game to resolve in our proposal is O((log x)^3), so asymptotically it is worse than the proposal in your work. We don’t have an implementation of this, but I would be curious to see how it compares practically.

We don’t use the hashing idea you do, so players are able to strategically lie about the intermediate states of their calculation. Instead, our approach is to have a matching algorithm that places different players into many-vs-1 games against each other; if the answers of the many diverge, they get rematched into subgames.

The matching algorithm itself can be implemented optimistically, so that every time the verifier needs to match participants together, you need to wait for a challenge period, though the specific optimistic game of verifying matches doesn’t need to be interactive. The on-chain verifier can compare each new submission it gets to whatever its current best submission is, and if the new submission is better, it becomes the pending match. Then, in the notes linked above, we provide an analysis of how many periods an attacker with x resources can delay, for example by forcing the match function to be called.

I also look forward to seeing more information about “protocol 3”!

9 Likes

Hey Ed!

Quantstamp is currently working on a rollup security framework, as seen here.
Do you have any WIP information available regarding the “all-against-all challenges”? This would help us in developing our security framework. We’re happy to provide any feedback as well to support secure development.

1 Like