Arbitrum returns a hardcoded value for block.difficulty
Why not bridge the L1 PREVRANDAO opcode, which replaced block.difficulty?
Many projects use RANDAO as a ‘free’ source of randomness. Exposing this opcode would give dapps on Arbitrum the same free source of randomness. Even if the opcode’s value lags L1 by a few blocks, dapps could use a block-lookahead scheme to ensure the RANDAO value is not yet known at the time randomness is requested.
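The block-lookahead pattern described above can be sketched in a few lines. This is a toy Python model, not a real Arbitrum or contract API; all names (`RandomnessRequests`, `LOOKAHEAD`, etc.) are hypothetical. The point is only that a request made at block N commits to using the RANDAO value of block N + LOOKAHEAD, which does not exist yet at request time:

```python
# Toy sketch of the block-lookahead pattern for RANDAO-based randomness.
# All names are illustrative, not a real Arbitrum API: a request made at
# block N is settled with the RANDAO value from block N + LOOKAHEAD, so
# the value is unknown when the request is made.

LOOKAHEAD = 64  # blocks to wait; an assumed, illustrative parameter


class RandomnessRequests:
    def __init__(self):
        self.pending = {}  # request_id -> target block number

    def request(self, request_id, current_block):
        # Commit now to the future block whose RANDAO value will be used.
        self.pending[request_id] = current_block + LOOKAHEAD

    def settle(self, request_id, current_block, randao_history):
        # Only settle once the committed target block has been produced.
        target = self.pending[request_id]
        if current_block < target:
            raise ValueError("target block not yet reached")
        return randao_history[target]


# Usage: request at block 100, settle at block 164 or later.
reqs = RandomnessRequests()
reqs.request("req-1", current_block=100)
value = reqs.settle("req-1", current_block=170,
                    randao_history={164: 0xDEADBEEF})
```

The key property is that no one, including the requester, can know the settling value when the request is recorded, as long as LOOKAHEAD exceeds any lag in bridging the opcode.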
There’s an interesting challenge associated with something like PREVRANDAO, or anything that allows L2 results to depend on L1 state: What happens if the L1 chain reorgs?
One approach would be to reorg the L2 chain whenever the L1 reorgs. But we don’t want to do that, because users really like the fact that the Arbitrum chain almost never reorgs.
If we want to maintain the property that the L2 almost never reorgs, we would need to delay making any L1 state visible at L2, until the L1 state has finality on L1. That would add a longer delay than is comfortable for many purposes.
There are some interesting protocol design questions around this, which we’re still looking at. A broader discussion of these tradeoffs in the community would be great.
If L1 reorgs, then wouldn’t the Arbitrum inbox and the sequence of batches be affected? So nothing on L2 is final until there is L1 finality anyway. If I send my txn and the sequencer accepts it, that’s a soft guarantee, but I still need to wait 2 epochs to be certain my txn is included on L2.
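For reference, "wait 2 epochs" on Ethereum mainnet works out to roughly 13 minutes, using the mainnet consensus constants:

```python
# Ethereum mainnet consensus constants.
SLOTS_PER_EPOCH = 32
SECONDS_PER_SLOT = 12

# Two epochs is the usual rule of thumb for finality under normal conditions.
finality_delay_s = 2 * SLOTS_PER_EPOCH * SECONDS_PER_SLOT
print(finality_delay_s / 60)  # 12.8 minutes
```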
What happens when 1/3 of eth L1 validators are offline, preventing finality altogether, and the inactivity leak starts? Is the L2 “paused”?
The difference is in what happens when the sequencer is honest. The sequencer provides a soft guarantee of transaction results, and we want those guarantees to always be correct if the sequencer is honest. In other words, we want the sequencer to have the power to keep its promises, even if the L1 reorgs.
If the sequencer were to provide a transaction result to a user and that transaction result depended on the L1 history in a way that might change with an L1 reorg, then the sequencer’s promised transaction result might not be correct if the L1 reorged. So the sequencer’s promise would not be kept, even though the sequencer did its best to be honest. That’s the situation we want to avoid.
In practice, many Arbitrum users choose to rely on the sequencer’s feed, because they find the fast response time attractive enough that they’re willing to risk the possibility of a dishonest sequencer. For now, they’re willing to trust Offchain Labs to run the sequencer honestly. After the sequencer is decentralized, I’d guess many people will choose to trust that the sequencer committee has enough honest members.
Relying on the sequencer feed isn’t recommended for higher-value transactions, but some people choose to do so anyway, mostly for lower-value transactions when the user values low latency.
Okay I see the point about L2 determinism with a trusted sequencer.
“After the sequencer is decentralized”
Perhaps a trusted sequencer is fine as long as I can force txns into a batch through the delayed inbox, but 24 hours is way too long. What is the risk of decreasing the delay period from 24 hours to, say, 2 epochs, with the sequencer forced to include all the delayed txns appended to the batch? (Say the delayed txns are represented by an incremental Merkle tree, so the sequencer includes the root for a constant cost per batch.)
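The incremental (append-only) Merkle tree proposed above is a known construction; the Eth2 deposit contract uses one of depth 32. A minimal Python sketch, with a small depth for illustration, shows why the per-txn cost is constant for the root:

```python
import hashlib

DEPTH = 4  # small depth for the sketch; production trees use ~32


def H(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()


class IncrementalMerkleTree:
    """Append-only Merkle tree: insert and root are both O(DEPTH),
    so maintaining the root costs a constant amount per appended txn."""

    def __init__(self):
        # zero[i] = root of an all-empty subtree of height i
        self.zero = [b"\x00" * 32]
        for _ in range(DEPTH):
            self.zero.append(H(self.zero[-1], self.zero[-1]))
        self.branch = list(self.zero[:DEPTH])  # frontier's left siblings
        self.count = 0

    def insert(self, leaf: bytes) -> None:
        assert self.count < 2 ** DEPTH, "tree full"
        node, idx = leaf, self.count
        for level in range(DEPTH):
            if idx % 2 == 0:
                self.branch[level] = node  # saved as a future left sibling
                break
            node = H(self.branch[level], node)
            idx //= 2
        self.count += 1

    def root(self) -> bytes:
        node, size = self.zero[0], self.count
        for level in range(DEPTH):
            if size % 2 == 1:
                node = H(self.branch[level], node)
            else:
                node = H(node, self.zero[level])
            size //= 2
        return node
```

Under this scheme the sequencer would append one such root per batch, so forcing inclusion of N delayed txns costs O(1) calldata for the root plus O(log N) hashing per appended txn.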
Unfortunately this forum requires a lot of moderation to keep the signal-to-noise ratio reasonable. We try to minimize the side effects experienced by legitimate users, but sometimes there is some collateral damage. Sorry about that.
There’s another interesting challenge around the inclusion delay. If the L1 basefee gets large, the batch poster (which is often the sequencer) will wait longer to post batches, in the hope that the L1 basefee will go down. There’s a tradeoff between posting quickly to reduce finality time, versus waiting a bit to reduce L1 costs (which reduces L2 transaction fees).
The algorithm we use for this is based on advanced modeling and machine learning over historical L1 basefee data. But it does sometimes wait longer than a few finality periods if the L1 basefee is very large. (This happened earlier this week.)
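The actual algorithm is described above as model-based; purely to illustrate the tradeoff it navigates, here is a toy threshold policy. Both parameter values are assumed for the example, not Arbitrum's real ones:

```python
# Toy batch-posting policy illustrating the latency/cost tradeoff:
# post immediately when L1 gas is cheap, wait (up to a cap) when it is
# expensive. The real batch poster uses modeling over historical basefee
# data; this heuristic is only a sketch of the tradeoff.

def should_post_batch(basefee_gwei: float,
                      batch_age_seconds: float,
                      cheap_threshold_gwei: float = 30.0,    # assumed value
                      max_wait_seconds: float = 3600.0) -> bool:  # assumed cap
    # Cheap L1 gas: post right away for fast finality.
    if basefee_gwei <= cheap_threshold_gwei:
        return True
    # Expensive L1 gas: wait for a cheaper basefee, but never beyond a
    # hard deadline, bounding how long finality can be delayed.
    return batch_age_seconds >= max_wait_seconds
```

The `max_wait_seconds` cap is the knob in this sketch: raising it saves L1 fees in fee spikes but lengthens the worst-case finality delay for L2 users.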
Another issue to consider with RANDAO is that the last proposers of an epoch control a few bits of the randomness. While this is still safe for deciding proposer scheduling, it may not be safe for Arbitrum to rely on, since the stake of a few L1 validators may be worth less than what those bits could extract on L2.
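A rough way to quantify that "few bits" concern: a party controlling the last k proposal slots of an epoch can choose among up to 2^k candidate RANDAO mixes by selectively withholding blocks. A back-of-envelope model (treating the candidate mixes as independent draws, which is only an approximation):

```python
# Back-of-envelope for RANDAO tail bias: an attacker controlling the
# last k slots of an epoch can choose among up to 2**k candidate mixes
# (propose or withhold each block). If an L2 lottery pays out with
# probability p per mix, the attacker wins if ANY candidate is favorable.
# Candidate mixes are modeled as independent, which is an approximation.

def win_probability_with_bias(p: float, k: int) -> float:
    # Probability that at least one of 2**k candidate outcomes wins.
    return 1.0 - (1.0 - p) ** (2 ** k)

# e.g. a 1% lottery with 3 controlled tail slots (8 candidate mixes):
# honest chance 1%, biased chance ~7.7%
```

So even a handful of controlled tail slots multiplies the attacker's odds, which is why the value at stake on L2 matters relative to the L1 validators' stake.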
No worries, understandable about maintaining moderation and high signal:noise ratio on the forum.
This comment is a bit divergent, but what are the risks of decreasing the force-inclusion time on the delayed inbox from 24 hours to a much lower value? Is it about L1 reorgs, i.e. ensuring the delayed inbox is definitely finalized?
And point taken, PREVRANDAO is not a perfect source of randomness, and maybe more trouble than it’s worth to include on L2.