FRIDA: Data-Availability Sampling from FRI

Introduction

Recently Ethereum deployed EIP-4844 (Proto-Danksharding) to lower the cost of “data-availability” on Ethereum. The high-level goal of data-availability is, well, to ensure that a piece of data is available to everyone who wants it.

Data-availability is primarily a concern in relation to roll-ups, where the problem it solves is the following: even if a roll-up proves honest execution using a SNARK, what happens if the roll-up operator stops telling people which transactions it is executing? In that case L1 (Ethereum) would only have a commitment (state root) to the roll-up state, and problems arise if the operator, for instance, simply stops executing transactions. Users would then be unable to exit the roll-up, because they cannot produce state proofs (e.g. Merkle paths) against the state root: they don’t know the state of the roll-up. The result? Their funds are stuck until the operator provides the entire state of the roll-up.

Data-availability solves this by forcing the roll-up operator to post the underlying data (transactions) publicly. This in turn enables anyone to recompute the full state of the roll-up, and thus to exit the roll-up if e.g. the operator goes offline.

Up until EIP-4844, the cost of data-availability was high, because data-availability was achieved by posting the data on-chain as Ethereum calldata. Posting every transaction you execute on your fancy L2 directly on Ethereum. Ouch.

The problem with calldata is that it is needed to recompute the state of Ethereum from genesis, so everyone has to keep it around forever, and therefore it needs to be expensive. To improve solutions to the data-availability problem, we need to avoid everyone having to keep the data around forever. EIP-4844 was the first step in this direction: it allocates space in blocks for so-called blobs, which can only be accessed indirectly from within the EVM via a cryptographic commitment.

You can’t access the contents of blobs from the EVM; the only operation allowed is to check polynomial evaluation proofs against the blob via the point_evaluation_precompile: the caller provides a (blob) commitment, a (KZG) opening proof, an evaluation point $z$ and the claimed evaluation $y$, and the precompile verifies the KZG opening proof.
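For concreteness, here is a rough sketch of what the precompile checks, loosely following the structure of the EIP-4844 specification; the actual pairing check is delegated to a `verify_kzg_proof` routine that we assume is supplied by a KZG library.

```python
import hashlib

# Constants from EIP-4844: a blob is 4096 field elements over the BLS12-381 scalar field.
FIELD_ELEMENTS_PER_BLOB = 4096
BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513
VERSIONED_HASH_VERSION_KZG = b"\x01"

def kzg_to_versioned_hash(commitment: bytes) -> bytes:
    # Blobs are referenced from the EVM by a hash of their KZG commitment,
    # with the first byte replaced by a version tag.
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(commitment).digest()[1:]

def point_evaluation(data: bytes, verify_kzg_proof) -> bytes:
    # `verify_kzg_proof(commitment, z, y, proof) -> bool` is assumed to be provided
    # by a KZG library; it performs the actual pairing check that p(z) = y.
    assert len(data) == 192
    versioned_hash = data[0:32]
    z = int.from_bytes(data[32:64], "big")
    y = int.from_bytes(data[64:96], "big")
    commitment = data[96:144]  # 48-byte compressed G1 point
    proof = data[144:192]      # 48-byte compressed G1 point

    assert z < BLS_MODULUS and y < BLS_MODULUS
    assert kzg_to_versioned_hash(commitment) == versioned_hash
    assert verify_kzg_proof(commitment, z, y, proof)

    # On success, the precompile returns the blob length and the field modulus.
    return FIELD_ELEMENTS_PER_BLOB.to_bytes(32, "big") + BLS_MODULUS.to_bytes(32, "big")
```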

The beauty of this is that we no longer need the blob contents to compute the state of Ethereum: we only need the blob commitment and the KZG opening proofs. This enables Ethereum to throw away the blob itself after a certain time, roughly 2 weeks, without losing the ability to recompute the state of Ethereum from genesis. If you are interested in the blob contents, make sure to download them before full nodes throw them away.

This solves the forever part of the data-availability problem. But it doesn’t solve the everyone part: even with EIP-4844, every full node still needs to download every blob in a block to ensure that it’s available before signing off on the block.

The next step is to shard the data in such a way that each node in the network only needs to download a small fraction of the blob. This idea has been dubbed “data availability sampling” within the Ethereum community, and it is related to, but does not coincide with, several existing notions in the literature, discussed at length in the Foundations of Data Availability Sampling paper by Hall-Andersen (zkSecurity), Simkin (Ethereum), and Wagner (Ethereum).

The most immediate way to go from Proto-Danksharding to Danksharding (data availability sampling) is to extend the existing KZG-based scheme to let nodes sample from an erasure-coded version of the blob. This scheme, along with several others, is also covered in the Foundations of Data Availability Sampling paper.
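As a toy illustration of the erasure-coding step (not the production layout, which works with evaluations on a power-of-two domain rather than coefficients), a blob can be stretched into a rate-1/2 Reed-Solomon codeword: any half of the coded symbols determine the original blob, and since KZG commits to the same polynomial, each coded symbol can be opened against the commitment with an evaluation proof.

```python
# Toy sketch: view the blob as the coefficients of a polynomial of degree < n over
# the BLS12-381 scalar field and evaluate it on 2n distinct points. Any n of the
# resulting 2n symbols suffice to interpolate the polynomial and recover the blob.

P = 0x73EDA753299D7D483339D80809A1D80553BDA402FFFE5BFEFFFFFFFF00000001

def rs_extend(blob: list[int], expansion: int = 2) -> list[int]:
    codeword = []
    for x in range(expansion * len(blob)):  # 2n distinct evaluation points
        acc, power = 0, 1
        for coeff in blob:                  # naive evaluation, kept simple for clarity
            acc = (acc + coeff * power) % P
            power = (power * x) % P
        codeword.append(acc)
    return codeword

blob = [3, 1, 4, 1, 5, 9, 2, 6]   # 8 "field elements"
symbols = rs_extend(blob)         # 16 coded symbols; any 8 of them determine the blob
```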

But what about using FRI for data-availability sampling?

FRIDA: Data-Availability Sampling from FRI

In an upcoming Crypto 2024 paper by Hall-Andersen (zkSecurity), Simkin (Ethereum), and Wagner (Ethereum), we propose a new scheme for data-availability sampling from FRI. The central observation is that the FRI proximity test actually satisfies additional properties: if a prover can make a commitment pass the FRI proximity test, then every additional position passing the query phase of FRI must agree with the closest codeword. This means that after running the FRI proximity test once, you can view the FRI commitment as a vector commitment to a Reed-Solomon codeword, in which any position can be opened at a cost of $O(\log^2(n))$.
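To give a feel for what such an opening could involve, here is a minimal sketch of the per-position consistency check, assuming the usual degree-2 FRI folding $f_{i+1}(x^2) = \tfrac{f_i(x)+f_i(-x)}{2} + \beta_i \cdot \tfrac{f_i(x)-f_i(-x)}{2x}$ and eliding the Merkle authentication paths that make up most of the $O(\log^2(n))$ cost; the function names below are ours, not the paper's.

```python
# Minimal sketch: opening position x of the committed codeword reveals the pair
# (f_i(x_i), f_i(-x_i)) at every FRI layer i (each authenticated by a Merkle path,
# omitted here) and checks that the folds are consistent down to the final constant.

P = 0x73EDA753299D7D483339D80809A1D80553BDA402FFFE5BFEFFFFFFFF00000001

def fold(fx: int, f_neg_x: int, x: int, beta: int) -> int:
    # Degree-2 fold: f_{i+1}(x^2) = (f_i(x) + f_i(-x))/2 + beta * (f_i(x) - f_i(-x))/(2x).
    even = (fx + f_neg_x) * pow(2, -1, P) % P
    odd = (fx - f_neg_x) * pow(2 * x, -1, P) % P
    return (even + beta * odd) % P

def check_opening(x: int, layers: list[tuple[int, int]], betas: list[int], final: int) -> bool:
    # layers[i] = (f_i(x_i), f_i(-x_i)) with x_0 = x and x_{i+1} = x_i^2.
    xi = x
    for i, (fx, f_neg_x) in enumerate(layers):
        folded = fold(fx, f_neg_x, xi, betas[i])
        xi = xi * xi % P
        expected = layers[i + 1][0] if i + 1 < len(layers) else final
        if folded != expected:
            return False
    return True
```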

This is much cheaper than executing the FRI proximity test for every position, e.g. opening using the “quotienting trick”, which requires $O(n)$ computation and $O(\lambda \log^2(n))$ communication.

We propose that this observation could be applied to data-availability sampling. Essentially, every node downloads the FRI commitment and verifies the FRI proximity test; this ensures that the commitment is “well-formed”, i.e. the committed vector is close to the Reed-Solomon code. Then, every node samples random positions in the vector and downloads the corresponding opening proofs from the vector commitment (without any additional FRI proximity test).
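Put together, a sampling node's job could look roughly like the sketch below; the helpers (verify_fri_proximity, fetch_opening, verify_opening) are hypothetical placeholders for the FRI verifier, the network layer, and the vector-commitment opening check, not names from the paper.

```python
import secrets

def sample_availability(commitment, proximity_proof, n, num_samples,
                        verify_fri_proximity, fetch_opening, verify_opening) -> bool:
    # Step 1: verify the FRI proximity test once. This is what guarantees that the
    # commitment is "well-formed", i.e. close to a Reed-Solomon codeword.
    if not verify_fri_proximity(commitment, proximity_proof):
        return False

    # Step 2: sample random positions and check the cheap per-position openings
    # against the commitment; no further proximity tests are needed.
    for _ in range(num_samples):
        i = secrets.randbelow(n)
        opening = fetch_opening(i)
        if opening is None or not verify_opening(commitment, i, opening):
            return False
    return True
```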

If you are interested in additional details, the paper is also available (no pun intended) on eprint.