#43114 [BC-Critical] Attackers can cause a total network shutdown by exploiting the missing blob size check in the DA Lightnode
Submitted on Apr 2nd 2025 at 10:14:19 UTC by @perseverance for Attackathon | Movement Labs
Report ID: #43114
Report Type: Blockchain/DLT
Report severity: Critical
Target: https://github.com/immunefi-team/attackathon-movement/tree/main/protocol-units/da/movement/
Impacts:
Network not being able to confirm new transactions (total network shutdown)
Description
Short summary
Attackers can cause a total network shutdown by exploiting the missing blob size check in the DA Lightnode. Celestia enforces a hard cap of 2 MiB on blob size; blobs larger than that are rejected. The DA Lightnode does not check the blob size before submitting it through the Celestia RPC client. When such a blob is rejected, none of the user transactions it contains are executed, causing a total network shutdown. Since the attacker's own transactions are not executed either, the attack is essentially free: the only cost is sending the spam transactions, which makes the threat very high.
Background Information
Transactions that pass validation ==> are included in the mempool ==> are submitted (batch_write) to the DA Lightnode.
Step 1: Transactions from users are sent to the Sender side of a multi-producer single-consumer (mpsc) channel.
Step 2: Transactions are read from the Receiver side of the mpsc channel. This is a FIFO channel. (Ref: https://doc.rust-lang.org/std/sync/mpsc/index.html)
Step 3: Transactions are pushed into a transactions vector.
Step 4: da_light_node_client.batch_write is called to batch_write the transactions to the DA Lightnode.
Note that when transactions are read from the Receiver, they are removed from the FIFO channel.
https://github.com/immunefi-team/attackathon-movement/blob/a2790c6ac17b7cf02a69aea172c2b38d2be8ce00/networks/movement/movement-full-node/src/node/tasks/transaction_ingress.rs#L38-L129
Notice that the loop in spawn_write_next_transaction_batch batches all transactions received within the block_building_parameters window. This parameter is configurable and defaults to 1000 ms, i.e. 1 second.
https://github.com/immunefi-team/attackathon-movement/blob/a2790c6ac17b7cf02a69aea172c2b38d2be8ce00/protocol-units/sequencing/memseq/util/src/lib.rs#L22
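The sketch below illustrates this batching loop. It is a simplified illustration, not the actual implementation: the type names (`SignedTransaction`, `DaLightNodeClient`) and the exact draining logic are assumptions based on the description above. The point it shows is that everything read from the channel within the window ends up in a single batch_write call, with no byte-size accounting.

```rust
use std::time::Duration;
use tokio::sync::mpsc::Receiver;
use tokio::time::{timeout_at, Instant};

// Hypothetical stand-ins for the real transaction and client types.
struct SignedTransaction(Vec<u8>);

struct DaLightNodeClient;
impl DaLightNodeClient {
    async fn batch_write(&self, _txs: Vec<SignedTransaction>) -> Result<(), String> {
        Ok(())
    }
}

/// Simplified sketch: drain the mpsc receiver for one building window and
/// submit everything as a single batch, with no accounting of total bytes.
async fn write_next_transaction_batch(
    receiver: &mut Receiver<SignedTransaction>,
    client: &DaLightNodeClient,
    building_time: Duration, // e.g. the 1000 ms default noted above
) -> Result<(), String> {
    let deadline = Instant::now() + building_time;
    let mut transactions = Vec::new();

    // Collect until the window closes (timeout) or the channel is closed.
    while let Ok(Some(tx)) = timeout_at(deadline, receiver.recv()).await {
        transactions.push(tx);
    }

    if !transactions.is_empty() {
        // Every transaction received this second ends up in one batch,
        // no matter how large the combined payload is.
        client.batch_write(transactions).await?;
    }
    Ok(())
}
```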
The DA Lightnode can work in two modes, blocky or sequencer mode:
https://github.com/immunefi-team/attackathon-movement/blob/a2790c6ac17b7cf02a69aea172c2b38d2be8ce00/protocol-units/da/movement/protocol/light-node/README.md#L1-L6
Blocky Mode of DA Lightnode
If the DA Lightnode works in blocky mode, the block of transactions received from the mempool is signed to create the blob data and sent to Celestia.
The code to send the blob to Celestia is here:
https://github.com/immunefi-team/attackathon-movement/blob/a2790c6ac17b7cf02a69aea172c2b38d2be8ce00/protocol-units/da/movement/protocol/light-node/src/passthrough.rs#L229-L248
Since the DA Lightnode is sending to Celestia, it uses the Celestia RPC client:
https://github.com/immunefi-team/attackathon-movement/blob/a2790c6ac17b7cf02a69aea172c2b38d2be8ce00/protocol-units/da/movement/providers/celestia/src/da/mod.rs#L59-L80
https://github.com/immunefi-team/attackathon-movement/blob/a2790c6ac17b7cf02a69aea172c2b38d2be8ce00/protocol-units/da/movement/providers/celestia/src/da/mod.rs#L42-L52
Notice that there is no check, or any other way to control the size of the blob, before calling blob_submit.
If blob_submit returns an error, the error is propagated back to the caller, but there is no mechanism to retry, split, or rebuild the block.
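A minimal sketch of that submit path is shown below. The types and method bodies are assumptions for illustration only (the real provider wraps the Lumina/Celestia RPC client); it shows how an oversized blob simply turns into an error that is returned upward with no retry or re-chunking, so the whole batch is lost.

```rust
// Illustrative sketch only; types and method names here are assumptions,
// not the actual movement-celestia-da provider API.
struct CelestiaBlob {
    data: Vec<u8>,
}

struct CelestiaRpcClient;
impl CelestiaRpcClient {
    /// Stand-in for the real blob_submit RPC: Celestia rejects blobs whose
    /// encoded size exceeds the ~2 MiB transaction size limit (CIP-28).
    async fn blob_submit(&self, blob: &CelestiaBlob) -> Result<u64, String> {
        const CELESTIA_TX_SIZE_LIMIT: usize = 2 * 1024 * 1024;
        if blob.data.len() > CELESTIA_TX_SIZE_LIMIT {
            return Err("blob exceeds Celestia transaction size limit".into());
        }
        Ok(42) // pretend inclusion height
    }
}

/// Sketch of the submit flow described above: the error is returned to the
/// caller, but the block is never split, retried, or rebuilt, so every
/// transaction batched into the oversized blob is lost.
async fn submit_blob(client: &CelestiaRpcClient, blob: CelestiaBlob) -> Result<u64, String> {
    // Note: no size check before calling blob_submit.
    client.blob_submit(&blob).await
}
```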
Celestia Hard Caps Transaction Size at 2 MiB
https://docs.celestia.org/how-to-guides/mainnet#transaction-size-limit
As specified in CIP-28, there is a 2 MiB (2,097,152 bytes) limit on individual transaction size. This limit was implemented to maintain network stability and provide clear expectations for users and developers, even as block sizes may be larger.
So if the blob data exceeds this limit, the transaction will be rejected.
This is enforced in the Celestia code:
https://github.com/celestiaorg/celestia-node/blob/e9026800ed67859deb0a4f31e832a4586bee1d45/blob/blob.go#L93-L95
The default RPC client that Movement currently uses is the Lumina RPC client:
https://github.com/immunefi-team/attackathon-movement/blob/a2790c6ac17b7cf02a69aea172c2b38d2be8ce00/Cargo.toml#L231
I checked the code and found a test case that confirms this: if the blob size is bigger than the size limit, default_client.blob_submit returns an error.
https://github.com/eigerco/lumina/blob/main/rpc/tests/blob.rs#L162-L170
Sequencer Mode of DA Lightnode
If the DA Lightnode works in "Sequencer" mode, then it is more complicated, but the same issue that blob size is not controlled.
Blob Data is received from Mempool and then write to RockDB Mempool.
Then the transactions are pop from RockDB Mempool to build block. This is happening during the tick.
In the Sequencer, the transactions in a block are limited to block_size that can be configured. The default value is default max block size is 2048 as noticed above.
https://github.com/immunefi-team/attackathon-movement/blob/a2790c6ac17b7cf02a69aea172c2b38d2be8ce00/protocol-units/sequencing/memseq/sequencer/src/lib.rs#L101-L131
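The sketch below shows why a count-only block_size limit does not bound the blob size. The `Mempool` type here is a hypothetical stand-in for the RocksDB-backed mempool, not the real memseq implementation.

```rust
/// Hypothetical stand-in for the RocksDB-backed mempool described above.
struct Mempool {
    pending: Vec<Vec<u8>>, // serialized transactions
}

impl Mempool {
    /// Sketch of building a block on each tick: up to `block_size`
    /// transactions are popped by count, with no accounting of bytes.
    fn build_block(&mut self, block_size: usize) -> Vec<Vec<u8>> {
        let take = self.pending.len().min(block_size);
        self.pending.drain(..take).collect()
    }
}

fn main() {
    // 2048 transactions of ~63 KiB each easily pass the count limit
    // while producing far more than Celestia's 2 MiB cap.
    let mut mempool = Mempool {
        pending: vec![vec![0u8; 63 * 1024]; 2048],
    };
    let block = mempool.build_block(2048);
    let total_bytes: usize = block.iter().map(|tx| tx.len()).sum();
    println!("block of {} txs, {} bytes", block.len(), total_bytes); // ~126 MiB
}
```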
Then the block is sent to "Sender" of Multi Producer Single Receiver Channel.
https://github.com/immunefi-team/attackathon-movement/blob/a2790c6ac17b7cf02a69aea172c2b38d2be8ce00/protocol-units/da/movement/protocol/light-node/src/sequencer.rs#L142-L162
Then the block is sent to Celestia through the same code path as passthrough.rs analyzed above:
https://github.com/immunefi-team/attackathon-movement/blob/a2790c6ac17b7cf02a69aea172c2b38d2be8ce00/protocol-units/da/movement/protocol/light-node/src/sequencer.rs#L219-L241
https://github.com/immunefi-team/attackathon-movement/blob/a2790c6ac17b7cf02a69aea172c2b38d2be8ce00/protocol-units/da/movement/protocol/light-node/src/sequencer.rs#L164-L173
To summarize: in blocky mode of the DA Lightnode, user transactions are batched into a single blob over the block_building_parameters window.
In sequencer mode of the DA Lightnode, the number of transactions in a block is limited to the configurable block_size (default 2048), and the block building time is limited to block_building_parameters.
The bug is that there is no mechanism to control the byte size of a blob. So if the blob size exceeds Celestia's transaction size limit, all user transactions in the block are lost and never executed.
Attack Scenario
Movement's transaction size limit is currently 64 KiB per transaction. This is hardcoded as shown below:
https://github.com/immunefi-team/attackathon-movement-aptos-core/blob/627b4f9e0b63c33746fa5dae6cd672cbee3d8631/aptos-move/aptos-gas-schedule/src/gas_schedule/transaction.rs#L69-L73
So an attacker can create 1000 valid transactions with the maximum possible size.
For example, an attacker can create 1000 accounts. Each account submits a transaction that deploys a large smart contract, with each transaction about 63 KiB.
The total size of all transactions would then be: 63 * 1024 * 1000 = 64,512,000 bytes.
Note that Movement currently allows one user to have up to 32 in-flight transactions (the sequence number tolerance), which would make it even easier for the attacker: he would only need 32 accounts. But this is optional, as it is not needed for this attack.
To bypass rate limiting, the attacker can submit the transactions from different machines, IP addresses, etc.
Since these are valid transactions, the Movement node should accept them.
All 1000 transactions sent within 1 second will be batched from the mempool into a single blob.
Blob Compression Before Sending and the Compression Ratio
Before sending to Celestia, the blob data is compressed using the zstd library.
https://github.com/immunefi-team/attackathon-movement/blob/a2790c6ac17b7cf02a69aea172c2b38d2be8ce00/protocol-units/da/movement/providers/celestia/src/blob/ir.rs#L26-L49
The library is zstd, in this case: https://github.com/gyscos/zstd-rs
https://github.com/immunefi-team/attackathon-movement/blob/a2790c6ac17b7cf02a69aea172c2b38d2be8ce00/Cargo.toml#L345
The compression level is 0, which maps to the default level of 3.
https://github.com/gyscos/zstd-rs/blob/229054099aa73f7e861762f687d7e07cac1d9b3b/src/stream/functions.rs#L27-L36
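For reference, here is a minimal example of compressing a payload with the zstd crate at level 0 (which the crate resolves to its default level, 3). The payload here is synthetic and stands in for a serialized block of transactions; it is not taken from the Movement codebase.

```rust
fn main() -> std::io::Result<()> {
    // Synthetic payload standing in for a serialized block of transactions.
    let payload = vec![0u8; 64 * 1024];

    // Level 0 tells the zstd crate to use its default compression level (3).
    let compressed = zstd::encode_all(payload.as_slice(), 0)?;

    println!(
        "input: {} bytes, compressed: {} bytes",
        payload.len(),
        compressed.len()
    );
    Ok(())
}
```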
I researched the compression level and found that with level 3 the output is typically 2.5 to 3.5 times smaller than the input.
For example: https://github.com/klauspost/compress/blob/master/zstd/README.md
I also ran benchmark.rs (https://github.com/gyscos/zstd-rs/blob/main/examples/benchmark.rs), which gave a compression ratio of roughly 3x (the output is about 3 times smaller than the input).
So the compression ratio is about 3x; the actual ratio depends on the data. Let's generously assume a 4x compression ratio.
With the input size above of 64,512,000 bytes, the result would be about 16,128,000 bytes, which is still far bigger than the 2 MiB cap.
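A small check of that arithmetic, using the assumed 4x compression ratio from above:

```rust
fn main() {
    let raw: u64 = 63 * 1024 * 1000;            // 1000 transactions of 63 KiB each
    let compressed_estimate = raw / 4;          // assumed 4x compression ratio
    let celestia_limit: u64 = 2 * 1024 * 1024;  // 2 MiB transaction/blob cap

    println!("raw size: {} bytes", raw);                           // 64,512,000
    println!("compressed estimate: {} bytes", compressed_estimate); // 16,128,000
    println!("exceeds cap: {}", compressed_estimate > celestia_limit); // true
}
```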
So if the attacker sends these transactions for every block, all valid user transactions are dropped and never executed. Since the attacker's transactions are dropped too, the attack has no gas cost; there is only the small cost of sending the requests and the capital needed to prepare the 1000 accounts.
The capital needed: 1000 accounts * 1 MOVE = 1000 * 0.4 USD = 400 USD (price: https://coinmarketcap.com/currencies/movement/)
1 MOVE is more than enough to pay gas for a large smart contract deployment; the actual cost is probably lower.
Severity Assessment
Bug Severity: Critical
Impact:
Network not being able to confirm new transactions (total network shutdown)
Likelihood:
High, as no special privileges are required.
The attack can be executed by any attacker at a very low cost.
The ongoing cost of the attack is effectively 0, because the attacker's transactions are never executed and so no gas is spent.
The capital required is < 400 USD.
Recommendation
Implement a mechanism to control the blob size so that, after compression, it is smaller than Celestia's blob size limit of 2 MiB, for example by splitting an oversized batch into multiple blobs.
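The sketch below shows one possible mitigation. The function name, the greedy split strategy, and re-compressing on every transaction are assumptions for illustration, not a prescribed implementation; a production version would track the compressed size incrementally rather than recompressing the whole candidate each time.

```rust
const CELESTIA_BLOB_SIZE_LIMIT: usize = 2 * 1024 * 1024; // 2 MiB

/// Sketch: greedily pack serialized transactions into chunks whose
/// zstd-compressed size stays under the Celestia limit, so an oversized
/// batch becomes several valid blobs instead of one rejected blob.
/// Assumes each individual transaction is well under the cap (Movement's
/// per-transaction limit is 64 KiB).
fn split_into_blobs(transactions: &[Vec<u8>]) -> std::io::Result<Vec<Vec<u8>>> {
    let mut blobs = Vec::new();
    let mut current: Vec<u8> = Vec::new();

    for tx in transactions {
        let mut candidate = current.clone();
        candidate.extend_from_slice(tx);

        // Check the *compressed* size, since that is what Celestia sees.
        let compressed = zstd::encode_all(candidate.as_slice(), 0)?;
        if compressed.len() > CELESTIA_BLOB_SIZE_LIMIT && !current.is_empty() {
            // Close the current blob and start a new one with this transaction.
            blobs.push(std::mem::take(&mut current));
            current.extend_from_slice(tx);
        } else {
            current = candidate;
        }
    }
    if !current.is_empty() {
        blobs.push(current);
    }
    Ok(blobs)
}
```

Whatever strategy is chosen (splitting, deferring transactions to the next block, or rejecting oversized batches), the key point is to bound the compressed blob size before blob_submit is called.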
Proof of Concept
The following attack scenario demonstrates the bug.
Preparation: the attacker creates 1000 accounts, each funded with 1 MOVE.
Step 1: The attacker creates 1000 valid transactions of the maximum possible size and sends them every second.
Each account submits a transaction deploying a large smart contract; each transaction is about 63 KiB.
So the total size of all transactions is 63 * 1024 * 1000 = 64,512,000 bytes.
To bypass rate limiting, the attacker can submit the transactions from different machines, IP addresses, etc.
Since these are valid transactions, the Movement node should accept them.
All 1000 transactions sent within 1 second will be batched from the mempool into a single blob.
Expected output: all user transactions, including the attacker's transactions, are dropped.
The attached image or sequence diagram below illustrates the attack