
Checkpoints prevented an attack where a node could mine many low-difficulty blocks forking off from an early point in the block chain and serve those blocks to syncing nodes, whose disks would fill up. This is described in https://en.bitcoin.it/wiki/Bitcoin_Core_0.11_(ch_5):_Initial_Block_Download#Checkpoints.

Now that checkpoints are removed, we would be vulnerable to this attack again. But it is stated in https://bitcoin.stackexchange.com/a/75735/69518 that this is prevented by the newer headers-first synchronization mechanism.

How does the headers-first synchronization prevent the fill-disk attack?

A syncing node downloads all headers from a single peer. If that peer only sends headers from its malicious low-difficulty branch, won't the syncing node try to download all those blocks and get its disk filled?

Kalle Rosenbaum

1 Answer


since checkpoints are removed, we would be vulnerable to this attack

Where did you see that checkpoints are removed? They're still in src/chainparams.cpp as of current master (commit f8bcef38f).
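
For context, a checkpoint is just a hard-coded mapping from block height to the expected block hash, and a block at a checkpointed height is rejected unless its hash matches. Here is a minimal standalone sketch of that idea (not a copy of Bitcoin Core's actual chainparams/validation code; the heights are real mainnet checkpoint heights, but the hashes are elided placeholders):

```
// Minimal sketch of checkpoint checking; not Bitcoin Core's actual code.
#include <iostream>
#include <map>
#include <string>

// height -> expected block hash (hex). In Bitcoin Core the values are
// uint256 constants built with uint256S("0x...").
static const std::map<int, std::string> kCheckpoints = {
    { 11111,  "0x<hash elided>" },
    { 295000, "0x<hash elided>" },  // last mainnet checkpoint
};

// A block at a checkpointed height must match the hard-coded hash.
bool CheckAgainstCheckpoints(int height, const std::string& hash) {
    auto it = kCheckpoints.find(height);
    return it == kCheckpoints.end() || it->second == hash;
}

int main() {
    std::cout << CheckAgainstCheckpoints(295000, "0x<hash elided>") << "\n";   // 1: matches
    std::cout << CheckAgainstCheckpoints(295000, "0xsomething-else") << "\n";  // 0: rejected
    std::cout << CheckAgainstCheckpoints(300000, "0xanything") << "\n";        // 1: no checkpoint here
}
```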

How does the headers-first synchronization prevent the fill-disk attack?

Headers-first prevents a related attack where the client would accept "orphan" blocks (blocks with no known parent) and store them until their parent was received. Accepting orphan blocks avoided wasted bandwidth and sped up syncing in the normal case, but it made disk-fill attacks easier in the pathological case.


Headers-first allows Bitcoin Core to discover the block chain (headers chain) with the most proof of work that its peers know about before downloading any blocks, which lets it ensure any blocks it receives are on that chain. This, in turn, means that it never needs to download or store orphan blocks.
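
As a toy illustration of why that rules out orphan storage (made-up chains and work values; this is not Bitcoin Core code): headers for competing chains are collected first, total work is compared, and full blocks are requested only along the most-work header chain, so a block whose header isn't on that chain is never downloaded or stored.

```
// Toy model of headers-first block download; not Bitcoin Core code.
#include <iostream>
#include <string>
#include <vector>

struct Header {
    std::string hash;  // identifier of the block this header commits to
    double work;       // work implied by the header's difficulty target
};

// Total work of a header chain is the sum of per-header work.
double ChainWork(const std::vector<Header>& chain) {
    double total = 0.0;
    for (const auto& h : chain) total += h.work;
    return total;
}

int main() {
    // Honest chain: fewer headers, far more work per header.
    std::vector<Header> honest   = {{"H1", 1000.0}, {"H2", 1000.0}, {"H3", 1000.0}};
    // Attacker's chain: many cheap, low-difficulty headers.
    std::vector<Header> attacker = {{"A1", 1.0}, {"A2", 1.0}, {"A3", 1.0}, {"A4", 1.0}};

    // Pick the header chain with the most total work...
    const std::vector<Header>& best =
        ChainWork(honest) >= ChainWork(attacker) ? honest : attacker;

    // ...and only request full blocks along that chain. The attacker's
    // blocks are never requested, so they never touch the disk.
    for (const auto& h : best) {
        std::cout << "getdata block " << h.hash << "\n";
    }
}
```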

A syncing node downloads all headers from a single peer. If that peer only sends headers from its malicious low-difficulty branch, won't the syncing node try to download all those blocks and get its disk filled?

That's correct, and that's why checkpoints are still used in the code as far as I know. My understanding is that, for the ultimate removal of checkpoints, three things were desired:

  • Minimum chainwork: a feature coded into a node that tells it the legitimate chain must have at least X amount of chainwork, with X being set to the value for a recent block near the time of a software release. This replaces the original use of checkpoints in preventing network-level attackers from feeding clients long, low-PoW chains that contain valid blocks but aren't the consensus best block chain. This was deployed in Bitcoin Core 0.13.2 (see the sketch of this and the next item after this list).

  • Assumed valid blocks: a feature designed to replace the secondary use of checkpoints for (optionally) speeding up Initial Block Download (IBD) by skipping validation of signatures in old blocks. This was deployed in Bitcoin Core 0.14.

  • A minimum difficulty soft fork: a change to directly address the block-fill (or header-fill) attack you described by raising the minimum difficulty at various epochs in the block chain to correspond roughly with the actual observed increases in difficulty. This would make it more expensive for an attacker to feed fake blocks to a node. To the best of my knowledge this has not yet reached the BIP stage and I'm not sure it's currently being actively championed.
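
As a rough sketch of how the first two items behave during sync (the names nMinimumChainWork and defaultAssumeValid mirror parameters in src/chainparams.cpp, but the values and logic below are simplified placeholders, not Bitcoin Core's implementation):

```
// Simplified standalone sketch of minimum chainwork and assumed-valid blocks;
// not Bitcoin Core's implementation.
#include <iostream>

int main() {
    // Minimum chainwork: a peer's headers chain is ignored for block download
    // until its total work exceeds a value hard-coded at release time.
    const double nMinimumChainWork = 1.0e6;  // placeholder; the real value is a uint256 total work
    const double peerChainWork = 5.0e2;      // e.g. a long chain of low-difficulty headers

    if (peerChainWork < nMinimumChainWork) {
        std::cout << "too little work: don't download blocks from this chain\n";
    }

    // Assumed-valid block: signature/script checks are skipped for blocks that
    // are ancestors of a block hash hard-coded at release time; everything
    // else (PoW, merkle roots, UTXO updates) is still validated.
    const bool isAncestorOfAssumeValid = true;  // placeholder lookup
    std::cout << (isAncestorOfAssumeValid ? "skipping signature checks\n"
                                          : "verifying signatures\n");
}
```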

For reference, this topic was discussed in the 2017-03-02 Bitcoin Core developer meeting: https://bitcoincore.org/en/meetings/2017/03/02/#discussion

David A. Harding
  • Yes, it seems I was wrong in assuming that the old checkpoints had been removed. So without the minimum difficulty soft fork, we're still vulnerable to the disk-fill attack from the last checkpoint, which is at height 295000. The difficulty has increased about 1000x since then, so the attack may be feasible. I also see a line `// The best chain should have at least this much work. consensus.nMinimumChainWork = uint256S("0x000000000000000000000000000000000000000000f91c579d57cad4bc5278cc");`, which seems to require any chain to have a lot of PoW to be considered ok. – Kalle Rosenbaum Jun 07 '18 at 12:33
  • I see there's a `nMinimumChainWork` parameter, that's used in https://github.com/bitcoin/bitcoin/blob/f8bcef38fb9be48f0f5908a6c4c0cbe8c5a729d6/src/net_processing.cpp#L1511. That should cover our asses? – Kalle Rosenbaum Jun 07 '18 at 12:44
  • AFAIK, that doesn't prevent header-fill attacks. An exclusive peer can still feed you a header chain with 5 trillion diff-1 headers for about the same cost of producing a block at the tip. 80 bytes per header times 5 trillion is 400 terabytes, though probably something else breaks before then (like block header time rolls over). – David A. Harding Jun 07 '18 at 12:50
  • Right. But let's see how many headers we need before consensus rules break. We need monotonically increasing MTP, which means that the block timestamps need to increase at least one second per block, right? So we can't have more block headers than there are seconds since the genesis block (+/- a few hours). That's about 315,360,000 blocks. 315 million block headers is "just" 25 GB. That's fine, since the node should have 200 GB to spare at header sync. This of course relies on 1) monotone MTP being verified on header sync and 2) blocks from the future not being allowed during header sync. – Kalle Rosenbaum Jun 07 '18 at 13:06
  • @Kalle: an attacker can produce multiple low-difficulty branches if there was no protection against flooding by low-difficulty headers. Also, blocks only need a timestamp that is strictly larger than the median of the past 11 ones - that translates to incrementing the timestamp every 6 blocks AFAIK. – Pieter Wuille Jun 07 '18 at 15:37
  • Summarizing the comments gathered: You can't quickly produce a long branch of low-difficulty headers from the last checkpoint with minimally increasing timestamps, because that would force your difficulty to go up. You're better off producing many 1-block branches from the last checkpoint. The current difficulty is 1000x that of the last checkpoint, so even if the whole network's hash rate cooperated, they couldn't currently produce more than ~6000*80=480kB of headers per hour. I think I can relax for now. Thanks for the clarifications @Pieter, I should've consulted my own book before posting about MTP :-) – Kalle Rosenbaum Jun 08 '18 at 08:06
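
For anyone who wants to double-check the arithmetic in the comments above, here is a quick back-of-the-envelope calculation (80 bytes per header; the other figures are taken straight from the comments, not measured):

```
// Back-of-the-envelope numbers from the comment thread; 80 bytes per header.
#include <iostream>

int main() {
    const double headerBytes = 80.0;

    // ~5 trillion difficulty-1 headers:
    std::cout << 5e12 * headerBytes / 1e12 << " TB\n";             // 400 TB

    // One header per second since genesis, ~315,360,000 headers:
    std::cout << 315360000.0 * headerBytes / 1e9 << " GB\n";       // ~25 GB

    // Whole network mining at the last checkpoint's difficulty (~1000x
    // lower than today's): ~6000 headers per hour.
    std::cout << 6000.0 * headerBytes / 1e3 << " kB per hour\n";   // 480 kB/h
}
```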