
In a hypothetical future where a quantum computer is able to break the cryptography protecting Bitcoin private keys, one solution would be to move our coins to quantum-resistant addresses before it happens.

What would this switch look like? Would every UTXO have to publish on the blockchain which new quantum-resistant address it is moving to, and what would these addresses look like?

Glorfindel
Saxtheowl

1 Answer

There has been some literature discussing this and a migration strategy:

The 2nd referenced paper describes a commit-reveal scheme that would prevent your funds from being stolen while you migrate them to a new, quantum-resistant address.
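
At a high level, the idea is to publish a binding hash commitment first and reveal the vulnerable public key only later. Below is a minimal sketch of that idea, assuming a simple salted hash commitment; the function names and the exact commitment format are illustrative, not the paper's actual protocol.

```python
import hashlib
import os

def make_commitment(classical_pubkey: bytes, pq_address: bytes) -> tuple[bytes, bytes]:
    """Phase 1: publish only a hash binding the old key to its new
    quantum-resistant destination. The random salt keeps the committed
    contents from being guessed before the reveal."""
    salt = os.urandom(32)
    digest = hashlib.sha256(classical_pubkey + pq_address + salt).digest()
    return digest, salt  # digest goes on-chain now; salt stays secret

def reveal_is_valid(commitment: bytes, classical_pubkey: bytes,
                    pq_address: bytes, salt: bytes) -> bool:
    """Phase 2, after an enforced delay: reveal the preimage. Since the
    commitment predates the key's exposure, a quantum attacker who derives
    the private key at reveal time cannot produce an older commitment."""
    return hashlib.sha256(classical_pubkey + pq_address + salt).digest() == commitment
```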

The new addresses would look pretty much the same: a string of characters, perhaps with a few bits of difference in the leading characters to encode the use of a new scheme. If collision resistance is required, they would also have to be somewhat longer (384 bits). Quantum preimage resistance is already achieved by SegWit's 256-bit addresses (even though the underlying key is vulnerable).

In fact, assuming Bitcoin Script were upgraded with quantum-resistant signature opcodes, for many applications quantum-resistant addresses could look exactly the same as current P2WSH addresses.
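
To make the "looks like P2WSH" point concrete, here is a minimal sketch, assuming a hypothetical post-quantum witness script that gets hashed into a fixed-size address payload; `p2wsh_like_address_payload` and the SHA-256/SHA-384 choice are illustrative assumptions, and bech32 encoding is omitted.

```python
import hashlib

def p2wsh_like_address_payload(pq_witness_script: bytes,
                               collision_resistant: bool = False) -> bytes:
    """Hash a (hypothetical) post-quantum witness script into a fixed-size
    address payload, P2WSH-style. 256 bits gives quantum preimage
    resistance; 384 bits when multi-party collision resistance matters."""
    h = hashlib.sha384 if collision_resistant else hashlib.sha256
    return h(pq_witness_script).digest()

# Even a 1 KB SPHINCS-like key inside the script yields a short address:
payload = p2wsh_like_address_payload(b"\x00" * 1024, collision_resistant=True)
print(len(payload))  # 48 bytes, i.e. ~1.5x today's 32-byte P2WSH program
```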

So addresses wouldn't be affected much. Transactions, however, would get much bigger, since the input script would then have to include a much larger public key and signature. For example, SPHINCS uses 1 KB keys and 41 KB signatures, so the public key would be about 30 times longer and the signature about 645 times longer! Note that with SegWit, the big signature data wouldn't count against the legacy 1 MB block size limit, but it would count against the 4 MB SegWit block weight limit, so far fewer transactions would fit per block. To allow the same number of transactions, some protocol upgrade would have to be rolled out to make room for the bigger signature data.
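
A back-of-the-envelope calculation under the answer's own figures (1 KB keys, 41 KB signatures, segwit's 4M weight-unit limit, witness bytes costing one weight unit each) makes the impact concrete:

```python
# Rough impact of SPHINCS-sized witness data, using the figures above
# (1 KB public key, 41 KB signature); all sizes in bytes.
PQ_KEY, PQ_SIG = 1_024, 41 * 1_024
EC_KEY, EC_SIG = 33, 65          # compressed key; sig incl. sighash byte

print(PQ_SIG // EC_SIG)          # 645 -> "about 645 times longer"
print(PQ_KEY // EC_KEY)          # 31  -> "about 30 times longer"

# Witness bytes cost 1 weight unit each under segwit's 4M-weight limit,
# so the witness data of single-sig inputs alone caps the block at roughly:
BLOCK_WEIGHT_LIMIT = 4_000_000
print(BLOCK_WEIGHT_LIMIT // (PQ_KEY + PQ_SIG))  # ~93 such inputs per block
```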

bca-0353f40e
  • Any signature scheme with public keys P and signatures S can always be transformed into a scheme with public keys H(P) and signatures P+S. A post-quantum hash function H may need to be 384-512 bits instead of 256, but that still only means addresses that are 1.5x to 2x longer than P2WSH/P2TR addresses today. [This transformation is sketched in code after the comments.] – Pieter Wuille Oct 31 '22 at 12:50
  • You're right, the keys will be about 1KB, but shouldn't addresses then be 256 or 384 bits, depending on whether you need collision resistance or not? Why would you need 512? I'll fix the answer. – bca-0353f40e Oct 31 '22 at 12:53
  • You generally need collision resistance from addresses that involve more than a single party (the attack is: you and me construct 2-of-2 multisig together; you give me key A, I have key B; grind key C such that H(A and C) = H(B and C); I give you my key C; coins get deposited in it; I take coins by revealing preimage of H(B and C)). This is equivalent in complexity to a preimage attack, and it is the reason why P2WSH (and P2TR, indirectly) use a 256-bit hash rather than a 160-bit one. – Pieter Wuille Oct 31 '22 at 13:01
  • Yes, but quantum collision search only needs about 2^(n/3) work (a cube-root speedup), so 384 bits is sufficient for collision resistance. For preimage resistance, SegWit 256-bit addresses are good enough in most scenarios, and would be the quantum equivalent of the old 160-bit ones. – bca-0353f40e Oct 31 '22 at 13:03
  • This attack can be avoided with more interactivity in the address generation protocol, where participants commit to and reveal their keys in separate rounds, preventing one party from computing their key as a function of the other's. But the ability to generate keys non-interactively is very useful, and probably a property lots of network users rely on without even knowing. – Pieter Wuille Oct 31 '22 at 13:03
  • Yeah, 384 bits is sufficient according to that reasoning, when collision security matters. I just prefer not to speculate about what security levels we need in the presence of as-of-yet hypothetical machines. Maybe the quantum speedup comes with a significant constant factor slowdown, and we don't actually need 128-bit security. Or maybe they end up being faster, or they help exploit weaknesses in hash functions beyond what Grover's can do. – Pieter Wuille Oct 31 '22 at 13:10
  • There's [some literature](https://www.cs.sfu.ca/~meamy/Papers/breakingsha.pdf) suggesting it'd be costly, but sure, it's all speculative. At least the lower bounds have been proven, so if the hash function itself is not weakened then you'd get full 128 bits of security with 384-bit outputs. I have updated my answer accordingly. – bca-0353f40e Oct 31 '22 at 13:15
  • "Good news is that with SegWit it wouldn't count against the "hard" blocksize limit, and that data could be later pruned." ↦ It would still count against segwit's blockweight limit, though, so that doesn't seem right to me. – Murch Oct 31 '22 at 15:00
  • Oh that's right, forgot about that. So, if you wanted to make room for QC keys and signatures, you'd have to either hard-fork to change the 4MB limit, or do another soft fork and specify some new place for QC signature data? – bca-0353f40e Oct 31 '22 at 15:22
  • @bca-0353f40e: Yeah, exactly. – Murch Oct 31 '22 at 19:38
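
For readers who want the H(P)/P+S transformation from the first comment spelled out, here is a minimal sketch, assuming some generic post-quantum `sign`/`verify` pair supplied by the caller; all names are illustrative.

```python
import hashlib
from typing import Callable

Sign = Callable[[bytes, bytes], bytes]          # sign(sk, msg) -> sig
Verify = Callable[[bytes, bytes, bytes], bool]  # verify(pk, msg, sig) -> ok

def wrapped_pubkey(pk: bytes) -> bytes:
    """The on-chain 'public key' is only a hash commitment H(P)."""
    return hashlib.sha384(pk).digest()  # 384 bits, per the thread above

def wrapped_sign(sk: bytes, pk: bytes, msg: bytes, sign: Sign) -> bytes:
    """The wrapped signature is the real public key P concatenated with S."""
    return pk + sign(sk, msg)

def wrapped_verify(hpk: bytes, msg: bytes, psig: bytes, pk_len: int,
                   verify: Verify) -> bool:
    """Check that the revealed P matches the commitment, then verify S."""
    pk, sig = psig[:pk_len], psig[pk_len:]
    return hashlib.sha384(pk).digest() == hpk and verify(pk, msg, sig)
```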