Estimates of the cost of RSA1024 use explicit circuit constructions at the target size, rather than extrapolating from the 4 bit case. So they implicitly account for the discontinuity being pointed out in the post. So this post has no impact on those costs.
"(Quick aside: the amount of optimization that has gone into this factoring-21 circuit is probably unrepresentative of what would be possible when factoring big numbers. I think a more plausible amount of optimization would produce a circuit with 500x the cost of the factoring-15 circuit… but a 100x overhead is sufficient to make my point. Regardless, special thanks to Noah Shutty for running expensive computer searches to find the conditional-multiplication-by-4-mod-21 subroutine used by this circuit.)"
Off topic, but are cryptographers convinced that RSA1024 is infeasible to factor even on the new gigawatt data centers? I gather that the fastest known algorithms are still too slow to factor it in reasonable time. But is the consensus that there will be no improvements to these algorithms in the near future?
The number field sieve is still the best method, and the technique is three or more decades old with only incremental improvements. (Of course there might be an incredible breakthrough tomorrow.)
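For a rough sense of the gap, here's the standard heuristic cost formula for the general number field sieve with the o(1) term ignored, so the absolute numbers are crude but the relative jump between key sizes is the point. A minimal sketch in Python:

    import math

    def gnfs_work_bits(bits):
        # Heuristic GNFS complexity L_n[1/3, (64/9)^(1/3)], o(1) dropped,
        # converted to an approximate "bits of work" figure.
        ln_n = bits * math.log(2)
        c = (64 / 9) ** (1 / 3)
        return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

    for bits in (768, 1024, 2048):
        print(f"RSA-{bits}: roughly 2^{gnfs_work_bits(bits):.0f} operations")

This prints on the order of 2^77, 2^87 and 2^117 respectively; RSA-768 has actually been factored, RSA-1024 is generally considered out of reach classically but not absurdly so, and RSA-2048 is far beyond any classical data center.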
In addition, selling information to a government on how to break either system would be worth more than the amount of bitcoin you would be able to sell before exchanges stop accepting deposits or the price crashes.
> In addition, selling information to a government on how to break either system would be more valuable
Honest question because one can find such claims very often on forums like HN:
Does there really exist a "feasible" way for some "lone hacker" to sell such information to some government and become insanely rich?
I know that people who apparently have deep knowledge of how exploit markets work have claimed on HN that "if you have to ask how/where to sell your exploit (i.e. you don't have the respective contacts), you are very likely not able to".
This latter observation seems a lot more plausible to me than the claim, often found on HN, that some "lone individual" who found a way to break ECDSA or RSA would be able to monetize it by selling it to some government.
True, we can never know what state actors know that we don't, and my cryptography professor at university taught us that the NSA likely had a 20-year mathematical lead over the academic crypto community.
That being said, NFS is almost thirty years old, so maybe the NSA still doesn't have anything better.
Fiction, but interesting nonetheless: the 1992 film "Sneakers", with Robert Redford among others. The MacGuffin is a mathematician who discovers another way to factor large numbers.
I could see reasons for them to build datacenters for AI or for collecting encrypted messages to decrypt later, but not for brute-force attacks on encrypted messages.
I've seen pretty credible evidence that factoring large semiprime numbers is effectively a solved problem, even without considering quantum computing or gigawatt-scale computing. I'm not able to share specifics, but I would personally not trust RSA.
I wonder when quantum computers will be able to target post-quantum RSA [1]. Normal RSA operations (key generation, encryption, decryption) have an asymptotic advantage over Shor's algorithm, so it is not unreasonable to just use large enough keys. The advantage is similar to Merkle's puzzles [2], with the added bonus that the attacker also needs to run their attack on a quantum computer.
A while ago I generated a gigabit RSA public key. It is available at [3]. From what I remember, the format is: 4-byte little-endian key size in bytes, then little-endian key, then little-endian inverse of key mod 256**bytes. The public exponent is 3.
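If anyone wants to poke at that file, here's a sketch of a reader for the layout as described (going purely off the commenter's recollection above, so the field order and the final sanity check are assumptions):

    import struct

    def read_big_rsa_key(path):
        # Layout per the description: 4-byte little-endian length L (in bytes),
        # then L bytes of modulus (little-endian), then L bytes holding the
        # inverse of the modulus mod 256**L.
        with open(path, "rb") as f:
            (length,) = struct.unpack("<I", f.read(4))
            n = int.from_bytes(f.read(length), "little")
            n_inv = int.from_bytes(f.read(length), "little")
        assert (n * n_inv) % (256 ** length) == 1  # modulus is odd, so this inverse exists
        return n, n_inv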
Post-Quantum RSA is clearly a joke from djb, to have a solid reply when people ask "can't we just use bigger keys?". It has a 1-terabyte RSA key taking 100 hours to perform a single encryption. And by design it should be beyond the reach of quantum computers.
Quantum mechanics is "true" insofar as it's useful for some purpose. Until then it's a theory and the jury is still out.
Randomness is something which I feel is a pretty weird phenomenon. I am definitely one of those 'God doesn't play with dice' types.
Randomness is also a label we apply to things that are only random from a subjective perspective. If we knew more about a system, the randomness would just fall away. E.g. if we knew the exact physical properties of a dice roll, we could probably predict it better than chance.
What if quantum mechanics is similar? I.e. what we think of as randomness isn't really randomness, but only appears that way to the best of what we can observe. If this is the case, and if our algorithms rely on some sort of genuine randomness inherent in the universe, then doesn't that suggest there's a problem? Perhaps part of the errors we see in quantum mechanics arise from something fundamental to the universe being different from our model.
I don't think this is that far-fetched given the large holes in our current understanding of physics when it comes to predicting the universe. It just seems that in the realm of quantum mechanics this apparently isn't the case, because experiments have verified things. However, I think there really is something to the proof being in the pudding (provide a practical use case).
The 'no randomness = no free will' argument is pretty common, but how does randomness ensure free will? It's still something out of our control, it just can't be predicted. Why is it any better to be a random automaton than a predictable one?
Yes, Schrodinger’s equation is entirely deterministic. There is no randomness inherent in quantum mechanics. The perceived randomness only arises when we have incomplete information. (It is another matter that quantum mechanics to some extent forbids us from having perfect information.)
I mean no disrespect, but I don’t think it’s a particularly useful activity to speculate on physics if you don’t know the basic equations.
The point was that the amount of optimization work that went into making the circuit for factoring 21 only ~100x larger than the one for factoring 15 is not feasible for larger numbers. So, the general rule should be that you need ~500x more gates for a similar increase in the number to be factored, not just ~100x.
The more plausible amount of optimization is less optimization. Or, more accurately, the benefit of optimization at large sizes is expected to be smaller than it was for the N=21 circuit.
I think the idea is the minimal implementation will be unstable and unreliable. I don’t know the details, but there’s much work and thought on quantum error correcting qubits - where you hook up N qubits in a clever way to function as one very stable qubit. Terms such as “decoherence time” make appearances. You can imagine this quickly expands into an awful lot of qubits.
The article explains the gate cost mostly comes from O(n) multiply-under-mod ops on O(n)-bit numbers. Each op could be naively spelled out as O(n^2) gates. So the whole thing looks cubic to me at worst.
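Plugging in the numbers quoted upthread as a sanity check (the close agreement is partly luck, since real constructions use windowed arithmetic and other tricks, but it shows the cubic picture is about right):

    n = 2048
    naive_toffolis = n ** 3          # O(n) controlled modular multiplications, each ~O(n^2) gates
    print(f"{naive_toffolis:.1e}")   # ~8.6e9, same ballpark as the ~7e9 Toffolis quoted upthread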
It’s worth noting that the reason we are deploying PQ crypto is not that we are 100% convinced QC is coming soon. It may or may not depending on how development goes.
The goal of cryptography is to make something as close to theoretically unbreakable as possible. That means even theoretical vulnerabilities are taken seriously.
For ECC and RSA and related algorithms we have a theoretical and physically plausible pathway toward a practical machine that could break them. That means many cryptographers consider them theoretically broken even if such a machine does not exist and may not exist for a long time. The math works even if we can’t build it yet.
So it’s considered prudent to go ahead and upgrade now while no QC exists. That way if some major advance does arrive we are ready.
Nobody’s talking seriously about replacing SHA2, AES, ChaCha, etc because there is no physically plausible theoretically valid path to a machine that can break these in, say, less than many millions of years. AFAIK there is no proof that such a path does not exist but nobody has found one, hence they are considered unbroken.
Note that cryptography is not the only or even the most useful application of QC. Things like physical simulation of quantum systems, protein folding, machine learning, etc. could be more useful. As with digital computers, there are probably a ton of uses we don't know about yet because we need to tinker with the machine to figure them out.
The predictions don't tell us anything about why the answer is what it is. There is probably important (useful) fundamental scientific knowledge in being able to know that vs. just being able to predict the result.
There’s a difference between good AI predictions and theoretically perfect QC computations. The AI estimates while the QC will give you the answer, full stop. The latter could be relied upon more strongly. It could also generate infinite training data to make much better models.
QC might be directly applicable to AI training too. It may be possible to compute the optimal model over a data set in linear time. It could allow training that is faster and consumes a tiny fraction of the energy current brute force methods need.
For the symmetric cryptography (so obviously AES and ChaCha, but also in effect the SHA-2 family) we can hand wave the quantum attacks as halving key length by enabling a sort of meet-in-the-middle attack (this attack is why it was 3DES not 2DES when they strengthened DES). There's a lot of hand waving involved. Your real Quantum Computer won't in fact be equivalent cost to the non-quantum computer, or as fast, or as small, the speed-ups aren't quite halving, and so on. But it's enough to say OK, what if AES-128 was as weak as a hypothetical AES-64, and that's fine because we have AES-256 anyway.
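To put numbers on that halving hand-wave: Grover search over a k-bit key needs on the order of sqrt(2^k) oracle calls, and each call is a full (serial) quantum evaluation of the cipher, which is why AES-128 gets treated as roughly 64-bit-ish against a quantum attacker while AES-256 stays comfortable. A quick illustration:

    import math

    def grover_queries(key_bits):
        # Grover needs about (pi/4) * sqrt(2^k) oracle calls to find a k-bit key.
        return (math.pi / 4) * math.sqrt(2 ** key_bits)

    for k in (128, 256):
        print(f"AES-{k}: about 2^{math.log2(grover_queries(k)):.1f} quantum cipher evaluations")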
However, the main focus is on Key Exchange. Why? Well, Key Exchange is the clever bit where we don't say our secrets out loud. Using a KEX two parties Alice and Bob agree a secret but neither of them utters it. Break that and you can learn the secret, which was used to encrypt everything else - for any conversation, including conversations which you recorded any time in the past, such as today.
If future bad guys did have a Quantum Computer the Key Exchange lets them read existing conversations they've tapped but today can't read, whereas breaking say the signing algorithm wouldn't let them somehow go back in time and sign things now because that's not how time works. So that's why the focus on KEX. Once such a thing exists or clearly is soon to deliver it's important to solve a lot of other problems such as signing, but for KEX that's already too late.
> Third, notice that the only remaining multiplication is a multiplication by 4. Because 15 is one less than a power of 2, multiplying by 2 modulo 15 can be implemented using a circular shift. A multiplication by 4 is just two multiplications by 2, so it can also be implemented by a circular shift. This is a very rare property for a modular multiplication to have, and here it reduces what should be an expensive operation into a pair of conditional swaps.
> Aside: multiplication by 16 mod 21 is the inverse of multiplying by 4 mod 21, and the circuits are reversible, so multiplying by 16 uses the same number of Toffolis as multiplying by 4.
I couldn't really find anything explaining the significance of this. The only info I found said that "4 mod 21 = 4" (but I don't know if it was AI slop or not).
Is "multiplying by 4 mod 21" something distinct to quantum computing?
It is phrasing mathematicians sometimes use. In that sentence, 'mod 21' does not indicate an operator but denotes what number system you are in.
For instance the following are equivalent:
2 = 6 mod 4
6 = 2 mod 4
This 'mod 4' can also appear in parentheses or in some other way, but it must appear at the end. Like I said, it is not an operator; rather, it denotes that the entire preceding statement takes place in the appropriate quotient space.
So it is not (multiplying by (4 mod 21)) but ((multiplying by 4) mod 21).
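To make the two quoted observations concrete in ordinary (non-quantum) arithmetic: multiplying by 2 mod 15 really is a 4-bit circular shift (because 15 = 2^4 - 1), and 16 really is the inverse of 4 mod 21, so "times 16" undoes "times 4". A few lines of Python to check:

    def rotl4(x):
        # circular left shift on 4 bits
        return ((x << 1) | (x >> 3)) & 0b1111

    for x in range(15):
        assert (2 * x) % 15 == rotl4(x)   # multiply-by-2 mod 15 is just a rotation

    assert (4 * 16) % 21 == 1             # 16 is the multiplicative inverse of 4 mod 21
    x = 13
    assert (x * 4 * 16) % 21 == x         # multiplying by 4 then by 16 is a no-op mod 21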
In many applications you are concerned about data you encrypt today still being secure 20 years from now. Especially if your threat model includes a malicious actor holding on to data in hopes of decrypting it later.
From the article it sounds like we will still be safe for 20+ years. On the other hand, 15 was just extraordinarily easy; progress after 21 will be much quicker. And we never know which breakthroughs might come in the next decades to speed up progress.
The article lists three reasons why 21 is so much harder than 15. Mostly that with 15 most of the required conditional multiplications are multiplications by 1 and the remaining multiplication can be done with cheap shifts because it's a multiplication by a power of 2 modulo 15 (which is one less than a power of two).
But 22 and 24 are in the same boat as 21 here. For all three, the conditional multiplications are by constants other than one, and none of the three is one less than a power of 2. You need slightly more multiplications (and thus more gates) as the numbers get larger, but that only grows linearly. Maybe the conditional multiplications required get slightly more expensive to implement, but I wouldn't expect a 100x cost blowup from that. Error correction is still an issue, potentially making a linear complexity increase quadratic, but qubit counts in quantum computers also increase at an exponential rate.
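One way to see the article's first point is to list the classical constants that Shor's circuit has to conditionally multiply by, namely a^(2^k) mod N (the bases below are just illustrative choices; this says nothing about gate-level cost, only about how many of the multiplications are trivial):

    def controlled_multipliers(N, a, exponent_bits=8):
        # The k-th controlled multiplication in Shor's algorithm is by a^(2^k) mod N.
        return [pow(a, 2 ** k, N) for k in range(exponent_bits)]

    print(controlled_multipliers(15, 7))   # [7, 4, 1, 1, 1, 1, 1, 1]    -> mostly multiplications by 1
    print(controlled_multipliers(21, 2))   # [2, 4, 16, 4, 16, 4, 16, 4] -> never trivial

For 15, everything after the first couple of steps is a multiplication by 1 (i.e. free); for 21, every step is a real modular multiplication, which is a big part of why the 21 circuit is so much more expensive.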
I think the idea is that it doesn’t matter if it takes billions of gates to achieve, the point is that it can do it very fast. If we thought a table sized FPGA could do it, a state actor would most definitely build one.
The practical problem is that ‘noise’ between gates seems to increase exponentially, so practically it may actually be impossible to construct anything with more than a handful of gates for the foreseeable (possibly indefinite?) future.
You brought up gate noise, there has been quite a bit of progress on that problem.
> so we should totally be able to factor 21 (or larger numbers)…. When?
Just because we solve one problem doesn't imply all the problems in QC are also instantly solved. I guess it does if you assume noise is the only problem and that once it is solved the engineering is trivial. That is not the case. Even assuming all foundational problems have been solved, figuring out how to actually engineer and mass-produce large numbers of gates will take a while.
As the article pointed out, going from 15 to 21 requires a 100x increase in gates.
As the article that you posted under says:
"Because of the large cost of quantum factoring numbers (that aren’t 15), factoring isn’t yet a good benchmark for tracking the progress of quantum computers. If you want to stay abreast of progress in quantum computing, you should be paying attention to the arrival quantum error correction (such as surface codes getting more reliable as their size is increased) and to architectures solving core scaling challenges (such as lost neutral atoms being continuously replaced)."
I believe you are thinking of "Supersingular Isogeny Diffie–Hellman key exchange", or SIKE, which is a post-quantum encryption algorithm that was spectacularly broken a couple of years ago. The math involves elliptic curves, but it's different from the elliptic curve cryptography used in your browser.
Which assumptions? ECDLP is still considered computationally hard, and ECC considered secure. There are invalid curve attacks and small subgroup attacks but that's a problem with key selection, not a fundamental problem with ECC.
I have a strong belief that new mathematical tools and methods can be developed that can make it "easy" to break a lot of modern cryptography primitives without ever using a quantum computer.
The potential is there, we haven't made it yet. It's the same with AI, AGI and all that. Imagine if you'd read a response from GPT-2 back in 2019, you'd also be like "and these are the same models that will eventually give us AGI".
Zero? Aren't you a little bit overconfident on that?
Transformer LLMs already gave us the most general AI as of yet, by far - and they keep getting developed further, with a number of recent breakthroughs and milestones.
It turns out that error correction was easy on digital computers, and was essentially a solved problem early in their development. In fact, "noise immunity" is arguably the defining feature of a digital system. And error correction can happen at each gate, since there's no reason to propagate an indeterminate number.
The current best one- and two-gate errors are well below 0.01% and going down with every generation of chips. See: https://ianreppel.org/quantum.html
There are no theoretical reasons QEC can't exist. In fact it already does. Is it already good enough for universal fault tolerance? No. But then again, no one claimed it already is. We are slowly getting closer every year.
In his book, Dyakonov offers zero solid reasons other than "it's hard" and thus likely not possible. That's just an opinion.
I took a QC course, and have done some reading, but am hardly an expert. But my impression has been: "This is analog computation." To reinforce the similarity, the error level of analog computers can be improved by running many of them in parallel.
So how many gates are we talking to factor some "cryptographically useful" number? Is there some pathway that makes quantum computers useful this century?
> So how many gates are we talking to factor some "cryptographically useful" number?
Table 5 of [1] estimates 7 billion Toffoli gates to factor 2048 bit RSA integers.
> Is there some pathway that makes quantum computers useful this century?
The pathway to doing billions of gates is quantum error correction. [1] estimates distance 25 surface codes would be sufficient for those 7 billion gates (given the physical assumptions it lists). This amplifies the qubit count from 1400 logical qubits to a million physical noisy qubits.
Samuel Jacques had a pretty good talk at PQCrypto this year, and he speculates about timelines in it [2].
(I'm the author of this blog post and of [1].)
[1]: https://arxiv.org/pdf/2505.15917
[2]: https://www.youtube.com/watch?v=nJxENYdsB6c
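For a rough feel for where the million comes from: a distance-d surface code patch uses on the order of 2d^2 physical qubits (data plus measurement), so this is just back-of-envelope arithmetic, not the paper's actual layout, which packs things differently and adds routing and magic-state factories:

    d = 25                              # code distance from the estimate above
    logical = 1400                      # logical qubits from the estimate above
    physical_per_logical = 2 * d * d    # ~2d^2 physical qubits per surface-code patch
    print(logical * physical_per_logical)   # ~1.75 million, i.e. "a million-ish" noisy qubits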
Is there a qubit-speed tradeoff, where you could factor 2048 bit RSA integers with fewer qubits but it would take more time? Or is it all or nothing?
From Samuel Jacques's talk: a timeline for RSA-2048 of about 2088 (conservative extrapolation) or ~2052 (Moore's-law-style growth).
It's not just quantum error correction that is required; cooling also makes it hard to build devices small enough to allow thousands of qubits, let alone billions.
not all implementations of QC require cryogenic cooling
How do you go from 7 billion Toffoli gates (which consist of 42 billion "two-qubit entangling gates", per the blog post) to merely 1400 logical qubits?
Same way you go from 2 billion CPU operations per second to only ~4000 register bits (not counting ZMM SIMD registers) on a conventional CPU.
The operations all consist of saying, connect these 3 bits and do a reversible operation on them all together. Same as assembly, "add these two registers and store the sum in the first one..." You didn't need to introduce any new bits.
You only need to introduce new bits for steps that cannot be done reversibly. In assembly you get around this by being able to overwrite a register; in quantum, zeroing a bit is not a unitary operation, so it requires an explicit measurement in the computational basis to figure out how to reset it. But if you can encode the circuit in Toffoli gates, which are perfectly reversible, you don't have to delete any bits after that encoding. (You may have to introduce extra bits to get to that encoding: using a Toffoli to build "x AND y" requires an extra z bit that effectively gets discarded at the end of the computation, when everything is done and nobody cares what that bit holds, even though it holds the information you would need to reverse that logical AND.)
But yeah, it's just the number of operations you need to run the algorithm versus the number of registers you need to run it; they're two different numbers.
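A classical toy version of that Toffoli-as-AND bookkeeping (not a quantum simulation, just the reversibility argument: the ancilla starts at 0, picks up x AND y, and applying the same gate again uncomputes it):

    def toffoli(x, y, z):
        # Toffoli (CCNOT): flip z iff both controls are 1. It is its own inverse.
        return x, y, z ^ (x & y)

    for x in (0, 1):
        for y in (0, 1):
            x1, y1, z1 = toffoli(x, y, 0)            # ancilla z=0 ends up holding x AND y
            assert z1 == (x & y)
            assert toffoli(x1, y1, z1) == (x, y, 0)  # applying it again restores the ancilla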
> So how many gates are we talking to factor some "cryptographically useful" number?
That is a hard question to answer for two reasons. First, there is no bright line that delineates "cryptographically useful". And second, the exact design of a QC that could do such a calculation is not yet known. It's kind of like trying to estimate how many traditional gates would be needed to build a "semantically useful" neural network back in 1985.
But the answer is almost certainly in the millions.
[UPDATE] There is a third reason this is hard to predict: for quantum error correction, there is a tradeoff between the error rate in the raw qbit and the number of gates needed to build a reliable error-corrected virtual qbit. The lower the error rate in the raw qbit, the fewer gates are needed. And there is no way to know at this point what kind of raw error rates can be achieved.
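To see that tradeoff quantitatively, the usual surface-code rule of thumb is that the logical error rate per round falls off like A * (p / p_th)^((d+1)/2), with a threshold p_th around 1%; the constants here are illustrative, not from any particular paper:

    def logical_error_rate(p, d, p_th=1e-2, A=0.1):
        # Better physical error rate p or larger code distance d both suppress
        # the logical error rate exponentially.
        return A * (p / p_th) ** ((d + 1) / 2)

    for p in (1e-3, 1e-4):
        for d in (15, 25):
            print(f"p={p:.0e}, d={d}: logical error ~{logical_error_rate(p, d):.1e}")

So a 10x better physical qubit lets you hit the same logical error rate with a substantially smaller code distance, i.e. far fewer physical qubits and gates per logical operation.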
> Is there some pathway that makes quantum computers useful this century?
This century has 75 years left in it, and that is an eternity in tech-time. 75 years ago the state of the art in classical computers was (I'll be generous here) the Univac [1]. Figuring out how much less powerful it was than a modern computer makes an interesting exercise, especially if you do it in terms of ops/watt. I haven't done the math, but it's many, many, many orders of magnitude. If the same progress can be achieved in quantum computing, then pre-quantum encryption is definitely toast by 2100. And it pretty much took only one breakthrough, the transistor, to achieve the improvement in classical computing that we enjoy today. We still don't have the equivalent of that for QC, but who knows when or if it will happen. Everything seems impossible until someone figures it out for the first time.
---
[1] https://en.wikipedia.org/wiki/UNIVAC_I#Technical_description
It's not an eternity because QC is a low-headroom tech which is already pushing its limits.
What made computing-at-scale possible wasn't the transistor, it was the precursor technologies that made transistor manufacturing possible - precise control of semiconductor doping, and precision optical lithography.
Without those the transistor would have remained a lab curiosity.
QC has no hint of any equivalent breakthrough tech waiting to kick start a revolution. There are plenty of maybe-perhaps technologies like Diamond Defects and Photonics, but packing density and connectivity are always going to be huge problems, in addition to noise and error rate issues.
Basically you need high densities to do anything truly useful, but error rates have to go down as packing densities go up - which is stretching optimism a little.
Silicon is a very forgiving technology in comparison. As long as your logic levels have a decent headroom over the noise floor, and you allow for switching transients (...the hard part) your circuit will be deterministic and you can keep packing more and more circuitry into smaller and smaller spaces. (Subject to lithography precision.)
Of course it's not that simple, but it is basically just extremely complex and sophisticated plumbing of electron flows.
Current takes on QC are the opposite. There's a lot more noise than signal, and adding more complexity makes the problem worse in non-linear ways.
I'm sympathetic to this argument, but nearly every technological breakthrough in history has been accompanied by plausible-sounding arguments as to why it should have been impossible. I myself left my career as an AI researcher about 20 years ago because I was convinced the field was moribund and there would be no major breakthroughs in my lifetime. That was about as well-informed a prediction as you could hope to find at the time and it was obviously very wrong. It is in the nature of breakthroughs that they are rare and unpredictable. Nothing you say is wrong. I would bet against QC in 5 years (and even then I would not stake my life savings) but not 75.
In fairness, the biggest breakthrough in AI has been calling more and more things “AI.” Before LLMs it was content based collaborative filtering.
No, LLMs are a real breakthrough even if they are not by themselves reliable enough to produce a commercially viable application. Before LLMs, no one knew how to even convincingly fake a natural language interaction. I see LLMs as analogous to Rodney Brooks's subsumption architecture. Subsumption by itself was not enough, but it broke the logjam on the then-dominant planner-centric approach, which was doomed to fail. In that respect, subsumption was the precursor to Waymo, and that took less than 40 years. I was once a skeptic, but I now see a pretty clear path to AGI. It won't happen right away, but I'd be a little surprised if we didn't see it within 10 years.
> It won't happen right away, but I'd be a little surprised if we didn't see it within 10 years.
Meanwhile, even after the infamous LK-99 fiasco (which gripped this forum almost more than anywhere else) was exposed as an overblown nothingburger, I still had seemingly-intelligent people telling me with all seriousness that the superconductor breakthrough had a 50% chance of happening within the next year. People are absolutely, terminally terrible at estimating the odds of future events that are surrounded by hype.
> clear path to AGI
What are the steps?
I thought Waymo was much more ML than logical rules-based subsumption? I'm not sure it's possible to do more than simple robotics without jumping into ML. I guess maybe you could have high-level rules prioritized via subsumption but manipulating complex ML-trained sensors and actuators.
Yes, that's right. The ostensible idea behind subsumption is dead (because it was wrong). But what subsumption did was open up the possibility of putting the AI into the run-time feedback loop rather than the deliberative planning, and that is what all robotic control architectures do today.
> no one knew how to even convincingly fake a natural language interaction.
There were some decent attempts at the Turing test, given limited subject matter, long before LLMs. As in, people looking at the conversation were unsure whether one of the parties was a computer. It's really interesting to read some of those transcripts.
LLMs actually score worse on some of those tests. Of course they do a huge range of other things, but it's worth understanding both their strengths and their many weaknesses.
> nearly every technological breakthrough in history has been accompanied by plausible-sounding arguments as to why it should have been impossible
Indeed, and at the same time the breakthroughs are vastly outnumbered by ideas which had plausible-sounding counterarguments that turned out to be correct. Which is to say, the burden of proof is on the people claiming that something implausible-sounding is plausible.
But QC is quite plausible. There is no theoretical constraint that makes it impossible. It really is just an engineering problem at this point.
>> Is there some pathway that makes quantum computers useful this century?
> This century has 75 years left in it, and that is an eternity in tech-time.
As a comparison, we went from first heavier than air flight to man walking on the moon in only 66 years.
> walking on the moon in only 66 years.
Yet it has been 53 years since we were last able to send a manned mission to the moon. No other program has come close, or is likely to in the next 13 years, including the current US one. By 2038 the moon landings will be closer in time to the Wright brothers than to us.
The curve of progress is only smooth and exponential when you squint hard.
A narrow few decades of exponential growth can hardly be expected to last for 100+ years.
It is for the same reason you cannot keep doubling grains on a chess board just because you did it quickly for the first 10-20 steps.
Fusion power and quantum computing are always two decades away for a reason, despite the money being spent. AI has gone through 3-4 golden ages in living memory, and yet too many keep believing this one will last.
The reality is that rapid innovation only happens when the conditions are right, i.e. when the groundwork has been done for decades or centuries beforehand, and then only for a short time (a few decades at best).
> No other program has or likely to come close in the next 13 years including the current US one.
The Chinese are planning manned lunar landings in 2029-2030, and this is not a pipe dream: they've been systematically working at this for several decades now. They have already completed 6 out of 8 preparatory missions plus placed comms satellites in lunar orbit, and the final two are scheduled for 2026 and 2028.
https://en.wikipedia.org/wiki/Chinese_Lunar_Exploration_Prog...
It does not look like CMSA is planning any human orbital missions or a human lander (Lanyue) return flight test before attempting to land humans in 2030, just two missions from now; that is very ambitious.
Perhaps milestones are being set to compete with Artemis. If NASA gets delayed or reduced in scope, CNSA might reset to a more achievable date.
That is just the engineering risk on dates; there are other classes of risk in geopolitics, economics, etc.
Bottom line: I am skeptical that a successful landing and return can be attempted in 2030. 2035 is a more realistic target, I think.
> Yet it has been 53 years since we have been able to send a manned mission to the moon
A near total lack of demand explains that impressive stall.
Even if the shuttle had worked out as well as its designers hoped, it was envisioned as a major retreat, while sucking all the dollars out of the room.
And today, the market for lunar landings is still very small.
I think what it shows is that many technologies might have come earlier from a research and development standpoint, if we had enough money to burn. But that was an unusual situation.
Yes, economics is a key factor for innovation. However, it alone is not sufficient. At times you simply need other foundational breakthroughs to happen, and they have to come in sequence, i.e. one breakthrough has to happen and become widespread before work on the next one can progress, before you can achieve meaningful progress on the end goal.
It is not as if fusion or quantum computing has lacked serious or continuous funding over the last 20-30 years.
Foundational model development is a classic current example. The returns are diminishing significantly, despite the tens of billions thrown at the problem each quarter.
No other R&D effort in our history has had this many resources allocated to it, perhaps not even the Moon landings.
However, the ability to allocate resources has limits. Big tech can spend a few hundred billion a year, a number that would have been unimaginable even a decade ago, but even they cannot spend a few trillion dollars a year.
My great grandmother, who was born in 1891, asserted ca. 1990 that her favorite invention was large print novels. More importantly: the social right to read trashy novels. But, yeah, computers, planes, starships, nuclear power, etc etc.
Amara’s Law – “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
Related to your observation: A piece of the original Wright Flyer was landed on Mars just a bit over 117 years after the first flight.
That's only true if you totally ignore hot air balloons; the actual first manned flight was in 1783.
The comment you're responding to specified heavier-than-air. (And it should have been even more constrained: the real milestone was heavier-than-air powered flight.)
We went from neutron being discovered to nuclear weapons in just over a decade.
> to man walking on the moon in only 66 years
And that was before Epoch (1969, unix time started in 1970). We went from calculator to AI in 55 years, which is, actually, extremely long. It took exactly the time to miniaturize CPUs enough that you would hold as many gates in a GPU as neurones in a human’s brain. The moment we could give enough transistors to a single program, AI appeared. It’s like it’s just an emergent behavior.
> We went from calculator to AI in 55 years, which is, actually, extremely long.
I think it is insanely fast.
Think about it: that planet has been here for billions of years. Modern humanity has been here for 200,000 years, give or take. It took 199700 years and change to get to a working steam engine. 266 years later men were walking on the moon and another 55 years and we had a good facsimile of what an AI looks like in practice. That's insane progress. The next 75 years are going to be very interesting, assuming we don't fuck it all up, the chances of which are right now probably 50/50 or so.
If this is what AI is going to look like in practice it's a big letdown.
Science fiction has been predicting what an AI would be like for over a hundred years; there was even one in a movie in 1927. We're so far from what we dreamed that, to me, it feels like a mere leaf blowing in the wind compared to the Wright Flyer.
It's not what it can do today (which is already pretty impressive) it is what it can do in another century, which too is a relatively short time.
The Wright Flyer was a complete aircraft but small, awkward and not very practical. But it had all of the parts and that was the bit that mattered.
LLMs are not a 'complete AI' at all, they are just a very slick imitation of one through a completely different pathway. Useful, but not AI (at least, not to me). Meanwhile, a very large fraction of the users of OpenAI, Claude etc all think that AI has arrived and from that perspective it is mostly the tech crowd that is disappointed. For the rest of the people the thing is nothing short of magic compared to what they were able to do with a computer not so long ago. And for people like translators it is a massive threat to their jobs, assuming they still have one.
It is both revolutionary and a letdown, depending on your viewpoint and expectations.
This rhymes with “we were promised The Jetsons and all we got was Facebook.”
Sci-fi is fanciful and doesn't take into account psychology. What we got is the local maximum of what entrepreneurs think they can build and what people are willing to pay for.
Sci-fi is not a prediction. It is a hypothetical vision for what humanity could be in a distant future. The writer doesn’t have to grapple with limitations of physics (note FTL travel is frequently a plot device, not a plausible technology) or limitations about what product-market-fit the market will adopt.
And, of course, sci-fi dates are rarely close or accurate. That’s probably by design (most Star Trek space technologies would be unbelievable if the timeline was 2030, but more easily believable if you add a few thousand years for innovation).
And yet, a mobile phone is quite close to a Star Trek communicator and in many ways already much more powerful. Ok, you can ask to be beamed up by your friend Scotty and it likely won't happen (call me if it does) but other than that it is an impressive feat of engineering.
>(call me if it does)
They can just tell you in person!
I agree with everything you say but I'm still exceptionally triggered by you going 2x10^5 - 300 = 199700 and change.
Well... the 200K is so loosely defined that it could well be 210000 or 190000 (or even further out) so I figured it would be funny to be exact. But you're right that that doesn't carry well online.
It just seems that way because people had been researching neural networks since before processors had floating point units. So there were all these ideas people were waiting to try when we finally had the speed. Then it was a matter of trying them all to see which worked best. But yes, there is the point that even a bad AI model can learn most anything if you give it enough parameters. So the emergent-property view isn't far off either.
Good reminder on the time scale.
On the other hand, the Univac could do more useful work than current quantum computers.
Saw this earlier: https://x.com/adamscochran/status/1962148452072124879?s=46
As a layman the pathway seems to exist behind multiple massive materials science breakthroughs
How is it that we need to build logical qubits out of physical qubits for error correction purposes, but then still need to blow out our logical qubit numbers for error correction purposes, again? It seems like there's something missing from this explanation, at least.
The blow-up is in physical qubits.
For RSA-4096: on the order of 10^7 physical qubits with a 10^-4 error rate.
You can already do useful and valuable quantum chemistry calculations with a few hundred qubits at error rates that low, while post-quantum algorithms are becoming more common every day, removing the incentive to build crypto-cracking quantum computers.
I think quantum computing will advance fastest in directions that are not easy to use in cryptography.
Honestly, if all quantum computers manage to pull off is quantum chemistry, I feel like that'll be enough. It would be a massive boon to the field of material sciences at any rate, which underlies so much of current infrastructure.
Which valuable quantum chemistry calculations can you do with a few hundred qubits at error rates that low?
The general idea is that with N fault-tolerant qubits, you can find the ground-state energy of an electronic system with N spin orbitals. 100 spin orbitals is the practical upper limit of current computers, so when you get into several hundred qubits, you can start seeing gains.
For some special problems, hybrid methods start giving gains at 100 qubits or below.
Gate count estimates for performing quantum chemistry on small quantum computers https://arxiv.org/pdf/1312.1695
A Perspective on Quantum Computing Applications in Quantum Chemistry using 25--100 Logical Qubits https://arxiv.org/pdf/2506.19337
Latest numbers are about 1e6 qubits with a 1e-4 error rate: https://arxiv.org/abs/2505.15917. Gate counts (in the sense the OP means) are harder to quantify in the error-corrected context once you compile to the operations that are native to your code. Total compute time is about a week assuming a 1 MHz "clock" (code cycle time, for the experts). In some ways this is a harder metric to meet than the qubit numbers.
Note that the magic of quantum error correction (exponential suppression of the error rate) goes both ways: if you could get another 9 in qubit fidelity, you would get a much larger reduction in the required qubit numbers. On the other hand, if you need to split your computation over several systems, things get much worse.
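To make the "another 9 of fidelity" point concrete, here is a rough Python sketch using the standard surface-code scaling heuristic. The 0.1 prefactor, the 1% threshold, the 2*d^2 qubit count, and the 1e-12 target are illustrative assumptions, not numbers from the papers above:

    def logical_error_rate(p_phys, distance, p_threshold=1e-2):
        # Common surface-code heuristic: the logical error rate per round is
        # suppressed exponentially in the code distance once the physical
        # error rate is below threshold. Constants here are assumptions.
        return 0.1 * (p_phys / p_threshold) ** ((distance + 1) // 2)

    def physical_qubits_per_logical(distance):
        # Rough rule of thumb: ~2 * d^2 physical qubits (data + measure) per logical qubit.
        return 2 * distance ** 2

    for p_phys in (1e-3, 1e-4):
        d = 3
        while logical_error_rate(p_phys, d) > 1e-12:  # illustrative target rate
            d += 2  # surface-code distances are odd
        print(f"p={p_phys:.0e}: distance {d}, "
              f"~{physical_qubits_per_logical(d)} physical qubits per logical qubit")

Under these assumed constants, one extra 9 of physical fidelity drops the required distance from 21 to 11 and the per-logical-qubit overhead by roughly 3-4x, which is the "goes both ways" point.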
Given the current state of gate-noise progress, it seems likely that we'll gain an extra order of magnitude in error rate before we gain the 3 orders of magnitude in gate counts.
Not sure about the gate count, but if you look at the number of logical qubits required, we are still very far away from factoring numbers that traditional computing has already factored like the 829-bit RSA-250 number.
Realistically, you want millions to billions of qubits to compete with classical computers that already have trillions of transistors.
Ah - this helped me understand the numbers in quantum computing a little more clearly. I had been under the impression (based on my naive interpretation of the naming) that the number of qubits in a quantum processor might be something analogous to the number of bits of register state in a regular CPU; that qubits should be thought of more as analogous to transistors or maybe even gates makes it a little clearer why you need so many more qubits to perform more complex operations.
The difference is that you need millions of qubits to factor RSA-4096, but only tens of millions to factor RSA-32k. Qubit counts scale roughly linearly (and runtime polynomially) with key size, whereas the cost for classical computers grows super-polynomially.
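As a rough illustration of that asymmetry (my own back-of-the-envelope model, not from the article): compare the heuristic L_N[1/3] cost of the classical general number field sieve against a crude cubic gate count for Shor's algorithm (~n modular multiplications of ~n^2 gates each), ignoring constants and error-correction overhead:

    import math

    def gnfs_work_log2(n_bits):
        # Heuristic GNFS cost L_N[1/3, (64/9)^(1/3)], returned as a power of two.
        ln_n = n_bits * math.log(2)
        work = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
        return work / math.log(2)

    def shor_gates_log2(n_bits):
        # Crude cubic model for Shor: ~n modular multiplications, each ~n^2 gates.
        return 3 * math.log2(n_bits)

    for n_bits in (2048, 4096, 32768):
        print(f"RSA-{n_bits}: classical ~2^{gnfs_work_log2(n_bits):.0f} work, "
              f"quantum ~2^{shor_gates_log2(n_bits):.0f} gates")

Going from RSA-2048 to RSA-32k roughly triples the exponent of the classical cost, while the quantum gate count only grows polynomially.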
The linked paper "Replication of Quantum Factorisation Records with an 8-bit Home Computer, an Abacus, and a Dog" is a good read!
https://eprint.iacr.org/2025/1237.pdf
What does this mean about the size (and thus feasibility) of a circuit required to factor a cryptographically interesting number, say, to be generous, RSA1024?
Estimates of the cost of RSA-1024 use explicit circuit constructions at the target size, rather than extrapolating from the 4-bit case, so they implicitly account for the discontinuity being pointed out in the post. This post therefore has no impact on those cost estimates.
"(Quick aside: the amount of optimization that has gone into this factoring-21 circuit is probably unrepresentative of what would be possible when factoring big numbers. I think a more plausible amount of optimization would produce a circuit with 500x the cost of the factoring-15 circuit… but a 100x overhead is sufficient to make my point. Regardless, special thanks to Noah Shutty for running expensive computer searches to find the conditional-multiplication-by-4-mod-21 subroutine used by this circuit.)"
Off topic, but are cryptographers convinced that RSA-1024 is infeasible to factor even on the new gigawatt data centers? I gather that the fastest known algorithms are still too slow to factor it in a reasonable time. But is the consensus that there will not be improvements to these algorithms in the near future?
Number Field Sieves are still the best method, and the techniques are three or more decades old with only incremental improvements. (Of course there might be an incredible breakthrough tomorrow.)
best published method
Are the bitcoins in the first wallets gone? No? Then, absent any irrefutable evidence to the contrary, I will assume it's still the best method.
Bitcoin uses ECDSA to sign transactions, not RSA.
In addition, selling information to a government on how to break either system would be more valuable than the amount of bitcoin you would be able to sell before exchanges stop accepting deposits or the price crashes.
> In addition, selling information to a government on how to break either system would be more valuable
Honest question because one can find such claims very often on forums like HN:
Does there really exist a "feasible" way for some "lone hacker" to sell such information to a government and become insanely rich?
I know that people who apparently have deep knowledge of how exploit markets work have claimed on HN that "if you have to ask how/where to sell your exploit (i.e. you don't have the respective contacts), you are very likely not able to".
This latter observation seems a lot more plausible to me than the claim often found on HN that some "lone individual" who found a way to break ECDSA or RSA would be able to monetize it by selling it to some government.
Yes. Start what's known as "a company".
A method to efficiently factor large numbers will also break ECDSA.
No, ECDSA relies on the hardness of the discrete logarithm problem. Nothing to do with factoring, at least not in the classical sense.
On a quantum computer, my understanding is that Shor's algorithm could potentially target both problems, though.
Both systems are an example of a hidden Abelian subgroup problem. That is also why Shor's algorithm equally applies to both: https://en.m.wikipedia.org/wiki/Shor%27s_algorithm#Shor's_al...
So a hypothetical classical algorithm that breaks RSA is also highly likely to break ECDSA.
If a government knows you have such information they’ll take it not buy it.
So your best bet would probably be to try to sell as many BTC as possible then give away the solution for free to your/a government.
> If a government knows you have such information they’ll take it not buy it.
They would probably kill you so you couldn't tell others.
If a government can break crypto, that's worth more than money. Especially if it can remain peerless and undetected.
Well, this discussion is about prime number factorisation, and bitcoins use elliptic curve...
True, we can never know what state actors know that we don't, and my cryptography professor at university taught us that the NSA likely has a 20-year mathematical lead over the academic crypto community.
That being said, NFS is almost thirty years old so maybe the NSA doesn't have anything better still.
Fiction, but interesting nonetheless: the 1992 film "Sneakers", with Robert Redford among others. The MacGuffin is a mathematician who discovers another way to factor large numbers.
It recently occurred to me that now would be the best time ever for state actors to build out massive data centers without anyone noticing.
I could see reasons for them to build datacenters for AI or for collecting encrypted messages to decrypt later, but not for brute-force attacks on encrypted messages.
I've seen pretty credible evidence that factoring large semiprime numbers is effectively a solved problem, even without considering quantum computing or gigawatt-scale computing. I'm not able to share specifics, but I would personally not trust RSA.
People who have seen this evidence don’t go around on the internet bragging they’ve seen this evidence.
I wonder when quantum computers will be able to target post-quantum RSA [1]. Normal RSA operations (key generation, encryption, decryption) have an asymptotic advantage over Shor's algorithm, so it is not unreasonable to just use large enough keys. The advantage is similar to Merkle's puzzles [2], with the added bonus that the attacker also needs to run their attack on a quantum computer.
A while ago I generated a gigabit RSA public key. It is available at [3]. From what I remember, the format is: 4-byte little-endian key size in bytes, then the little-endian key, then the little-endian inverse of the key mod 256**bytes. The public exponent is 3. (A parsing sketch follows after the links.)
[1] https://eprint.iacr.org/2017/351.pdf
[2] https://dl.acm.org/doi/pdf/10.1145/359460.359473
[3] https://hristo.venev.name/pqrsa.pub
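In case anyone wants to poke at it, here is a minimal Python sketch of a parser for the layout described above (4-byte little-endian length, little-endian key, little-endian inverse). The function name and the sanity check are mine; the format is only as the parent remembers it:

    def parse_pqrsa_pub(path):
        # Sketch of a parser for the layout described above, not a spec:
        # 4-byte little-endian length in bytes, then the little-endian modulus,
        # then the little-endian inverse of the modulus mod 256**length.
        with open(path, "rb") as f:
            data = f.read()
        n_bytes = int.from_bytes(data[:4], "little")
        modulus = int.from_bytes(data[4:4 + n_bytes], "little")
        inverse = int.from_bytes(data[4 + n_bytes:4 + 2 * n_bytes], "little")
        # Sanity check: modulus * inverse should be 1 mod 256**n_bytes.
        assert (modulus * inverse) % (1 << (8 * n_bytes)) == 1
        return modulus, inverse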
Post-Quantum RSA is clearly a joke from djb, to have a solid reply when people ask "can't we just use bigger keys?". It has a 1-terabyte RSA key taking 100 hours to perform a single encryption. And by design it should be beyond the reach of quantum computers.
I enjoyed the typo in "error corection" (sic).
Quantum mechanics is "true" insofar as it's useful for some purpose. Until then it's a theory and the jury is still out.
Randomness is something which I feel is a pretty weird phenomenon. I am definitely one of those 'God doesn't play with dice' types.
Randomness is also a label we apply to things that are only random from a subjective perspective. If we knew more about a system, the randomness would just fall away. E.g. if we knew the exact physical properties of a dice roll, we could probably predict it better than chance.
What if quantum mechanics is similar? I.e. that what we think of as randomness isn't really randomness but only appears that way to the best of what we can observe. If this is the case, and if our algorithms rely on some sort of genuine randomness inherent in the universe, then doesn't that suggest there's a problem? Perhaps part of the errors we see in quantum mechanics arise from something fundamental about the universe being different from our model.
I don't think this is that far-fetched, given the large holes our current understanding of physics has when it comes to predicting the universe. It just seems that in the realm of quantum mechanics this supposedly isn't the case, because experiments have verified things. However, I think there really is something to the proof being in the pudding (provide a practical use case).
This has been considered, see https://en.m.wikipedia.org/wiki/Hidden-variable_theory
Most people don’t want to go down this hole. In the end, it means we might not have free will if there isn’t any randomness…
But if you are up for an existential crisis, just google “hidden variable theories”
The 'no randomness = no free will' argument is pretty common, but how does randomness ensure free will? It's still something out of our control, just it can't be predicted. Why is it any better to be a random automaton than a predictable one?
Turns out we only have "free as in free beer" will.
Sorry, I was destined to make this joke before I was even born.
Yes, Schrödinger’s equation is entirely deterministic. There is no randomness inherent in quantum mechanics. The perceived randomness only arises when we have incomplete information. (It is another matter that quantum mechanics to some extent forbids us from having perfect information.)
I mean no disrespect, but I don’t think it’s a particularly useful activity to speculate on physics if you don’t know the basic equations.
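For reference (my phrasing, not the parent's): the determinism claim is just the statement that, for a time-independent Hamiltonian H, Schrödinger evolution is unitary, so the state at time t is completely fixed by the state at time 0:

    i\hbar\,\frac{\partial}{\partial t}\,\psi(t) = H\,\psi(t)
    \quad\Longrightarrow\quad
    \psi(t) = e^{-iHt/\hbar}\,\psi(0)

Probabilities only enter through the Born rule when you ask what a measurement returns; whether that is genuine randomness or incomplete information is exactly the interpretational debate in this subthread.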
> I think a more plausible amount of optimization would produce a circuit with 500x the cost of the factoring-15 circuit
I don't get this part
If the author already produced "115x", how can optimizations make it worse?
The point was that the amount of optimization work that went into making the circuit for factoring 21 only ~100x larger than the one for factoring 15 is not feasible for larger numbers. So, the general rule should be that you need ~500x more gates for a similar increase in the number to be factored, not just ~100x.
The more plausible amount of optimization is less optimization. Or, more accurately, the benefit of optimization at large sizes is expected to be smaller than it was for the N=21 circuit.
I think the idea is the minimal implementation will be unstable and unreliable. I don’t know the details, but there’s much work and thought on quantum error correcting qubits - where you hook up N qubits in a clever way to function as one very stable qubit. Terms such as “decoherence time” make appearances. You can imagine this quickly expands into an awful lot of qubits.
The 115x was obtained by doing a (less plausible) large amount of optimization...
Does this essentially mean that the Big-O "circuitry requirements" of factoring integers using Shor's algorithm are super-polynomial?
The article explains the gate cost mostly comes from O(n) multiply-under-mod ops on O(n)-bit numbers. Each op could be naively spelled out as O(n^2) gates. So the whole thing looks cubic to me at worst.
Obligatory reference: https://scottlocklin.wordpress.com/2019/01/15/quantum-comput...
Obligatory how? It's sort of funny but comes across as uninformed, over the top, and lacking substance.
It’s worth noting that the reason we are deploying PQ crypto is not that we are 100% convinced QC is coming soon. It may or may not arrive, depending on how development goes.
The goal of cryptography is to make something as close to theoretically unbreakable as possible. That means even theoretical vulnerabilities are taken seriously.
For ECC and RSA and related algorithms we have a theoretical and physically plausible pathway toward a practical machine that could break them. That means many cryptographers consider them theoretically broken even if such a machine does not exist and may not exist for a long time. The math works even if we can’t build it yet.
So it’s considered prudent to go ahead and upgrade now while no QC exists. That way if some major advance does arrive we are ready.
Nobody’s talking seriously about replacing SHA2, AES, ChaCha, etc because there is no physically plausible theoretically valid path to a machine that can break these in, say, less than many millions of years. AFAIK there is no proof that such a path does not exist but nobody has found one, hence they are considered unbroken.
Note that cryptography is not the only or even the most useful application of QC. Things like physical simulation of quantum systems, protein folding, machine learning, etc. could be more useful. Like digital computers, there’s probably a ton of uses we don’t know about because we need to tinker with the machine to figure them out.
> Things like physical simulation of quantum systems, protein folding, machine learning, etc. could be more useful
is there still more to do in protein folding after AlphaFold?
https://www.isomorphiclabs.com/articles/alphafold-3-predicts...
The predictions don't tell us anything about why the answer is what it is. There is probably important (useful) fundamental scientific knowledge in being able to know that vs. just being able to predict the result.
There’s a difference between good AI predictions and theoretically perfect QC computations. The AI estimates while the QC will give you the answer, full stop. The latter could be relied upon more strongly. It could also generate infinite training data to make much better models.
QC might be directly applicable to AI training too. It may be possible to compute the optimal model over a data set in linear time. It could allow training that is faster and consumes a tiny fraction of the energy current brute force methods need.
There have in fact been some results on quantum speedups for machine learning: https://www.quantamagazine.org/ai-gets-a-quantum-computing-s...
I would expect this to become relevant later than crypto, though, because you need larger data sizes for things to get interesting.
Is there any known quantum exponential speedup for gradient descent?
For the symmetric cryptography (so obviously AES and ChaCha, but also in effect the SHA-2 family) we can hand wave the quantum attacks as halving key length by enabling a sort of meet-in-the-middle attack (this attack is why it was 3DES not 2DES when they strengthened DES). There's a lot of hand waving involved. Your real Quantum Computer won't in fact be equivalent cost to the non-quantum computer, or as fast, or as small, the speed-ups aren't quite halving, and so on. But it's enough to say OK, what if AES-128 was as weak as a hypothetical AES-64, and that's fine because we have AES-256 anyway.
However, the main focus is on Key Exchange. Why? Well, Key Exchange is the clever bit where we don't say our secrets out loud. Using a KEX two parties Alice and Bob agree a secret but neither of them utters it. Break that and you can learn the secret, which was used to encrypt everything else - for any conversation, including conversations which you recorded any time in the past, such as today.
If future bad guys did have a Quantum Computer the Key Exchange lets them read existing conversations they've tapped but today can't read, whereas breaking say the signing algorithm wouldn't let them somehow go back in time and sign things now because that's not how time works. So that's why the focus on KEX. Once such a thing exists or clearly is soon to deliver it's important to solve a lot of other problems such as signing, but for KEX that's already too late.
> Third, notice that the only remaining multiplication is a multiplication by 4. Because 15 is one less than a power of 2, multiplying by 2 modulo 15 can be implemented using a circular shift. A multiplication by 4 is just two multiplications by 2, so it can also be implemented by a circular shift. This is a very rare property for a modular multiplication to have, and here it reduces what should be an expensive operation into a pair of conditional swaps.
> Aside: multiplication by 16 mod 21 is the inverse of multiplying by 4 mod 21, and the circuits are reversible, so multiplying by 16 uses the same number of Toffolis as multiplying by 4.
I couldn't really find anything explaining the significance of this. The only info I found said that "4 mod 21 = 4" (but I don't know if it was AI slop or not).
Is "multiplying by 4 mod 21" something distinct to quantum computing?
It is phrasing mathematicians sometimes use. In that sentence, 'mod 21' does not indicate an operator but denotes what number system you are in.
For instance the following are equivalent:
2 = 6 mod 4
6 = 2 mod 4
This 'mod 4' can also appear in parentheses or in some other way, but it must appear at the end. Like I said it is not an operator rather it denotes that the entire preceding statement takes place in the appropriate quotient space.
So it is not (multiplying by (4 mod 21)) but ((multiplying by 4) mod 21)
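A quick Python check of this reading, and of the article's aside quoted above that multiplying by 16 mod 21 undoes multiplying by 4 mod 21 (16 is the modular inverse of 4):

    # "Multiplying by 4 mod 21" means: multiply by 4, working in arithmetic modulo 21.
    x = 13
    y = (x * 4) % 21
    assert (y * 16) % 21 == x      # 16 undoes 4, since 4 * 16 = 64 = 1 (mod 21)
    assert pow(4, -1, 21) == 16    # Python 3.8+: modular inverse of 4 mod 21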
See https://en.wikipedia.org/wiki/Modular_arithmetic. It means that the multiplication is performed modulo 21.
Thank you!
Fractions under any modulus are actually representable as whole numbers (provided they don’t share factors with the modulus).
For example, under mod 21 a half can actually be represented by 11. Try it. Multiply any even number by 11 and you’ll see you’ve halved it.
Take any number that’s a multiple of 4 and multiply it by 16 under mod 21. You now have that number divided by 4.
Etc.
Absolutely nothing to do with quantum computers.
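A small sketch that checks both claims numerically (entirely classical, as the parent says):

    # 11 acts as "one half" mod 21: 2 * 11 = 22 = 1 (mod 21).
    for x in range(0, 42, 2):
        assert (x * 11) % 21 == (x // 2) % 21

    # 16 acts as "one quarter" mod 21: 4 * 16 = 64 = 1 (mod 21).
    for x in range(0, 84, 4):
        assert (x * 16) % 21 == (x // 4) % 21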
I mean 4 is equivalent to 4 mod 21. That part's not AI slop.
I think, in fact, for every prime number p at least 5, 4 mod p is (congruent to) 4.
Even without the restriction to primes, that feels like a pretty good guess!
Or for less than 5.
lol good point.
And these are the same quantum computers that will eventually break elliptic curve cryptography? Now I’m very confused.
If we can build a machine with enough coherent qubits, then it'll be able to break ECC.
As it turns out, that's a big if, but the bigness of the if is about hardware implementation. The theory behind it is just basic quantum mechanics
Article: it takes 2405 entangling gates to factor the number 21.
In many applications you are concerned about data you encrypt today still being secure 20 years from now. Especially if your threat model includes a malicious actor holding on to data in hopes of decrypting it later.
From the article it sounds like we will still be safe for 20+ years. On the other hand 15 was just extraordinarily easy, progress after 21 will be much quicker. And we never know which breakthroughs might come in the next decades that speed up progress.
"progress after 21 will be much quicker"
Can you provide a quick verification for that?
The article lists three reasons why 21 is so much harder than 15. Mostly that with 15 most of the required conditional multiplications are multiplications by 1 and the remaining multiplication can be done with cheap shifts because it's a multiplication by a power of 2 modulo 15 (which is one less than a power of two).
But 22 and 24 are in the same boat as 21 here. All three of them require conditional multiplications by factors that are not one, and all three are not one less than a power of 2. You need slightly more multiplications (and thus more gates) as the numbers get larger, but that only grows linearly. Maybe the conditional multiplications required get slightly more expensive to implement, but I wouldn't expect a 100x cost blowup from that. Error correction is still an issue, potentially making a linear complexity increase quadratic, but qubit counts in quantum computers also increase at an exponential rate. (A sketch of the mod-15 shift trick is below.)
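To see why "one less than a power of 2" buys so much, here is a tiny sketch (function name mine) showing that multiplying by 2 mod 15 on a 4-bit register is just a circular bit shift, because the wrapped-around bit has weight 16 = 1 (mod 15):

    def mul2_mod15_rotate(x):
        # Left-rotate a 4-bit value: the bit shifted out of position 3 (weight 16)
        # wraps around to position 0, which is correct because 16 = 1 (mod 15).
        return ((x << 1) | (x >> 3)) & 0b1111

    for x in range(15):
        assert mul2_mod15_rotate(x) == (2 * x) % 15

On a quantum computer that rotation is just a relabeling of qubits (conditional swaps when it has to be controlled), which is why the factoring-15 circuit is so cheap; for a generic modulus like 21 you have to build an actual modular multiplier.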
24 has 3 as a factor. 3 is one less than a power of 2.
That's a bit off, but more importantly using information about the answer to make the circuit cheaper is cheating.
Well, n=21 has 3 as a factor too, but the solution for n=15 used the fact that 15 itself is one less than a power of 2, not its divisors, because we are working modulo n.
Thanks. I don't understand quantum computing at all.
23 and 29 are prime.
That's what I get for not enough coffee. Same holds for 22 and 24 though
I think the idea is that it doesn’t matter if it takes billions of gates to achieve, the point is that it can do it very fast. If we thought a table sized FPGA could do it, a state actor would most definitely build one.
theoretically
The practical problem is that ‘noise’ between gates seems to increase exponentially, so practically it may actually be impossible to construct anything with more than a handful of gates for the foreseeable (possibly indefinite?) future.
It’s essentially the crypto version of Fusion.
Quantum error correction addresses this problem and we now have tech demos showing that we can build scalable QCs with surface codes [0].
[0] https://scottaaronson.blog/?p=8525
This, like all other purported advancements in quantum error correction, is a meme with zero practical impact.
any sources? I'd be interested to read a critique of quantum error correction
Cool, so we should totally be able to factor 21 (or larger numbers)…. When?
You brought up gate noise, there has been quite a bit of progress on that problem.
> so we should totally be able to factor 21 (or larger numbers)…. When?
Just because we solve one problem doesn't imply all the problems in QC are also instantly solved. I guess it does if you assume noise is the only problem and once it is solved the engineering is trivial. That is not the case. Even assuming all foundational problems have been solved, figuring out how to actually engineer, and also mass produce, large numbers of gates will take a while.
As the article pointed out, going from 15 to 21 requires a 100x increase in gates.
As the article that you posted under says:
"Because of the large cost of quantum factoring numbers (that aren’t 15), factoring isn’t yet a good benchmark for tracking the progress of quantum computers. If you want to stay abreast of progress in quantum computing, you should be paying attention to the arrival quantum error correction (such as surface codes getting more reliable as their size is increased) and to architectures solving core scaling challenges (such as lost neutral atoms being continuously replaced)."
Hasn't classical computing already severely crippled ECC because of some mathematical assumptions that somebody proved wrong back in 2022?
I believe you are thinking of "Supersingular Isogeny Diffie–Hellman key exchange" or SIKE, which is a post-quantum key exchange that was spectacularly broken a couple of years ago. The math involves elliptic curves, but it's different from the elliptic curve cryptography used in your browser.
Which assumptions? ECDLP is still considered computationally hard, and ECC considered secure. There are invalid curve attacks and small subgroup attacks but that's a problem with key selection, not a fundamental problem with ECC.
Do you have a citation?
Could you link to any more information about this?
Not in general, no. It is still secure and used. There are of course attacks, but those do not break it completely.
This is the reality a million clickbait fearmongering articles won’t tell you. And pr machines for quantum computing companies won’t either.
"Is this what you can conjure Saruman?"
I have a strong belief that new mathematical tools and methods can be developed that can make it "easy" to break a lot of modern cryptography primitives without ever using a quantum computer.
The potential is there, we haven't made it yet. It's the same with AI, AGI and all that. Imagine if you'd read a response from GPT-2 back in 2019, you'd also be like "and these are the same models that will eventually give us AGI".
Not a great analogy, since there’s zero chance the kinds of model involved in GPT-2 will give us AGI.
Zero? Aren't you a little bit overconfident on that?
Transformer LLMs already gave us the most general AI as of yet, by far - and they keep getting developed further, with a number of recent breakthroughs and milestones.
Imagine if you'd read a response from GPT-5 in 2025, you'd also be like "and these are the same models that will eventually give us AGI".
My belief in achieving actual quantum computing goes down as noise in qubits goes up.
Digital computers were much easier than that. Make it smaller, make more of them, and you're set.
Quantum computer complexity goes up as ~n^2 (or possibly ~e^n), where n is the number of qubits.
At the same time, things like D-Wave might be the most 'quantum' we get in the practical sense.
It turns out that error correction was easy on digital computers, and was essentially a solved problem early in their development. In fact, "noise immunity" is arguably the defining feature of a digital system. And error correction can happen at each gate, since there's no reason to propagate an indeterminate number.
Except quantum error correction algorithms that are good don’t exist and probably theoretically never can exist: https://spectrum.ieee.org/the-case-against-quantum-computing
The current best one- and two-qubit gate errors are well below 0.01% and going down with every generation of chips. See: https://ianreppel.org/quantum.html
There are no theoretical reasons QEC can't exist. In fact it already does. Is it already good enough for universal fault tolerance? No. But then again no one said it would. We are slowly getting closer every year.
In his book, Dyakonov offers zero solid reasons other than "it's hard" and thus likely not possible. That's just an opinion.
I took a QC course, and have done some reading, but am hardly an expert. But my impression has been: "This is analog computation." To reinforce the similarity, the error level of analog computers can be improved by running many of them in parallel.
What? I thought quantum computing was going to be factoring billion digit prime numbers in 5 years?
Sounds like a you problem
> Why haven't quantum computers factored 21 yet?
They tried. But because they know exactly the question, they cannot give a precise answer. /s
That's ok as long as the VCs don't know the answer either /s