12 comments

  • amluto 3 minutes ago

    On a quick read of the paper, it's incoherent (pun intended). It seems to conflate quantum states with classical vectors, which thoroughly loses both the source of the exponential speedup in Shor's algorithm and the difficulty of quantum algorithm design.

    The paper doesn't actually give a clear description of its own algorithm, and there are two specific problems that are apparent even without much of a description:

    1. It confuses quantum state vectors with classical vectors or vectors of values. Classically, or on a quantum computer, you can have n values stored in registers or in memory or on a piece of paper or whatever and you have an n-element vector. But on a quantum computer, if you have n qubits and write down their state, you have a 2^n-element vector of complex numbers. These are not the same thing.

    So you can have the quantum Fourier transform, which Fourier transforms the coefficients in the state vector of n qubits, which is not at all the same thing as taking 2^n logical numbers and Fourier transforming the numbers.
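    A toy illustration of that distinction (just numpy; the QFT is built here as an explicit 2^n x 2^n matrix, which is only feasible for tiny n -- purely to show that it acts on the 2^n amplitudes, not on n stored numbers):

    ```python
    import numpy as np

    n = 3                      # number of qubits
    N = 2 ** n                 # the state vector has 2^n amplitudes, not n

    # QFT as an N x N unitary: F[j, k] = omega^(j*k) / sqrt(N)
    omega = np.exp(2j * np.pi / N)
    F = np.array([[omega ** (j * k) for k in range(N)]
                  for j in range(N)]) / np.sqrt(N)

    # A random normalized 3-qubit state: 8 complex amplitudes
    rng = np.random.default_rng(0)
    psi = rng.normal(size=N) + 1j * rng.normal(size=N)
    psi /= np.linalg.norm(psi)

    qft_psi = F @ psi          # QFT of the *coefficients* of the state
    # ...which, up to normalization and sign convention, is a DFT of the
    # 2^n-element coefficient vector:
    assert np.allclose(qft_psi, np.fft.ifft(psi) * np.sqrt(N))
    assert np.isclose(np.linalg.norm(qft_psi), 1.0)   # still a valid state
    ```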

    But this paper very glibly discusses how the QNTT (Quantum Number Theoretic Transform) is nicer than the QFT. As far as I can tell, though, the "QNTT" is described in one single paper, doesn't really have much to say for itself, and is actually just an algorithm, supposedly optimized to run on current quantum hardware, that transforms n numbers stored in n registers. (And if a paper wants to claim to number-theoretic-transform the coefficients of a quantum state vector, it should start by explaining how the coefficients of said state vector are to be viewed as elements of a finite ring or field, which these papers do not even pretend to do.)

    I think they're using the QNTT to optimize modular exponentiation, which is at least vaguely plausible, but that's using the QNTT for a purpose completely unrelated to what Shor's algorithm uses the QFT for.

    2. The replacement of quantum modular exponentiation with classical modular exponentiation is just weird and is completely missing an explanation. Modular exponentiation is just a classical function, like f(r) = 2^r mod p. You can make it reversible (where all operations have inverses) by instead doing something like (z, r) -> (z + 2^r mod p, r) -- if you start with z = 0, you get the answer, and if you start with z != 0, you get z added to the answer.
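    A quick sanity check of that construction (tiny made-up modulus, purely illustrative):

    ```python
    # Check that (z, r) -> ((z + 2^r) mod p, r) is reversible (a bijection).
    # On a quantum computer the same reversible map is applied to qubit registers.
    p = 13

    def forward(z, r):
        return ((z + pow(2, r, p)) % p, r)

    def inverse(z_out, r):
        return ((z_out - pow(2, r, p)) % p, r)

    pairs = [(z, r) for z in range(p) for r in range(p)]
    images = [forward(z, r) for z, r in pairs]
    assert len(set(images)) == len(pairs)    # distinct outputs: a bijection
    assert all(inverse(*forward(z, r)) == (z, r) for z, r in pairs)
    # Starting from z = 0 reads out 2^r mod p directly:
    assert forward(0, 5)[0] == pow(2, 5, p)
    ```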

    Quantum computers can evaluate quantum functions where the input is qubits instead of classical bits, and they do it by running reversible calculations as above, and many algorithms require doing exactly this while carefully avoiding entangling the inputs or outputs with anything else. So if you start with two quantum registers, you can write the state as a complex number times each possible input state (all 2^b of them, where b is the total number of bits in all the registers), and you get those same complex numbers times the output states. [0]
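    A toy simulation of that picture (numpy; registers far too small to mean anything, just to show the amplitudes getting carried along when a reversible function runs on a superposition):

    ```python
    import numpy as np

    # Applying a reversible classical function to a superposition just permutes
    # which basis state each complex amplitude sits in front of.
    p, r_max = 5, 8                      # tiny illustrative registers

    def f(z, r):                         # reversible: (z, r) -> ((z + 2^r) mod p, r)
        return ((z + pow(2, r, p)) % p, r)

    # State = one complex amplitude per classical basis state (z, r)
    rng = np.random.default_rng(1)
    basis = [(z, r) for z in range(p) for r in range(r_max)]
    amps = rng.normal(size=len(basis)) + 1j * rng.normal(size=len(basis))
    amps /= np.linalg.norm(amps)
    state_in = dict(zip(basis, amps))

    # "Run" the circuit: each basis state moves to its unique image under f,
    # keeping its amplitude
    state_out = {f(z, r): a for (z, r), a in state_in.items()}

    assert len(state_out) == len(state_in)    # no collisions: f is a bijection
    assert np.isclose(np.linalg.norm(list(state_out.values())), 1.0)
    ```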

    The paper claims, with no explanation that I can see, that somehow you can instead do the modular exponentiation on a regular computer and encode those exponents into the quantum circuit. If you are willing to do all 2^b of them, then fine [1], but remember, b is larger than 2048, and this isn't going to work. So maybe they're approximating the modular exponentiation by somehow extrapolating it from a very, very sparse set of samples? If that works, that would be quite nifty, but again the paper doesn't appear to so much as acknowledge any complication here. On the other hand, I can easily imagine factoring a number like 15 this way, since the number of samples needed to completely capture the function is rather small.
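    For what it's worth, here's the N = 15 case done entirely classically (standard Shor post-processing, nothing from the paper): the period of 2^r mod 15 is tiny, so a handful of samples captures the whole function, which is exactly why a toy factorization works here but says nothing about 2048-bit moduli.

    ```python
    from math import gcd

    # For N = 15, f(r) = 2^r mod N repeats after only a few samples.
    N, a = 15, 2
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    period = r                           # smallest r with a^r = 1 (mod N)
    assert period == 4                   # 2, 4, 8, 1, 2, 4, 8, 1, ...

    # Shor's classical post-processing: gcd(a^(r/2) +/- 1, N) gives the factors.
    f1 = gcd(pow(a, period // 2) - 1, N)
    f2 = gcd(pow(a, period // 2) + 1, N)
    assert sorted([f1, f2]) == [3, 5] and f1 * f2 == N
    ```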

    (I hope I did an okay job of making this both correct and somewhat accessible.)

    [0] The calculation is reversible: each input state maps to exactly one output state and vice versa, so each coefficient appears in front of a different logical output state, which makes the math work.

    [1] Not fine, because the resulting circuit will be so large that you will never finish running it. But mathematically fine in the sense that you'll get the right answer.

  • adrian_b an hour ago

    A skeptical assessment:

    https://postquantum.com/security-pqc/cybersecurity-apocalyps...

    which concludes that the chances of this prediction being true are very slim.

  • emil-lp 3 hours ago

    "could" is carrying a lot of weight here.

  • djoldman 4 hours ago

    Paper pdf:

    https://www.preprints.org/manuscript/202510.1649

    As usual with anything quantum computer: I'll wait for Scott Aaronson's take.

    • lucasfin000 3 hours ago

      Even if the algorithm holds up, we are still years out from an actual quantum computer that can run at scale. But that's kind of the point. NIST finalized ML-KEM in 2024 because you don't need to wait until the house is on fire to buy insurance. Harvest-now, decrypt-later attacks are already happening today, so the migration window is closing regardless of whether quantum ever delivers.

  • ineiti 3 hours ago

    https://postquantum.com/security-pqc/cybersecurity-apocalyps...

    TLDR: No

    Longer explanation: the guy behind a company selling PQ algorithms, from a new "University" that sprang up 4 months ago, says they're better than everybody else and writes a paper about it.

  • iberator 3 hours ago

    Quantum computing is a scam. So far there are zero working computers that have actually been proven to do things faster and correctly. Scientists are using some weak, carefully tuned algorithms to fool the investors.

    I remember an article saying that even Google's lab could not reach a final verdict on whether it worked or not. I forget the computer's name.

    • umvi 3 hours ago

      As a complete layman, quantum computing seems like it could be like AI. AI for the longest time was a scam. Like, it was clearly improving, but only in marginal increments. The bar was so low though... even the state of the art was garbage - https://en.wikipedia.org/wiki/Tay_(chatbot), for example. Eventually it was starting to seem like narrow applications of CNNs/machine learning would be the future of AI, but that general purpose AI would be garbage forever. It took the attention/transformer breakthrough (and someone to realize how to use it) before we hit the explosive improvement in general purpose AI that we see today. Quantum computing could still be in the "Tay" phase right now.

      • integralid an hour ago

        >AI for the longest time was a scam. Like, it was clearly improving, but only in marginal increments.

        AI was improving a lot, we just kept moving the goalposts. AI is not just chatbots. But nitpicking aside, I get what you're saying and I agree.

  • metalliqaz 3 hours ago

    oh merely 5k qubits?

    • adrian_b an hour ago

      Many more physical qubits, even if that extrapolation were true, which is doubtful.

  • mrbluecoat 3 hours ago

    Reminds me of a recent article.. title was something like "Cloudflare currently breaks RSA-2048 encryption with MitM certs." /s