The case against Google's claims of "quantum supremacy"

(gilkalai.wordpress.com)

158 points | by nsoonhui 19 hours ago

99 comments

  • GilKalai 13 hours ago

    Hi everybody, my post summarizes an ongoing 5-year research project and four papers about the 2019 Google experiment. The timing of the post was indeed related to Google's Willow announcement and the fantastic septillion assertion. It is not clear why Google added, to the announcement of nice published results about quantum error correction, a hyped undocumented fantastic claim. I think that our work on Google's 2019 experiment provides useful information for evaluating Google's scientific conduct.

    • amirhirsch 8 hours ago

      Welcome to Hacker News, Gil! I’m a big fan of your work in complexity theory and have thought long and hard about the entropy-influence conjecture, revisiting it again recently after Hao Huang’s marvelous proof of the sensitivity conjecture.

      To answer your question on why the hyped fantastic claim, as you must know, the people who provide the funds for quantum computing research almost certainly do not understand the research they are funding, and need as feedback a steady stream of fantastic “breakthroughs” to justify writing the checks.

      This has made QC research ripe for applied physicists who are skilled in the art of bullshitting about Hilbert spaces. While I don’t doubt the integrity of a plurality of the scientists involved, I can say with certainty that approximately all of the people working on quantum computing research would not take me up on my bet of $2048 that RSA-2048 will not be factored by 2048, and would happily accept $204,800,000 to make arrays of quantum-related artifacts. Investors require breakthroughs or the physicists will lose their budget for liquid gases, which certainly exceeds $2048.

      While there might be interesting science discovered along the way, I think of QC a little like alchemy: the promise of unlimited gold attracted both bullshitters and serious physicists (Newton included) for centuries, but it eventually emerged from physical law that turning lead into gold does not scale. Similarly, it would be useful to determine the scaling laws for quantum computers. How big an RSA key would be needed before even a QC requires more particles than the universe contains to factor it in reasonable time? Is 2048 good enough that we can shelve all the peripheral number-theory research in post-quantum cryptography? Let’s not forget the mathematicians too!
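
      For scale, standard textbook estimates (assumptions here, not anything from Google's papers) put Shor's algorithm at roughly 2n+3 logical qubits and on the order of n^3 gates for an n-bit modulus, so the resources grow polynomially rather than blowing up with key size; the real question is the error-correction overhead, not the key length. A back-of-envelope sketch:

        # Hedged, textbook-level resource estimates for Shor's algorithm on an n-bit
        # RSA modulus. The ~2n+3 logical-qubit figure and ~n^3 gate count are standard
        # approximations; the 1,000x physical-per-logical overhead is a pure assumption.
        def shor_estimate(n_bits, phys_per_logical=1000):
            logical_qubits = 2 * n_bits + 3      # exponentiation register + workspace
            gate_count = n_bits ** 3             # leading-order gate scaling
            return logical_qubits, gate_count, logical_qubits * phys_per_logical

        for n in (2048, 4096, 1 << 20):
            lq, gates, pq = shor_estimate(n)
            print(f"RSA-{n}: ~{lq:,} logical qubits, ~{gates:.1e} gates, ~{pq:,} physical qubits")

      Even an absurd million-bit key stays many orders of magnitude below the particle count of the universe, so growing keys is not a long-term defense if error correction scales; whether it does is the actual open question.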

      • tmvphil 4 hours ago

        You are too certain. Polls of working quantum researchers put significant credence on breaking RSA within 20 years. https://globalriskinstitute.org/publication/2023-quantum-thr...

        • light_hue_1 2 hours ago

          You should not put any weight on surveys like this.

          I'm an ML/AI researcher. I get similar surveys regularly. I don't reply, and neither do my colleagues. The people who reply are a self-selected group heavily biased toward thinking that AGI will happen soon, or they have a financial interest in creating hype.

          Most of the experts from that report have a direct financial benefit from claiming that this will happen really soon now.

          • dxbydt 13 minutes ago

            > Most of the experts from that report have a direct financial benefit from claiming that this will happen

            Rigetti $RGTI is up 400% this month. 135% this week alone. He’s right.

      • sgt101 7 hours ago

        I think Shor's scales linearly, if it's possible to do the fine-grained control of the most significant bits. Some people don't think that's a problem, but if it is, then growing keys will be an effective defense.

        • amirhirsch 7 hours ago

          The argument is that error correction doesn’t scale.

          • sgt101 6 hours ago

            There's a specific view (as I understand it) that QFTs don't scale https://arxiv.org/abs/2306.10072 but some folks seem to dismiss this for reasons I just don't grok at all.

          • adgjlsfhk1 6 hours ago

            that's a tough argument given that there are already known algorithms to scale it. it's possible QM is just broken, but if it's not, it's hard to see how error correction wouldn't work

            • amirhirsch 6 hours ago

              But we're literally having this discussion with Gil Kalai in the next room.

    • EvgeniyZh 2 hours ago

      In 2019 you asserted [1] that attempts at creating a distance-5 surface code would fail. Do you think you were wrong? If so, what was your mistake and why do you think you made it? If not, what's the problem with Google's results? Have your estimates of the feasibility of quantum computers changed in light of this publication?

      [1] https://arxiv.org/abs/1908.02499

    • WhitneyLand 9 hours ago

      Would be interested to hear your response to Scott Aaronson’s comment:

      “Gil’s problem is that the 2019 experiment was long ago superseded anyway: besides the new and more inarguable Google result, IBM, Quantinuum, QuEra, and USTC have now all also reported Random Circuit Sampling experiments with good results.”

      • GilKalai 4 hours ago

        I think I responded on Scott's blog, but I can respond again, perhaps from a different angle. I think that it is important to scrutinize one (major) experiment at a time.

        We studied the Google 2019 claims; along the way we also developed tools that can be applied to further work, and we identified methodological problems that could be relevant in other cases (or, better, could be avoided in newer experiments). Of course, other researchers can study other papers.

        I don't see in what sense the new results by Google, Quantinuum, QuEra, and USTC are more inarguable, and I don't know which IBM experiment Scott refers to. I also don't see why it matters for our study.

        Actually, in our fourth paper there is a section about quantum circuit experiments that deserve to be scrutinized (which can now be supplemented with a few more), and I think we address all the examples given by Scott (except IBM) and more. (Correction: we mention IBM's 127-qubit experiment, I forgot.)

      • supernewton 6 hours ago

        You know he's been responding directly on Scott Aaronson's blog, right?

        • VirusNewbie 5 hours ago

          he stopped responding when Scott told him how his prediction was wrong.

        • fluoridation 6 hours ago

          You know people sometimes don't know things, right?

    • sampo 6 hours ago

      > It is not clear why Google added [...] a hyped undocumented fantastic claim.

      I think it's clear.

    • noqc 7 hours ago

      I have a silly question, and I'm going to shamelessly use HN to ask it.

      In Kitaev's construction of the high-purity approximation to a magic state, he starts with the assumption that we begin with a state which can be represented as the tensor product of n mixed states which are "close enough". I don't understand where this separability property comes from. My (very) naive assumption would be that there is some big joint state of which you have a piece, and the information I have about this piece is n of its partial traces, which are indeed n copies of the "poor man's" magic state.

      Can I know more than that? There's lots of stuff in the preimage of these partial traces. Why am I allowed to assert that I have the nicest one?

      • ziofill 4 hours ago

        Can it be that he assumes you have some device that produces somewhat bad magic states and then you distill them into a better one? That would be the typical situation in practice.

    • sgt101 7 hours ago

      Thank you for your work and perspective - it's important that science is carefully reviewed and that doing the review is well regarded.

    • vitus 11 hours ago

      Hi! I am under the impression that you're one of the better-known skeptics of the practicality of QEC. And to my untrained eye, the recent QEC claim is the more interesting one of the two.

      (I am inclined to ignore the claims about quantum supremacy, especially when they're based on random circuit sampling, which, as you pointed out, produced assertions that were orders of magnitude off because nobody cares about this problem classically, so there has not been much research effort put into finding better classical algorithms. And of course, there's a problem with efficient verification, as Aaronson mentions in his recent post.)

      I've seen a few comments of yours where you mentioned that this is indeed a nice result (predicated on the assumption that it's true) [0, 1]. I worry a bit that you're moving the goalposts with this blog post, even as I can't fault any of your skepticism.

      I work at Google, but not anywhere close to quantum computing, and I don't know any of the authors or anyone who works on this. But I'm in a space where I feel impacts of the push for post-quantum crypto (e.g. bloat in TLS handshakes) and have historically pooh-poohed the "store now, decrypt later" threat model that Google has adopted -- I have assumed that any realistic attacks are at a minimum decades away (if they ever come to fruition), and very little (if any) of the user data we process today will be relevant to a nation-state actor in, say, 30 years.

      If I take the Willow announcement at face value (in particular, the QEC claims), should I update my priors? In particular, how much further progress would need to be made for you to abandon your previously-stated skepticism about the general ability of QEC to continue to scale exponentially? I see a mention of one-in-a-thousand error rates on distance-7 codes which seems tantalizingly close to what's claimed by Willow, but I would like to hear your take.

      [0] https://gilkalai.wordpress.com/2024/08/21/five-perspectives-...

      [1] https://quantumcomputing.stackexchange.com/questions/30197/#...
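
      For what "scale exponentially" means concretely: the standard below-threshold picture is that each increase of the code distance by 2 suppresses the logical error rate by a roughly constant factor Λ, and the Willow announcement reports Λ a bit above 2. A hedged sketch of what that projects to, taking the one-in-a-thousand distance-7 figure as the starting point (the Λ value and the scaling law here are assumptions, not verified numbers):

        # Hedged projection under the standard surface-code scaling
        # eps(d) ~ eps(d0) / Lambda**((d - d0) / 2). The starting point and Lambda
        # are taken from public summaries of the Willow announcement, not verified.
        def projected_logical_error(eps_d7, lam, d):
            return eps_d7 / lam ** ((d - 7) / 2)

        eps_d7 = 1e-3   # ~1-in-a-thousand per cycle at distance 7
        lam = 2.0       # assumed error-suppression factor per distance-2 step

        for d in (7, 11, 15, 21, 27):
            print(f"d={d:2d}: ~{projected_logical_error(eps_d7, lam, d):.1e} logical error per cycle")

      Useful algorithms are usually quoted as needing logical error rates around 1e-10 or lower, which under this extrapolation means substantially larger distances and far more physical qubits per logical qubit; whether Λ holds up as systems grow is exactly the point of contention.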

      • marcinzm 8 hours ago

        > very little (if any) of the user data we process today will be relevant to a nation-state actor in, say, 30 years.

        30 year old skeletons in people’s closets can be great blackmail to gain leverage with.

        edit: As I understand it, this is a popular way for state actors to "flip" people: threaten them with blackmail unless they provide confidential information or take certain actions.

        • dboreham 4 hours ago

          I'm confused by this line of thinking. Are there really actors who a) do have the entire internet traffic since forever stored, but b) lack the resources to just go get whatever targets' sensitive data they want, right now?

          • 542354234235 4 hours ago

            The data this actor wants may be in an air gapped secure facility, for example Iran's nuclear facilities. Decrypting old social media messages that show that a scientist at that facility had a homosexual relationship while he was in college in Europe would give you access in a way you didn't have before.

            That is an extreme example but high value information is often stored and secured in ways that are very resistant to theft. Using less secure and/or historical data to gain leverage over those with access to that data is exactly how spies have been doing things for centuries.

      • pera 10 hours ago

        > If I take the Willow announcement at face value (in particular, the QEC claims) [...]

        Considering that Google's 2019 claim of quantum supremacy was, at the very least, severely overestimated (https://doi.org/10.48550/arXiv.1910.09534) I would wait a little bit before making any decisions based on the Willow announcement.

      • giancarlostoro 10 hours ago

        > very little (if any) of the user data we process today will be relevant to a nation-state actor in, say, 30 years.

        The NSA is most likely interested in all data, let's be honest. At a bare minimum, in all foreign actors' data.

    • nickpsecurity 8 hours ago

      I appreciate you doing peer review of their claims. I have a few questions about the supremacy claim.

      Are there good write-ups on the random circuit sampling problem that would help someone get started implementing it?

      What are the top classical algorithms, esp working implementations, for this problem?

      Have the classical implementations been similarly peer reviewed to assess their performance?

  • gnabgib 18 hours ago

    Strange timing given the claim was in 2019, guess this post was generated because of Willow.

    Google claims to have proved its supremacy with new quantum computer (256 points, 1 year ago, 229 comments) https://news.ycombinator.com/item?id=36567839

    Quantum computers: amazing progress, but probably false supremacy claims (126 points, 5 years ago, 73 comments) https://news.ycombinator.com/item?id=21167368

    Google Achieves Quantum Supremacy. Is Encryption Safe? (38 points, 5 years ago, 21 comments) https://news.ycombinator.com/item?id=21100983

    Google claims to have reached quantum supremacy (114 points, 5 years ago, 21 comments) https://news.ycombinator.com/item?id=21029598

    Google Engineers Think This 72-Qubit Processor Can Achieve Quantum Supremacy (91 points, 7 years ago, 42 comments) https://news.ycombinator.com/item?id=16543876

    Google plans to reach a Quantum Computing milestone before the year is out (147 points, 8 years ago, 49 comments) https://news.ycombinator.com/item?id=14171992

    • sampo 15 hours ago

      > Strange timing given the claim was in 2019

      In 2024 Google used 67 quantum bits to solve the same or a similar random circuit sampling benchmark for which they used 53 bits in 2019. The discussion from 2019 about the relevance (if any) of solving random circuit sampling problems is equally valid today.

      Maybe in 2030 Google or someone else will use 200 or 400 bits to solve an even bigger instance of the random circuit sampling benchmark, and then we get to have this same discussion once again.
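
      For a sense of why the jump from 53 to 67 qubits matters for the classical comparison: a brute-force state-vector simulation stores 2^n amplitudes, so memory alone grows by a factor of 2^14 ≈ 16,000. (The classical algorithms actually competing on these benchmarks are tensor-network contractions that avoid storing the full state, so treat this as the naive bound only.) A quick sketch:

        # Naive state-vector memory bound: 2**n complex128 amplitudes at 16 bytes each.
        # Real classical competitors use tensor-network methods, so this is only the
        # brute-force upper bound, not the cost of the best known classical attack.
        def statevector_bytes(n_qubits, bytes_per_amp=16):
            return (1 << n_qubits) * bytes_per_amp

        for n in (53, 67):
            print(f"{n} qubits: ~{statevector_bytes(n) / 1e15:,.0f} PB for a full state vector")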

    • bawolff 17 hours ago

      They added a paragraph at the end to address the new experiment.

      Still nothing really new here. I'm doubtful anything will convince the author short of Shor's algorithm actually factoring large numbers.

      • thesz 17 hours ago

        https://eprint.iacr.org/2015/1018.pdf

          As pointed out in [57], there has never been a genuine implementation of Shor’s algorithm. The only numbers ever to have been factored by that type of algorithm are 15 and 21, and those factorizations used a simplified version of Shor’s algorithm that requires one to know the factorization in advance.
        • adastra22 16 hours ago

          What’s the point of quoting this?

          • thesz 16 hours ago

            To me it was eye-opening how skewed the presentation of these quantum computing number-factoring advances is.

            It is completely in line with "The Case Against..."

            • rcxdude 12 hours ago

              It feels more like the time to develop the tech has outlived the hype cycle around the tech, perhaps more so than usual. That doesn't mean it isn't going anywhere, just that it's still slow (and in fact, after the hype cycle, the average press release is more likely to be substantive than during it, even if it's still incremental).

              (From my reading/understanding of it, for a while there's been little point in trying to make a quantum computer big enough to do such work, because the individual parts would not work well enough for it to have any chance of success, while this result primarily is showing that they're now at the cusp of the predicted tipping point where the qubits have a low enough error rate that building a larger system out of them has a hope of working. That's the big news in google's recent announcement, not them pushing up the numbers in this somewhat contrived benchmark)

            • adastra22 16 hours ago

              No one is claiming number-factoring advances. That would require much longer-lived qubit entanglement than is currently possible. The tech is clearly advancing, though.

            • bawolff 15 hours ago

              That feels like a straw man. Nobody was claiming otherwise.

          • sampo 15 hours ago

            > What’s the point of quoting this?

            To remind us of the large difference between the public image and the reality of quantum computing.

            Expectation: From the news, people get the feeling that the quantum computing revolution is just around the corner and current cryptosystems will soon become irrelevant. People (here, on Hacker News) are asking "is our society ready for this", "what could I do if I had this chip at home", "existing cryptography technology is in danger", "is it time to 100x key lengths in browsers".

            Reality: Using Shor's algorithm to factor 15 = 5×3 is still far beyond the reach of current quantum computers. We can factor 15 = 5×3 and even 21 = 7×3 if we cheat by eliminating those branches of the calculation that are not on the happy path to the correct answer.
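
            To spell out what the "cheating" skips: Shor's algorithm reduces factoring N to finding the period r of a^x mod N, and the quantum order-finding step is the only hard part; the compiled demos pick a and effectively bake the period in. Here is a minimal classical sketch of the surrounding scaffolding for N = 15, with the period found by brute force, which is exactly the step a real quantum computer is supposed to do:

              from math import gcd
              from random import randrange

              # Classical skeleton of Shor's algorithm for tiny N. The brute-force
              # period search is the part the quantum order-finding step is supposed
              # to replace; "compiled" demos effectively hard-code it.
              def find_period(a, N):
                  r, x = 1, a % N
                  while x != 1:
                      x = (x * a) % N
                      r += 1
                  return r

              def shor_classical(N):
                  while True:
                      a = randrange(2, N)
                      if gcd(a, N) != 1:
                          return gcd(a, N), N // gcd(a, N)   # lucky guess shares a factor
                      r = find_period(a, N)                  # the "quantum" step, brute-forced here
                      if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
                          p = gcd(pow(a, r // 2) - 1, N)
                          if 1 < p < N:
                              return p, N // p

              print(shor_classical(15))   # e.g. (3, 5)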

            • adastra22 8 hours ago

              It is just around the corner though! We're not yet able to make qubits long-lived enough to survive the many gate operations necessary for Shor's algorithm. But the issues here are error rates and decoherence time, and these are improving at a geometric rate. We are getting very close to the tipping point enabling practical crypto breaks.

              • fluoridation 7 hours ago

                That reminds me of when galvanic batteries were discovered, and it was thought that the corrosion on the electrodes was something that could be worked on and eliminated. If you don't have a prototype that demonstrates all the properties that you're trying to develop, you have not demonstrated the viability of the technology. It could be that it's possible to achieve those properties with a little more work, or that a device that combines all of those properties simultaneously is simply impossible.

            • jppittma 13 hours ago

              What are the chances that the CIA has a Manhattan Project style quantum computer somewhere? I can’t believe that, if the current SOTA could break commonly used encryption methods, it would be widely known.

              • dekhn 2 hours ago

                The NSA maintains a fleet of oddball devices that give them unique capabilities. Usually these are built by big vendors, but sometimes by smaller companies with a unique feature (IBM Harvest, Connection Machine CM-5, various Cray *MP models). It seems reasonable that they have some small number of quantum computers that a limited number of researchers use to explore new ideas, but it's harder to imagine their production operations using QC for cryptologic or other computing.

                One of their big needs for production operations is having 24/7 support through Christmas break, which favors companies like IBM.

              • GTP 12 hours ago

                I think it's still unlikely. In the case of the Manhattan Project, the US government wasn't racing against private companies but against other governments, so all advances in the field were highly secret. In this case, the CIA would be racing against private companies as well, which are hiring the best physicists and engineers too. So, while it's possible that a governmental agency is ahead of what is publicly known, I doubt that they would be dramatically ahead. This is similar to part of the argument I would present if someone asked me why I believe cryptography in general is secure.

                • zelon88 6 hours ago

                  > In this case, the CIA would be racing against private companies as well...

                  You mean like Skunkworks when they designed the SR-71?

                  Or DARPA with Atlas?

                  Or the DoD with the global GPS network?

                  Or the NSA's ANT catalogue?

                  Yeah, the government has never had any issues outpacing the private sector. You only ever find out about it when they want you to. Usually that happens 5 to 30 years after the market has already caught up.

                  • XorNot 3 hours ago

                    All your examples consist of things no company has any commercial incentive to do.

                    Airlines don't want high altitude supersonic spy planes, for example.

                    And GPS is a collective action problem: no one wants to pay for it, but we do all benefit from it being ambiently around.

              • fidotron 12 hours ago

                It is basically a certainty that the CIA and NSA have their own projects, and also keep very close tabs on those outside - it is their exact remit.

                The great quantum computing nightmare, from the NSA point of view, is someone sidestepping out of nowhere with a viable machine that works in an unexpected and easy to reproduce way.

                Edit to add: see also https://en.m.wikipedia.org/wiki/DNA_computing which while bounded in the same sense as conventional machines would still be a game changer.

                • dekhn 2 hours ago

                  I saw a lecture from the author of the first DNA computing paper ( https://en.wikipedia.org/wiki/Leonard_Adleman) in '94. We pulled him into a room after the talk (because he was talking about scaling up the computation significantly) and walked him through the calculations. Because the system he designed required a great deal of DNA to randomly bind to other DNA in a highly parallel fashion, you'd need enormous vats of liquid being rapidly stirred.

                  Like other alternate forms of computing, the systems we build on CPUs today are truly hard to beat, partly because people are trained on those systems, partly because the high-performance libraries are there, and partly because the vendors got good at making stupid codes run stupid fast.

                  At this point I cannot see any specific QC that could be used repeatedly for productive work (material simulations, protein design) that would be more useful than a collection of entirely conventional computing (i.e., a cluster with 20K CPUs, 10K GPUs, 100PB of storage, and a fast interconnect). Every time I see one of these "BMW is using a quantum computer to optimize logistics" stories and look more closely, it's obvious it's a PR toy problem, not something giving them a business edge.

                • Der_Einzige 11 hours ago

                  For similar reasons, if you do anything interesting in AI, you’re very much the type to be targeted with additional targeted surveillance.

                  It’s sad to imagine the number of smart nerds out there whose only actual experience with women is from the fking honeypots that Langley or Ft. Meade et al. use to make sure that some prospective AI talent on Discord isn’t about to release a bioweapon. Clearly a lot of AI talent overlaps with incels and adjacent communities (see civit.ai as an example of this). It’s common knowledge that field agents skew female since they’re less suspected by patriarchal idiotic targets (they tout this in recruiting for DEI reasons). It’s probably smart for AI startups to tell their male coworkers to be on the lookout for random attractive women trying to talk to them. The US military and foreign service et al. specifically warn their members about this and it is a clear and present danger.

                  And FYI, if the glowies aren’t doing this, they’re not doing their jobs, since the risk of some crazy open source AI person deciding to lone wolf society is rather high, at least according to the less wrong folks (that community I bet is also crawling with spooks). AI is so full of industrial and business espionage that I get scared just being in the space.

                  I know this is happening too because the private version of it, expert networks, is extraordinarily lucrative and relies on what is basically laundering of material non-public information with plausible deniability. The “experts” on an “expert network” are basically private business spooks, akin to a private investigator targeting a business.

              • naasking 10 hours ago

                > What are the chances that the CIA has a manhattan project style quantum computer somewhere?

                Possible, but I think zero chance they have anything more practical than Google Willow, which is itself completely impractical for anything except quantum computing research.

      • drpossum 12 hours ago

        A big part of all these discussions is "why should you trust any computational result you can't verify?"

        • evandrofisico 11 hours ago

          A big problem in science in general is reproducibility. There are thousands of papers being published every month with peer review, which prunes out the most obvious errors, but the whole system is built on trust.

          Very few labs around the world are tasked with testing results instead of trying to produce new science, and in this specific case only Google has access to the device being built, and the computation they tested is impossible to verify with classical computers, unlike Shor's algorithm, which is trivial to check with known primes.

        • bawolff 10 hours ago

          Because you can verify it part of the way there.

          The point is to ensure that researchers are actually doing what they think they are doing and not deluding themselves. It's not meant to prevent outright fraud.

    • thesz 17 hours ago

      Yes, it contains "I) Update (Dec. 10): The Wind in the Willow"

  • RivieraKid 11 hours ago

    What's the most significant useful application of quantum computing, in other words, why should we be excited about QC, how will it improve people's lives?

    • mapmeld 7 hours ago

      Chemistry applications, which may be possible with the current or near-future generation of quantum computers. It helps to have actual quantum effects to compute with. For example (I am not affiliated with this company and don't know if it works): https://www.quantinuum.com/products-solutions/inquanto

    • n4r9 9 hours ago

      If we extend the definition of "computing" slightly to include "coding and cryptography", then I find superdense coding quite exciting [0]. In a hypothetical quantum internet we could encode and transmit two classical bits in every qubit, which effectively doubles your bandwidth. There's also quantum key distribution [1]. In principle, quantum theory allows Alice and Bob to establish a shared private key which cannot be broken without breaking the laws of physics. You can chat privately with people secure in the knowledge that not even the NSA can possibly be listening.

      [0] https://en.wikipedia.org/wiki/Superdense_coding

      [1] https://en.wikipedia.org/wiki/Quantum_key_distribution
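
      A minimal state-vector sketch of the superdense-coding protocol in [0], with no noise or channel model (the names and structure here are illustrative, not from any particular library): Alice and Bob pre-share a Bell pair, Alice applies one of I, X, Z, ZX to her half to encode two classical bits, sends that single qubit, and Bob measures in the Bell basis.

        import numpy as np

        # Superdense coding sketch: 2 classical bits carried by 1 transmitted qubit,
        # assuming a pre-shared Bell pair (which is the catch that makes this not a
        # free doubling of total resources).
        I = np.eye(2)
        X = np.array([[0, 1], [1, 0]])
        Z = np.array([[1, 0], [0, -1]])

        bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

        # Bell basis keyed by the two classical bits Bob should decode
        bell_basis = {
            (0, 0): np.array([1, 0, 0, 1]) / np.sqrt(2),
            (0, 1): np.array([0, 1, 1, 0]) / np.sqrt(2),
            (1, 0): np.array([1, 0, 0, -1]) / np.sqrt(2),
            (1, 1): np.array([0, 1, -1, 0]) / np.sqrt(2),
        }

        def encode(bits):
            b1, b2 = bits
            op = (Z if b1 else I) @ (X if b2 else I)   # Alice acts on her qubit only
            return np.kron(op, I) @ bell

        def decode(state):
            # Bob projects onto the Bell basis; the certain outcome is the message
            return max(bell_basis, key=lambda k: abs(bell_basis[k] @ state) ** 2)

        assert all(decode(encode(b)) == b for b in bell_basis)

      The pre-shared entangled pair is the catch: one ebit has to be distributed in advance for every two bits of message, so the doubling applies to the quantum channel at transmission time, not to total resources.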

      • fluoridation 7 hours ago

        >In a hypothetical quantum internet we could encode and transmit two classical bits in every qubit, which effectively doubles your bandwidth.

        Hmm... What exactly are you transmitting in such a scenario? What's the physical layer protocol? I.e. what are the wires made of and what flows through them?

        • evandrofisico 6 hours ago

          Photons; most of the current effort on transmitting entangled pairs uses good old lasers over optical fiber.

          • fluoridation 6 hours ago

            Well, fair enough I suppose. Although a superficial reading would suggest this sort of technology is at least fifty years away from being deployable at opposite ends of a submarine cable to double the capacity without doubling the cable-laying. It reads like it's in the very early experimental stage, where they're barely demonstrating the plausibility, let alone the viability.

            • n4r9 2 hours ago

              The question was why is quantum computing exciting, not when can we get it.

              • fluoridation an hour ago

                The "although" wasn't meant to invalidate it as an example, it was a link between two separate ideas.

      • qup 8 hours ago

        > You can chat privately with people secure in the knowledge that not even the NSA can possibly be listening.

        Well, from one possible attack vector, anyway. They can still point lasers at your windows.

      • bgnn 2 hours ago

        Only 2x bandwidth improvement?

    • kuschkufan 9 hours ago

      that one dude who has tried since forever to convince that one city to allow him to dig up that garbage dump for his lost bitcoin wallet password should then be able to get the coins without digging.

      oh and all other passwords/passphrases/secrets might get broken too, if they're not yet based on quantum-safe algos.

    • alphager 6 hours ago

      Optimization problems like placing trains on a network become computationally viable. We're currently using suboptimal solutions because calculating an optimal solution would require way too much time.

  • bigbacaloa 17 hours ago

    When plate tectonics was first proposed, the low quality of the original models of the underlying mechanism led many very good geophysicists and geologists to reject it or to express well-founded skepticism of the model. They knew the details and subtleties, and the original models didn't deal well with them. The proposal was nonetheless correct grosso modo, and with time more adequate models of the underlying mechanisms were proposed and a more nuanced view of the model came into place. Now it is considered well established.

    One should consider the possibility that Gil Kalai is a similar sort of skeptic making well founded objections to weak arguments, but that nonetheless in the long run the extraordinary claims will turn out to be more or less correct. It's true that those involved in plate tectonics didn't have Bitcoin to sell you, but they were looking for oil.

    • GTP 14 hours ago

      I still see a difference here: the blog post author isn't claiming that we will never have quantum computers. What he is disputing is Google's claim that a specific quantum device they have is already able to perform several orders of magnitude better than classical computers. Arguing that this is false is different from arguing about what will happen in the future.

    • KK7NIL 16 hours ago

      No theory can stand on steady ground without the critics checking its foundations.

      It's a shame we cast their role in the history of science in such a negative light, as if science were a game to be won or lost instead of a collective process where a clear path forward only appears in hindsight (only to be proven wrong again sometime later).

      Oh yeah, and that anecdote you told about tectonic plates screams survivorship bias; that's the problem with anecdotes.

      • ilya_m 15 hours ago

        > Oh yeah, and that anecdote you told about tectonic plates screams survivorship bias; that's the problem with anecdotes.

        Very good point. For every plate tectonics theory or heliocentric system or H. pylori causing ulcers, there are thousands of claims that are plain wrong. Statistically speaking, knowledgeable critics acting in good faith (e.g., not having strong conflicts of interest) are correct with overwhelming probability.

        • Lerc 13 hours ago

          I'd add to this that the prior probability being against a theory creates a risk of less-than-thorough dismissals of claims.

          You get this all the time with perpetual motion machines. The near certainty of the claim being false leads to confident dismissals that go 'blah, blah, laws of physics, blah blah thermodynamics, therefore can't happen'

          The real question to be asking about a claim of a perpetual motion machine is 'Where does the new energy come into being?'.

          Citing laws of physics won't help you because any claim to have made a perpetual motion machine is implicitly claiming to be a proof by counterexample that one of those laws is wrong.

          • nkrisc 12 hours ago

            > Citing laws of physics won't help you because any claim to have made a perpetual motion machine is implicitly claiming to be a proof by counterexample that one of those laws is wrong.

              Citing the laws of physics in this case is shorthand for pointing to the overwhelming number of proofs by example that the laws are correct.

            • Lerc 4 hours ago

              It doesn't matter how many examples you have for a law. A single genuine counterexample counts as a disproof.

              If your law is "all liquids flow off a duck's back", water off a duck's back does not prove it; acid off a duck's back disproves it. https://i.imgflip.com/7waajp.png

              I don't think it is a matter of shorthand. I think it is because humans have a tendency to express a strong opinion when they intend to express that a weaker opinion is strongly held. Citing laws of physics does not say "Your perpetual motion machine won't work" but rather "I am confident that your perpetual motion machine will be shown not to work".

              • nkrisc 3 hours ago

                Yes, that's fair and I was sloppy with my phrasing. What I meant was that if you have 1,000 practical applications that function on the assumption the law is true, and they behave as predicted, and then you have a single example that appears to disprove it, then extraordinary claims require extraordinary evidence.

                A single device, made in some garage, that appears to disprove it is simply not rigorous enough to prove anything and isn’t worth third party investigation until the creator has shown they’ve ruled out possible explanations.

          • nuancebydefault 11 hours ago

            Still, most laws of physics found by humans are wrong; every so often they get refined by laws that are a bit less wrong.

          • hooverd 8 hours ago

            Nobody wants to seriously engage with the perpetual motion cranks who CC the entire department on a novel length email.

      • IshKebab 16 hours ago

        I think his point was essentially that it's possible to have technically correct nitpicks that are irrelevant. You see this all the time from naysayers.

        • KK7NIL 15 hours ago

          I understood that. My point is that they aren't irrelevant, they help drive the new theory forward.

          Galileo's initial results could not predict many things that the old geocentric models had been predicting for centuries.

          This is almost inevitable with a groundbreaking new framework, but the skeptics aren't wrong to point it out; it's up to the supporters to show the new model can do what the old model can do and more, which Galileo was never quite able to show in his lifetime, if I recall correctly.

          Again, it's easy to look back and say these "irrelevant nitpicks" about plate tectonics and heliocentrism were "wrong" or "irrelevant", but that's just not how science works: you don't get to skip over the details when you present a theory that undermines everything we know; that's crackpot behavior.

          • rcxdude 12 hours ago

            Indeed. Galileo was up against a model that was more useful and matched the available evidence and understanding better. It's often described as Copernicus vs Ptolemy, but while Galileo's observations conclusively destroyed Ptolemy by showing that the other planets revolve around the sun, Brahe had a model in which that was true but the sun revolved around the earth, and it was functionally identical to the Copernican model except that it didn't require the earth to rotate and was easier to calculate because it used fewer epicycles. It was Kepler who actually fixed the problem with epicycle-based models (before telescopes, even!) by making the paths ellipses, and it took the modern understanding of velocity, acceleration, and momentum that Newton perfected (neatly deriving Kepler's ellipses as well), along with experiments by a few other scientists demonstrating the Coriolis effect on dropped objects, to produce direct evidence of the earth's rotation and address the objections of Galileo's detractors (and Foucault hammered it home with his pendulum). Some of Galileo's contemporaries attempted similar experiments but failed because the effect was too small.

        • Veen 16 hours ago

          You also see institutions and researchers exaggerating and confabulating all the time, so the sceptics are a useful corrective, especially if they happen to be right.

  • NanoYohaneTSU 11 hours ago

    All of this stuff is so fake. Quantum computing is a vaporware scam and I've been saying it for at least a decade at this point. It's a waste of time and money by VCs who want government money forever.

    • evandrofisico 11 hours ago

      Quantum computing seems to be in the same realm as nuclear fusion as a power source. My quantum theory professor used to say that in the 1960s, when he was a physics student in the Soviet Union, it was "the energy of the future; in 20 years everything will be powered by nuclear fusion".

      When I was an undergrad student 20 years ago I heard that "soon, quantum computing will change the world", and yet here we are: every year someone builds a new machine, but no one has yet factored 21 = 7×3 in a general way.

      • kgwgk 9 hours ago

        Fusion is the energy of the future – and it always will be.

        • short_sells_poo 9 hours ago

          You could also say we are already using fusion power - via solar panels. We have a massive ball of plasma powered by fusion in the sky and we are harvesting the energy it creates at an ever growing scale.

          Whether we'll be able to replicate it profitably at small scale is a question.

          • fluoridation 7 hours ago

            Of course, by that definition all forms of power generation are fusion, more or less removed from the fusion reactor in question.

            • zeroonetwothree 3 hours ago

              Some part of geothermal energy comes from gravity which could be argued is not related to fusion (even if the matter undergoing gravity was created in fusion)

              • fluoridation 3 hours ago

                Geothermal energy is derived from differences in kinetic energy between different layers of the planet (so extracting it saps kinetic energy from the planet's rotation). The energy stored in the Earth's rotation comes from the kinetic energy in the Solar nebula, which itself comes from stellar explosions in the primordial universe, which themselves were caused by fusion.

          • evandrofisico 6 hours ago

            Following that definition, even fossil fuels would be based on fusion, as most of them are derived from photosynthetic life forms.

      • red_trumpet 5 hours ago

        Wait, you had a quantum theory prof in the 60s, but you were also an undergrad 20 years ago? That's a ~40-year gap in between, which makes me curious what happened.

        • fluoridation 5 hours ago

          The professor taught GP quantum theory during the '00s, and in the '60s that same professor was a physics student in the USSR, where it was said that by the '80s fusion would power everything.

        • lobsterthief 5 hours ago

          Re-read his comment—his professor was recalling something from the 1960s. OP did not hear this information in the 1960s.

        • riku_iki 5 hours ago

          successful quantum transfer experiment.

  • r33b33 12 hours ago

    Let's talk about things that actually matter: where to invest in a post-quantum world?

    I'll keep this short.

    - Google’s Willow quantum chip significantly outpaces current supercomputers, solving tasks in minutes that would otherwise take billions of years.

    - Hypothesis: Accelerating advancements in tech and AI could lead to quantum supremacy arriving sooner than the 2030s, contrary to expert predictions.

    - Legacy banking systems, being centralized, could transition faster to post-quantum-safe encryption by freezing transfers, re-checking processes, and migrating to new protocols in a controlled manner.

    - Decentralized cryptocurrencies face bigger challenges: hard forks are difficult to coordinate across a decentralized network.

    - Transitioning to quantum-safe algorithms could lead to longer transaction signatures and significantly higher fees, eroding trust in the system.

    - If quantum computers compromise current cryptography, tangible assets (e.g., real estate, stock indices) may retain more value compared to digital assets like crypto.

    Thoughts?

    • GTP 11 hours ago

      > Google’s Willow quantum chip significantly outpaces current supercomputers, solving tasks in minutes that would otherwise take billions of years.

      This is the point that was disputed in the article, and you're instead taking it for granted.

    • evandrofisico 6 hours ago

      About your first point, "outpaces current supercomputers, solving tasks in minutes that would otherwise take billions of years": that is exactly the point the article disputes. They used their computer to model a system that can't be verified with a classical algorithm, so no, we are not certain that it is solving anything.

      About the other points: quantum computers are massively different from classical ones, so much so that there are very few algorithms for them. For example, GPUs are faster at matrix multiplication because it can be implemented as independent parallel threads, but they suck at other problems. A quantum computer is good for running quantum algorithms [1], of which there are very few at the moment, and most of them are useful for simulating quantum physics. It is not a "faster" classical computer in any way.

      [1] https://en.wikipedia.org/wiki/Quantum_algorithm

    • vishnugupta 11 hours ago

      > Legacy banking systems,

      From what I know about the banking world, though second-hand, having worked in payment processing systems, I can say with confidence that it's not the compute that's holding them back.

    • fluoridation 7 hours ago

      It doesn't take billions of years to generate a small sampling of random (or random-looking) numbers.

    • nightowl_games 9 hours ago

      Quantum computing only solves certain classes of tasks faster: not all tasks, not most tasks, only a tiny number of tasks.

      • zeroonetwothree 3 hours ago

        Actually, Grover’s algorithm would speed up a wide range of tasks. Hardly a tiny amount.

        • tsimionescu 2 hours ago

          Only marginally, and it's going to take a loooooooong time until you have a quantum computer the size of today's classical computers to actually see any improvement from Grover's algorithm.

          Shor's is a different matter entirely: the gap between sub-exponential and polynomial time is so huge that even a comparatively tiny QC (only a few million qubits) would significantly outpace the largest classical supercomputers put together on this specific problem.
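
          To put rough shapes on that comparison (textbook cost models with constants dropped, so the numbers are illustrative only): Grover halves the exponent of a brute-force search, while Shor replaces the sub-exponential classical factoring cost (GNFS) with a polynomial number of quantum gates.

            import math

            # Rough asymptotic shapes only; constants are dropped, so only the growth
            # rates are meaningful. The GNFS exponent and the ~n^3 Shor gate count are
            # standard textbook approximations, not measured figures.
            def gnfs_log_cost(n_bits):
                # ln of exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3))
                lnN = n_bits * math.log(2)
                return (64 / 9) ** (1 / 3) * lnN ** (1 / 3) * math.log(lnN) ** (2 / 3)

            # Grover on a 128-bit key space: the exponent is halved, not removed
            print("Grover vs brute force: ~2^64 vs ~2^128 evaluations")

            # Shor vs GNFS on factoring an n-bit modulus
            for n in (1024, 2048, 4096):
                print(f"{n}-bit RSA: GNFS ~e^{gnfs_log_cost(n):.0f} ops, Shor ~{n ** 3:.1e} quantum gates")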