43 comments

  • gyomu 12 hours ago

    The only thing that matters in the discussion around intelligence is purpose and intent. Intelligence is always applied through intent, and in service of a purpose.

    What is the broader context of OP trying to prove a theorem here? There are multiple layers of purpose and intent involved (so he can derive the satisfaction of proving a result, so he can keep publishing and keep his job, so their university department can be competitive, etc), but they all end up pointing at humans.

    Computers aren’t going to be spinning in the background proving theorems just because. They will do so because humans intend for them to, in service of their own purposes.

    In any discussion about AI surpassing humans in skills/intelligence, the chief concern should be in service of whom.

    Tech leaders (ie the people controlling the computers on which the AIs run) like to say that this is for the benefit of all humanity, and that the rewards will be evenly distributed; but the rewards aren’t evenly distributed today, and the benefits are in the hands of a select few; why should that change at their hands?

    If AI is successful to the extent which pundits predict/desire, it will likely be accompanied with an uprising of human workers that will make past uprisings (you know, the ones that banned child labor and gave us paid holidays) look like child’s play in comparison.

    • laterium 2 hours ago

      Which tech leader said the rewards will be distributed evenly? That sounds more like a rhetorical strawman for you to dunk on to make a point. It would be similar to saying "Most HN commenters argue that all the benefits of AI will go to the billionaires, but actually they're all wrong because some of it will in fact go to average people"

  • kenjackson 9 days ago

    For some reason this sort of thing bothers a lot of people. I think it’s great that we have a new tool in the toolbelt.

    • WhyOhWhyQ 9 days ago

      Gowers wrote about AI in the late 90's. He predicted a short golden age where mathematicians would still be useful to the AI. We are in that golden age now, apparently. The AI will soon eclipse all humans in mathematics and the art form of mathematics will cease in its present form.

      • johnisgood 9 days ago

        Could you elaborate on your last sentence please?

        • siva7 13 hours ago

          Think of it like software development. That art form also died due to AI. Remember the famous "Hackers and Painters" essay? It's no longer relevant.

        • WhyOhWhyQ 8 days ago

          Go read Gowers' essay.

          • estimator7292 8 days ago

            Form your own independent thoughts

            • tekbruh9000 7 hours ago

              Egh, this is pretty much "use unalived, not died".

              "Get out an English thesaurus and recreate Mona Lisa in different words."

              If you really wanted to be a cognitive maverick, you would encourage them to make up their own creole, syntax, and semantics.

              Still, the result is describing the same shared stable bubble of spacetime! But it's a grander feat than merely swapping words with others of the same relative meaning.

              You totally missed the point of "put this in your own words" education. It was to make us aware we're just transpiling the same old ideas/semantics into different syntax.

              Sure, it provides a nice biochemical bump; but it's not breaking new ground.

            • WhyOhWhyQ 8 days ago

              I was sharing Gowers' thoughts. You clearly don't know how to read. It's not surprising considering the intellectual quality of the average commenter here.

              • johnisgood 8 days ago

                I still have no idea what "eclipse all humans in mathematics" and "the art form of mathematics will cease in its present form" mean.

          • 13 hours ago
            [deleted]
          • auggierose 13 hours ago

            How about a link to it?

      • squigz 13 hours ago

        Mathematics in its current form, you say? Like, for example, when we transitioned from doing things manually to using calculators/computers?

    • muldvarp 13 hours ago

      It's because I still need to earn a living and this technology threatens my ability to do so in the near future.

      It also significantly changes my current job into something I didn't sign up for.

      • lacker 10 hours ago

        I personally like AI but it has definitely shifted my job. There is less "writing code", more "reviewing code", and more "writing sentences in English". I can understand people being frustrated.

        To me it's like a halfway step toward management. When you start being a manager, you also start writing less code and having a lot more conversations.

        • muldvarp 10 hours ago

          > To me it's like a halfway step toward management. When you start being a manager, you also start writing less code and having a lot more conversations.

          I didn't want to get into management, because it's boring. Now I got forced into management and don't even get paid more.

        • iwontberude 10 hours ago

          Yes reviewing robot code submitted by other humans. Oh the joy of paying bills.

      • deaux 9 hours ago

        > It's because I still need to earn a living and this technology threatens my ability to do so in the near future.

        That's certainly not the reason most HNers are giving - I'm seeing far more claims that LLMs are entirely meaningless because either "they cannot make something they haven't seen before" or "half the time they hallucinate". The latter even appears as one of the first replies in this post's link, the X thread!

      • mettamage 13 hours ago

        Well yea, but school also tried to educate us for the unforeseen future.

        Or at least my school system tried to (Netherlands).

        This didn’t fully come out of the blue. We have been told to expect the unexpected.

        • muldvarp 10 hours ago

          > This didn’t fully come out of the blue. We have been told to expect the unexpected.

          It absolutely did. Five years ago people would have told you that white collar jobs were mostly un-automatable and that software engineering was especially safe due to its complexity.

      • Alex2037 12 hours ago

        given that there has never been a technological advancement that was successfully halted to preserve the jobs it threatened to make obsolete, don't you see the futility of complaining about it? even if there were widespread opposition to AI - and no, there isn't - capital would disregard it. no ragtag team of quirky rebels is going to blow up this multi-trillion dollar death star.

        • muldvarp 9 hours ago

          > don't you see the futility of complaining about it?

          I'm not complaining to stop this. I'm sure it won't be stopped. I'm explaining why some people who work for a living don't like this technology.

          I'm honestly not sure why others do. It pretty much doesn't matter what work you do for a living. If this technology can replace a non-negligible part of the white collar workforce it will have negative consequences for you. You don't have to like that just because you can't stop it.

    • turzmo 8 hours ago

      The problem with automation is that it can suck the soul out of a job and turn something fulfilling and productive (say, a job as a woodworker) into something much more productive but devoid of fulfillment (say, working as a cog in a furniture factory).

      In the past this tradeoff probably was obvious: a farmer's individual fulfillment is less important than feeding a starving community.

      I'm not so sure this tradeoff is obvious now. Will the increased productivity justify the loss of meaning and fulfillment that comes from robbing most jobs of autonomy and dignity? Will we become humans that have literally everything we need except the ability for self-actualization?

      • laterium 2 hours ago

        Humans are risk averse and loss averse. You see the downsides and are fearful but can't yet see the upsides or underestimate them. Why not make the same argument for internet and computers? We would've been better off without them? If AI makes doctors more efficient would you have your child die to make the doctor's life more fulfilling?

    • truculent 13 hours ago

      > For some reason

      > _brief_ but enjoyable era

      • muldvarp 9 hours ago

        It's not even that enjoyable to review AI slop all day. So it's a brief and unenjoyable era before the long and miserable era.

  • Tomcollins4 13 hours ago

    Inb4 simonw claiming he has discovered superintelligence in his Altman / Amodei check.

  • marcosdumay 13 hours ago

    Yes, those things are really good search engines for areas you are not completely at home in.

    • diamond559 9 hours ago

      Yeah, he acts like regurgitating Google is going to solve all our problems. He could have had the same results if he were surrounded by a team of competent mathematicians working through the problem, instead of sitting in a room by himself for hours, endlessly trying to get the LLM to output anything useful.

      • marcosdumay an hour ago

        Not to overestimate the LLMs' impact, but he would probably not get the same result if he were working with a team. If he isn't familiar with the theorem's area, the odds are very high that nobody on the team would be familiar with it either, for the same reasons that apply to him.

        Compiled databases and search engines have completely different capabilities than groups of people.

  • musicale 9 days ago

    > we have entered the brief but enjoyable era where our research is greatly sped up by AI but AI still needs us

    Well that's comforting.

  • diamond559 9 hours ago

    Where is any documentation or proof of this? We just trust the tweets of some bro? Yeah, no, advanced googling doesn't make math an "order of magnitude" easier to solve.

    • yberreby 9 hours ago

      I encourage you to look up the "bro" in question. He's a Fields medalist.

      • diamond559 9 hours ago

        Those MIT guys are just scrubs who know nothing though.

      • diamond559 9 hours ago

        So he could never lie to us huh.

  • bgwalter 13 hours ago

    Gowers, Tao, Aaronson. Are there others hyping "AI"?

    All of these seem to subscribe to "inevitability", and have no issue with the fact that their research relies on a handful of oligarchs and that all of their thoughts and attempts are recorded and tracked on centralized servers.

    I bet mathematical research hasn't sped up one bit due to "AI".

    • trueismywork 13 hours ago

      I am another one. My work in mathematics has sped up personally due to AI.

      Whenever you start to prove new results, you get a lot of small lemmas that are probably true, but you need to check them and find a good constant that works for them.

      The checking is done by theorem provers and the searching by machines. You still need to figure out what you want to prove (i.e., which results are more important).

      But the rest can get automated away quite quickly.
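
      For a sense of what that looks like, here is a toy illustration in Lean 4 (my own sketch, not taken from the thread or the commenter's actual workflow): a human guesses that the constant 4 is good enough, and the prover checks the routine inequality mechanically.

        -- Hypothetical toy lemma: guess the constant C = 4, then let the
        -- prover verify the routine inequality.
        theorem small_lemma (n : Nat) (h : 1 ≤ n) : n + 3 ≤ 4 * n := by
          omega  -- built-in decision procedure for linear arithmetic over Nat/Int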

      • bgwalter 13 hours ago

        Ok, I expect the Riemann hypothesis to be proven any day now.

        • deaux 9 hours ago

          What a weird non sequitur.

          • diamond559 9 hours ago

            Well, if mathematics is an "order of magnitude" faster to solve now, we expect much more from you. In fact, maybe an order of magnitude of you mathematicians should be fired, because you are so much more useless now!

        • Tomcollins4 13 hours ago

          [dead]