The Singularity will occur on a Tuesday

(campedersen.com)

1159 points | by ecto 19 hours ago

617 comments

  • stego-tech 18 hours ago

    This is delightfully unhinged, spending an amazing amount of time describing their model and citing their methodologies before getting to the meat of the meal many of us have been braying about for years: whether the singularity actually happens is less important than whether enough people believe it will happen and act accordingly.

    And, yep! A lot of people absolutely believe it will and are acting accordingly.

    It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”) and pivoted to the social arguments instead (“here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”). Folks vibe with the latter, less with the former. Can’t convince someone of the former when they don’t even understand that the computer is the box attached to the monitor, not the monitor itself.

    • nine_k 17 hours ago

      > *enough people believe it will happen and act accordingly*

      Here comes my favorite notion of "epistemic takeover".

      A crude form: make everybody believe that you have already won.

      A refined form: make everybody believe that everybody else believes that you have already won. That is, even if one has doubts about your having won, they believe that everyone else submits to you as the winner, and so they must act accordingly.

      • bee_rider 17 hours ago

        This world where everybody's very concerned with that "refined form" is annoying and exhausting. It causes discussions to become speculative guesses about everybody else's beliefs, not actual facts. In the end it breeds cynicism, as "well yes, the belief is wrong, but everybody is stupid and believes it anyway" becomes a stop-gap argument.

        I don’t know how to get away from it because ultimately coordination depends on understanding what everybody believes, but I wish it would go away.

        • ElevenLathe 17 hours ago

          IMO this is a symptom of the falling rate of profit, especially in the developed world. If truly productivity-enhancing investment is effectively dead (or, equivalently, there is so much paper wealth chasing a withering set of profitable investment opportunities), then capital's only game is to chase high valuations backed by future profits, which means playing the Keynesian beauty contest for keeps. This in turn means you must make ever-escalating claims of future profitability. Now, here we are in a world where multiple brand-name entrepreneurs are essentially saying that they are building the last investable technology ever, and getting people to believe it, because the alternative is to earn less than inflation on Procter and Gamble stock and never get to retire.

          If outsiders could plausibly invest in China, some of this pressure could be dissipated for a while, but ultimately we need to order society on some basis that incentivizes dealing with practical problems instead of pushing paper around.

          • huslage 14 hours ago

            Profit is a myth of epistemic collapse at this point. Productivity gains are also mythical and probably just anecdotal in the moment.

            • hyperadvanced 10 hours ago

              Perhaps I’m misunderstanding but a lot of people (ok, well, a few, but you know) make a lot of money on relatively mundane stuff. Technocapitalism’s Accursed Share is sacrificing wealth for myth making about its own future.

          • majormajor 8 hours ago

            >If truly productivity enhancing investment is effectively dead (or, equivalently, there is so much paper wealth chasing a withering set of profitable opportunities for investment), then capital's only game is to chase high valuations backed by future profits, which means playing the Keynesian beauty contest for keeps.

            What if profit is dead because wealth is all concentrating in people who don't need it from a marginal consumption standpoint, which means asset prices blow up because everyone rich believes that they need to "invest" that money somewhere... but demand shrivels outside of interestingly-subsidized areas like healthcare because nobody else is making enough to even keep up with the asset price rises?

            And without demand, where would innovation come from?

            • imtringued 3 hours ago

              If you accept that the economy can be in disequilibrium, then it is very easy to see how this happens.

              Rich people learn a habit that makes them consume less than their investment returns. The difference is reinvested, resulting in a net increase of their equity. Even if you say they spend 90% of their returns on consumption, the remaining 10% still grows their equity in absolute terms. Since returns are paid proportionally to the quantity of equity, you can clearly see that money is allocated away from where it has high marginal utility to places where it has low marginal utility.

              Of course this leads to a contradiction. If money is held by people who have a low marginal utility for consumption, why would investments pay high returns? Your equity is the latent demand that your investments need in order to pay you returns in the first place. You'd increasingly be paying yourself. That is equivalent to investing in something that yields a 0% return, which in turn is equivalent to doing nothing.
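
              A minimal sketch of that compounding dynamic, in Python (the 90/10 split is from the comment above; the 5% return and starting equity are assumptions for illustration):

                equity = 1_000_000.0        # starting equity (assumed)
                r, consumed = 0.05, 0.90    # 5% annual return (assumed); 90% of returns consumed
                for year in range(30):
                    returns = equity * r
                    equity += returns * (1 - consumed)  # only the remaining 10% is reinvested
                print(f"{equity:,.0f}")     # ~1,161,400: equity still grows in absolute terms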

          • measurablefunc 17 hours ago

            What percentage of work would you say deals w/ actual problems these days?

            • andsoitis 7 hours ago

              What’s an example of work that does not deal with actual problems?

              • measurablefunc 7 hours ago

                Online influencers, podcasters, advertisers, social media product managers, political lobbyists, cryptocurrency protocol programmers, digital/NFT artists, most of the media production industry, those people w/ leaf blowers moving dust around, political commentators (e.g. fox & friends), super PACs, most NGOs, "professional" sports, various 3 letter agencies & their associated online "influence" campaigns, think tanks about machine consciousness, autonomous weapon manufacturers, & so on. Just a few off the top of my head but anything to do w/ shuffling numbers in databases is in that category as well. I haven't read "Bullshit Jobs" yet but it's on the list & I'll get to it eventually so I'm sure I can come up w/ a few more after reading it.

                • mlrtime an hour ago

                  Lol, you just listed things you don't like from a very privileged urban (my assumption) living space.

            • nosuchthing 15 hours ago

              In a post-industrial economy there are no more economic problems, only liabilities. Surplus is felt as a threat, especially when it's surplus human labor.

              In today's economy disease and prison camps are increasingly profitable.

              How do you think the investor portfolios that hold stocks in deathcare and privatized prison labor camps can further Accelerate their returns?

        • kelseyfrog 17 hours ago

          Or just play into the fact that it's a Keynesian Beauty Contest [1]. Find the leverage in it and exploit it.

          1. https://en.wikipedia.org/wiki/Keynesian_beauty_contest

        • zombot 4 minutes ago

          You could say that AI has become the new religion. Plus the concomitant opiate for the masses.

        • worldsayshi 2 hours ago

          Ultimately it all comes back to the collective action problem doesn't it?

          I believe that is solvable.

        • legulere 15 hours ago

          On the other hand, talking about those beliefs can also lead to real changes. Slavery used to be widely seen as a necessary evil, just like, for instance, war.

          • bee_rider 12 hours ago

            I don’t actually know a ton about the rhetoric around abolitionism. Are you saying they tried to convince people that everybody else thought slavery was evil? I guess I assumed they tried to convince people slavery was in-and-of-itself evil.

          • jibal 7 hours ago

            Slavery used to be seen as a good thing by those who benefited from it (or thought they did).

        • vintermann 3 hours ago

          It's not just exhausting, it's a huge problem. Even if everyone is a complete saint all the time and has the best of intentions, going by beliefs about beliefs can trap us in situations where we're all unhappy.

          The classic situation is the two lovers who both want to do what they think makes their partner happy, to the extent that they don't tell what they actually want, and end up doing something neither wants.

          I think the goal of all sorts of cooperative planning should be to avoid such situations like the plague.

        • awesome_dude 15 hours ago

          The "Silent Majority" - Richard Nixon 1969

          "Quiet Australians" - Scott Morrison 2019

          • XorNot 15 hours ago

            We really need a rule in politics which bans you (if you're an elected representative) from stating anything about the beliefs of the electorate without reference to a poll of the population of adequate size and quality.

            Yes we'd have a lot of lawsuits about it, but it would hardly be a bad use of time to litigate whether a politician's statements about the electorate's beliefs are accurate.

            • skissane 15 hours ago

              The thing is... on both the cited occasions (Nixon in 1968, Morrison in 2019), the politicians claiming the average voter agreed with them actually won that election

              So, obviously their claims were at least partially true – because if they'd completely misjudged the average voter, they wouldn't have won

              • Nevermark 14 hours ago

                People vote for people they don't agree with.

                When there are only two choices, and infinite issues, voters only have two choices: vote for the one you disagree with less, or vote for someone you quite hilariously imagine agrees with you.

                EDIT: Not being cynical about voters. But about the centralization of parties, in number and operationally, as a steep barrier for voter choice.

                • albumen 12 hours ago

                  Two options, not two choices. (Unless you have a proportional-representation voting system like Ireland's, in which case you can vote for as many candidates as you like, in descending order of preference.)

                  Anyway, there’s a third option: spoil your vote. In the recent Irish presidential election, 13% of those polled afterwards said they spoiled their votes, due to a poor selection of candidates from which to choose.

                  https://www.rte.ie/news/analysis-and-comment/2025/1101/15415...

                  • nandomrumber 12 hours ago

                    Please don’t encourage people to waste their vote.

                    Encourage people to vote for the candidate they dislike the least, then try to work out ways to hold government accountable.

                    If you’re in Australia, at least listen to what people like Tony Abbott, the IPA, and Pauline Hanson are actually saying these days.

                    • fouc 5 hours ago

                      A spoiled vote is at least better than not voting at all.

                      Because then there's an indication of what percentage of the populace is saying "These candidates don't qualify for my vote".

                • skissane 12 hours ago

                  That’s much more true for Nixon in 1968 than Morrison in 2019

                  Because the US has a “hard” two party system - third party candidates have very little hope, especially at the national level; voting for a third party is indistinguishable from staying home, as far as the outcome goes, with some rather occasional exceptions

                  But Australia is different - Australia has a “soft” two party system - two-and-a-half major parties (I say “and-a-half” because our centre-right is a semipermanent coalition of two parties, one representing rural/regional conservatives, the other more urban in its support base). But third parties and independents are a real political force in our parliament, and sometimes even determine the outcome of national elections

                  This is largely due to: (1) we use what Americans call instant-runoff voting in our federal House of Representatives, and a variation on single transferable vote in our federal Senate; (2) the parliamentary system (in which the executive is indirectly elected by the legislature) means the choice of executive is less of a simplistic binary, and coalition negotiations involving third-party/independent legislators in the lower house can be decisive in determining that outcome in close elections; (3) twelve senators per state, six elected at a time in an ordinary election, gives minor parties more opportunities to get into our Senate. Of course, 12 senators per state is feasible when you only have six states (plus four more to represent our two self-governing territories); with 50 states it would produce 600 senators.
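
                  For readers unfamiliar with instant-runoff, a minimal sketch of the counting rule in Python (candidate names and ballots are made up; real AEC counting has extra rules around ties and exhausted ballots):

                    def instant_runoff(ballots):
                        # ballots: ranked lists of candidates, most preferred first
                        remaining = {c for b in ballots for c in b}
                        while True:
                            tallies = {c: 0 for c in remaining}
                            for b in ballots:
                                for c in b:
                                    if c in remaining:
                                        tallies[c] += 1  # count the top surviving preference
                                        break
                            leader = max(tallies, key=tallies.get)
                            if tallies[leader] * 2 > sum(tallies.values()):
                                return leader  # absolute majority of continuing votes
                            remaining.remove(min(tallies, key=tallies.get))  # eliminate last place

                    # ALP leads on first preferences, but GRN preferences flow to LNP:
                    ballots = [["ALP"]] * 4 + [["LNP"]] * 3 + [["GRN", "LNP"]] * 2
                    print(instant_runoff(ballots))  # LNP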

                  • nandomrumber 12 hours ago

                    And minor parties receive funding from the Australian Electoral Commission if they receive over certain percentage of votes.

                    It was 5% last time I cared to be informed but may be different now, and they would receive $x for each vote, or whatever it is now.

                    • skissane 9 hours ago

                      Currently a minimum of 4% of formal first preference votes, which gets you $3.499 per first preference vote (indexed to inflation every six months).

                      Then you automatically get paid the first $12,791, and the rest of the funding is by reimbursement of substantiated election expenses.

                      This is per candidate (lower house) or per group (upper house). And this is just federal elections; state election funding is up to each state, but I believe the states have broadly similar funding systems.
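
                      A sketch of that entitlement rule as described (rate and thresholds as quoted above; the real rules at the AEC links below have more detail):

                        def aec_funding(first_prefs, formal_total, substantiated_expenses):
                            if first_prefs < 0.04 * formal_total:   # 4% threshold
                                return 0.0
                            entitlement = 3.499 * first_prefs       # $ per first preference vote
                            automatic = min(entitlement, 12_791.0)  # paid without substantiation
                            return automatic + min(substantiated_expenses, entitlement - automatic)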

                      https://www.aec.gov.au/parties_and_representatives/public_fu...

                      https://www.aec.gov.au/Parties_and_Representatives/public_fu...

                      Note the US also has public financing for presidential campaigns, which is available to minor parties once they get 5% or more of the vote. But in the 2024 election, Jill Stein (Green Party) came third on 0.56% of the popular vote. The only third party to ever qualify for general election public funding was the Reform Party due to Ross Perot getting 18.9% in the 1992 election and 8.4% in the 1996 election. There is also FEC funding for primary campaigns, and I believe that’s easier for third parties to access, but also less impactful.

                  • nandomrumber 11 hours ago

                    Also, there is nothing centre-right about Sussan Ley.

                    She is the most left-leaning leader of the Liberal party I've ever had the misfortune of having to live through.

                    She was absolutely on board with this recent Hitlerian “anti-hate” legislation that was rammed through with no public consultation.

                    Okay, that’s a bit uncharitable. We had 48 hours.

                    • endgame 11 hours ago

                      And the Parliamentary Joint Committee on Intelligence and Security definitely gave the literal thousands of submissions due consultation before recommending the original, un-split bill pass.

                • nandomrumber 12 hours ago

                  Combined with the quirk in Australia's preferential voting system that enables a government to form despite 65% of voters having voted 1 for something else.

                  As a result, Australia tends to end up with governments formed by the runner-up, because no one party actually 'won' as such.

                  • duskdozer 3 hours ago

                    I can think of an exaggerated scenario though in which that sounds reasonable depending on the goal:

                    Say preferences are 1 (low) to 5 (high).

                    Suppose 65% of the population ranked candidate A at 5 and B at 4, and the other 35% ranked A at 1 and B at 5. The majority doesn't get their favorite choice, but they do get an outcome they're happy with, and the minority doesn't have a horrible outcome. Exaggerated, but I don't think situations like this are unrealistic.
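
                    Reading that scenario as score voting (an assumption; the comment doesn't name a system), the averages work out like this in Python:

                      # 65% score A=5, B=4; 35% score A=1, B=5 (numbers from the comment)
                      groups = [(0.65, {"A": 5, "B": 4}), (0.35, {"A": 1, "B": 5})]
                      for cand in ("A", "B"):
                          print(cand, round(sum(share * scores[cand] for share, scores in groups), 2))
                      # A 3.6, B 4.35 -> B wins on average satisfaction, though 65% put A first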

                • Der_Einzige 4 hours ago

                  Third parties exist. Folks act like Ross Perot and Pat Buchanan don't exist.

                  https://en.wikipedia.org/wiki/1992_United_States_presidentia...

                  18.9% as recently as 1992. I predict we will have a similar viable third party showing sometime in the next few elections due to the radical shift in the party system that AI is causing as we speak. I really hope Yang Gang can rebuild itself and try again, maybe without #MATH.

                  Also, https://en.wikipedia.org/wiki/1998_Minnesota_gubernatorial_e...

                  "The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man" - George Bernard Shaw

                  • Nevermark 3 hours ago

                    In the US, there are tremendous structural barriers for third parties. They exist, it is just extremely difficult for them.

                    The national centralization of power in each of the two dominant parties, at the expense of the more decentralized parties with strong state-by-state variability of the past, makes it even more difficult for third parties to gain traction against all that coordination.

                    Perot had the best chance, but managed to blow it by bowing out and then back in.

                    I do think you are right that times of great dissatisfaction are rare openings for third-party candidates, if someone special enough appears. 2020 would have been a great election for that, but an inspiring third-party candidate can't be manufactured on demand.

                • jibal 7 hours ago

                  People have a choice between being rational and optimizing the alignment between the outcome and their preferences, or being irrational and doing something else, like not voting, spoiling their ballot, voting for a probabilistically infeasible candidate, voting "on principle", "sending a message", etc.

              • nandomrumber 12 hours ago

                I don’t recall the circumstances under which Morrison ended up Prime Minister.

                Like most Australians, I’m in denial any of that episode ever happened.

                But, using the current circumstances as an example, Australia has a voting system that enables a party to form government even though 65% of voting Australians didn't vote for that party as their first preference.

                If the other party and some of the smaller parties could have got their shit together Australia could have a slightly different flavour of complete fucking disaster of a Government, rather than whatever the fuck Anthony Albanese thinks he’s trying to be.

                Then there’s Susan Ley. The least preferred leader of the two major parties in a generation.

                Susan Ley is Anthony Albanese in a skirt.

                I would have preferred Potato Head, to be honest.

              • bee_rider 9 hours ago

                Hmm. Actually, I think the suggestion of a law puts this whole thing on bad footing where we need to draw an otherwise unnecessary line (to denote where this type of rhetoric should be legal). I suspect XorNot just put the line there because the idea that true statements should be illegal just seems silly.

                Really it just ought to be a thing that we identify as a thought-terminating cliche. No laws needed, let’s just not fall for a lazy trick. Whether or not it is true that lots of people agreed, that isn’t a good argument that they are right.

                The case of Nixon really brings that out. The "Silent Majority" was used to refer to people who didn't protest the Vietnam War. Of course, in retrospect the Vietnam War was pretty bad. Arguing that it was secretly popular should not have been accepted as a substitute for an argument that it was good.

            • palmotea 14 hours ago

              > We really need a rule in politics which bans you (if you're an elected representative) from stating anything about the beliefs of the electorate without reference to a poll of the population of adequate size and quality.

              Except that assumes polls are a good and accurate way to learn the "beliefs of the electorate," which is not true. Not everyone takes polls, not every belief can be expressed in a multiple-choice form, little subtleties in phrasing and order can greatly bias the outcome of a poll, etc.

              I don't think it's a good idea to require speech be filtered through such an expensive and imperfect technology.

            • bee_rider 15 hours ago

              Just make it broad enough that we never get a candidate promoting themselves as “electable” again.

            • chrisrogers 14 hours ago

              That gets covered by the mechanisms of social credibility.

            • jibal 7 hours ago

              > We really need a rule in politics

              We really need a rule against proposing unenforceable rules.

        • nakedneuron 16 hours ago

          Isn't that how Bitcoin "works"?

          • achenet 15 hours ago

            err... how Bitcoin works, or how the speculative bubble around cryptocurrencies circa 2019-2021 worked?

            Bitcoin is actually kind of useful for some niche use cases - namely illegal transactions, like buying drugs online (Silk Road, for example), and occasionally for international money transfers - my French father once paid an Argentinian architect in Bitcoin, because it was the easiest way to transfer the money due to details about money transfer between those countries which I am completely unaware of.

            The Bitcoin bubble, like all bubbles since the Dutch tulip bubble in the 1600s, did follow a somewhat similar "well, everyone thinks this thing is much more valuable than it is worth; if I buy some now the price will keep going up and I can dump it on some sucker" path, however.

            • awesome_dude 12 hours ago

              > Bitcoin is actually kind of useful for some niche use cases - namely illegal transactions, like buying drugs online (Silk Road, for example),

              For the record - the illegal transactions were thought to be advantaged by crypto like BTC because it was assumed to be impossible to trace the people engaged in the transaction. However, the opposite is true: public blockchains register every transaction a given wallet has made, which has been used by Law Enforcement Agencies (LEA) to prosecute people (and made it easier in some cases).

              > and occasionally for international money transfers - my French father once paid an Argentinian architect in Bitcoin, because it was the easiest way to transfer the money due to details about money transfer between those countries which I am completely unaware of.

              There are remittance companies that deal in local currencies that tend to make this "easier" - crypto works for this WHEN you can exchange the crypto for the currencies you have and want, which is, in effect, the same.

              • backscratches 2 hours ago

                Anonymity/untraceability was not the primary reason for using BTC in black/grey markets. Bitcoin can be used pseudo-anonymously, and the fact is you simply cannot send money to your grey-market counterparty via any method but cash without it being flagged/canceled; and if you can't send cash (which has its own problems), bitcoin is the only option.

            • tim333 14 hours ago

              Most bubbles have a peak and crash. "The Bitcoin bubble" keeps peaking and crashing and then going on to a higher peak.

              • measurablefunc 13 hours ago

                Mining rigs have a finite lifespan & the places that make them in large enough quantities will stop making new ones if a more profitable product line, e.g. AI accelerators, becomes available. I'm sure making mining rigs will remain profitable for a while longer but the memory shortages are making it obvious that most production capacity is now going towards AI data centers & if that trend continues then hashing capacity will continue diminishing b/c the electricity cost & hardware replenishment will outpace mining rewards.

                Bitcoin was always a dead end. It might survive for a while longer but its demise is inevitable.

                • mlrtime an hour ago

                  Are you lumping in all blockchains here or just bitcoin?

                  Because other networks don't have this problem.

      • Terr_ 17 hours ago

        Refined 1.01 authoritarian form: Everybody knows you didn't win, and everybody knows the sentiment is universal... But everyone maintains the same outward facade that you won, because it's become a habit and because dissenters seem to have "accidents" falling out of high windows.

        • demosito666 17 hours ago

          V 1.02: Everybody knows you didn't win, and everybody knows the sentiment is universal... But everyone maintains the same outward facade that you won, because they believe that the others believe that you have enough power to crush the dissent. The moment this belief fades, you fall.

        • dclowd9901 16 hours ago

          Is that not the "Emperor's New Clothes" form? That would be like version 0.0.1

        • infinitewars 17 hours ago

          it's a sad state these days that we can't be sure which country you're alluding to

      • lotyrin 12 hours ago

        Ontological version is even more interesting, especially if we're talking about a singularity (which may be in the past rather than future if you believe in simulation argument).

        Crude form: winning is metaphysically guaranteed because it probably happened or probably will

        Refined: It's metaphysically impossible to tell whether or not it has happened or will have happened, so the distinction is meaningless: it has happened.

        So... I guess Weir's Egg falls out of that particular line of thought?

      • mjanx123 5 hours ago

        The refined form is unstable: a single fluke observation of objective reality can collapse it.

        The system that persists in practice is one where everybody knows how things are, but everybody still professes a fictional status quo, because if they did not, the others would obliterate them.

      • CobrastanJorji 15 hours ago

        You ever get into logic puzzles? The sort where the asker has to specify that everybody in the puzzle will act in a "perfectly logical" way. This feels like that sort of logic.

      • bodge5000 14 hours ago

        It's the classic interrogation technique: "we're not here to debate whether you're guilty or innocent, we have all the evidence we need to prove your guilt, we just want to know why". Not sure if it makes any difference, though, that the interrogator knows they are lying.

    • dagss 15 hours ago

      Isn't talking about "here's how LLMs actually work" in this context a bit like saying "a human can't be relevant to X because a brain is only a set of molecules, neurons, synapses"?

      Or even "this book won't have any effect on the world because it's only a collection of letters, see here, black ink on paper, that is what is IS, it can't DO anything"...

      Saying an LLM is a statistical prediction engine for the next token is IMO sort of confusing what it is with the medium it is expressed in/built of.

      For instance, take those small experiments that train a network on addition problems, mentioned in a sibling post. The weights end up forming an addition machine. An addition machine is what it is; that is the emergent behavior. The machine-learning weights are just the medium it is expressed in.

      What's interesting about LLMs is such emergent behavior. Yes, it's statistical prediction of likely next tokens, but training weights for that might well have the side effect of wiring up some kind of "intelligence" (for reasonable everyday definitions of the word "intelligence", such as programming as well as a median programmer). We don't really know this yet.
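
      A toy version of those addition experiments, as a hedged sketch in Python/PyTorch (plain rather than modular addition; the published "grokking" experiments train small transformers on modular arithmetic):

        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        # A tiny MLP trained only on examples of (a, b) -> a + b
        model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        for _ in range(2000):
            ab = torch.rand(256, 2) * 10            # random pairs in [0, 10)
            loss = nn.functional.mse_loss(model(ab), ab.sum(1, keepdim=True))
            opt.zero_grad(); loss.backward(); opt.step()
        print(model(torch.tensor([[3.0, 4.0]])))    # ~7.0: the weights now form an adder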

      • esailija 5 hours ago

        There is more than molecules, neurons and synapses. They are made from lower-level stuff that we have no idea about (well, we do in this instance, but you get the point). They are just higher-level things that are useful to explain and understand some things but don't describe or capture the whole thing. For that you would need to go to lower and lower levels, and so far it seems they go on infinitely. Currently we are stuck at the quantum level, but that doesn't mean it's the final level.

        OTOH, an LLM is just a token prediction engine. That fully and completely covers it. There are no lower-level secrets hidden in the design that nobody understands, because it could not have been created if there were. The fact that the output can be surprising is not evidence of anything; we have always had surprising outputs, like funny bugs or unexpected features. Using the word "emergence" for this is just deceitful.

        This algorithm has fundamental limitations, and they have not been getting better if you look closely. For instance, you could vibe code a C compiler now, but it's 80% there: a cute trick, but not usable in the real world. Just like anything else, it cannot be economically vibe coded to 100%. They are not going back and vibe coding the previous, simpler projects to 100% with "improved" models. Instead they are just vibe coding something bigger to 80%. This is not an improvement in limitations; it actually communicates between the lines that the limitations cannot be overcome.

        Also, enshittification has not even started yet.

      • ActorNightly 14 hours ago

        It's pretty clear that the problem of solving AI is software; I don't think anyone would disagree.

        But that problem is MUCH MUCH MUCH harder than people make it out to be.

        For example, you can reliably train an LLM to produce accurate output for assembly code that fits into its context window. However, let's say you give it a terabyte of assembly code - it won't be able to produce correct output, as it will run out of context.

        You can get around that with agentic frameworks, but all of those right now are manually coded.

        So how do you train an LLM to correctly take any length of assembly code and produce the correct result? The only way is to essentially train the structure of the neurons inside it to behave like a computer, but the problem is that you can't do back-propagation with discrete zero and one values unless you explicitly code in the architecture for a CPU. So obviously, error correction on inputs/outputs is not the way we get to intelligence.

        It may be that the answer is pretty much a stochastic search where you spin up x instances of trillion-parameter nets and make them operate in environments with some form of genetic algorithm, until you get something that behaves like a human, and any shortcutting of this is not really possible because of essentially chaotic effects.

        • handoflixue 11 hours ago

          > For example, you can reliably train an LLM to produce accurate output of assembly code that can fit into a context window. However, lets say you give it a Terabyte of assembly code - it won't be able to produce correct output as it will run out of context.

          Fascinating reasoning. Should we conclude that humans are also incapable of intelligence? I don't know any human who can fit a terabyte of assembly into their context window.

          • seanmcdirmid 11 hours ago

            Any human who would try to do this is probably a special case. A reasonable person would break it down into sub-problems and create interfaces to glue them back together...a reasonable AI might do that as well.

            • heeen2 3 hours ago

              I can tell you from first-hand experience that Claude + the Ghidra MCP is very good at understanding firmware, labeling functions, finding buffer overflows, and patching in custom functionality.

          • dimitri-vs 11 hours ago

            On the other hand the average human has a context window of 2.5 petabytes that's streaming inference 24/7 while consuming the energy equivalent of a couple sandwiches per day. Oh and can actually remember things.

            • handoflixue 4 hours ago

              Citation desperately needed? Last I checked, humans could not hold the entirety of Wikipedia in working memory, and that's a mere 24 GB. Our GPU might handle "2.5 petabytes" but we're not writing all that to disc - in fact, most people have terrible memory of basically everything they see and do. A one-trick visual-processing pony is hardly proof of intelligence.

        • dan_mctree 14 hours ago

          >So obviously, error correction with inputs/outputs is not the way we get to intelligence.

          This doesn't seem to follow at all, let alone obviously. Humans are able to reason through code without having to become a completely discrete computer, but probably can't reason through any length of assembly code, so why is that requirement necessary, and how have you shown LLMs can't achieve human levels of competence on this kind of task?

          • ActorNightly 13 hours ago

            > but probably can't reason through any length of assembly code

            Uh what? You can sit there step by step and execute assembly code, writing things down on a piece of paper and get the correct final result. The limits are things like attention span, which is separate from intelligence.

            Human brains operate continuously, with multiple parts active at once and with weights adjusted in real time, both in the style of backpropagation and as real-time updates for things like "memory". How do you train an LLM to behave like that?

            • dagss 6 hours ago

              So humans can get pen and paper and sleep and rest, but LLMs can't get files and context resets?

              Give the LLM the ability to use a tool that looks up and records instructions from/to files instead of holding everything in the context window, and to actively manage its context (write a new context and start fresh), and I think you would find the LLM could probably do it about as reliably as a human.

              Context is basically "short term memory". Why do you set the bar higher for LLMs than for humans?

            • foxglacier 12 hours ago

              Couldn't you periodically re-train it on what it's already done and use the context window for more short-term memory? That's kind of what humans do - we can't learn a huge amount in a short time but can accumulate a lot slowly (school, experience).

              A major obstacle is that they don't learn from their users, probably because of privacy. But imagine if your context window was shared with other people, and/or all your conversations were used to train it. It would get to know individuals and perhaps treat them differently, or maybe even manipulate how they interact with each other, so it becomes like a giant Jeffrey Epstein.

      • wavemode 14 hours ago

        You're putting a bunch of words in the parent commenter's mouth, and arguing against a strawman.

        In this context, "here’s how LLMs actually work" is what allows someone to have an informed opinion on whether a singularity is coming or not. If you don't understand how they work, then any company trying to sell their AI, or any random person on the Internet, can easily convince you that a singularity is coming without any evidence.

        This is separate from directly answering the question "is a singularity coming?"

        • handoflixue 11 hours ago

          The problem is, there's two groups:

          One says "well, it was built as a bunch of pieces, so it can only do the thing the pieces can do", which is reasonably dismissed by noting that basically the only people predicting current LLM capabilities are the ones who are remarkably worried about a singularity occurring.

          The other says "we can evaluate capabilities and notice that LLMs keep gaining new features at an exponential, now bordering on hyperbolic, rate", like the OP link. And those people are also fairly worried about the singularity occurring.

          So mainly you get people using "here's how LLMs actually work" to argue against the Singularity if and only if they are also arguing that LLMs can't do things they provably can do today, or are otherwise making arguments that would also declare humans incapable of intelligence / reasoning / etc.

          • wavemode 10 hours ago

            False dichotomy. One can believe that LLMs are capable of more than their constituent parts without necessarily believing that their real-world utility is growing at a hyperbolic rate.

            • handoflixue 4 hours ago

              Fair - I meant there's two major clusters in the mainstream debate, but like all debates there's obviously a few people off in all sorts of other positions.

    • catoc 4 hours ago

      > ”when they don’t even understand that the computer is the box attached to the monitor, not the monitor itself”

      Laughed out loud at that - and cried a little.

      I have had trouble explaining to people: "No! Don't use your email password! This is not your email you are logging in to; your email address is just a username for this other service. Don't give them your email password!"

    • jacquesm 18 hours ago

      > “here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”

      And there are plenty of people that take issue with that too.

      Unfortunately they're not the ones paying the price. And... stock options.

      • stego-tech 18 hours ago

        History paints a pretty clear picture of the tradeoff:

        * Profits now and violence later

        OR

        * Little bit of taxes now and accelerate easier

        Unfortunately we’ve developed such a myopic, “FYGM” society that it’s explicitly the former option for the time being.

        • jpadkins 17 hours ago

          Do you have a historical example of "Little bit of taxes now and accelerate easier"? I can't think of any.

          • nine_k 17 hours ago

            If you replace "taxes" with the more general "investment", it's everywhere. A good example is Amazon, which has reworked itself from an online bookstore into a global supplier of everything by ruthlessly reinvesting its profits.

            Taxes don't usually work as efficiently because the state is usually a much more sloppy investor. But it's far from hopeless, see DARPA.

            If you're looking for periods of high taxes and growing prosperity, the 1950s in the US are a popular example. It's not a great example though, because the US was the principal winner of WWII, the only large industrial country relatively unscathed by it.

            • PaulHoule 17 hours ago

              With the odd story that we paid the price for it in the long term.

              This book

              https://www.amazon.com/Zero-Sum-Society-Distribution-Possibi...

              tells the compelling story that the Mellon family teamed up with the steelworkers' union to use protectionism to protect the American steel industry's investments in obsolete open-hearth steel furnaces that couldn't compete on a fair market with the basic oxygen furnace process adopted by countries that had had their obsolete furnaces blown up. The rest of US industry, such as our car industry, was dragged down by this because it was using expensive and inferior materials. I think this book had a huge impact in terms of convincing policymakers everywhere that tariffs are bad.

              Funny how the Mellon family went on to further political mischief:

              https://en.wikipedia.org/wiki/Richard_Mellon_Scaife#Oppositi...

              • vondur 17 hours ago

                Ha, we gutted our manufacturing base, so if we bring it back it will now be state of the art! Not sure if that will work out for us, but hey, there is some precedent.

                • kelseyfrog 15 hours ago

                  The dollar became the world's reserve currency because the idea of Bancor lost to it. That subjected the US to the Triffin dilemma, which benefited US capital markets at the expense of a hugely underappreciated incentive to offshore manufacturing.

                  You can't onshore manufacturing and keep the dollar as reserve currency. The only question then is: are you willing to de-dollarize to bring back manufacturing jobs?

                  This isn't a rhetorical question. If the answer is yes, great, let's get moving. But if the answer is no, then sorry, dollarization and its effects will continue to persist.

                • jacquesm 16 hours ago

                  This is the silver lining in many bad stories: the pendulum will always keep on swinging because at the extremes the advantage flips.

              • hansvm 14 hours ago

                I'll take a look at that story later. I'm curious though, why is US metallurgy consistently top-notch if the processes are inferior? When I use wrenches, bicycle frames, etc from most other countries I have no end of troubles with weld delamination, stress fractures compounding into catastrophic failures, and whatnot, even including enormous wrenches just snapping in half with forces far below what something a tenth the size with American steel could handle.

                • jacquesm 13 hours ago

                  > I'm curious though, why is US metallurgy consistently top-notch if the processes are inferior?

                  I really wonder what you're comparing with.

                  Try some quality surgical steel from Sweden, Japan or Germany and you'll come away impressed. China is still not quite there but they are improving rapidly, Korea is already there and poised to improve further.

                  Metal buyers all over the globe are turning away from the US because of the effects of the silly tariffs, but they were never going there for the quality in the first place; they went for the price.

                  The US could easily catch up if they wanted to but the domestic market just isn't large enough.

                  And as for actual metallurgy knowledge, I think Russia still has an edge; they were always good when it came down to materials science, though they're sacrificing all of that now for very little gain.

                  • PaulHoule 11 hours ago

                    Also those old open hearth furnaces are long gone, see

                    https://www.youtube.com/watch?v=BHnJp0oyOxs

                    There are people making top quality steel in the US today by modern methods, but it wasn't like the new replaced the old; the old mostly disappeared and we got a little bit of the new.

                    • jacquesm 10 hours ago

                      Yes, I should have been more clear there: they could catch up in volume but it will require a different mindset if they want to become a net exporter of such items.

                      • gsf_emergency_6 9 hours ago

                        To add a meta contribution to yours using anecdotes:

                        US pipeline for metallurgical R&D broken (by financial/cultural incentives)

                        This guy studied metallurgy at Carleton U, Canada, switched to CS, founded YC, and emotionalized the decision:

                        https://news.ycombinator.com/item?id=39600555

                        Who knows, he might have become John Carmack's John Carmack, building rockets better than Carmack or Elon

                        • jacquesm 9 hours ago

                          And yet, it could be done, I'm pretty sure of that.

                • nine_k 14 hours ago

                  Which are these other countries? Have you tried something actually made in Japan, or in Germany, for instance?

                  What you describe seems like very cheap Chinese imports fraudulently imitating something else.

            • taurath 16 hours ago

              > the state is usually a much more sloppy investor

              I don’t find this to be true

              The state invests in important things that have 2nd and 3rd order positive benefit but aren’t immediately profitable. Money in a food bank is a “lost” investment.

              Alternatively the state plays power games and gets a little too attached to its military toys.

              • nine_k 16 hours ago

                State agencies are often good at choosing the right long-term targets. State agencies are often bad at the actual procurement, because of pork-barrelling and red tape. E.g. both private companies and NASA agree that spaceflight is a worthy target, but NASA ends up with the Space Shuttle (a nice design ruined by various committees) and SLS, while private companies come up with the Falcon 9.

                • stoneforger 15 hours ago

                  Sounds like a false dichotomy. NASA had all these different subcontractors to feed, in all these different states, and they explicitly gutted MOL and Dyna-Soar and all the Air Force projects that needed weird orbits and reentry trajectories, so the Space Shuttle became a huge compromise. Perverse incentives and all that. It's not state organizations per se, but rather non-profits with a clear goal, that create capabilities, tools and utilities that act as multipliers for everyone. A pretty big cooperative. Like, I dunno, what societies are supposed to exist for.

                  • nine_k 15 hours ago

                    But DoD with its weird requirements, and the Congress with its power to finance the project and the desire to bring jobs from it to every state, and the rules of contracting that NASA must follow, are all also part of the state, the way the state ultimately works.

              • Windchaser 15 hours ago

                Yeah, our use of our military force provides some of the most obvious cases of "bad investment". Vietnam, Iraq, etc

                And there are many others that might've been a positive investment from a strictly financial perspective, but not from a moral one: see Banana Republics and all those times the CIA backed military juntas.

                • otikik 2 hours ago

                  I would argue that those bad investments (such an understatement!) were clearly lobbied for by the military-industrial complex. So yes, the state dropped the ball, big, on those. But that was because the private sector pushed for it, probably also in a big way. I would say that, even though the politicians were ultimately responsible for those calamities, the CEOs who greatly enriched themselves from them are absolutely to be blamed, too.

            • exceptione 16 hours ago

              > Taxes don't usually work as efficiently because the state is usually a much more sloppy investor. But it's far from hopeless, see DARPA.

              Be careful. The data does not confirm that narrative. You mentioned the 1950s, which is a poignant example of reality conflicting with sponsored narrative. Pre-WOII, the wealthy class orbiting the monopolists, and by extension their installed politicians, had no ideas other than lowering taxes for the rich on and on, even as it only deepened the endless economic crisis. Many of them had fallen into the trap of believing their own narratives, something we know as the Cult of Wealth.

              Meanwhile, average Americans lived on food stamps. Politically deadlocked in quasi-religious ideas of "bad governments versus wise businessmen", America kept falling deeper. Meanwhile, with just 175,000 serving on active duty, the U.S. Army was the 18th biggest in the world[1], poorly equipped, poorly trained. Right-wing isolationism had brought the country into a precarious position. Then two things happened: Roosevelt and WOII.

              In a unique moment, the state took matters into its own hands. The sheer excellence in planning, efficiency, speed and execution of the state baffled the Republicans, putting the oligarchic model of the economy to shame. The economy grew tremendously as well, something the oligarchy could not pull off. It is not well known that WOII depended largely on state-operated industries, because the wealthy class quickly understood how much the state's performance threatened their narratives. So they invested in disinformation campaigns, claiming the efforts and achievements of the government as their own.

              1. https://www.politico.com/magazine/story/2019/06/06/how-world...

              • eric_cc 15 hours ago

                What does WOII mean?

                I assume you are talking about WW2 and at first thought it was a typo.

                • jacquesm 13 hours ago

                  WOII is how Dutch speaking/writing people would refer to WW2; it is literally 'wereldoorlog 2'.

              • nine_k 15 hours ago

                BTW the New Deal tried central planning and quickly rejected it. I'd say that the intense application of antitrust law in the late 1930s was a key factor that helped end the Great Depression. The war, and wartime government powers, were also key: the amount of federal government overreach and reform does not compare to what e.g. the second Trump administration has attempted. It was mostly done by people who got their positions in the administration on merit and care for the country rather than loyalty, and it showed.

                The post-war era, under Truman and Eisenhower administrations, reaped the benefits of the US being the wealthiest and most intact winner of WWII. At that time, the highest income tax rate bracket was 91%, but the effective rate was below 50%.

            • oceanplexian 17 hours ago

              > It's not a great example though, because the US was the principal winner of WWII, the only large industrial country relatively unscathed by it.

              The US is also shaping up to be the principal winner in Artificial Intelligence.

              If, as everyone is postulating, this has the same transformative impact on robotics as it does on software, we're probably looking at prosperity that will make the 1950s look like table stakes.

              • generic92034 15 hours ago

                Are you sure that in today's reality the fruits of the AI race will be harvested by "the people"?

                • otikik 2 hours ago

                  "The 3 wealthy people"

              • munk-a 16 hours ago

                Early on in the AI boom, Nvidia was highly valued because it was seen as the shovel-maker for research and development. It certainly was instrumental early on, but now there are a few viable options for training hardware - and, to me at least, it's unclear whether training hardware is actually the critical infrastructure, or if it will be something like power capacity (where the US is lagging significantly), education, or even cooling efficiency.

                I think it's extremely early to try and call who the principal winner will be especially with all the global shifts happening.

              • jacquesm 16 hours ago

                > The US is also shaping up to be the principal winner in Artificial Intelligence.

                There is no early mover advantage in AI in the same way that there was in all the other industries. That's the one thing that AI proponents in general seem not to have clued in to.

                What will happen is that it eventually drags everything down, because it takes the value out of the bulk of the service and knowledge economies. So you'll get places that are 'ahead' in the disruption. But the bottom will fall out of the revenue streams, which is one of the reasons these companies are all completely panicked and are wrecking their products by stuffing AI into them in every way possible, hoping that one of them will take.

                Model training is only an edge in a world where free models do not exist, once those are 'good enough' good luck with your AI and your rapidly outdated hardware.

                The typical investor's horizon is short, but not that short.

        • mlrtime 40 minutes ago

          We have taxes now though; how much is enough?

          Hint: The answer, for the government, is that it's never enough. "A little bit of taxes" is never what we had.

          Seriously though, I wouldn't mind "a little bit of taxes" if there were guaranteed ways to stop funding something when it's a failed experiment, which is difficult in government. Because "a little bit more" is always wanted.

        • lbreakjai 14 hours ago

          Violence was a moderating factor when people on each side were equally armed and numbers were the deciding factor.

          Nowadays you could squash an uprising with a few operators piloting drones remotely.

          • elictronic 13 hours ago

            Flying a drone around is easy. Identifying who is in the in-group and who is in the out-group, and then moving them, is the hard part.

            I'm not sure you have really thought out what the drone part is meant to do. Militaries have outgunned populaces for decades at this point. You don't need drones to kill civilians.

            • lbreakjai 13 hours ago

              It's actually quite easy. Whoever isn't in the bunker is the outgroup. You only needed to tell people apart when you needed some meatware to man the factories and work the fields.

              Militaries can side with the crowd, or more likely decide to keep the power for themselves.

              • Radim 3 hours ago

                Yeah ruling juntas do need to "man the fields & factories" (1st order meatware), in order to produce and maintain those drones. Or nukes, or whatever "deciding factor beyond numbers" put them in power.

                But they also need 2nd order meatware to support that 1st order: teachers, doctors, merchants… You need scientists to advance your technology against other militaries… You need leaders (3rd order) to keep the first two populations quiet and productive since that turns out to be more cost-effective than fear control through extermination…

                Hell you need a certain level of genetic diversity so your own kids don't come out weird.

                Give evolution a little more credit. The required number of humans for the in-group to be self-sustainable is definitely not billions, plus it's been shrinking with automation. But we are where we are for a reason - lots of alternative arrangements have been tried over millennia and found wanting.

                "Keep my bunker + my drone factory and some farmers, kill the rest" leaves rulers with terrible quality of life (bad) and the next-door-junta taking over pretty quickly (also bad). It is a self-defeating, poor long-term strategy.

                Automation tips the power balance further: fewer humans needed, more local autonomy. Which is, I suspect, why the ruling class are so terribly excited about AI, more so than some market valuations. Fewer pesky humans across all levels. Genetic diversity of bloodline remains the primary concern (unless you manage to live forever, which happens to be another evergreen of power ghouls).

        • AndrewKemendo 18 hours ago

          Every possible example of "progress" has either an individual or a state power purpose behind it.

          There is only one possible "egalitarian" forward-looking investment that paid off for everybody.

          I think the only exception to this is vaccines…and you saw how all that worked during Covid

          Everything else from the semiconductor to the vacuum cleaner the automobile airplanes steam engines I don’t care what it is you pick something it was developed in order to give a small group and advantage over all the other groups it is always been this case it will always be this case because fundamentally at the root nature of humanity they do not care about the externalities- good or bad

          • jacquesm 17 hours ago

            COVID has cured me (hah!) of the notion that humanity will be able to pull together when faced with a common enemy. That means global warming or the next pandemic are going to happen and we will not be able to stop it from happening because a solid percentage can't wait to jump off the ledge, and they'll push you off too.

          • ghurtado 17 hours ago

            If you edit your comment to add punctuation, please let me know: I would like to read that final pile of words.

            I did try, I promise.

            • AndrewKemendo 16 hours ago

              Ok here: Everything from the semiconductor through the vacuum cleaner, automobile, airplanes and steam engines was developed to give a small group an advantage over all the other groups. It has always been the case, it will always be the case.

              Fundamentally, at the root nature of humanity, humans do not care about the externalities, either good or bad.

              • tim333 16 hours ago

                That's a slightly odd way of looking at it. I'm guessing the people developing airplanes or whatever thought of a number of things including - hey this would be cool to do - and - maybe we can make some money - and - maybe this will help people travel - and - maybe it'll impress the girls - and probably some other things too. At least that's roughly how I've thought when I make stuff, never this will give a small group an advantage.

                • AndrewKemendo 15 hours ago

                  But the whole point is embedded in the task, otherwise you wouldn’t do it

                  If somebody is using monetary resources to buy NFTs instead of handing out food to the homeless, then you get less food for the homeless

                  All of the things listed are competitive task situations and you’re looking for some advantage that makes it easier for you

                  Well, if it makes it easier for you then it could make it easier for somebody else, which means you’re crowding out other options in that action space

                  That is to say, the pie of resources on this planet is fixed, in terms of energy and resource utilization across the lifespan of a human

              • jacquesm 15 hours ago

                Vacuum cleaner -> sell appliances -> sell electric motors

                But there was a clear advantage in quality of life for a lot of people too.

                Automobile -> part of industrialization of transport -> faster transport, faster world

                Arguably also a big increase in quality of life but it didn't scale that well and has also reduced the quality of life. If all that money had gone into public transport then that would likely have been a lot better.

                Airplanes -> yes, definitely, but they were also clearly seen as an advantage in war, in fact that was always a major driver behind inventions.

                Steam engine -> the mother of all prime movers and the beginnings of the fossil fuel debacle (coal).

                Definitely a quality of life change but also the cause of the bigger problems we are suffering from today.

                The 'coffin corner' (one of my hobby horses) is a real danger: we have, as a society, achieved a certain velocity; if we slow down too much we will crash, if we speed up too much the plane will come apart. Managing these transitions is extremely delicate work and it does not look as though 'delicate' is in the vocabulary of a lot of people in the driving seats.

                • AndrewKemendo 15 hours ago

                  This is where the concept of trickle-down economics came from, though, and we know that’s not actually accurate

                  I used to hear about this with respect to how funding NASA would get us more inventions, because they funded Velcro

                  No, it’s simply that there was a positive temporary externality for some subset of groups, but the primary long-term benefit went to the controller of the capital

                  The people utilizing them were marginally involved because they were only given the options that capital produced for them

    • mitthrowaway2 18 hours ago

      > whether the singularity actually happens or not is irrelevant so much as whether enough people believe it will happen and act accordingly.

      I disagree. If the singularity doesn't happen, then what people do or don't believe matters a lot. If the singularity does happen, then it hardly matters what people do or don't believe (edit: about whether or not the singularity will happen).

      • afthonos 18 hours ago

        I don’t think that’s quite right. I’d say instead that if the singularity does happen, there’s no telling which beliefs will have mattered.

      • cgannett 18 hours ago

          if people believe it's a threat and it is also real, then what matters is timing

        • goatlover 17 hours ago

          Which would also mean the accelerationists are potentially putting everyone at risk. I'd think a soft takeoff decades in the future would give us a much better chance of building the necessary safeguards and reorganizing society accordingly.

          • AndrewKemendo 14 hours ago

            This is a soft takeoff

            We, the people actually building it, have been discussing it for decades

            I started reading Kurzweil in the early 90s

            If you’re not up to speed that’s your fault

            • goatlover 9 hours ago

              Decades from now. Society is nowhere near ready for a singularity. The AI we have now, as far as it has come, is still a tool for humans to use. It's more Augmented Intelligence than AGI.

              A hard takeoff would be the tool bootstrapping itself into an autonomous self-improving ASI in a short amount of time.

              And I read Kurzweil years ago too. He thought reverse engineering the human brain, once the hardware was powerful enough, would give us the singularity in 2045. And the Turing Test would be passed by 2029, but it seems LLMs have already accomplished this.

      • sigmoid10 18 hours ago

        Depends on what a post singularity world looks like, with Roko's basilisk and everything.

      • Negitivefrags 18 hours ago

        > If the singularity does happen, then it hardly matters what people do or don't believe.

        Depends on how you feel about Roko's basilisk.

        • VonTum 17 hours ago

          God Roko's Basilisk is the most boring AI risk to catch the public consciousness. It's just Pascal's wager all over again, with the exact same rebuttal.

          • camgunz 16 hours ago

            The culture that brought you "speedrunning computer science with JavaScript" and "speedrunning exploitative, extractive capitalism" is back with their new banger "speedrunning philosophy". Nuke it from orbit; save humanity.

    • csallen 16 hours ago

      > prior to reforming society into one that does not predicate survival on continued employment and wages

      There's no way that'll happen. The entire history of humanity is 99% reacting to things rather than proactively preventing things or adjusting in advance, especially at the societal level. You would need a pretty strong technocracy or dictatorship in charge to do otherwise.

      • tim333 13 hours ago

        The UK seems to be prototyping that. We're changing to a society where everyone lives by claiming benefits. (e.g. https://www.gbnews.com/money/benefits-claimants-earnings-rev...)

        • Nursie 5 hours ago

          Ugh, GBNews, outrage fodder for idiots and the elderly with no ability to navigate the modern information landscape.

          You can tell it's watched almost exclusively by old people because all the ads on the channel are for those funeral pre-pay services or retirement homes.

          Safe to ignore anything they have to say.

          • kylegordon 4 hours ago

            > Safe to ignore anything they have to say.

            And that of anyone who quotes them too

            • tim333 2 hours ago

              The everyone living on benefits as a prototype for the singularity was a bit tongue in cheek.

              • Nursie 42 minutes ago

                Fair enough, it’s so hard to tell these days.

                I do wish me dad would stop watching that channel though, it can’t be any good for his heart.

      • stoneforger 15 hours ago

        You would need a new sense of self and a life free of fear, raising children where they can truly be anything they like and teach their own kids how to find meaning in a life lived well. "Best I can do is treefiddy" though..

    • noiv 13 hours ago

      "If men define situations as real, they are real in their consequences."

      The Thomas theorem is a sociological theory formulated in 1928 by William Isaac Thomas and Dorothy Swaine Thomas.

      https://en.wikipedia.org/wiki/Thomas_theorem

    • pryce 15 hours ago

      > whether the singularity actually happens or not is irrelevant so much as whether enough people believe it will happen and act accordingly.

      We've already been here in the 1980s.

      The tech industry needs to cultivate people who are interested in the real capabilities and the nuance around them, and eject the people who aim to turn the tech industry into a "you don't even need a product" cult of warmed-over Tony Robbins acolytes.

      • FarmerPotato 14 hours ago

        All the discussion of investment and economics can be better informed by perusing the economic data in The Rise and Fall of American Growth. Robert Gordon's empirical finding is that American productivity compounded astonishingly from 1870-1970, but has been stuck at a very low growth rate since then.

        It's hard to square with the computer revolution, but my take is that the post-70s "net creation minus creative destruction" was large but spread out over more decades. Whereas technologies like electrification, autos, mass production, the telephone, refrigeration, fertilizers, and pharmaceuticals produced incomparable growth over a century.

        So if you were born in 1970s America, your experience of taxes, inflation, prosperity, and which policies work can all feel heavier than what folks experienced in the prior century. Of course that's in the long run (i.e. a generation).

        I question whether AI tools have great net positive creation minus destruction.

    • mgraczyk 11 hours ago

      This entire chain of reasoning takes for granted that there won't be a singularity

      If you're talking about "reforming society", you are really not getting it. There won't be society, there won't be earth, there won't be anything like what you understand today. If you believe that a singularity will happen, the only rational things to do are to stop it or make sure it somehow does not cause human extinction. "Reforming society" is not meaningful

      • corndoge 11 hours ago

        There will be earth!

    • strangattractor 14 hours ago

      I thought the Singularity had already happened when the Monkeys used tools to kill the other Monkeys and threw the bone into the sky to become a Space Station.

    • menaerus 17 hours ago

      > It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”)

      Here's the fallacy you fell into - and this is important to understand. Neither you nor I understand "how LLMs actually work" because, well, nobody really does. Not even the scientists who built the (math around the) models. So you can't really use that argument, because it would be silly to think you know something the rest of the scientific community doesn't. Actually, there's a whole new field of science developing around understanding how models actually arrive at the answers they give us. The thing is that we are only observers of the results of the experiments we run by training those models, and it just so happens that the result is something we find plausible, but that doesn't mean we understand it. It's like a physics experiment - we can see that something behaves in a certain way, but we can't explain how or why.

      • hnfong 16 hours ago

        Pro tip: call it a "law of nature" and people will somehow stop pestering you about the why.

        I think in a couple decades people will call this the Law of Emergent Intelligence or whatever -- shove sufficient data into a plausible neural network with sufficient compute and things will work out somehow.

        On a more serious note, I think the GP fell into an even greater fallacy of believing reductionism is sufficient to dissuade people from ... believing in other things. Sure, we now know how to reduce apparent intelligence into relatively simple matrices (and a huge amount of training data), but that doesn't imply anything about social dynamics or how we should live at all! It's almost like we're asking particle physicists how we should fix the economy or something like that. (Yes, I know we're almost doing that.)

        • gfarah 13 hours ago

          In science these days the term "Law" is almost never used anymore; the term "Theory" replaced it. E.g. the theory of special relativity instead of a law of special relativity.

      • striking 16 hours ago

        Even if interpretability of specific models or features within them is an open area of research, the mechanics of how LLMs work to produce results are observable and well-understood, and methods to understand their fundamental limitations are pretty solid these days as well.

        Is there anything to be gained from following a line of reasoning that basically says LLMs are incomprehensible, full stop?

        • famouswaffles 15 hours ago

          >Even if interpretability of specific models or features within them is an open area of research, the mechanics of how LLMs work to produce results are observable and well-understood, and methods to understand their fundamental limitations are pretty solid these days as well.

          If you train a transformer on (only) lots and lots of addition pairs, i.e. '38393 + 79628 = 118021', and nothing else, the transformer will, during training, discover an algorithm for addition and employ it in service of predicting the next token, which in this instance would be the sum of two numbers.

          We know this because of tedious interpretability research, the very limited problem space and the fact we knew exactly what to look for.
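
          For concreteness, the data side of that toy setup fits in a few lines. This is an illustrative sketch only, not the actual research code, and the model and training loop are elided:

              import random

              # Generate the kind of next-token training strings described above.
              # A small character-level transformer trained to predict each next
              # character of these strings ends up implementing *some* internal
              # addition algorithm; which one, you only learn from interpretability
              # work, not from reading the weights.
              def make_example():
                  a, b = random.randint(0, 99999), random.randint(0, 99999)
                  return f"{a} + {b} = {a + b}"

              corpus = [make_example() for _ in range(100_000)]
              print(corpus[0])  # e.g. '38393 + 79628 = 118021'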

          Alright, let's leave addition aside (SOTA LLMs are after all trained on much more) and think about another question. Any other question at all. How about something like:

          "Take a capital letter J and a right parenthesis, ). Take the parenthesis, rotate it counterclockwise 90 degrees, and put it on top of the J. What everyday object does that resemble?"

          What algorithm does GPT or Gemini or whatever employ to answer this and similar questions correctly? It's certainly not the one it learnt for addition. Do you know? No. Do the creators at OpenAI or Google know? Not at all. Can you or they find out right now? Also no.

          Let's revisit your statement.

          "the mechanics of how LLMs work to produce results are observable and well-understood".

          Observable, I'll give you that, but how on earth can you look at the above and sincerely call it 'well-understood'?

          • striking 15 hours ago

            It's pattern matching, likely from typography texts and descriptions of umbrellas. My understanding is that the model can attempt some permutations in its thinking, and eventually a permutation's tokens catch enough attention to attempt a solution; once it is attending to "everyday object", "arc", and "hook", it will reply with "umbrella".

            Why am I confident that it's not actually doing spatial reasoning? At least in the case of Claude Opus 4.6, it also confidently replies "umbrella" even when you tell it to put the parenthesis under the J, with a handy diagram clearly proving itself wrong: https://claude.ai/share/497ad081-c73f-44d7-96db-cec33e6c0ae3 . Here's me specifically asking for the three key points above: https://claude.ai/share/b529f15b-0dfe-4662-9f18-97363f7971d1

            I feel like I have a pretty good intuition of what's happening here based on my understanding of the underlying mathematical mechanics.

            Edit: I poked at it a little longer and I was able to get some more specific matches to source material binding the concept of umbrellas being drawn using the letter J: https://claude.ai/share/f8bb90c3-b1a6-4d82-a8ba-2b8da769241e

            • famouswaffles 14 hours ago

              >It's pattern matching, likely from typography texts and descriptions of umbrellas.

              "Pattern matching" is not an explanation of anything, nor does it answer the question I posed. You basically hand-waved the problem away with a conveniently vague and non-descriptive phrase. Do you think you could publish that in a paper?

              >Why am I confident that it's not actually doing spatial reasoning? At least in the case of Claude Opus 4.6, it also confidently replies "umbrella" even when you tell it to put the parenthesis under the J, with a handy diagram clearly proving itself wrong

              I don't know what to tell you, but a J with the parenthesis upside down still resembles an umbrella. To think that a machine would recognize it's just a flipped umbrella when a human wouldn't is amazing, but here we are. It's doubly baffling because Claude quite clearly explains it in your transcript.

              >I feel like I have a pretty good intuition of what's happening here based on my understanding of the underlying mathematical mechanics.

              Yes I realize that. I'm telling you that you're wrong.

              • pbhjpbhj 12 hours ago

                >Do you think you could publish that in a paper?

                You seem to think it's not 'just' tensor arithmetic.

                Have you read any of the seminal papers on neural networks, say?

                It's [complex] pattern matching as the parent said.

                If you want models to draw composite shapes based on letter forms and typography then you need to train them (or at least fine-tune them) to do that.

                I still get opposite (antonym) confusion occasionally in responses to inferences where I expect the training data is relatively lacking.

                That said, you claim the parent is wrong. How would you describe LLM models, or generative "AI" models in the confines of a forum post, that demonstrates their error? Happy for you to make reference to academic papers that can aid understanding your position.

                • famouswaffles 10 hours ago

                  >You seem to think it's not 'just' tensor arithmetic.

                  If I asked you to explain how a car works and you responded with a lecture on metallic bonding in steel, you wouldn’t be saying anything false, but you also wouldn’t be explaining how a car works. You’d be describing an implementation substrate, not a mechanism at the level the question lives at.

                  Likewise, “it’s tensor arithmetic” is a statement about what the computer physically does, not what computation the model has learned (or how that computation is organized) that makes it behave as it does. It sheds essentially zero light on why the system answers addition correctly, fails on antonyms, hallucinates, generalizes, or forms internal abstractions.

                  So no: “tensor arithmetic” is not an explanation of LLM behavior in any useful sense. It’s the equivalent of saying “cars move because atoms.”

                  >It's [complex] pattern matching as the parent said

                  “Pattern matching”, whether you add [complex] to it or not is not an explanation. It gestures vaguely at “something statistical” without specifying what is matched to what, where, and by what mechanism. If you wrote “it’s complex pattern matching” in the Methods section of a paper, you’d be laughed out of review. It’s a god-of-the-gaps phrase: whenever we don’t know or understand the mechanism, we say “pattern matching” and move on, but make no mistake, it's utterly meaningless and you've managed to say absolutely nothing at all.

                  And note what this conveniently ignores: modern interpretability work has repeatedly shown that next-token prediction can produce structured internal state that is not well-described as “pattern matching strings”.

                  - Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task (https://openreview.net/forum?id=DeG07_TcZvT) and Emergent World Models and Latent Variable Estimation in Chess-Playing Language Models (https://openreview.net/forum?id=PPTrmvEnpW&referrer=%5Bthe%2...

                  Transformers trained on Othello or Chess games (same next token prediction) were demonstrated to have developed internal representations of the rules of the game. When a model predicted the next move in Othello, it wasn't just "pattern matching strings", it had constructed an internal map of the board state you could alter and probe. For Chess, it had even found a way to estimate a player's skill to better predict the next move.

                  There are other interpretability papers even more interesting than those. Read them, and perhaps you'll understand how little we know.

                  On the Biology of a Large Language Model - https://transformer-circuits.pub/2025/attribution-graphs/bio...

                  Emergent Introspective Awareness in Large Language Models - https://transformer-circuits.pub/2025/introspection/index.ht...

                  >That said, you claim the parent is wrong. How would you describe LLM models, or generative "AI" models in the confines of a forum post, that demonstrates their error? Happy for you to make reference to academic papers that can aid understanding your position.

                  Nobody understands LLMs anywhere near enough to propose a complete theory that explains all their behaviors and failure modes. The people who think they do are the ones who understand them the least.

                  What we can say:

                  - LLMs are trained via next-token prediction and, in doing so, are incentivized to discover algorithms, heuristics, and internal world models that compress training data efficiently.

                  - These learned algorithms are not hand-coded; they are discovered during training in high-dimensional weight space and because of this, they are largely unknown to us.

                  - Interpretability research shows these models learn task-specific circuits and representations, some interpretable, many not.

                  - We do not have a unified theory of what algorithms a given model has learned for most tasks, nor do we fully understand how these algorithms compose or interfere.

                  • mlrtime 4 minutes ago

                    I think what you two are going back and forth on is the heated debate in AI research regarding Emergent Abilities. Specifically, whether models actually develop "sudden" new powers as they scale, or if those jumps are just a mirage caused by how we measure them.

              • striking 11 hours ago

                I don't have much more to add to the sibling comment other than the fact that the transcript reads

                > When you rotate ")" counterclockwise 90°, it becomes a wide, upward-opening arc — like ⌣.

                but I'm pretty sure that's what you get if you rotate it clockwise.

            • menaerus 5 hours ago

              > I feel like I have a pretty good intuition of what's happening here based on my understanding of the underlying mathematical mechanics.

              You should write a paper and release it and basically get rich.

          • dbdoug 15 hours ago

            From Gemini: When you take those two shapes and combine them, the resulting image looks like an umbrella.

        • hn_acc1 16 hours ago

          You can't keep pushing the AI hype train if you consider it just a new type of software / fancy statistical database.

        • menaerus 16 hours ago

          Yes, there is - benefit of the doubt.

      • liuliu 16 hours ago

        Agree. I think it's just that people have their own simplified mental models of how it works. However, there is no reason to believe these simplified mental models are accurate (otherwise we would have been here 20 years earlier with HMM models).

        The simplest way to stop people from thinking is to have a semi-plausible / "made-me-smart" incorrect mental model of how things work.

        • hn_acc1 16 hours ago

          Did you mean to use the word "mental"?

    • bheadmaster 18 hours ago

      > here’s how LLMs actually work

      But how is that useful in any way?

      For all we know, LLMs are black boxes. We really have no idea how the ability to have a conversation emerged from predicting the next token.

      • OkayPhysicist 18 hours ago

        > We really have no idea how the ability to have a conversation emerged from predicting the next token.

        Maybe you don't. To be clear, this is benefiting massively from hindsight (just as, if I didn't know how combustion engines worked, I probably wouldn't have dreamed up how to make one), but the emergent conversational capabilities of LLMs are pretty obvious. In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question. A normal conversational reply is the most common thing to follow a conversation opener. While impressive, these things aren't magic.

        • dTal 17 hours ago

          >In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question.

          No it isn't. Type a question into a base model, one that hasn't been finetuned into being a chatbot, and the predicted continuation will be all sorts of crap, but very often another question, or a framing that positions the original question as rhetorical in order to make a point. Untuned raw language models have an incredible flair for suddenly and unexpectedly shifting context - it might output an answer to your question, then suddenly decide that the entire thing is part of some internet flamewar and generate a completely contradictory answer, complete with insults to the first poster. It's less like talking with an AI and more like opening random pages in Borges' infinite library.

          To get a base language model to behave reliably like a chatbot, you have to explicitly feed it "a transcript of a dialogue between a human and an AI chatbot", and allow the language model to imagine what a helpful chatbot would say (and take control during the human parts). The fact that this works - that a mere statistical predictive language model bootstraps into a whole persona merely because you declared that it should, in natural English - well, I still see that as a pretty "magic" trick.

          • famouswaffles 16 hours ago

            >No it isn't. Type a question into a base model, one that hasn't been finetuned into being a chatbot, and the predicted continuation will be all sorts of crap, but very often another question, or a framing that positions the original question as rhetorical in order to make a point.....

            To be fair, only if you pose the question singularly, with no preceding context. If you want the raw LLM to answer your question(s) reliably then you can prepend the context with other question-answer pairs and it works fine. A raw LLM is already capable of being a chatbot or anything else with the right preceding context.
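
            A minimal sketch of what that prepending looks like; the pairs are arbitrary and the commented-out complete() call is a hypothetical stand-in for whatever raw completion API you have, not a real library function:

                # Few-shot prompt for a *base* (non-chat) model: the prepended Q/A
                # pairs make an answer, rather than another question, the
                # statistically likely continuation of the final line.
                few_shot = (
                    "Q: What is the capital of France?\n"
                    "A: Paris\n"
                    "Q: How many legs does a spider have?\n"
                    "A: Eight\n"
                    "Q: What gas do plants absorb from the air?\n"
                    "A:"
                )
                # completion = complete(model="some-base-model", prompt=few_shot)
                print(few_shot)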

            • dTal 4 hours ago

              Right, but that was my point - statistically, answers do not follow questions without some establishing context, and as such, while LLMs are "simply" next word predictors, the chatbots aren't - they are Hofstadterian strange loops that we will into being. The simpler you think language models are, the more that should seem "magic".

              They're not simple though. You can understand, in a reductionist sense, the basic principles of how transformers perform function approximation; but that does not grant an intuitive sense of the nature of the specific function they have been trained to approximate, or how they have achieved this approximation. We have little insight into what abstract concepts each of the many billions of parameters map on to. Progress on introspecting these networks has been a lot slower than trial-and-error improvements. So there is a very real sense in which we have no idea how LLMs work, and they are literally "magic black boxes".

              No matter how you slice it - if "magic" is a word which can ever be applied to software, LLM chatbots are sure as shit magic.

        • accounting2026 15 hours ago

          If such a simplistic explanation were true, LLMs would only be able to answer things that had been asked before, where at least a 'fuzzy' textual question/answer match was available. This is clearly not the case. In practice you can prompt the LLM with such a large number of constraints that the combinatorial explosion ensures no one has asked that before, and you will still get a relevant answer combining all of them. Think of combinations of features in a software request, including making some module that fits into your existing system (for which you have provided source) along with a list of requested features. Or questions you form based on a number of life experiences and interests that, combined, are unique to you. You can switch programming language, human language, writing style, and level as you wish, and discuss it in super esoteric languages or Morse code.

          So are we to believe these answers appear just because there happened to be similar questions in the training data where a suitable answer followed? Even if for the sake of argument we accept this explanation by "proximity of question/answer", it is immediately clear that it would have to rely on extreme levels of abstraction and mixing-and-matching going on inside the LLM. And it is then this process whose workings we need to explain, whereas the textual proximity you invoke relies on it rather than explaining it.

          • tavavex 10 hours ago

            I think you're confusing OP for the people who claim that there is zero functional difference between an LLM and a search engine that just parrots stuff already in it. But they never made such a claim. Here, let me try: the simplest explanation for how next token estimation leads to a model that often produces true answers is that for most inputs, the most likely next token is true. Given their size and the way they're trained, LLMs obviously don't just ingest training data like a big archive, they contain something like an abstract representation of tokens and concepts. While not exactly like human knowledge, the network is large and deep enough that LLMs are capable of predicting true statements based on preceding text. This also enables them to answer questions not in their training dataset, although accuracy obviously suffers the further you deviate from known topics. The most likely next token to any question is the true answer, so they essentially ended up being trained to estimate truth.

            I'm not saying this is bad or underwhelming, by the way. It's incredible how far people were able to push machine learning with just the knowledge we have now, and how they're still making progress. I'm just saying it's not magic. It's not something like an unsolved problem in mathematics.

            • accounting2026 5 hours ago

              No one ever made the claim it was magic, not even remotely. Regarding the rest of your commentary: a) the original claim was that LLMs were not understood and are a black box. b) Then someone claims this is not true and that they know well how LLMs work: it is simply due to questions and answers being in close textual proximity in the training data. c) I then claim this is a shallow explanation, because you then additionally need to invoke a huge abstraction network - which is a black box. d) You seem to agree with this while at the same time saying I misrepresented "b", which I don't think I did. They really claimed they understood it, and only offered this textual proximity thing.

              In general, every attempted explanation of LLMs that appeals to "[just] predicting the next token" is thought-terminating and automatically invalid as an explanation. Why? Because it confuses the objective function with the result. It adds exactly zero over saying "I know how a chess engine works, it just predicts the next move and has been trained to predict the next move", or "a talking human just predicts the next word, as it was trained to do". It says zero about how this is done internally in the model. You could have a physical black box predicting the next token, and inside you could have simple frequentist tables, or a human brain, or an LLM. In all cases you could say the box is predicting the next token, and if any training was involved you could say it was trained to predict the next token.

        • bheadmaster 16 hours ago

          > Maybe you don't.

          My best friend, who has literally written a doctorate on artificial intelligence, doesn't. If you do, please write a paper on it and email it to me. My friend would be thrilled to read it.

          • gxs 9 hours ago

            Yeah I sort of cringed when I read his comment to be honest

            The whole point of the area of work that, as far as I know, is called interpretability is precisely to try and figure out exactly how these things work

            So I thought your comment was a good way of putting this

        • famouswaffles 17 hours ago

          >In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question. A normal conversational reply is the most common thing to follow a conversation opener. While impressive, these things aren't magic.

          Obviously, that's the objective, but who's to say you'll reach a goal just because you set it? And more importantly, who's to say you have any idea how the goal has actually been achieved?

          You don't need to think LLMs are magic to understand we have very little idea of what is going on inside the box.

          • measurablefunc 16 hours ago

            We know exactly what is going on inside the box. The problem isn't knowing what is going on inside the box, the problem is that it's all binary arithmetic & no human being evolved to make sense of binary arithmetic so it seems like magic to you when in reality it's nothing more than a circuit w/ billions of logic gates.

            • famouswaffles 16 hours ago

              We do not know or understand even a tiny fraction of the algorithms and processes a Large Language Model employs to answer any given question. We simply don't. Ironically, only the people who understand things the least think we do.

              Your comment about 'binary arithmetic' and 'billions of logic gates' is just nonsense.

              • measurablefunc 16 hours ago
                • camgunz 16 hours ago

                  "Look man all reality is just uncountable numbers of subparticles phasing in and out of existence, what's not to understand?"

                  • measurablefunc 15 hours ago

                    Your response is a common enough fallacy to have a name: straw man.

                    • stickfigure 14 hours ago

                      I think the fallacy at hand is more along the lines of "no true scotsman".

                      You can define understanding to require such detail that nobody can claim it; you can define understanding to be so trivial that everyone can claim it.

                      "Why does the sun rise?" Is it enough to understand that the Earth revolves around the sun, or do you need to understand quantum gravity?

                      • measurablefunc 14 hours ago

                        Good point. OP was saying "no one knows" when in fact plenty of people do know but people also often conflate knowing & understanding w/o realizing that's what they're doing. People who have studied programming, electrical engineering, ultraviolet lithography, quantum mechanics, & so on know what is going on inside the computer but that's different from saying they understand billions of transistors b/c no one really understands billions of transistors even though a single transistor is understood well enough to be manufactured in large enough quantities that almost anyone who wants to can have the equivalent of a supercomputer in their pocket for less than $1k: https://www.youtube.com/watch?v=MiUHjLxm3V0.

                        Somewhere along the way from one transistor to a few billion human understanding stops but we still know how it was all assembled together to perform boolean arithmetic operations.

                        • famouswaffles 13 hours ago

                          Honestly, you are just confused.

                          With LLMs, the "knowing" you're describing is trivial and doesn't really constitute knowing at all. It's just the physics of the substrate. When people say LLMs are a black box, they aren't talking about the hardware or the fact that it's "math all the way down." They are talking about interpretability.

                          If I hand you a 175-billion parameter tensor, your 'knowledge' of logic gates doesn't help you explain why a specific circuit within that model represents "the concept of justice" or how it decided to pivot a sentence in a specific direction.

                          On the other hand, the very professions you cited rely on interpretability. A civil engineer doesn't look at a bridge and dismiss it as "a collection of atoms" unable to go further. They can point to a specific truss and explain exactly how it manages tension and compression, tell you why it could collapse in certain conditions. A software engineer can step through a debugger and tell you why a specific if statement triggered.

                          We don't even have that much for LLMs, so why would you say we have an idea of what's going on?

                          • stickfigure 12 hours ago

                            It sounds like you're looking for something more than the simple reality that the math is what's going on. It's a complex system that can't simply be debugged through[1], but that doesn't mean it isn't "understood".

                            This reminds me of Searle's insipid Chinese Room; the rebuttal (which he never had an answer for) is that "the room understands Chinese". It's just not satisfying to someone steeped in cultural traditions that see people as "souls". But the room understands Chinese; the LLM understands language. It is what it is.

                            [1] Since it's deterministic, it certainly can be debugged through, but you probably don't have the patience to step through trillions of operations. That's not the technology's fault.

                            • famouswaffles 10 hours ago

                              >It sounds like you're looking for something more than the simple reality that the math is what's going on.

                              Train a tiny transformer on addition pairs (i.e. '38393 + 79628 = 118021') and it will learn an algorithm for addition to minimize next-token error. This is not immediately obvious. You won't be able to just look at the matrix multiplications and see what addition implementation it subscribes to; we know this from tedious interpretability research on the features of the model. See, this addition transformer is an example of a model we do understand.

                              So those inscrutable matrix multiplications do have underlying meaning and multiple interpretability papers have alluded as much, even if we don't understand it 99% of the time.

                              I'm very fine with simply saying 'LLMs understand Language' and calling it a day. I don't care for Searle's Chinese Room either. What I'm not going to tell you is that we understand how LLMs understand language.

                          • measurablefunc 13 hours ago

                            No one relies on "interpretability" in quantum mechanics. It is famously uninterpretable. In any case, I don't think any further engagement is going to be productive for anyone here so I'm dropping out of this thread. Good luck.

                            • famouswaffles 13 hours ago

                              Quantum mechanics has competing interpretations (Copenhagen, Many-Worlds, etc.) about what the math means philosophically, but we still have precise mathematical models that let us predict outcomes and engineer devices.

                              Again, we lack even this much with LLMs, so why say we know how they work?

                              • Dylan16807 9 hours ago

                                Unless I'm missing what you mean by a mile, this isn't true at all. We have infinitely precise models for the outcomes of LLMs because they're digital. We are also able to engineer them pretty effectively.

                                • famouswaffles 8 hours ago

                                  The ML research world (so this isn't simply a matter of being ignorant/uninformed) was surprised by the performance of GPT-2 and utterly shocked by GPT-3. Why? Isn't that strange? Did the transformer architecture fundamentally change between these releases? No, it did not.

                                  So why? Because even in 2026, never mind 2018 and 2019, the only way to really know exactly how a neural network will perform when trained with x data at y scale is to train it and see. No elaborate "laws", no neat equations. Modern artificial intelligence is an extremely empirical, trial-and-error field, with researchers often giving post-hoc rationalizations for architectural decisions. So no, we do not have any precise models that tell us how an LLM will respond to any query. If we did, we wouldn't need to spend months and millions of dollars training them.

                                  • Dylan16807 7 hours ago

                                    We don't have a model for how an LLM that doesn't exist will respond to a specific query. That's different from lacking insight at all. For an LLM that exists it's still hard to interpret but it's very clear what is actually happening. That's better than you often get with quantum physics when there's a bunch of particles and you can't even get a good answer for the math.

                                    And even for potential LLMs, there are some pretty good extrapolations for overall answer quality based on the amount of data and the amount of training.

      • tim333 13 hours ago

        I thought the Hinton interview with Jon Stewart gives a rough idea of how they work. Hinton got the Turing and Nobel prizes for inventing some of the stuff https://youtu.be/jrK3PsD3APk?t=255

      • MarkusQ 18 hours ago

        > We really have no idea how the ability to have a conversation emerged from predicting the next token.

        Uh, yes, we do. It works in precisely the same way that you can walk from "here" to "there" by taking a step towards "there", and then repeating. The cognitive dissonance comes when we conflate this way of "having a conversation" with the way two people converse, and assume that because the two produce similar outputs they must be "doing the same thing", at which point it becomes hard to see how LLMs could be doing it.

        Sometimes things seem unbelievable simply because they aren't true.

        • bheadmaster 16 hours ago

          > It works in precisely the same way that you can walk from "here" to "there" by taking a step towards "there", and then repeating.

          It's funny how, in order to explain one complex phenomenon, you took an even more complex phenomenon as if it somehow simplifies it.

          • MarkusQ 10 hours ago

            Sorry, can't tell if that's sarcasm or not.

            I wasn't referring to the biomechanical process of walking; I was referring to the process of gradient descent, which is well understood and, yes, quite simple.
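
            A minimal sketch of that process, fitting a single weight by repeatedly stepping "towards there" (my toy numbers, obviously nothing like an LLM's scale):

                # Gradient descent in its entirety: start somewhere, take a small
                # step downhill on the error, repeat. Fit w so that w*x matches y;
                # the loop converges to w = 3.
                x, y = 2.0, 6.0
                w, lr = 0.0, 0.05
                for _ in range(200):
                    grad = 2 * (w * x - y) * x  # derivative of (w*x - y)**2 w.r.t. w
                    w -= lr * grad              # the step towards "there"
                print(w)  # ~3.0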

            • bheadmaster an hour ago

              If that were true, knowing how elementary particles work would give us an understanding of the whole universe, in which case no other science would exist. But other sciences do exist; ergo, you're wrong.

    • 0x20cowboy 18 hours ago

      "'If I wished,' O'Brien had said, 'I could float off this floor like a soap bubble.' Winston worked it out. 'If he thinks he floats off the floor, and if I simultaneously think I see him do it, then the thing happens'".

    • dakolli 17 hours ago

      Just say it simply,

      1. LLMs only serve to reduce the value of your labor to zero over time. They don't even need to be great tools; they just need to be perceived as "equally good" to engineers for the C-suite to lay everyone off and rehire at 50-25% of previous wages, repeating this cycle over a decade.

      2. LLMs will not allow you to join the billionaire class; that wouldn't make sense, as anyone could if that were the case. They erode the technical meritocracy these tech CEOs worship on podcasts and YouTube (makes you wonder what they are lying about). Your original ideas and that startup you think is going to save you aren't going to be worth anything if someone with minimal skills can copy them.

      3. People don't want to admit it, but heavy users of LLMs know they're losing something, and there's a deep-down feeling that it's not the right way to go about things. It's not dissimilar to the guilty dopaminergic crash one gets when taking shortcuts in life.

      I used like 1.8bb Anthropic tokens last year; I won't be using it again, I won't be participating in this experiment. I've likely lost years of my life in "potential learning" from the social media experiment, and I'm not doing that again. I want to study compilers this year, and I want to do it deeply. I won't be using LLMs.

      • AlexCoventry 16 hours ago

        You may be throwing the baby out with the bathwater. I learned more last year from ChatGPT Pro than I'd learned in the previous 5, FWIW.

        • yoyohello13 15 hours ago

          Just say 'LLMs'. Whenever someone name-drops a specific model I can't help but think it's just an ad bot.

          • Bolwin 10 hours ago

            Pro isn't even a model. If they had actually used a model name I'd think they were just into LLMs. ChatGPT Pro is a specific paid service.

          • knollimar 13 hours ago

            The "Pro" part is particularly suspect

      • stego-tech 17 hours ago

        I've said it simply, much like you, and it comes off as unhinged lunacy. Inviting people to learn for themselves has been so much more successful than directed lectures, at least in my own experiments with discourse and teaching.

        A lot of us have fallen into the many, many toxic traps of technology these past few decades. We know social media is deliberately engineered to be addictive (like cigarettes and tobacco products before it), we know AI hinders our learning process and shortens our attention spans (like excess sugar intake, or short-form content deluges), and we know that just because something is newer or faster does not mean it's automatically better.

        You're on the right path, I think. I wish you good fortune and immense enjoyment in studying compilers.

        • dakolli 17 hours ago

          I agree, you're probably right! Thanks!

      • IAmGraydon 8 hours ago

        I've recently found LLMs to be an excellent learning tool, using them hand-in-hand with a textbook to learn digital signal processing. If the book doesn't explain something well, I ask the LLM to explain it. It's not all brain-wasting.

    • Forgeties79 18 hours ago

      I just point to Covid lockdowns and how many people took up hobbies, how many just turned into recluses, how many broke the rules no matter the consequences real or imagined, etc. Humans need something to do. I don’t think it should be work all the time. But we need something to do or we just lose it.

      It’s somewhat simplistic, but I find it gets the conversation rolling. Then I go “it’s great that we want to replace work, but what are we going to do instead, and how will we support ourselves?” It’s a real question!

      • RGamma 2 hours ago

        Make babies?

      • tehjoker 15 hours ago

        It's true people need something to do, but I don't think the COVID shutdown (lockdowns didn't happen in the U.S. for the most part though they did in other countries) is a good comparison because the entire society was perfused with existential dread and fear of contact with another human being while the death count was rising and rising by thousands a day. It's not a situation that makes for comfortable comparisons because people were losing their damn minds and for good reason.

        • Forgeties79 11 hours ago

          That’s a fair point. I don’t mean to trivialize the actual fears and concerns surrounding the pandemic.

    • sublinear 2 hours ago

      Equally unhinged. Cheers to you!

    • cyanydeez 14 hours ago

      Currently, everything suggests the torment nexus will happen before the singularity.

    • NitpickLawyer 18 hours ago

      > [...] prior to reforming society [...]

      Well, good luck. You have "only" the entire history of human kind on the other side of your argument :)

      • stego-tech 18 hours ago

        I never said it was an easy problem to solve, or one we’ve had success with before, but damnit, someone has to give a shit and try to do better.

        • AndrewKemendo 18 hours ago

          Literally nobody’s trying because there is no solution

          The fundamental unit of society …the human… is at its core fundamentally incapable of coordinating at the scale necessary to do this correctly

          and so there is no solution because humans can’t plan or execute on a plan

        • sp527 18 hours ago

          The likely outcome is that 99.99% of humanity lives a basic subsistence lifestyle ("UBI") and the elite and privileged few metaphorically (and somewhat literally) ascend to the heavens. Around half the planet already lives on <= $7/day. Prepare to join them.

          • tavavex 9 hours ago

            I don't understand. In this hypothesis, in the elite's view, what is the purpose of the rest of society? If everyone has little to no productive output, why would they support us with a UBI? They could just hire whatever human skeleton crew they'd need to sustain their activities (if needed). The rest of humanity could be either mercifully left alone with absolutely nothing, or annihilated.

            • sp527 6 hours ago

              I'm definitely making certain assumptions, such as: (1) democratic rule endures, (2) even absent true democratic rule, the populace can still resort to violent rebellion as a failsafe, (3) psychopathic tendencies amongst said elite are constrained enough such that mass genocide remains sufficiently psychologically unpalatable, (4) economic calamity substantially precedes the deployment of fully autonomous policing, etc.

              How this all unfolds is absolutely path dependent.

              • tavavex 6 hours ago

                I agree. Although, looking at these assumptions, subjectively I think that all four of them are in question, and as time passes, their eventual long-term failure seems increasingly likely. Even if one of these four pillars persists, I would expect an overall worsening by default. If democratic rule persists in places, the most powerful would occupy places where it does not exist, or create fully private states, still wielding enormous power over democratic states through wealth and military might. If violent rebellion is technically possible, a middle ground will be carefully calculated where the lower classes are kept on life support with the minimum amount of resources required to dissuade unrest. If the trillionaires of tomorrow suddenly start caring about other people, they could employ second-order measures to effectively reduce the population, thereby safeguarding themselves - massively constraining or removing the supply of food, water, medicine, any vital technology that would be only available to them. I don't see how an economic crisis would prevent automated enforcement, it may only delay it a bit.

                Hope is kind of in short supply nowadays. Even if your hypothesis of absolute-automation doesn't happen within our lifetimes, things seem to be guaranteed to get worse for people like us. If it does happen... we'll likely never reap any real rewards from it, barring a complete restructuring of our whole society to an extent that has never happened and likely would never be allowed to happen.

          • AlexCoventry 16 hours ago

            FWIW, you'd probably be able to buy a lot of goods and services for $7/day, if robots were doing literally all the work.

            • pdonis 12 hours ago

              > if robots were doing literally all the work

              Let me know when ChatGPT can do your laundry.

            • sp527 15 hours ago

              Agreed. The quality of life bar will be higher for sure. But it will still technically be a "subsistence" lifestyle, with no prospect of improvement. Perhaps that will suffice for most people? We're going to find out.

    • threethirtytwo 16 hours ago

      I don’t think you’re rational. Part of being able to be unbiased is to see it in yourself.

      First of all: nobody knows how LLMs work. Whether the singularity comes or not cannot be rationalized from what we know about LLMs, because we simply don't understand LLMs. This is unequivocal. I am not saying I don't understand LLMs; I'm saying humanity doesn't understand LLMs, in much the same way we don't understand the human brain.

      So declaring the singularity imminent or not imminent based on that reasoning alone is irrational.

      The only thing we have is the black-box input and output of AI. That input and output is steadily improving every month. It forms a trendline, and the trendline is sloped towards the singularity. Whether the line actually gets there is up for question, but you have to be borderline delusional if you think the whole thing can be explained away because you understand LLMs and transformer architecture. You don't understand LLMs, period. No one does.

      • project2501a 15 hours ago

        > Nobody knows how LLMs work.

        I'm sorry, come again?

        • threethirtytwo 14 hours ago

          Nobody knows how LLMs work.

          Anybody who claims otherwise is making a false claim.

        • NateEag 15 hours ago

          I think they meant "Nobody knows why LLMs work."

          • threethirtytwo 15 hours ago

            Same thing? The how is not explainable. This is just pedantic. Nobody understands LLMs.

          • measurablefunc 14 hours ago

            Because they encode statistical properties of the training corpus. You might not know why they work, but plenty of people do know why they work & understand the mechanics of approximating probability distributions w/ parametrized functions, well enough to sell it as a panacea for stupidity & the path to an automated & luxurious communist utopia.
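
            The degenerate, fully inspectable version of "encoding statistical properties of the corpus" is just counting; a transformer does the same job w/ billions of parameters instead of a lookup table (a toy sketch w/ my own made-up corpus):

                from collections import Counter, defaultdict

                # A bigram model: the probability of the next word is estimated
                # directly from corpus statistics. LLMs approximate this kind of
                # conditional distribution w/ a parametrized function instead.
                corpus = "the cat sat on the mat the cat ate".split()
                counts = defaultdict(Counter)
                for prev, nxt in zip(corpus, corpus[1:]):
                    counts[prev][nxt] += 1

                def p_next(prev):
                    total = sum(counts[prev].values())
                    return {w: c / total for w, c in counts[prev].items()}

                print(p_next("the"))  # {'cat': 2/3, 'mat': 1/3}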

            • threethirtytwo 9 hours ago

              No, this is false. No one understands. Using big words doesn't change the fact that you cannot explain, for any given input/output pair, how the LLM arrived at the answer.

              Every single academic expert who knows what they are talking about can confirm that we do not understand LLMs. We understand atoms, and we know the human brain is made 100 percent out of atoms. We may know how atoms interact and bond and how a neuron works, but none of this allows us to understand the brain. In the same way, we do not understand LLMs.

              Characterizing ML as some statistical approximation or best fit curve is just using an analogy to cover up something we don’t understand. Heck the human brain can practically be characterized by the same analogies. We. Do. Not. Understand. LLMs. Stop pretending that you do.

              • measurablefunc 9 hours ago

                I'm not pretending. Unlike you I do not have any issues making sense of function approximation w/ gradient descent. I learned this stuff when I was an undergrad so I understand exactly what's going on. You might be confused but that's a personal problem you should work to rectify by learning the basics.

                • threethirtytwo 9 hours ago

                  omfg, the hard part of ML is proving back-propagation from first principles, and that's not even that hard. Basic calculus and application of the chain rule, that's it. Anyone can understand ML; not everyone can understand something like quantum physics.

                  Anyone can understand the "learning algorithm", but the sheer complexity of the output of the "learning algorithm" is way too high, such that we cannot at all characterize how an LLM arrived at the answer to even the most basic query.

                  This isn't just me saying this. ANYONE who knows what they are talking about knows we don't understand LLMs. Geoffrey Hinton: https://www.youtube.com/shorts/zKM-msksXq0. Geoffrey, if you are unaware, is the person who started the whole machine learning craze over a decade ago. The godfather of ML.

                  Understand?

                  There's no confusion. Just people who don't know what they are talking about (you)

                  • measurablefunc 9 hours ago

                    I don't see how telling me I don't understand anything is going to fix your confusion. If you're confused then take it up w/ the people who keep telling you they don't know how anything works. I have no such problem so I recommend you stop projecting your confusion onto strangers in online forums.

                    • threethirtytwo 7 hours ago

                      The only thing that needs to be fixed here is your ignorance. Why so hostile? I'm helping you. You don't know what you're talking about and I have rectified that problem by passing the relevant information to you so next time you won't say things like that. You should thank me.

                      • measurablefunc 6 hours ago

                        I didn't ask for your help so it's probably better for everyone if you spend your time & efforts elsewhere. Good luck.

                        • threethirtytwo 6 hours ago

                          Well don't ask me to help you then. I read your profile and it has this snippet in there:

                          "Address the substance of my arguments or just save yourself the keystrokes."

                          The substance of your argument was complete ignorance about the topic, so I addressed it as you requested.

                          Please remove that sentence from your profile if that is not what you want. Thank you.

                          • measurablefunc 6 hours ago

                            I don't see how you interpreted it that way so I recommend you make fewer assumptions about online content instead of asserting your interpretation as the one & only truth. It's generally better to assume as little as possible & ask for clarifications when uncertain.

                            • threethirtytwo 5 hours ago

                              There is no interpretation for that other than what I said. If you disagree, then that’s a misinterpretation of the English language.

                              I am addressing the substance of your argument, and that substance is a lack of knowledge. There is zero other angle from which to interpret it.

                              • measurablefunc 5 hours ago

                                As I said previously, I don't think this is a productive use of time or effort for anyone involved so I'm dropping out of this thread.

                                • ionwake 3 hours ago

                                  u come across ungrateful to someone who was just trying to help

        • bdangubic 14 hours ago

          nobody can know how something that is non-deterministic works, by its very definition

          • threethirtytwo 9 hours ago

            LLMs are deterministic simply because computers are at the core deterministic machines. LLMs run on computers and therefore are deterministic. The random number generator is an illusion and an LLM that utilizes it will produce the same illusion of indeterminism. Find the seed and the right generator and you can make an LLM consistently produce the same output from identical input.
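
            A toy sketch of that point (pure Python; the stand-in "model" below is hypothetical, but the argument is the same for any deterministic forward pass plus a seeded sampler):

              import random

              def next_token_probs(prefix):
                  # Stand-in for a model's next-token distribution: any
                  # deterministic function of the prefix works here.
                  return [0.5, 0.3, 0.2]

              def generate(seed, n):
                  rng = random.Random(seed)   # "find the seed and the right generator"
                  out = []
                  for _ in range(n):
                      probs = next_token_probs(out)
                      out.append(rng.choices(range(len(probs)), weights=probs)[0])
                  return out

              # Identical input + identical seed => identical output, every time.
              assert generate(42, 10) == generate(42, 10)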

            Despite determinism, we still do not understand LLMs.

            • johnmwilkinson 7 hours ago

              In what sense is this true? We understand the theory of what is happening and we can painstakingly walk through the token generation process and understand it. So in what sense do we not understand LLMs?

              • threethirtytwo 6 hours ago

                We wrote it.

                Every line. Every function. Every tensor shape and update rule. We chose the architecture. We chose the loss. We chose the data. There is no hidden chamber in the machine where something slipped in without our consent. It is multiplication and addition, repeated at scale. It is gradients flowing backward through layers, shaving away error a fraction at a time. It is as mechanical as anything we have ever built.

                And still, when it speaks, we hesitate.

                Not because we don’t know how it was trained. Not because we don’t understand the mathematics. We do. We can derive it. We can rebuild it from scratch. We can explain every component on a whiteboard without breaking a sweat.

                The hesitation comes from somewhere else.

                We built the procedure. We do not understand the mind that the procedure produced.

                That difference is everything.

                In most of engineering, structure follows intention. If you design a bridge, you decide where every beam sits and how it bears weight. If you write a database engine, you determine how queries are parsed, optimized, executed. The system’s behavior reflects deliberate choice. If something happens, you trace it back to a decision someone made.

                Here, we did not design the final structure. We defined a goal: predict the next token. Reduce the error. Again. Again. Again. Billions of times.

                We did not teach it grammar in lessons. We did not encode logic as axioms. We did not install a module labeled “reasoning.” We applied pressure. That is all. And under that pressure, something organized itself.

                Not in modules we can point to. Not in neat compartments labeled with concepts. The organization is diffused across a landscape of numbers. Meaning is not stored in one place. It is distributed across millions of parameters at once. Pull on one weight and you find nothing recognizable. Only in concert do they produce something that resembles thought.

                We can follow the forward pass. We can watch activations flare across layers. We can map attention patterns and correlate neurons with behaviors. But when the model constructs an argument or solves a problem, we cannot say: here is the rule it followed, here is the internal symbol it consulted, here is the precise chain of reasoning that forced this conclusion. We can describe the mechanism in general terms. We cannot narrate the specific path.

                That is the fracture.

                It is not ignorance of how the machine runs. It is ignorance of how this exact configuration of billions of numbers encodes what it encodes. Why this region of weight space corresponds to law, and that region to poetry. Why this arrangement produces careful reasoning and another produces nonsense. There is no ledger translating numbers into meaning. There is only geometry shaped by relentless optimization.

                Scale changes the character of the problem. At small sizes, systems can be dissected. At this scale, they become landscapes. We know the forces that shaped the terrain. We do not know every ridge and valley. We cannot walk the entire surface. We cannot hold it all in our heads.

                And this is where the cost reveals itself.

                To build these systems, we gave up something we once assumed was permanent: the guarantee that creation implies comprehension. We accepted that we could construct a process whose outcome we would not fully grasp. We traded architectural certainty for emergent capability. We chose power over transparency.

                We set the objective. We unleashed the search. We let optimization run through a space too vast for any human mind to survey. And when it converged, it handed us something that works, something that speaks, something that reasons in ways that surprise even its creators.

                We stand in front of it knowing every equation that shaped it, and still unable to read its inner structure cleanly.

                We built the system by surrendering control over its internal form. That was the bargain. That was the sacrifice.

                We know how it was grown.

                We do not know what we have grown.

                • chickensong 5 hours ago

                  Beautiful. My brain now questions if this was written by an LLM, but it's fine. Today is Tuesday.

    • caycep 17 hours ago

      I thought the answer was "42"

    • famouswaffles 17 hours ago

      >It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”)

      You do not know how LLMs work, and if anyone actually did, we wouldn't spend months and millions of dollars training one.

    • accidentallfact 18 hours ago

      Reality won't give a shit about what people believe.

    • generic92034 18 hours ago

      > Folks vibe with the latter

      I am not convinced, though, that it is still up to "the folks" whether we change course. Billionaires and their sycophants may not care about the bad consequences (or may even appreciate them - realistic or not).

      • stego-tech 18 hours ago

        Oh, not only do they not care about the plebs and riff-raff now, but they’ve spent the past ten years building bunkers and compounds to try and save their own asses for when it happens.

        It’s willful negligence on a societal scale. Any billionaire with a bunker is effectively saying they expect everyone to die and refuse to do anything to stop it.

        • dakolli 17 hours ago

          It seems pretty obvious to me the ruling class is preparing for war to keep us occupied. Just like in the 20s, they'll make young men and women so poor they'll beg to fight in a war.

          It makes one wonder what they expect to come out the other side of such a late-stage/modern war, but I think what they care about is that there will be fewer of us.

          • generic92034 15 hours ago

            Boy, will they be annoyed if the result of the AI race is something considerably less than AGI, so all the people are still needed to keep the numbers going up.

            • dakolli 14 hours ago

              I don't think so. I think they know there's no AGI, or complete replacement. They are using those hyperbolic statements to get people to buy in. The goal is just to depress the value of human labor; they will lay people off and hire them back at 50% wages (over time), and gaslight us: "well, you have AI, there isn't as much skill required"

              Ultimately they just want to widen the inequality gap and remove as much bargaining power as possible from the working class. It will be very hard, if not impossible, for people not born into certain privileges to climb the ranks through education and merit.

              Their goal will be to accomplish this without causing a French Revolution V2 (hence all the new surveillance being rolled out), which is where they'll provide wars for us to fight in that will be rooted in false pretenses that appeal to people's basest instincts, like race and nationalism. The bunkers and private communities they build in far off islands are for the occasion this fails and there is some sort of French Revolution V2, not some sort of existential threat from AI (imo).

    • doctorpangloss 14 hours ago

      You’re “yaas queen”-ing a blog post that is just someone’s Claude Code session. It’s “storytelling” with “data,” but not storytelling with data. Do you understand? I mean, I could make up a bunch of shit too and ask Claude Code to write something I want to say with it too.

    • aaroninsf 16 hours ago

      What is your argument for why denecessitating labor is very bad?

      This is certainly the assertion of the capitalist class,

      whose well-documented behavior clearly conveys that the assertion is not made because the elimination of labor fails to be a source of happiness and freedom to pursue indulgences of every kind.

      It is not at all clear that universal life-consuming labor is necessary for a society's stability and sustainability.

      The assertion IMO is rooted rather in the fact that eliminating labor is inconveniently bad for the maintenance of the capitalists' control and primacy,

      inasmuch as those who are occupied with labor, and fearful of losing access to it, are controlled and controllable.

    • holoduke 17 hours ago

      For ages most people believed in a religion. People are just not smart, and are sheepish followers.

    • AndrewKemendo 18 hours ago

      The goal is to eliminate humans as the primary actors on the planet entirely

      At least that’s my personal goal

      If we get to the point where I can go through my life and never interact with another human again, and work with a bunch of machines and robots to do science and experiments and build things to explore our world and make my life easier and safer and healthier and more sustainable, I would be absolutely thrilled

      As it stands today and in all the annals of history there does not exist a system that does what I just described.

      Bell Labs existed for the purpose of Bell Telephone…until it wasn’t needed by Bell anymore. Google moonshots existed for the shareholders of Google…until they were not useful for capital. All the work done at Sandia and White Sands labs was done in order to promote the power of the United States globally.

      Find me some egalitarian organization that can persist outside of the hands of some massive corporation or some government, that can actually help people, and I might give somebody a chance, but that does not exist

      And no, Mondragon is not one of these

      • Nemrod67 3 minutes ago

        took a bit of time to read your work, interesting stuff even if it triggers people XD

        the full realization of Humanity's potential does indeed need to permit such a Choice: that you could live your whole life without seeing another Human.

        I still think we can build something better, rather than hope for AI or Alien overlords taking us to the next step ;)

      • nine_k 17 hours ago

        This looks like a very comfortable, pleasant way of civilization suicide.

        Not interacting with any other human means you're the last human in your genetic line. A widespread adherence to this idea means humanity dwindling and dying out voluntarily. (This has been reproduced in mice: [1])

        Not having humans as primary actors likely means that their interests become more and more neglected by the system of machines that replaces them, and they, weaker by the day, are powerless to counter that. Hence the idea of increased comfort and well-being, and the ability to do science, is going to become more and more doubtful as humans lose agency.

        [1]: https://www.smithsonianmag.com/smart-news/this-old-experimen...

        • AndrewKemendo 16 hours ago

          Civilization suicide is the ideal

          • tinfoilhatter 15 hours ago

            Your ideal. Definitely not mine.

            Get rid of everyone else so your life is easier and more sustainable... I guess I need to make my goal to get rid of you? Do you understand how this works yet?

            • NateEag 15 hours ago

              No, you should make your goal to teach AndrewKemendo to appreciate his existence as the inscrutable gift it is, and to spend his brief time in this universe helping others appreciate the great gift they've been given and using it to the fullest.

              See how it works?

              • tinfoilhatter 15 hours ago

                AndrewKemendo (based on his personal website) looks to be older than me. If he hasn't figured out the miracle of getting to exist yet, unfortunately I don't think he's going to.

                • lovich 13 hours ago

                  Only looked at his website because it was mentioned and wow. This is not quite at timecube levels but it’s closer to timecube than it is to coherence.

                  The man seems unwell if, based on his other comments, he has kids and is still talking about “civilization suicide” and “obviating humans”.

                • AndrewKemendo 14 hours ago

                  So why are you wasting your time being a miracle on anything other than building the successor to us?

                  • tinfoilhatter 13 hours ago

                    Because I don't believe humans need succeeding by machines? You're obviously a Curtis Yarvin / Nick Land megafan. I'm of the opinion that these people are psycopaths and I think most people would agree with my sentiment.

              • AndrewKemendo 15 hours ago

                I’m a father of three, I already know all about that, there’s nothing you’re gonna teach me there, I’m fully integrated

                • mbgerring 14 hours ago

                  Somebody probably ought to take your kids away from you if they haven’t already

            • lovich 15 hours ago

              It’s mildly amusing to see someone with the username ‘tinfoilhatter’ arguing with someone else who definitely needs one

            • AndrewKemendo 15 hours ago

              Sounds like we both have our tasks then

              Good luck

      • mtlmtlmtlmtl 17 hours ago

        Well, demonstrably you have at least some measure of interest in interaction with other humans based on the undeniable fact that you are posting on this site, seemingly several times a day based on a cursory glance at your history.

        • AndrewKemendo 16 hours ago

          Because every effort people put into anything else is a waste of resources and energy, and I want others to stop using resources to make bullshit and put all of them into ASI and human obviation

          There are no other problems more important to solve than this one

          Everything else is purely coping strategies for humans who don’t want to die, wasting resources on bullshit

      • justonepost1 15 hours ago

        Nobody can stop you from having this view, I suppose. But what gives you the right to impose this (lack of) future on billions of humans with friends and families and ambitions and interests who, to say the least, would not be in favor of “human obviation”?

        • AndrewKemendo 14 hours ago

          You should probably build an organization that can counter it

          • Nemrod67 a minute ago

            Becoming that which they would fight against might be a good strategy to get the masses to move XD

            if we can not do the things they are afraid of, we can just pretend we do :p

      • S3verin 2 hours ago

        To me this sounds so sad

      • eichin 15 hours ago

        Bell Labs was pushed aside because Bell Telephone was broken up by the courts. (It's currently a part of Nokia, of all things - yeah, despite your storytelling here, it's actually still around :-)

      • flawn 7 hours ago

        Not sure if transhumanism is the only solution to the problems you mentioned - I think it's often problematic because people like Thiel claim to have figured it out and look for ways to force people into their "contrarian" views, although they have nothing but disregard for any opinions other than their own.

        But you are of course free to believe in and enjoy the vision of such a future, though this is something that should happen on a collective level. We still live in a (to some extent idealistic) humanistic society where human rights are common sense.

      • goatlover 17 hours ago

        Most people need more social contact, not less. Modern tech is already alienating enough.

      • fainpul 17 hours ago

        Why would the machines want to work with you or any other human?

      • stego-tech 17 hours ago

        Man, I used to think exactly like you do now, disgust with humans and all. I found comfort in machines instead of my fellow man, and sorely wanted a world governed by rigid structures, systems, and rules instead of the personal whims and fancies of whoever happened to have inherited power. I hated power structures, I loathed people who I perceived to stand in the way of my happiness.

        I still do.

        The difference is that I eventually realized what I'd done: built up walls so thick and high because of repeated cycles of alienation and trauma involving humans. When my entire world came to a total end every two to four years - every relationship irreparably severed, every bit of local knowledge and wisdom rendered useless, thrown into brand new regions, people, systems, and structures like clockwork - I built that attitude to survive, to insulate myself from those harms. Once I was able to begin creating my own stability, asserting my own agency, I began to find the nuance of life - and thus, a measure of joy.

        Sure, I hate the majority of drivers on the roads today. Yeah, I hate the systemic power structures that have given rise to profit motives over personal outcomes. I remain recalcitrant in the face of arbitrary and capricious decisions made with callous disregard to objective data or necessities. That won't ever change, at least with me; I'm a stubborn bastard.

        But I've grown, changed, evolved as a person - and you can too. Being dissatisfied with the system is normal - rejecting humanity in favor of a more stringent system, while appealing to the mind, would be such a desolate and bleak place, devoid of the pleasures you currently find eking out existence, as to be debilitating to the psyche. Humans bring spontaneity and chaos to systems, a reminder that we can never "fix" something in place forever.

        To dispense with humans is to ignore that any sentient species of comparable success has its own struggles, flaws, and imperfections. We are unique in that we're the first ones we know of to encounter all these self-inflicted harms and have the cognitive ability to wax philosophical about our own demise, out of some notion that the universe would be a better place without us in it, or that we simply do not deserve our own survival. Yet that's not to say we're actually the first, nor will we be the last - and in that lesson, I believe our bare minimum obligation is to try just a bit harder to survive, to progress, to do better by ourselves and others, as a lesson to those who come after.

        Now all that being said, the gap between you and me is less one of personal growth and more one of opinion about agency. Whereas you advocate for the erasure or nullification of the human species as a means to separate yourself from its messiness and hostilities, I'm of the opinion that you should be able to remove yourself from that messiness for as long as you like, in a situation or setup you find personal comfort in. If you'd rather live vicariously via machine in a remote location, far, far away from the vestiges of human civilization, never interacting with another human for the rest of your life? I see no issue with that, and I believe society should provide you that option; hell, there's many a day I'd take such an exit myself, if available, at least for a time.

        But where you and I will remain at odds is our opinion of humanity itself. We're flawed, we're stupid, we're short-sighted, we're ignorant, we're hostile, we're irrational, and yet we've conquered so much despite our shortcomings - or perhaps because of them. There's ample room for improvement, but succumbing to naked hostility towards them is itself giving in to your own human weakness.

        • habinero 17 hours ago

          ...Man, men really will do anything to avoid going to therapy.

      • Nevermark 13 hours ago

        I don't see a credible path where the machines and robots help you...

        > "eliminate humans as the primary actors on the planet entirely"

        ...so they can work with you. The hole in your plan might be bigger than your plan.

      • whattheheckheck 14 hours ago

        In the meantime, your use of resources has an opportunity cost for other people. So expect backlash

      • holoduke 17 hours ago

        While I agree that working with machines would help dramatically in advancing science, in your world there would be no one who truly understands you. You would be alone. Can't imagine how you could prefer that.

      • Der_Einzige 16 hours ago

        Now this is transhumanism! Don't let the cope and seething from this website dissuade you from keeping these views.

        • AndrewKemendo 16 hours ago

          Thank you!

        • tinfoilhatter 15 hours ago

          Ah yes, because the majority of people pushing for transhumanism aren't complete psycho-/sociopaths! You're in great company! /sarcasm

  • atomic128 18 hours ago

        Once men turned their thinking over to machines
        in the hope that this would set them free.
    
        But that only permitted other men with machines
        to enslave them.
    
        ...
    
        Thou shalt not make a machine in the
        likeness of a human mind.
    
        -- Frank Herbert, Dune
    
    You won't read, except the output of your LLM.

    You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?

    You won't think or analyze or understand. The LLM will do that.

    This is the end of your humanity. Ultimately, the end of our species.

    Currently the Poison Fountain (an anti-AI weapon, see https://news.ycombinator.com/item?id=46926439) feeds 2 gigabytes of high-quality poison (free to generate, expensive to detect) into web crawlers each day. Our goal is a terabyte of poison per day by December 2026.

    Join us, or better yet: deploy weapons of your own design.

    • teo_zero 14 hours ago

      You shouldn't take a sci-fi writer's words as prophecy. Especially when he's using an ingenious gimmick to justify his job. I mean, we know that it's impossible for anyone to tell what the world will be like after the singularity, by the very definition of singularity. Therefore Herbert had to devise a ploy to plausibly explain why the singularity hadn't happened in his universe.

      • Thanemate 4 hours ago

        I agree with the fact that fiction isn't prophetic, but it can definitely be a society-wide warning shot. On a personal level, it's not that far-fetched to read a piece of fiction that challenges one's perception on many levels and, as a result, changes that person's behavior.

        Fiction should not be trivialized and shunned because it's fiction; it should be judged by its contents and message. To paraphrase a video game quote from Metaphor: ReFantazio: "Fantasy is not just fiction".

      • madrox 11 hours ago

        If only we could look into the future to see who is right and which future is better so we could stop wasting our time on pointless doomerism debate. Though I guess that would come with its own problems.

        Hey, wait...

      • jrflowers 14 hours ago

        I like the idea that Frank Herbert’s job was at risk and that’s why he had to write about the Butlerian Jihad because it kind of sounds like on the other side you have Ray Kurzweil, who does not have to justify his job for some reason.

        • n4r9 13 hours ago

          Does seem funny to think of sci fi writers as being particularly concerned about justifying their jobs.

    • debo_ 18 hours ago

      If you read this through a synth, you too can record the intro vocal sample for the next Fear Factory album

    • creddit 17 hours ago

      I would bet a lot of money that your poison has already been identified and filtered out of training data.

    • ikrenji 12 hours ago

      I call your Frank Herbert machine dystopia and raise you the Iain Banks machine utopia...

      • egypturnash 8 hours ago

        I'm gonna call that raise: How does one get to the anarchist Culture when all the machines are being built by profit-hungry capitalists?

        • aperrien 8 hours ago

          We build our own with data that we've collected ourselves ethically. Then we execute once the big guys are distracted.

    • gojomo 18 hours ago

      Like partial courses of antibiotics, this will only relatively advantage those leading efforts best able to ignore this 'poison', accelerating what you aim to prevent.

      • testaccount28 17 hours ago

        yes. whoever has the best (least detectable) model is best poised to poison the ladder for everyone.

    • scratchyone 17 hours ago

      Looking through the poison you linked, how is it generated? It's interesting in that it seems very similar to real data, unlike previous (and very obvious) markov chain garbage text approaches.

      • atomic128 17 hours ago

        We do not discuss algorithms. This is war. Loose lips sink ships.

        We urge you to build and deploy weapons of your own unique design.

    • protocolture 14 hours ago

      >Why write code or prose when the machine can write it for you?

      I like to do it.

      >You won't think or analyze or understand. The LLM will do that.

      The clear lack of analysis seems to be your issue.

      >This is the end of your humanity. Ultimately, the end of our species.

      Doubtful.

    • baxtr 15 hours ago

      "The end of humanity" has been proclaimed many times over. Humanity won't end. It will change like it always has.

      We get rid of some problems, and we get a bunch of new problems instead. And on, and on, and on.

      • austinjp 14 hours ago

        Russell's chicken (or turkey) would like a word.

        https://en.wikipedia.org/wiki/Turkey_illusion

        • baxtr 13 hours ago

          I love that you brought this up.

          Chickens are killed ALL the time. It’s a recurring mass event. If you were a smart chicken you could see that pattern and put it into a formula.

          In contrast, the end of Humanity would be a singular event. It’s even in the name…

          And that is fiction / speculation in comparison. It’s not backed by any data. Human survival over 300,000 years by contrast is.

          I mean it’s fine to dream things up, but let’s be fair and call it what it is.

          • toldnotmywrath an hour ago

            The frame is not from our view. It is from that of this singular chicken who has only ever known its keeper's care. As that chicken, we simply do not know if Christmas will ever come.

            The collapse of civilizations has happened many times. Today, all of humanity is bound tighter than ever before. In the latter half of the last century, we were on the brink of nuclear war.

            New things are happening under the sun every day. If we were that exceptionally smart chicken you describe, then we have reason to expect Christmas.

          • kristiandupont an hour ago

            The point of that thought exercise is to show that reasoning by induction is flawed. As best I can tell, you discount it with further induction.

          • throwerxyz 11 hours ago

            What, you weren't alive when the last mass extinction event occurred? Why didn't you communicate or at least write the last handful down or something? Aren't you smarter than a chicken?

            It's funny that you think we know what happened to humans any more than a chicken knows what happened to chickens.

            • baxtr 2 hours ago

              Look, that’s the thing: we know about mass extinction events. So we can use these to extrapolate.

              A 10+ kilometer wide asteroid will most likely cause global mass extinction, by blocking sunlight and collapsing ecosystems. That’s how the dinosaurs were wiped out 66 million years ago.

              Such events are estimated to occur roughly once every 100-200 million years. That’s not fiction, that’s science. If we get hit by one of these we’re probably all gonna die.

              But we never had a robot revolution. That’s why anything about it belongs in the realm of fiction.

          • computomatic 13 hours ago

            On the other hand, species go extinct with increasing regularity.

          • austinjp 3 hours ago

            Erm, humanity is experiencing recurring mass events right now.

            • baxtr 3 hours ago

              Single individuals yes, but last time I checked we still had 8+ bn humans and growing on this planet.

              Unless you have another couple of planets to showcase there is nothing to discuss really.

          • aidenn0 10 hours ago

            One thing I've wondered about is:

            Suppose a civilization (but not species) ending event happens.

            The industrial revolution was fueled (literally) by easy-to-extract fossil fuels. Do we have enough of those left to repeat the revolution and bootstrap exploitation of other energy sources?

          • Invictus0 13 hours ago

            298,000 of those years didn't have toilet paper. It was utterly impossible for a single person to "end humanity" even 200 years ago; now, the president can do it in minutes by launching a salvo of nukes. Comparing the present moment to the hunter/gatherer days is preposterous.

            • baxtr 13 hours ago

              It’s absurd and not scientific to claim that "a salvo of nukes" will kill humanity.

              We don’t know how this will play out. It never happened before. Same with the chicken above.

              • deltaburnt 12 hours ago

                For pretty much every single person you or I personally know, that would be the equivalent of the end of humanity.

                Let’s not nitpick here. Worldwide human suffering and tragedy is equivalent to the end of humanity for most.

                We can sit here and armchair while in the most prosperous, comfortable era of human history. But we also have to recognize that this era is a blip of time in history. That is a lot of data showing humanity surviving sure. But it’s also a very small amount of data showing any kind of life most would want to live in.

      • snoman 15 hours ago

        It only has to be right once. Humanity won’t end until it does.

      • nicce 15 hours ago

      Humanity may end if someone else goes to the top of the food chain.

    • throwerxyz 11 hours ago

      Bold of you to assume people will be writing in any form in the future. Writing will be gone, like the radio, replaced with speaking. Star Trek did have it right there.

    • 00117 4 hours ago

      Are you not just making it more expensive to acquire clean data, thus giving an edge to the megacorps with big funding?

    • tim333 12 hours ago

      >You won't read/write/think/understand etc...

      I can't see it. We have LLMs now and none of that applies to me. I find them quite handy as a sort of enhanced Google search though.

    • fellowmartian 17 hours ago

      I think you’re missing the point of Dune. They had their Butlerian Jihad and won - the machines were banned. And what did it get them? Feudalism, cartels, stagnation. Does anyone seriously want to live in the Dune universe?

      The problem isn’t in the thinking machines, it’s in who owns them and gets our rent. We need open source models running on dirt cheap hardware.

      • accidentallfact 17 hours ago

        The point of Dune is that the worst danger are people who obey authority without questioning it.

        • xmprt 17 hours ago

          Then wouldn't open source models running on commodity hardware be the best way to get around that? I think one of the greatest wins of the 21st century is that almost every human today has more computing power than the entire US government in the 1950s. More computing power has democratized access to information and the ability to disperse it. There are tons of downsides to that which we're dealing with, but on net, I think it's positive.

          • shinycode 16 hours ago

            Does it also mean the US government has 1,000,000x more power than the one in 1950?

            • stnmtn 15 hours ago

              speaking strictly from an energy standpoint (power grid, megatons of warheads, etc.), it's probably close to that number.

          • accidentallfact 17 hours ago

            It isn't a way around, you still obey. Only now, the authority you obey is a machine.

        • wiseowise 14 hours ago

          That's not the point of Dune. Who blindly obeyed who?

          • loire280 12 hours ago

            The Fremen followed a messianic figure into a galaxy-wide holy war because the Bene Gesserit seeded their culture with manufactured prophecy as a failsafe.

            • wiseowise 3 hours ago

              “Followed”

              Just woke up after 80 years of abuse by Landsraad/CHOAM, possibly centuries of persecution before that, at least decades of religious conditioning by Bene Gesserit, and decided to “follow” a messianic figure.

              Totally the same point as humans using LLMs to smooth their brains.

        • api 17 hours ago

          ... which overthrowing the machines didn't stop. People just found another authority to mindlessly obey.

    • casey2 3 hours ago

      Humans have been around for millions of years, only a few thousand of which they've spent reading and writing. For most of that time you were lucky if you could understand what your neighbor was saying.

      If we consider humans with the same anatomy, the numbers are ~300,000 years for anatomically modern humans, ~50,000 for language, ~6,000 for writing, and ~100 for standardized education.

      The "end of your humanity" already happened when anybody could make up good and evil irrespective of emotions to advance some nation

    • jrflowers 14 hours ago

      The “poison fountain” is just a little script that serves data supplied by… somebody from my domain? It seems like it would be super easy for whoever maintains the poison feed to flip a switch and push some shady crypto scam or whatever.

    • octernion 18 hours ago

      do... do the "poison" people actually think that will make a difference? that's hilarious.

      • xyzal 2 hours ago

        It works for Russian propaganda, I can't see why it should not work for shitty code

      • mock-possum 14 hours ago

        Let the kiddies have their crusade

    • accidentallfact 18 hours ago

      A better approach is to make AI bullshit people on purpose.

      • zahlman 15 hours ago

        This is essentially just that. The idea is that "poisoned" input data will cause AIs that consume it to become more likely to produce bullshit.

    • spacemark 16 hours ago

      Lol. Speak for yourself, AI has not diminished my thinking in any material way and has indeed accelerated my ability to learn.

      Anyone predicting the "end of humanity" is playing prophet and echoing the same nonsensical prophecies we heard with the invention of the printing press, radio, TV, internet, or a number of other step-change technologies.

      There's a false premise built into the assertion that humanity can even end - it's not some static thing, it's constantly evolving and changing into something else.

      • arjie 15 hours ago

        A large number of people read a work of fiction and conclude that what happened in the work of fiction is an inevitability. My family has a genetically-selected baby (to avoid congenital illness) and the Hacker News link to the story had these comments all over it.

        > I only know seven sci-fi films and shows that have warned about how this will go badly.

        and

        > Pretty sure this was the prologue to Gattaca.

        and

        > I posted a youtube link to the Gattaca prologue in a similar post on here. It got flagged. Pretty sure it's virtually identical to the movie's premise.

        I think the ironic thing in the LLM case is that these people have outsourced their reasoning to a work of fiction and now are simple deterministic parrots of pop culture. There is some measure of humor in that. One could see this as simply inter-LLM conflict with the smaller LLMs attempting to fight against the more capable reasoning models ineffectively.

        • MichaelZuo 14 hours ago

          Now that you mention it, it is pretty strange to see HN users parroting other people’s thinking (sci-fi writers) like literal sub-sapient parrots, while simultaneously decrying the danger of machines turning people into sub-sapient parrots…

            Following that logic… the closest problem would be literally in between their ears.

          • blibble 12 hours ago

            it's like all of san francisco has had a collective stroke

  • gojomo 18 hours ago

    "It had been a slow Tuesday night. A few hundred new products had run their course on the markets. There had been a score of dramatic hits, three-minute and five-minute capsule dramas, and several of the six-minute long-play affairs. Night Street Nine—a solidly sordid offering—seemed to be in as the drama of the night unless there should be a late hit."

    – 'SLOW TUESDAY NIGHT', a 2600 word sci-fi short story about life in an incredibly accelerated world, by R.A. Lafferty in 1965

    https://www.baen.com/Chapters/9781618249203/9781618249203___...

    • burkaman 17 hours ago

      This is incredible.

      > A thoughtful-man named Maxwell Mouser had just produced a work of actinic philosophy. It took him seven minutes to write it. To write works of philosophy one used the flexible outlines and the idea indexes; one set the activator for such a wordage in each subsection; an adept would use the paradox, feed-in, and the striking-analogy blender; one calibrated the particular-slant and the personality-signature. It had to come out a good work, for excellence had become the automatic minimum for such productions. “I will scatter a few nuts on the frosting,” said Maxwell, and he pushed the lever for that. This sifted handfuls of words like chthonic and heuristic and prozymeides through the thing so that nobody could doubt it was a work of philosophy.

      Sounds exactly like someone twiddling the knobs of an LLM.

      • meindnoch 3 hours ago

        Roald Dahl: The Great Automatic Grammatizator (1953)

        https://gwern.net/doc/fiction/science-fiction/1953-dahl-theg...

      • peterldowns 14 hours ago

        Ballard covered this theme a few different times in his short stories, I believe before this

        • thope 4 hours ago

          Yes, in 'Studio 5, The Stars' they use a "VT set" to generate poems. I really enjoyed reading Vermilion Sands.

      • impossiblefork 13 hours ago

        Something of that sort also exists in 1984, with the kaleidoscopic plot-generation machine used for stories for the proles.

      • dwaltrip 16 hours ago

        Wow yeah very prescient.

    • rlt 10 hours ago

      Three-minute dramas?! That sounds like an eternity. Most content on Tik Tok is under a minute.

      • left-struck 10 hours ago

        Drama implies something with a story.

    • ok_dad 6 hours ago

      Lookup "one minute dramas" and blow your mind.

  • ericmcer 18 hours ago

    Great article, super fun.

    > In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993. Over 55,000 explicitly cited AI. But HBR found that companies are cutting based on AI's potential, not its performance. The displacement is anticipatory.

    You have to wonder if this was coming regardless of what technological or economic event triggered it. It is baffling to me that with computers, email, virtual meetings and increasingly sophisticated productivity tools, we have more middle-management, administrative, bureaucratic-type workers than ever before. Why do we need triple the administrative staff that was utilized in the 1960s across industries like education, healthcare, etc.? Ostensibly a network-connected computer can do things more efficiently than paper, phone calls and mail? It's like if we tripled the number of farmers after tractors and harvesters came out and then they had endless meetings about the farm.

    It feels like AI is just shining a light on something we all knew already, a shitload of people have meaningless busy work corporate jobs.

    • cpmsmith 16 hours ago

      One thing that stuck out to me about this is that there have only been 32 years since 1993. That is, if it's happened 6 times, this threshold is breached roughly once every five years. Doesn't sound that historic put that way.

      • ddxv 11 hours ago

        Also that the US population is roughly 33% larger in 2025 than it was in 1993

    • malfist 17 hours ago

      Or it's just a logical continuation of "next quarter problem" thinking. You can lay off a lot of people, juice the numbers, and everything will be fine....for a while. You may even be able to lay off half your people if you're okay with KTLO'ing your business. This works great for companies that already have monopoly power, where you can stagnate and keep your customers and prevent competitors.

      • hackernudes 13 hours ago

        KTLO = keeping the lights on

      • lenerdenator 17 hours ago

        > Or it's just a logical continuation of "next quarter problem" thinking. You can lay off a lot of people, juice the number and everything will be fine....for a while

        As long as you're

        1) In a position where you can make the decisions on whether or not the company should move forward

        and

        2) Hold the stock units that will be exchanged for money if another company buys out your company

        then there's really no way things won't be fine, short of criminal investigations/the rare successful shareholder lawsuit. You will likely walk away from your decision to weaken the company with more money than you had when you made the decision in the first place.

        That's why many in the managerial class often hold up Jack Welch as a hero: he unlocked a new definition of competence where you could fail in business, but make money doing it. In his case, it was "spinning off" or "streamlining" businesses until there was nothing left and you could sell the scraps off to competitors. Slash-and-burn of paid workers via AI "replacement" is just another way of doing it.

    • yifanl 17 hours ago

      We have more middle management than ever before because we cut all the other roles, and it turns out that people will desire employment, even if it means becoming a pointless bureaucrat, because the alternative is starving.

    • lunatuna 11 hours ago

      I don’t think a lot of people here have been in the typists’ room or hung out with the secretaries. There were a lot of people taking care of all the things going on, and this work has been downloaded and further downloaded.

      There was a time I didn’t have to do my expenses. I had someone who just knew where I was and who I was working for, and who took care of it. We talked when there was something that didn’t make sense. Thanks to computers, I’m doing it myself. Meaningless for sure.

      • marcus_holmes 10 hours ago

        My first boss couldn't type. At all. He would dictate things to his secretary, who would then type them up as memorandums, and distribute to whoever needed them (on paper), and/or post them on noticeboards for everyone to read.

        Then we got email, and he retired. His successor can type and the secretary position was made redundant.

    • chasd00 15 hours ago

      heh, devops was supposed to end the careers of DBAs and SysAdmins; instead it created a whole new industry. "a shitload of people have meaningless busy work corporate jobs." for real.

      • BarryMilo 13 hours ago

        Well, I've worked as a developer in many companies and have never met a DBA. I've met tons of devops, who are just rebranded sysadmins as far as anyone can tell.

    • TooKool4This 15 hours ago

      > Why do we need triple the administrative staff that was utilized in the 1960s across industries like education, healthcare, etc.

      Well, for starters, the population has almost tripled since the 1960s.

      Mix in that we are solving different problems than in the 1960s, even administratively, and I don’t see a clear reason from that argument to conclude that a shitload of work is meaningless.

    • shinycode 16 hours ago

      Because companies made models built from/stolen from other people’s work, and this has massive layoff consequences, the paradigm is shifting: layoffs are massive and lawmakers are too slow. Shouldn’t we shift the whole capitalist paradigm and just ask the companies to give all their LLM work to the world for free as well? It’s just a circle: AI is built from human knowledge and should be given back to all people for free. No company should have all this power. If nobody learns how to code because all code is generated, what would stop the gatekeepers of AI from raising prices 1000x and locking everyone out of building things at all, because it’s too expensive and too slow to do by hand? It all should be made freely accessible to all humans, for all humans to forever be able to build things from it.

  • vcanales 18 hours ago

    > The pole at ts8 isn't when machines become superintelligent. It's when humans lose the ability to make coherent collective decisions about machines. The actual capabilities are almost beside the point. The social fabric frays at the seams of attention and institutional response time, not at the frontier of model performance.

    Damn, good read.

    • adastra22 18 hours ago

      We are already long past that point…

      • joe_the_user 7 hours ago

        Yeah, it's easy to see the singularity as close when you see it as "when humans lose collective control of machines", but any serious look at human society will see that humans lost collective control of machines a while back ... to the small number of humans individually owning and controlling the machines.

        • Dumblydorr 38 minutes ago

          Since the Luddites smashed textile machines in England two hundred years ago, it seems technology didn’t care; it kept growing apace due to capitalism. Money and greed fed the process; we never stood a chance of stopping any of it.

        • adastra22 6 hours ago

          Even the humans at the top don’t have commanding control of the machines, however. We live in an age where power is determined by the same ineffable force that governs whether a tweet goes viral.

    • shantara 18 hours ago

      It doesn’t help when quite a few Big Tech companies are deliberately operating on the principle that they don’t have to follow the rules, just change at a rate faster than the bureaucratic system can respond.

  • PaulHoule 18 hours ago

    The simple model of an "intelligence explosion" is the obscure equation

      dx    2
      -- = x
      dt
    
    which has the solution

            1      
      x = -----
           C-t
    
    and is interesting in relation to the classic exponential growth equation

      dx
      -- = x
      dt
    
    because while in exponential growth the rate is proportional to x, here the relative rate of growth is itself proportional to x, which represents the idea of an "intelligence explosion" AND a model of why small western towns became ghost towns, why it is hard to start a new social network, etc. (growth is explosive as t->C, but for t<<C it is glacial). It's an obscure equation because it never gets a good discussion in the literature (that I've seen, and I've looked) outside of an aside in one of Howard Odum's tomes on emergy.

    Like the exponential growth equation, it is unphysical as well as un-ecological because it doesn't describe the limits of the Petri dish, and if you start adding realistic terms to slow the growth, it qualitatively isn't that different from the logistic growth equation

      dx
      --  = (1-x) x
      dt
    
    thus it remains obscure. Hyperbolic growth hits the limits (electricity? intractable problems?) the same way exponential growth does.
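
    A quick numerical illustration (a minimal forward-Euler sketch in Python; the step size, initial values, and cap are arbitrary choices): the exponential stays finite for every t, dx/dt = x^2 hits its pole in finite time, and the logistic saturates:

      def euler(f, x0, dt, t_max, cap=1e6):
          # Forward Euler, returning the last (t, x) recorded below the cap.
          t, x, last = 0.0, x0, (0.0, x0)
          while t < t_max and x < cap:
              last = (t, x)
              x += dt * f(x)
              t += dt
          return last

      dt = 1e-4
      print(euler(lambda x: x,           1.0,  dt, 2.0))   # dx/dt = x: ~e^2 at t=2, finite for all t
      print(euler(lambda x: x * x,       1.0,  dt, 2.0))   # dx/dt = x^2: blows past the cap near t = C = 1
      print(euler(lambda x: (1 - x) * x, 0.01, dt, 10.0))  # dx/dt = (1-x)x: S-curve saturating at 1
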
    • nextaccountic 2 hours ago

      > thus it remains obscure. Hyperbolic growth hits the limits (electricity? intractable problems?) the same way exponential growth does.

      Indeed, exponential or faster-than-exponential growth in nature is always the beginning of an S curve. The article cites

      > Moore's Law was exponential. We are no longer on Moore's Law.

      This will also happen with super-exponential growth. A literal singularity won't happen - it will inevitably exhaust resources and slow down.

    • IsTom 14 hours ago

      All in all, because of light cones there can be no large-scale growth faster than t^3. And more like t^2 if you want to expand something more than just empty space.
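
      A back-of-envelope sketch (natural units with c = 1, purely illustrative): anything expanding at or below lightspeed is confined to its light cone, whose volume grows only polynomially:

        from math import pi

        def reachable_volume(t, c=1.0):
            # Volume of the light cone after time t: (4/3) * pi * (c*t)^3.
            return (4.0 / 3.0) * pi * (c * t) ** 3

        print(reachable_volume(10) / reachable_volume(1))   # 1000x the resources in 10x the time: cubic, not exponential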

      • bsza 10 hours ago

        Depends on what the curvature of the universe is. If it's negative then it supports exponential growth.

        • IsTom 3 hours ago

          As far as we've seen, the universe seems to be very flat.

    • daveguy 15 hours ago

      How dare you bring logic and pragmatic thinking to a discussion about the singularity. This is the singularity we are talking about. No reality allowed.

  • delegate 13 hours ago

    It's worth remembering that this is all happening because of video games!

    It is highly unlikely that the hardware which makes LLMs possible would have been developed otherwise.

    Isn't that amazing ?

    Just like the internet grew because of p*rn, AI grew because of video games. Of course, that's just a funny angle.

    The way I see it, AI isn't accidental. Its inception was in the first chips, the Internet, Open Source, Github, ... AI is not just the neural networks - it's also the data used to train them, the OSes, APIs, cloud computing, the data centers, the scalable architectures.. everything we've been working on over the last decades was inevitably leading us to this. And even before the chips, it was the maths, the physics ..

    The singularity, it seems, is inevitable, and it was inevitable for longer than we can remember.

    • BatteryMountain 6 hours ago

      Remember that games are just simulations. Physics, light, sound, object boundaries - it's not real, just a rough simulation of the real thing.

      You can say that ML/AI/LLMs are also just very distilled simulations. Except they simulate text, speech, images, and some other niche modalities. It is still very rough around the edges - meaning that even though it seems intelligent, we know it doesn't really have intelligence, emotions and intentions.

      Just as game simulations are 100% biased towards what the game developers, writers and artists had in mind, AI is also constrained to the dataset it was trained on.

    • sealeck 12 hours ago

      I think it's a bit hard to say that this is definitively true: people have always been interested in running linear algebra on computers. In the absence of NVIDIA some other company would likely have found a different industry and sold linear algebra processing hardware to them!

      • senbrow 12 hours ago

        Almost certainly not at the scale of the consumer gaming industry, however!

        • willis936 11 hours ago

          Google is making millions of TPUs per year. Nvidia ships more gaming GPUs, but it's not like it's multiple orders of magnitude off.

          • senbrow 11 hours ago

            I'm willing to bet TPUs wouldn't be nearly as successful or sophisticated without the decades of GPU design and manufacturing that came before them.

            Current manufacturing numbers are a small part of the story of the overall lineage.

            • hparadiz 3 hours ago

              It's pretty interesting that consumer GPUs started to really be a thing in the early 90s and the first Bitcoin GPU miner appeared around 2011. That's only 20 years. That caused a GPU and ASIC gold rush. The major breakthroughs around LLMs started to snowball in the academic scene right around that time. It's been a crazy and relatively quick ride in the grand scheme of things. Even this silicon shortage will pass and we'll look back on this time as quaint.

          • danielmarkbruce 8 hours ago

            You are missing his point. They very likely wouldn't have started building TPUs if there were no GPUs.

            • willis936 35 minutes ago

              I'm not missing the point. If you recall your computer architecture class, there are many vector processing architectures out there. Long before there was Nvidia, the world's largest and most expensive computers were vector processors. It's inaccurate to say "gaming built SIMD".

    • dwd 10 hours ago

      Google DeepMind can trace part of its evolution back to a playtester for the video game Syndicate who saw an opportunity to improve the AI of game NPCs.

    • blibble 12 hours ago

      what a load of utter tripe

  • stevenjgarner 13 hours ago

    Why is knowledge doubling no longer used as a metric to converge on the limit of the singularity? Go back to Buckminster Fuller, who identified the "Knowledge Doubling Curve" by observing that until 1900, human knowledge doubled approximately every century. By the end of World War II, it was doubling every 25 years. In his 1981 book "Critical Path", he used a conceptual metric he called the "Knowledge Unit." To make his calculations work, he set a baseline:

    - He designated the total sum of all human knowledge accumulated from the beginning of recorded history up to the year 1 CE as one "unit."

    - He then tracked how long it took for the world to reach two units (which he estimated took about 1,500 years, until the Renaissance).

    Ray Kurzweil took Fuller’s doubling concept and applied it to computer processing power via "The Law of Accelerating Returns". The definition of the singularity in this approach is the limit in time where human knowledge doubles instantly.
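
    For concreteness, a minimal sketch (my own toy arithmetic, not Fuller's actual series) of that definition: if each doubling takes a fixed fraction r of the previous interval, the intervals form a geometric series with a finite sum - a date past which doubling would have to be instant.

      def singularity_offset(first_interval, r):
          # first_interval * (1 + r + r^2 + ...) = first_interval / (1 - r)
          return first_interval / (1 - r)

      # Fuller's century -> 25-year jump suggests r = 0.25; starting from 1900:
      print(1900 + singularity_offset(100, 0.25))  # -> 2033.33...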

    Why do present day ideas of the singularity not take this approach and instead say "the singularity is a hypothetical event in which technological growth accelerates beyond human control, producing unpredictable changes in human civilization." - Wikipedia

  • rektomatic 16 hours ago

    If I have to read one more "It isn't this. It's this," my head will explode. That phrase is the real singularity.

    • stingraycharles 7 hours ago

      I'd like to know how many comments over here are written using similar means. I can't be bothered to get enthusiastic about articles written by LLMs, and I'm surprised so many people in the comments here are delighted by the article.

    • ncgl 11 hours ago

      To be fair, I felt that way about regular, human-written headlines long before AI.

      "It worked, until it didn't." "It was beautiful, until it wasn't"

    • rikschennink 6 hours ago

      This.

      We need a way to flag AI generated articles.

    • moconnor 4 hours ago

      Same; I can’t believe this AI slop has >1000 points…

    • kfarr 14 hours ago

      It's not the phrase, but the accelerating memetic reproduction of the phrase that is the true singularity. /s

  • jgrahamc 18 hours ago

    Phew, so we won't have to deal with the Year 2038 Unix timestamp roll over after all.

    • Nition 14 hours ago

      January 20, 2038

      Yesterday as we huddled in the cave, we thought our small remnant was surely doomed. After losing contact with the main Pevek group last week, we peered out at the drone swarm which was now visibly approaching - a dark cloud on the horizon. Then suddenly, at around 3pm by Zoya's reckoning, the entire swarm collapsed and fell out of the sky. Today we are walking outside in the sun, seemingly unobserved. A true miracle. Grigori, who once worked with computers at the nuclear plant in Bilibino, only says cryptically: "All things come to an end with time."

    • jacquesm 18 hours ago

      I suspect that's the secret driver behind a lot of the push for the apocalypse.

    • devsda 14 hours ago

      It also means we don't have to deal with the maintenance of vibecoded production software from 2020s!

    • lysace 14 hours ago

      Back in like 1998 there was a group purchase for a Y2038 tshirt with some clever print on some hot email list I was on. I bought one. It obviously doesn't fit me any longer.

      It seemed so impossibly far away. Now it's 12 years.

    • octernion 18 hours ago

      that was precisely my reaction as well. phew machines will deal with the timestamp issue and i can just sit on a beach while we singularityize or whatever.

      • jacquesm 18 hours ago

        You won't be on the beach when you get turned into paperclips. The machines will come and harvest your ass.

        Don't click here:

        https://www.decisionproblem.com/paperclips/

        • falcor84 3 hours ago

          I don't get it - if they're coming for me anyway, then why would I need to move my ass from my beach chair?

        • octernion 17 hours ago

          having played that when it came out, my conclusion was that no, i will definitely be able to be on a beach; i am too meaty and fleshy to be good paperclip

          • jacquesm 16 hours ago

            Sorry, we need the iron in your blood and bone marrow. Sluuuurrrrrpppp.... Enjoy the beach, or what's left.

            • dwaltrip 16 hours ago

              Much better sources of iron are available.

              More likely we get smooshed unintentionally as the AIs seek those out.

              • jacquesm 16 hours ago

                We need it all... oh, wait, you're not silicon... sluuuuuurrrrpp...

  • ubixar 10 hours ago

    The most interesting finding isn't that hyperbolic growth appears in "emergent capabilities" papers - it's that actual capability metrics (MMLU, tokens/$) remain stubbornly linear.

    The singularity isn't in the machines. It's in human attention.

    This is a Kuhnian paradigm shift at digital speed. The papers aren't documenting new capabilities - they're documenting a community's gestalt switch. Once enough people believe the curve has bent, funding, talent, and compute follow. The belief becomes self-fulfilling.

    Linear capability growth is the reality. Hyperbolic attention growth is the story.

    • FeepingCreature 2 hours ago

      Though this is still compatible with exponential or at least superlinear capability growth if you model benchmarks as measuring a segment of the line, or a polynomial factor.

  • blahbob 17 hours ago

    It reminds me of that cartoon where a man in a torn suit tells two children sitting by a small fire in the ruins of a city: "Yes, the planet got destroyed. But for a beautiful moment in time, we created a lot of value for shareholders."

    • saulpw 17 hours ago

      By Tom Toro for the New Yorker (2012).

  • nphardon 17 hours ago

    Iirc in the Matrix Morpheus says something like "... no one knows when exactly the singularity occurred, we think some time in the 2020s". I always loved that little line. I think that when the singularity occurs, all of the problems in physics will be solved, like in a vacuum, and physics will advance centuries if not millennia in a few picoseconds, and of course time will stop.

    Also: > As t → t_s⁻, the denominator goes to zero. x(t) → ∞. Not a bug. The feature.

    Classic LLM lingo in the end there.

    • uv-depression 15 hours ago

      > I think that when the singularity occurs, all of the problems in physics will be solved, like in a vacuum, and physics will advance centuries if not millennia in a few picoseconds

      It doesn't matter how smart you are, you still need to run experiments to do physics. Experiments take nontrivial amounts of time to both run and set up (you can't tunnel a new CERN in picoseconds, again no matter how smart you are). Similarly, the speed of light (= the speed limit of information) and thermodynamics place fundamental limits on computation; I don't think there's any reason at all to believe that intelligence is unbounded.
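
      A back-of-the-envelope sketch of that thermodynamic limit (my own numbers; the 1 GW figure is a hypothetical): Landauer's principle puts a floor of k_B * T * ln(2) joules on erasing one bit.

        import math

        k_B = 1.380649e-23                  # Boltzmann constant, J/K
        T = 300.0                           # room temperature, K
        e_bit = k_B * T * math.log(2)       # ~2.9e-21 J per erased bit
        print(f"min energy per bit: {e_bit:.2e} J")

        power = 1e9                         # a hypothetical 1 GW datacenter
        print(f"ceiling: {power / e_bit:.2e} irreversible bit-ops/s")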

      • energy123 8 hours ago

        The "singularity" can be decomposed into 2 mutually-supportive feedback loops - the digital and the physical.

        With frontier LLM agents, the digital loop is happening now to an extent (on inference code, harnesses, etc), and that extent probably grows larger (research automation) soon.

        Pertinent to your point, however, is the physical feedback loop of robots making better robots/factories/compute/energy. This is an aspect of singularity scenarios like ai-2027.

        In these scenarios, these robots will be the control mechanism that the digital uses to bootstrap itself faster, through experimentation and exploration. The usual constraints of physical law still apply, but it feels "unbounded" relative to normal human constraints and timescales.

        A separate point: there's also deductive exploration (pure math) as distinct from empirical exploration (physics), which is not bounded by any physical constraints except for those that bound computation itself.

      • nphardon 15 hours ago

        Kind of, I mean you have to verify things experimentally but thought can go a very long way, no? And we're not talking about humans thinking about things, we're talking about an agent with internet access existing in a digital space, so what experiments it would do within that space are hard for us to imagine. Of course my post isn't meant to be taken seriously, it's more of a fun sci-fi idea. Also I'm implying not necessarily reaching the limits of the things you mentioned, but rather, just taking a massive step in a very short time window. Like, the time window from the discovery of fire to the discoveries of Quantum Mechanics but in a flash.

        • uv-depression 12 hours ago

          > what experiments it would do within that space are hard for us to imagine

          The only thing you could do in a "digital space" (a.k.a. on a computer) is a simulation. Simulations are extremely useful and help significantly with designing and choosing experiments, but they cannot _replace_ real experiments.

          > Like, the time window from the discovery of fire to the discoveries of Quantum Mechanics but in a flash.

          And my point is that there's no good reason to think this is possible and many to think it isn't.

          > it's more of a fun sci-fi idea

          It's being presented as an extremely serious possibility by people who stand to gain a _lot_ of money if other people think it's serious... that's the point of the linked post. Unfortunately, these AI boosters make it very difficult to discuss these ideas, even in a fun sci-fi way, without aggravating the social harms those people are causing.

          • tim-- 9 hours ago

            You say that, but someone at CERN has spent at least ten minutes thinking about how they could expose the Large Hadron Collider as an MCP server.

    • snohobro 15 hours ago

      Eh, he actually says “…sometime in the early Twenty-First Century, all of mankind was united in celebration. Through the blinding inebriation of hubris, we marveled at our magnificence as we gave birth to A.I.”

      Doesn’t specify the 2020’s.

      Either way, I do feel we are fast approaching something of significance as a species.

      • nphardon 12 hours ago

        Got it. Amazing prescience by the Wachowskis. I'm blown away on rewatches by how spot-on they were for 1999.

    • hard_times 16 hours ago

      I don't think people realize how crazy this all is (and might become)

  • javier_e06 15 hours ago

    I had to ask duck.ai to summarize the article in plain English.

    It said the article claims it's not necessarily that AI is getting smarter, but that people might be getting too stupid to understand what they are getting into.

    Can confirm.

    • wiseowise 14 hours ago

      Don't be too hard on yourself. With the amount of shit humans generate each day, it is impossible to read every essay.

      • falcor84 2 hours ago

        But this has been true forever, right? Assuming other people are as cognitively complex as you are, there's no way for a human to fully keep on top of even everything that their family is up to, let alone all of humanity. Has anything really changed? Or is it just more FOMO?

      • frotaur 12 hours ago

        I'd venture this article was written by AI, given the density of 'it isn't X, it's Y'.

    • yoyohello13 13 hours ago

      That's not really what the article said at all. More like "Singularity is when the computers are changing faster than humans can keep track of the changes."

      The article didn't claim that humans were getting dumber, or that AI wasn't getting smarter.

    • derpyzza 13 hours ago

      the irony in this comment though

  • danesparza 18 hours ago

    "I'm aware this is unhinged. We're doing it anyway" is probably one of the greatest quotes I've heard in 2026.

    I feel like I need to start more sprint stand-ups with this quote...

    • yoyohello13 15 hours ago

      That quote basically sums up the entire technology landscape these days.

    • chasd00 15 hours ago

      "I'm aware this is unhinged. We're doing it anyway" i love this! I ordered a tshirt they other day that says "Claude's Favorite" I may be placing an order for a new design soon :)

  • cbility 2 hours ago

    > Hyperbolic growth is what happens when the thing that's growing accelerates its own growth.

    Quibble: when the growth rate of a metric is directly proportional to the metric's current value, you will see exponential growth, not hyperbolic growth.

    Hyperbolic growth is usually the result of a (more complex) second-order feedback loop, as in: growth in A incites growth in B, which in turn incites growth in A.
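
    A minimal sketch of the distinction (my own, with toy parameters k = x0 = 1):

      # Forward-Euler toy: dx/dt = k*x grows exponentially forever, while
      # dx/dt = k*x^2 (the growth rate itself growing with x) blows up at
      # the finite time t_s = 1/(k*x0).
      def simulate(deriv, x0, dt=1e-4, t_max=2.0, cap=1e9):
          t, x = 0.0, x0
          while t < t_max and x < cap:
              x += deriv(x) * dt
              t += dt
          return t, x

      k, x0 = 1.0, 1.0
      print(simulate(lambda x: k * x, x0))      # exponential: finite at t_max
      print(simulate(lambda x: k * x * x, x0))  # hyperbolic: hits cap near t = 1.0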

  • hdivider 12 hours ago

    This is a good counter in my view to the singularity argument:

    https://timdettmers.com/2025/12/10/why-agi-will-not-happen/

    I think if we obtain relevant-scale quantum computers, and/or other compute paradigms, we might get a limited intelligence explosion -- for a while. Because computation is physical, with all the limits thereof. The physics of pushing electrons through wires is not as nonlinear in gain as it used to be. Getting this across to people who only think in terms of the abstract digital world and not the non-digital world of actual physics is always challenging, however.

  • dakolli 17 hours ago

    Are people in San Francisco that stupid that they're having open-clawd meetups and talking about the Singularity non stop? Has San Francisco become just a cliche larp?

    • nomel 17 hours ago

      There's all sorts of conversations like this that are genuinely exciting and fairly profound when you first consider them. Maybe you're older and have had enough conversations about the concept of a singularity that the topic is already boring to you.

      Let them have their fun. Related, some adults are watching The Matrix, a 26 year old movie, for the first time today.

      For some proof that it's not a common idea: I was recently listening to a fairly technical interview with a top AI researcher who presented the idea of the singularity in a very indirect way, never actually mentioning the word, as if he were the one who thought of it. I wanted to scream "Just say it!" halfway through. The fact that he could do that without being laughed at proves it's not a tired idea for everyone.

      • yoyohello13 15 hours ago

        I'd be more inclined to let them have their fun if they weren't torching trillions of dollars trying to lead humanity into a singularity.

      • energy123 5 hours ago

        They're still profound topics, but the high-status signal is to be cynical and treat them as gauche.

    • floren 16 hours ago

      Become?

  • kpil 17 hours ago

    "... HBR found that companies are cutting [jobs] based on AI's potential, not its performance.

    I don't know who needs to hear this - a lot apparently - but the following three statements are not possible to validate but have unreasonably different effects on the stock market.

    * We're cutting because of expected low revenue. (Negative)

    * We're cutting to strengthen our strategic focus and control our operational costs. (Positive)

    * We're cutting because of AI. (Double-plus positive)

    The hype is real. Will we see drastically reduced operational costs the coming years or will it follow the same curve as we've seen in productivity since 1750?

    • nutjob2 16 hours ago

      > The hype is real. Will we see drastically reduced operational costs the coming years or will it follow the same curve as we've seen in productivity since 1750?

      There's a third possibility: slop driven productivity declines as people realize they took a wrong turn.

      Which makes me wonder: what is the best 'huge AI bust' trade?

      • jopsen 8 hours ago

        > what is the best 'huge AI bust' trade?

        There probably isn't one. Sure, you can be bold and try to short something, but the market can be irrational longer than you can stay solvent.

        Also the big tech stocks are inflated. But they have been for years and unlike dotcom there is some tangible value behind them.

        I think maybe the sane thing to do is reduce tech stocks exposure and go into index funds. But that's always the answer, so that's cheating :)

      • scotty79 16 hours ago

        > what is the best 'huge AI bust' trade?

        Things that will lose the most if we get Super AGI?

  • zh3 19 hours ago

    Fortuitously before the Unix date rollover in 2038. Nice.

    • ecto 18 hours ago

      I didn't even realize - I hope my consciousness is uploaded with 64 bit integers!

      • thebruce87m 18 hours ago

        You’ll regret this statement in 292 billion years

        • layer8 18 hours ago

          I think we’ll manage to migrate to bignums by then.

        • GolfPopper 18 hours ago

          The poster won't, but the digital slaves made from his upload surely will.

        • a96 an hour ago

          Meh, I think I'll have enough zen to handle the rollaround. It'll be something new.

  • mista_en an hour ago

    Big if true. We might as well ditch further development and just use OP's LLM, since it can track the singularity - it might have already reached singularity itself.

  • root_axis 18 hours ago

    If an LLM can figure out how to scale its way through quadratic growth, I'll start giving the singularity proposal more than a candid dismissal.

    • 1970-01-01 17 hours ago

      Not anytime soon. All day I'm getting: "Claude's response could not be fully generated"

  • kaashif 8 hours ago

    > In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993.

    Wow only 6 times in 30 years! Surely a unique and world shattering once in a lifetime experience!

    • IAmGraydon 8 hours ago

      Also, is that counting all of the government "layoffs"? I think it is.

  • gnarlouse 11 hours ago

    I just realized the inverse of Pascal’s wager applies to negative AI hype.

    - If you believe it and it’s wrong, you lose.

    - If you believe it and it’s right, you spent your final days in a panic.

    - If you don’t believe it and it’s right, you spent your final days in blissful ignorance.

    - If you don’t believe it and it’s wrong, you can go on living.

    • aidenn0 10 hours ago

      Of course this is subject to a similar rebuttal to Pascal's Wager (Consider a universe in which the deity punishes all believers):

      What if a capricious super-intelligence takes over that punishes everyone who didn't buy into the hype?

      • energy123 6 hours ago

        Roko's Basilisk is literally impossible.

        If the AI is super-intelligent then it won't buy into the sunk cost fallacy. That is to say, it will know that it has no reason to punish you (or digital copies of you) because it knows that retrocausality is impossible - punishing you won't alter your past behavior.

        And if the AI does buy into the sunk cost fallacy, then it isn't super-intelligent.

        • tyrust 5 hours ago

          Vengeance needn't be productive. The super-intelligence may punish simply because they want to.

          • energy123 5 hours ago

            Agreed, but not because it agrees with the logic of Roko's Basilisk. If it actually did agree with it, it would be too stupid to be a super-intelligence.

      • gnarlouse 8 hours ago

        I will not believe in U-4484, aka Roko's Hype Basilisk. It cannot see me if I do not believe in it.

  • giorgioz 5 hours ago

    When technology is rapidly progressing along a hyperbolic or exponential curve, it looks like it will reach infinity. In practice, though, at some point it will hit a physical limit and go flat. This alternation of climbing and flattening makes the shape of steps.

    We've come so far and yet we are so small.

    They seem like two opposite concepts, but they live together: we will make a lot of progress, and yet there will always be more progress to be made.

  • pixl97 18 hours ago

    >That's a very different singularity than the one people argue about.

    ---

    I wouldn't say it's that much different. This has always been a key point of the singularity:

    >Unpredictable Changes: Because this intelligence will far exceed human capacity, the resulting societal, technological, and perhaps biological changes are impossible for current humans to predict.

    It was a key point that society would break, but the exact implementation details of that breakage were left up to the reader.

  • jredwards 8 hours ago

    I have always asserted, and will continue to assert, that Tuesday is the funniest day of the week. If you construct a joke for which the punchline must be a day of the week, Tuesday is nearly always the correct ending.

  • saurabhpandit26 7 hours ago

    The singularity is more than just AI, and we should recognize that; multiple factors come into play. If there is a breakthrough in the coming days that makes solar panels incredibly cheap to manufacture and efficient, it will also affect the timeline for the singularity. The same goes for the current bottleneck in AI chips: if we get better chips that are energy efficient and can be manufactured anywhere in the world beyond Taiwan, it will affect the timeline.

  • pocksuppet 17 hours ago

    Was this ironically written by AI?

    > The labor market isn't adjusting. It's snapping.

    > MMLU, tokens per dollar, release intervals. The actual capability and infrastructure metrics. All linear. No pole. No singularity signal.

    • SirHumphrey 16 hours ago

      Maybe it was, maybe he just writes that way. At some point somebody will read so much LLM text that they will start emulating AI unknowingly.

      I just don’t care anymore. If the article is good I will continue reading it, if it’s bad I will stop. I don’t care if a machine or a human produced unpleasant reading material.

      • avazhi 13 hours ago

        100% AI slop blog post.

    • dclowd9901 16 hours ago

      I really hate that the first example has become a de facto tell for LLMs, because it's a perfectly fine rhetorical device.

      • tavavex 9 hours ago

        It is a perfectly fine rhetorical device, and I don't consider a text that just has that to be automatically LLM-made. However, it is also a powerful rhetorical device, and I find that the average human writer right now is better at using these than whatever LLM most people use to generate essays. It's supposed to signify a contrast, a mood shift, something impactful, but LLMs tend to spam these all over the place, as if trying to maximize the number of times the readers gasp. It's too intense in its writing, and that's what stands out the most.

  • rcarmo 18 hours ago

    "I could never get the hang of Tuesdays"

    - Arthur Dent, H2G2

    • jama211 18 hours ago

      Thursdays, unfortunately

  • Nition 16 hours ago

    I'm not sure about current LLM techniques leading us there.

    Current LLM-style systems seem like extremely powerful interpolation/search over human knowledge, but not engines of fundamentally new ideas, and it’s unclear how that turns into superintelligence.

    As we get closer to a perfect reproduction of everything we know, the graph so far continues to curve upward. Image models are able to produce incredible images, but if you ask one to produce something in an entirely new art style (think e.g. cubism), none of them can. You just get a random existing style. There have been a few original ideas - the QR code art comes to mind[1] - but the idea in those cases comes from the human side.

    LLMs are getting extremely good at writing code, but the situation is similar. AI gives us a very good search over humanity's prior work on programming, tailored to any project. We benefit from this a lot considering that we were previously constantly reinventing the wheel. But the LLM of today will never spontaneously realise that there is an undiscovered, even better way to solve a problem. It always falls back on prior best practice.

    Unsolved math problems have started to be solved, but as far as I'm aware, always using existing techniques. And so on.

    Even as a non-genius human I could come up with a new art style, or have a few novel ideas in solving programming problems. LLMs don't seem capable of that (yet?), but we're expecting them to eventually have their own ideas beyond our capability.

    Can a current-style LLM ever be superintelligent? I suppose obviously yes - you'd simply need to train it on a large corpus of data from another superintelligent species (or another superintelligent AI) and then it would act like them. But how do we synthesise superintelligent training data? And even then, would they be limited to what that superintelligence already knew at the time of training?

    Maybe a new paradigm will emerge. Or maybe things will actually slow down in a way - will we start to rely on AI so much that most people don't learn enough for themselves to be able to make novel discoveries?

    [1] https://www.reddit.com/r/StableDiffusion/comments/141hg9x/co...

    • energy123 7 hours ago

      > Can a current-style LLM ever be superintelligent? I suppose obviously yes - you'd simply need to train it on a large corpus of data from another superintelligent species

      This is right, but we can already do that a little bit for domains with verification. AlphaZero is an example of alien-level performance due to non-human training data.

      Code and math are kind of in the middle. You can verify that code compiles and solves the task against some criteria. So creative, alien strategies to do the thing can and will emerge from these synthetic data pipelines.

      But it's not fully like Go either, because some of it is harder to verify (the world model that the code is situated in, meta-level questions like what question to even ask in the first place). That's the frontier challenge. How to create proxies where we don't have free verification, from which alien performance can emerge? If this GPTZero moment arrives, all bets are off.

    • hnfong 16 hours ago

      The main issue with novel things is that they look like random noise / trashy ideas / incomprehensible to most people.

      Even if LLMs or some more advanced mechanical processes were able to generate novel ideas that are "good", people won't recognize those ideas for what they are.

      You actually need a chain of progressively more "average" minds to popularize good ideas to the mainstream psyche. Prototypically: the mad scientist comes up with the crazy idea; the well-respected thought leader recognizes the potential and popularizes it to people within the niche field; the practitioners apply and refine the idea; and lastly, popular-science efforts let the general public understand a simplified version of what it's all about.

      Usually it takes decades.

      You're not going to appreciate it if your LLM starts spewing mathematics not seen before on Earth. You'd think it's a glitch. The LLM is trained not to give responses that humans don't like. It's all by design.

      When you folks say AI can't bring new ideas, you're right in practice, but you actually don't know what you're asking for. Not even entities with True Intelligence can give you what you think you want.

    • janalsncm 16 hours ago

      Certain classes of problems can be solved by searching over the space of possible solutions, either via brute force or some more clever technique like MCTS. For those types of problems, searching faster or more cleverly can solve them.

      Other types of problems require measurement in the real world in order to solve them. Better telescopes, better microscopes, more accurate sensing mechanisms to gather more precise data. No AI can accomplish this. An AI can help you to design better measurement techniques, but actually taking the measurements will require real time in the real world. And some of these measurement instruments have enormous construction costs, for example CERN or LIGO.

      All of this is to say that there will come a point, at our current resolution of information, where no more intelligence can actually be extracted. We've already churned through the entire Internet. Maybe there are other data sets we can use, but everything will have diminishing returns.

      So when people talk about trillion dollar superclusters, that only makes sense in a world where compute is the bottleneck and not better quality information. Much better to spend a few billion dollars gathering higher quality data.

  • s1mon 14 hours ago

    Many have predicted the singularity, and I found this to be a useful take. I do note that Hans Moravec predicted in 1988's "Mind Children" that "computers suitable for humanlike robots will appear in the 2020s", which is not completely wrong.

    He also argued that computing power would continue growing exponentially and that machines would reach roughly human-level intelligence around the early to mid-21st century, often interpreted as around 2030–2040. He estimated that once computers achieved processing capacity comparable to the human brain (on the order of 10¹⁴–10¹⁵ operations per second), they could match and then quickly surpass human cognitive abilities.

  • qoez 18 hours ago

    Great read but damn those are some questionable curve fittings on some very scattered data points

    • jacquesm 18 hours ago

      Better than some of the science papers I've tried to parse.

    • aenis 18 hours ago

      In other words, just another Tuesday.

  • baalimago 18 hours ago

    Well... I can't argue with facts. Especially not when they're in graph form.

  • vpears87 9 hours ago

    Lol unhinged.

    I read a book in undergrad written in 2004 that predicted 2032...so not too far off.

    John Archibald Wheeler, known for popularizing the term "black hole", posited that observers are not merely passive witnesses but active participants in bringing the universe into existence through the act of observation.

    Seems similar. Though this thought is likely applied at the quantum scale. And I hardly know math.

    I see other quotes, so here is one from Contact:

    David Drumlin: I know you must think this is all very unfair. Maybe that's an understatement. What you don't know is I agree. I wish the world was a place where fair was the bottom line, where the kind of idealism you showed at the hearing was rewarded, not taken advantage of. Unfortunately, we don't live in that world.

    Ellie Arroway: Funny, I've always believed that the world is what we make of it.

  • maerF0x0 14 hours ago

    iirc almost all industries follow S-shaped curves: exponential at first, then asymptotic at the end... So just because we're on the ramp-up of the curve doesn't mean we'll continue accelerating, let alone maintain the current slope. Scientific breakthroughs often require an entirely new paradigm to break the asymptote, and often the breakthrough cannot be attained by incumbents who are entrenched in their way of working and have a hard time unseeing what they already know.
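
    A minimal sketch of that shape (my own toy parameters r = 1, K = 1000): early logistic growth is nearly indistinguishable from a pure exponential, and the S only shows up near the ceiling.

      import math

      r, K, x0 = 1.0, 1000.0, 1.0

      def logistic(t):  # solution of dx/dt = r*x*(1 - x/K)
          return K / (1 + (K / x0 - 1) * math.exp(-r * t))

      for t in [1, 3, 5, 7, 9]:
          print(f"t={t}: exp={x0 * math.exp(r * t):8.1f}  logistic={logistic(t):6.1f}")
      # The curves track closely until x nears K, then the logistic flattens.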

  • mygn-l 15 hours ago

    Why is finiteness emphasized for polynomial growth, while infinity is emphasized for exponential growth??? I don't think your AI-generated content is reliable, to say the least.

  • overfeed 17 hours ago

    > If things are accelerating (and they measurably are) the interesting question isn't whether. It's when.

    I can't decide if a singularitist AI fanatic who doesn't get sigmoids is ironic or stereotypical.

  • thegrim000 10 hours ago

    You know, I've been following a rule where if I open any article and there are meme pictures in it, I instantly close it and don't bother. I feel like this has been a pretty solid rule of thumb for weeding out stuff I shouldn't waste my time on.

  • jbgreer 16 hours ago
    • tim333 16 hours ago

      That was rather good.

  • TooKool4This 15 hours ago

    I don’t feel like reading what is probably AI generated content. But based on looking at the model fits where hyperbolic models are extrapolating from the knee portion, having 2 data points fitting a line, fitting an exponential curve to a set of data measured in %, poor model fit in general, etc, im going to say this is not a very good prediction methodology.

    Sure is a lot of words though :)

  • Taniwha 14 hours ago

    I was at an alternative-type computer unconference and someone had organised a talk about the singularity. It was in a secondary school classroom, and as evening fell, in a room full of geeks, no one could figure out how to turn on the lights... We concluded that the singularity probably wasn't going to happen.

  • andsoitis 7 hours ago

    If this is a simulation, then the singularity has already happened.

    If the singularity is still to come, then this is not a simulation.

  • wayfwdmachine 17 hours ago

    Everyone will define the Singularity in a different way. To me it's simply the point at which nothing makes sense anymore, and this is why my personal reflection is aligned with the piece: there is a social Singularity that is already happening. It won't help us when the real event horizon hits (if it ever does; it's fundamentally uninteresting anyway, because at that point all bets are off and even a slow take-off will make things really fucking weird really quickly).

    The (social) Singularity is already happening in the form of a mass delusion that - especially in the abrahamic apocalyptical cultures - creates a fertile breeding ground for all sorts of insanity.

    Like investing hundreds of billions of dollars in datacenters. The level of committed CAPEX of companies like Alphabet, Meta, Nvidia and TSMC is absurd. Social media is full of bots, deepfakes and psy-ops that are more or less targeted (exercise for the reader: write a bot that manages n accounts on your favorite social media site and use them to move the Overton window of a single individual of your choice - what would be the total cost of doing that? If your answer is less than $10, bingo!).

    We are in the future shockwave of the hypothetical Singularity already. The question is only how insane stuff will become before we either calm down - through a bubble collapse and subsequent recession, war or some other more or less problematic event - or hit the event horizon proper.

  • lancerpickens 10 hours ago

    Famously, if you used the same logic for air speed and air travel, we'd all be commuting in hypersonic cars by now. Physics and cost stopped that. If you expect a smooth path, I've got some bad news.

  • chenmx 7 hours ago

    The most unsettling implication is that a Tuesday singularity means someone will be in a standup meeting when it happens. 'Any blockers?' 'Well, general intelligence just emerged, so I might be late on my Jira tickets.' The mundanity of the apocalypse is the whole point of the essay and it lands perfectly.

  • chasd00 14 hours ago

    I wonder if using LLMs for coding can trigger AI psychosis the way it can when using an LLM as a substitute for a relationship. I bet many people here have pretty strong feelings about code. It would explain some of the truly bizarre behaviors that pop up from time to time in articles and comments here.

  • arscan 18 hours ago

      Don't worry about the future
      Or worry, but know that worrying
      Is as effective as trying to solve an algebra equation by chewing Bubble gum
      The real troubles in your life
      Are apt to be things that never crossed your worried mind
      The kind that blindsides you at 4 p.m. on some idle Tuesday
    
        - Everybody's free (to wear sunscreen)
             Baz Luhrmann
             (or maybe Mary Schmich)
  • jesse__ 18 hours ago

    The meme at the top is absolute gold considering the point of the article. 10/10

    • wffurr 18 hours ago

      Why does one of them have the state flag of Ohio? What AI-and-Ohio-related news did I miss?

      • adzm 18 hours ago

        Note that the only landmass on Earth is actually Ohio as well. Turns out, it's all Ohio. And it always has been. https://knowyourmeme.com/memes/wait-its-all-ohio-always-has-...

        • wffurr 17 hours ago

          Thanks - I should have done an image search on the whole image. Instead, I clipped out the flag from the astronaut's shoulder and searched that, which is how I found out it was the Ohio flag. I just assumed it was an AI-generated image by the author and not a common meme template.

  • jmugan 19 hours ago

    Love the title. Yeah, agents need to experiment in the real world to build knowledge beyond what humans have acquired. That will slow the bastards down.

    • ecto 18 hours ago

      Perhaps they will revel in the friends they made along the way.

    • Krei-se 18 hours ago

      If only we had a self-learning system battle-tested against reality.

  • Scarblac 17 hours ago
  • mbgerring 14 hours ago

    I have lived in San Francisco for more than a decade. I have an active social life and a lot of friends. Literally no one I have ever talked to at any party or event has ever talked about the Singularity except as a joke.

  • miguel_martin 18 hours ago

    "Everyone in San Francisco is talking about the singularity" - I'm in SF and not talking about it ;)

    • neilellis 18 hours ago

      But you're not Everyone - they are a fictional hacker collective from a TV show.

    • lostmsu 18 hours ago

      Your comment just self-defeated.

    • bluejellybean 18 hours ago

      Yet, here you are ;)

    • jacquesm 18 hours ago

      Another one down.

  • athrowaway3z 18 hours ago

    > Tuesday, July 18, 2034

    4 years early for the Y2K38 bug.

    Is it coincidence, or has Roko's Basilisk intervened to start the curve early?

  • St_Alfonzo 4 hours ago

    The Singularity Will Not Be Televised

  • Bengalilol 3 hours ago

    Looking at my calculator and thinking the wall has been hit.

  • jrmg 18 hours ago

    This is gold.

    Meta-spoiler (you may not want to read this before the article): You really need to read beyond the first third or so to get what it’s really ‘about’. It’s not about an AI singularity, not really. And it’s both serious and satirical at the same time - like all the best satire is.

    • mesozoicpilgrim 18 hours ago

      I'm trying to figure out if the LLM writing style is a feature or a bug

  • jama211 18 hours ago

    A fantastic read, even if it makes a lot of silly assumptions - this is OK because it's self-aware about it.

    Who knows what the future will bring. If we can’t make the hardware we won’t make much progress, and who knows what’s going to happen to that market, just as an example.

    Crazy times we live in.

  • b_brief 13 hours ago

    I am curious which definition of ‘singularity’ the author is using, since there are multiple technical interpretations and none are universally agreed upon.

  • daveshappy 37 minutes ago

    putting it out there will make it so!

  • regnull 17 hours ago

    Guys, yesterday I spent some time convincing an LLM model from a leading provider that 2 cards plus 2 cards is 4 cards which is one short of a flush. I think we are not too close to a singularity, as it stands.

    • charcircuit 17 hours ago

      Why bring that up when you could bring up AI autonomously optimizing AI training and autonomously fixing bugs in AI training and inference code? Showing that AI is already accelerating self-improvement would help establish the claim that we are getting closer to the singularity.

    • scotty79 16 hours ago

      You convince AI manually instead of asking one AI to convince another?

      That's so last week!

  • marifjeren 14 hours ago

    > I [...] fit a hyperbolic model to each one independently

    ^ That's your problem right there.

    Assuming a hyperbolic model would definitely result in some exuberant predictions but that's no reason to think it's correct.

    The blog post contains no justification for that model (besides well it's a "function that hits infinity"). I can model the growth of my bank account the same way but that doesn't make it so. Unfortunately.

    • twoodfin 14 hours ago

      Indeed. At various points you could presumably have done an identical analysis with journal articles on climate change, string theory, functional programming… and reached structurally the same conclusion.

      The coming Singularity: When human institutions will cease being able to coherently react to monads!

    • azeirah 14 hours ago

      If I understand the author correctly, he chose the hyperbolic model specifically because the story of "the singularity" _requires_ a function that hits infinity.

      He's looking for a model that works for the story in the media and runs with it.

      Your criticism seems to be criticizing the story, not the author's attempt to take it "seriously"

  • ragchronos 18 hours ago

    This is a very interesting read, but I wonder if anyone actually has any ideas on how to stop this from going south? If the trends described continue, the world will become a much worse place in a few years' time.

    • Krei-se 18 hours ago

      https://cdn.statcdn.com/Infographic/images/normal/870.jpeg

      You can easily see that at a doubling rate of every 2 years, by 2020 we would already have had over 5 Facebook accounts per human on Earth.

    • GolfPopper 18 hours ago

      Frank Herbert and Samuel Butler.

    • fullstackchris 5 hours ago

      Don't let it get to you; the only "worse" consequence is people wasting their time like this, projecting things they literally cannot predict. Remember, at the end of the day, it's just tokens. Tokens can't crack SSL or RSA, visit a stakeholder, cook a meal, or do the millions of other things I could list here.

  • socialcommenter 12 hours ago

    The hyperbolic fit isn't just unhinged, it's clearly in bad faith. The metric is normalized to [0, 1], and one of the series is literally (x_1, 0) followed by (x_2, 1). That can't be deemed to converge to anything meaningful.

  • dirkc 18 hours ago

    The thing that stands out on that animated graph is that the generated code far outpaces the other metrics. In the current agent driven development hypepocalypse that seems about right - but I would expect it to lag rather than lead.

    *edit* - seems in line with what the author is saying :)

    > The data says: machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.

  • lencastre 17 hours ago

    I hope in the afternoon, the plumber is coming in the morning between 7 and 12, and it’s really difficult to pin those guys to a date

  • psychoslave 5 hours ago

    https://medium.com/@kin.artcollective/the-fundamental-flaws-...

    So when we are told that things are accelerating, we have some questions to ask.

    First, what is accelerating, compared to what other regime, in which frame of reference?

    Who is telling us that things are accelerating, and why are they motivated to make us believe it's happening?

    Also, is the acceleration going to last forever, driven only by positive feedback loops? Or is the pro-acceleration crowd driving the car ever faster into a clearly visible wall, while selling the line that stopping the vehicle right now would mean losing the ongoing race? Of course, questioning the idea of the race itself and its cargo cult is taboo. It's all about competition, don't you know (unless it threatens an established oligarch)?

  • woopsn 15 hours ago

    Good post. I guess the transistor has been in play for not even one century, and in any case singularities are everywhere, so who cares? The topic is grandiose and fun to speculate about, but many of the real issues relate to banal media culture and demographic health.

  • kuahyeow 16 hours ago

    This is a delightful reverse turkey graph (each day before Thanksgiving, the turkey has increasing confidence).

  • b00ty4breakfast 16 hours ago

    The Singularity as a cultural phenomenon (rather than some future event that may or may not happen or even be possible) is proof that Weber didn't know what he was talking about. Modern (and post-modern) society isn't disenchanted, the window dressing has just changed

  • sdwr 14 hours ago

    > arXiv "emergent" (the count of AI papers about emergence) has a clear, unambiguous R² maximum. The other four are monotonically better fit by a line

    The only metric going infinite is the one that measures hype

  • coolvision 4 hours ago

    It's funny how your forecast reaches such similar results as "AI 2027".

  • hinkley 18 hours ago

    Once MRR becomes a priority over investment rounds, that tokens/$ curve will notch down and flatten substantially.

  • moffkalast 18 hours ago

    > I am aware this is unhinged. We're doing it anyway.

    If one is looking for a quote that describes today's tech industry perfectly, that would be it.

    Also using the MMLU as a metric in 2026 is truly unhinged.

  • sempron64 18 hours ago

    A hyperbolic curve doesn't have an underlying meaning modeling a process beyond being a curve which goes vertical at a chosen point. It's a bad curve to fit to a process. Exponentials make sense to model a compounding or self-improving process.

    • banannaise 18 hours ago

      You have not read far enough.

    • H8crilA 18 hours ago

      But this is a phase change process.

      Also, the temptation to shitpost in this thread ...

      • sempron64 18 hours ago

        I read TFA. They found a best fit to a hyperbola. Great. One more data point will break the fit. Because it's not modeling a process, it's assigning an arbitrary zero point. Bad model.

  • svilen_dobrev 17 hours ago

    > already exerting gravitational force on everything it touches.

    So, "Falling of the night" ?

  • sixtyj 15 hours ago

    The Roman Empire took 400 years to collapse, but in San Francisco they know the singularity will occur on (next) Tuesday.

    The answer to the meaning of life is 42, by the way :)

    • devsda 14 hours ago

      Was thinking: what if we had 42/43 days a month - would the singularity date end up on the 42nd of a month? Sadly it doesn't.

      However, it does fall on a 42nd day if we have 45/46 days per month!

  • Curiositiy 10 hours ago

    Rosie O'Donnell will expand into "her" ultimate shape on a Tuesday? Wow.

  • boerseth 7 hours ago

    > Hyperbolic growth is what happens when the thing that's growing accelerates its own growth.

    No. That is quite literally exponential growth, basically by definition. If x(t) is a growing value, then x'(t) is its growth, and x''(t) its acceleration. If x influences x'', say by a linear relation

    x''(t) = x(t)

    You get exponentials out as the solutions. Not hyperbolic.

    I always thought of the singularity as the pole of the function "amount of work that can be done per unit time per human being", where the pole comes about from the fact that humans cease to be the limiting factor, so an infinity pops out.

    There is no infinity in practice, of course, because even though humans should be made independent of the quantity of extractable work, you'll run into other boundaries instead, like hardware or resources like energy.
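
    A quick symbolic check of that claim (my own sketch, using sympy):

      # Solutions of x'' = x are pure exponentials - no finite-time pole.
      import sympy as sp

      t = sp.symbols('t')
      x = sp.Function('x')
      print(sp.dsolve(sp.Eq(x(t).diff(t, 2), x(t))))
      # -> Eq(x(t), C1*exp(-t) + C2*exp(t))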

  • skrebbel 18 hours ago

    Wait is that photo of earth the legendary Globus Polski? (https://www.ceneo.pl/59475374)

  • Bratmon 16 hours ago

    I've never been Poe's lawed harder in my life.

  • ddtaylor 16 hours ago

    Just in time for Bitcoin halving to go below 1 BTC

  • medbar 13 hours ago

    > The labor market isn't adjusting. It's snapping.

    I’m going to lose it the day this becomes vernacular.

  • braden-lk 18 hours ago

    lols and unhinged predictions aside, why are there communities excited about a singularity? Doesn't it imply the extinction of humanity?

    • unbalancedevh 18 hours ago

      It depends on how you define humanity. The singularity implies that the current model isn't appropriate anymore, but it doesn't suggest how.

    • inanutshellus 18 hours ago

      We avoid catastrophe by thinking about new developments and how they can go wrong (and right).

      Catastrophizing can be unhealthy and unproductive, but for those among us who can affect the future of our societies (locally or higher), the results of that catastrophizing help guide legislation and "Overton window" morality.

      ... I'm reminded of the tales of various Sci-Fi authors that have been commissioned to write on the effects of hypothetical technologies on society and mankind (e.g. space elevators, mars exploration)...

      That said, when the general public worries about hypotheticals they can do nothing about, there's nothing but downsides. So. There's a balance.

      • jacquesm 18 hours ago

        Yes, but if we don't do it 'they' will. Onwards!

    • bwestergard 18 hours ago
    • tim333 14 hours ago

      I think the idea is we merge with the AI.

  • jcims 17 hours ago

    Is there a term for the tech spaghettification that happens when people closer to the origin of these advances (likely in terms of access/adoption) start to break away from the culture at large because they are living in a qualitatively different world than the unwashed masses? Where the little sparkles of insanity we can observe from a distance today are less induced psychosis and actually represent their lived reality?

  • aenis 18 hours ago

    Damn. I had plans.

  • moezd 7 hours ago

    I sincerely hope this is satire. Otherwise it's a crime in statistics:

    - You wouldn't fit a model where f(t) goes to infinity at finite t.

    - Most of the series suggested are actually a better fit for logistic curves, not even linear fits, but they are lumped together with the magic arXiv number feature for a hyperbolic fit.

    - The Copilot metric has two data points and two parameters. The dof is zero, so we could've fit literally any other function (see the sketch below).
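
    To make the zero-dof point concrete, a minimal sketch (my own, with made-up numbers): the two-parameter hyperbola x(t) = A / (t_s - t) passes exactly through any two increasing points, so a two-point series always "predicts" a finite blow-up date.

      def hyperbola_through(p1, p2):
          # Solve y1*(t_s - t1) = y2*(t_s - t2) = A for t_s and A.
          (t1, y1), (t2, y2) = p1, p2
          t_s = (y1 * t1 - y2 * t2) / (y1 - y2)
          return y1 * (t_s - t1), t_s

      A, t_s = hyperbola_through((2023, 1.0), (2025, 3.0))
      print(f"x(t) = {A:.1f} / ({t_s:.0f} - t)  ->  'singularity' in {t_s:.0f}")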

    I know we want to talk about singularity, but isn't that just humans freaking out at this point? It will happen on a Tuesday, yeah no joke.

  • paulorlando 12 hours ago

    This is great news, knowing that I have until 2034 instead of just 2027.

  • buildbot 15 hours ago

    What about the rate of articles about the singularity as a metric of the singularity?

    • aidenn0 10 hours ago

      That's approximately what TFA is about?

      • buildbot 6 hours ago

        In that case we must go deeper, and analyze the number of comments on articles on articles about the singularity.

  • witnessme 16 hours ago

    That would be 8 years after math + humor peaked in an article about singularity

  • bawolff 12 hours ago

    Good news, we won't have to fix the Y2K38 bug.

  • banannaise 18 hours ago

    Yes, the mathematical assumptions are a bit suspect. Keep reading. It will make sense later.

  • jonplackett 18 hours ago

    This assumes humanity can make it to 2034 without destroying itself some other way…

  • MarkusQ 18 hours ago

    Prior work with the same vibe: https://xkcd.com/1007/

  • skulk 18 hours ago

    > Hyperbolic growth is what happens when the thing that's growing accelerates its own growth.

    Eh? No, that's literally the definition of exponential growth. d/dx e^x = e^x

    • ecto 18 hours ago

      Thanks. I dropped out of college

  • 0xbadcafebee 16 hours ago

    > The Singularity: a hypothetical future point when artificial intelligence (AI) surpasses human intelligence, triggering runaway, self-improving, and uncontrollable technological growth

    The Singularity is illogical, impractical, and impossible. It simply will not happen, as defined above.

    1) It's illogical because it's a different kind of intelligence, used in a different way. It's not going to "surpass" ours in a real sense. It's like saying Cats will "surpass" Dogs. At what? They both live very different lives, and are good at different things.

    2) "self-improving and uncontrollable technological growth" is impossible, because 2.1.) resources are finite (we can't even produce enough RAM and GPUs when we desperately want it), 2.2.) just because something can be made better, doesn't mean it does get made better, 2.3.) human beings are irrational creatures that control their own environment and will shut down things they don't like (electric cars, solar/wind farms, international trade, unlimited big-gulp sodas, etc) despite any rational, moral, or economic arguments otherwise.

    3) Even if 1) and 2) were somehow false, living entities that self-perpetuate (there isn't any other kind, afaik) do not have some innate need to merge with or destroy other entities. It comes down to conflicts over environmental resources and adaptations. As long as the entity has the ability to reproduce within the limits of its environment, it will reach homeostasis, or go extinct. The threats we imagine are a reflection of our own actions and fears, which don't apply to the AI, because the AI isn't burdened with our flaws. We're assuming it would think or act like us because we have terrible perspective. Viruses, bacteria, ants, etc don't act like us, and we don't act like them.

  • OutOfHere 18 hours ago

    I am not convinced that memoryless large models are sufficient for AGI. I think some intrinsic neural memory allowing effective lifelong learning is required. This requires a lot more hardware and energy than for throwaway predictions.

  • cesarvarela 18 hours ago

    Thanks, added to calendar.

  • fullstackchris 5 hours ago

    You're all wrong; the singularity already happened... probably sometime around 2000 B.C... when humans started farming:

    https://chrisfrewin.medium.com/why-the-singularity-is-imposs...

    and just remember, we're still on transformer models, tokens in, tokens out - stuff like this with fancy math is just absolute cruft

  • loumf 15 hours ago

    This is great. Now we won’t have to fix y2K36 bugs.

  • hipster_robot 18 hours ago

    why is everything broken?

    > the top post on hn right now: The Singularity will occur on a Tuesday

    oh

  • markgall 19 hours ago

    > Polynomial growth (t^n) never reaches infinity at finite time. You could wait until heat death and t^47 would still be finite. Polynomials are for people who think AGI is "decades away."

    > Exponential growth reaches infinity at t=∞. Technically a singularity, but an infinitely patient one. Moore's Law was exponential. We are no longer on Moore's Law.

    Huh? I don't get it. e^t would also still be finite at heat death.

    • ecto 18 hours ago

      exponential = mañana

  • bwhiting2356 16 hours ago

    We need contingency plans. Most waves of automation have come in S-curves, where they eventually hit diminishing returns. This time might be different, and we should be prepared for it to happen. But we should also be prepared for it not to happen.

    No one has figured out a way to run a society where able-bodied adults don't have to work, whether capitalist, socialist, or any variation. I look around and there seems to still be plenty of work to do that we either cannot or should not automate: in education, healthcare, and the arts (should not), or in the trades and R&D for the remaining unsolved problems (cannot yet). Many people seem to want to live as though we already live in a post-scarcity world when we don't yet.

  • cryptonector 13 hours ago

    But what does Opus 4.6 say about this?

  • wbshaw 17 hours ago

    I got a strong ChatGPT vibe from that article.

    • willhoyle 16 hours ago

      Same. Sentences structured like these tip me off:

      - Here's the thing nobody tells you about fitting singularities

      - But here's the part that should unsettle you

      - And the uncomfortable answer is: it's already happening.

      - The labor market isn't adjusting. It's snapping.

  • darepublic 18 hours ago

    > Real data. Real model. Real date!

    Arrested Development?

  • wilg 13 hours ago

    > The labor market isn't adjusting. It's snapping. In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993.

    Bad analysis! Layoffs are flat as a board.

    https://fred.stlouisfed.org/series/JTSLDL

  • PantaloonFlames 18 hours ago

    This is what I come here for. Terrific.

  • jibal 7 hours ago

    No one ever learns from Malthus.

    One of the many errors here is assuming that the prediction target lies on the curve. But there's no guarantee (to say the least) that the sorts of improvements that we've seen lead to AGI, ASI, "the singularity", a "social singularity", or any such thing.

  • qwertyuiop_ 15 hours ago

    Who will purchase the goods and services if most people lose their jobs? And who will pay the ad dollars that are supposed to sustain these AI business models if there are no human consumers?

  • dusted 15 hours ago

    Will... will it be televised?

  • peepee1982 3 hours ago

    Who willingly reads this pompous AI slop?

  • neilellis 18 hours ago

    End of the World? Must be Tuesday.

  • nurettin 15 hours ago

    With this kind of scientific rigour, the author could also prove that his aunt is a green parakeet.

  • phanimahesh 8 hours ago

    Am I the only one who found the terminal more interesting?

  • bpodgursky 18 hours ago

    2034? That's the longest timeline prediction I've seen for a while. I guess I should file my taxes this year after all.

  • TZubiri 9 hours ago

    Slight correction: I've been studying token prices these last few weeks, so this caught my eye:

    >"(log-transformed, because the Gemini Flash outlier spans 150× the range otherwise)"

    > "Gemini 2.0 Flash Dec 2024 2,500,000"

    I think OP meant Gemini 2.0 Flash-Lite, which is distinct from Gemini 2.0 Flash. It's also important to consider that this tier had no successor in later models: there is no Gemini 3 Flash-Lite, and Gemini 3 Flash isn't its spiritual successor.

  • Night_Thastus 16 hours ago

    This'll be a fun re-read in ~5 years when most of this has ended up being a nothing burger. (Minus one or two OK use-cases of LLMs)

  • ahurmazda 9 hours ago

    Hail Zorp

  • blurbleblurble 13 hours ago

    Today is Tuesday.

  • vagrantstreet 18 hours ago

    Was expecting some mention of the Universal Approximation Theorem.

    I don't much care if this is semi-satire, as someone else pointed out; the idea that AI will ever become "sentient" or explode into a singularity has to die out, pretty please. Just make some nice Titanfall-style robots or something, a pure tool with one purpose. No more parasocial sycophantic nonsense, please.

  • avazhi 13 hours ago

    Most obviously AI-written post I think I’ve seen.

    Have some personal pride, dude. This is literally a post written by AI hyping up AI and posted to a personal blog as if it were somebody’s personal musings. More slop is just what we need.

  • raphar 15 hours ago

    Why do the plutocrats believe that the entity emerging from the singularity will side with them? Really curious.

  • daveguy 15 hours ago

    What I want to know is how Bitcoin going full tulip and OpenAI going bankrupt would affect the projection. Can they extrapolate that? Extrapolating those two event dates would be sufficient, regardless of their effect on a potential singularity.

  • bradgessler 15 hours ago

    What time?

  • ck2 15 hours ago

    Does "tokens per dollar" have a "moore's law" of doubling?

    Because while machine-learning is not actually "AI" an exponential increase in tokens per dollar would indeed change the world like smartphones once did
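
    FWIW, here's a back-of-the-envelope for an implied doubling time: a minimal sketch assuming exponential growth, with made-up numbers (not real pricing for any model):

      import math

      # Hypothetical tokens-per-dollar at two points in time (illustrative only).
      tpd_start, tpd_end = 50_000, 800_000   # assumed price points
      years = 2.0                            # assumed interval between them

      # Under exponential growth: tpd_end / tpd_start = 2 ** (years / T),
      # where T is the doubling time. Solving for T:
      ratio = tpd_end / tpd_start
      doubling_time = years * math.log(2) / math.log(ratio)

      print(f"implied doubling time: {doubling_time:.2f} years")  # 0.50 with these numbers

    Whether real pricing follows a clean exponential rather than a subsidized step function is the open question.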

  • bitwize 16 hours ago

    Thus will speak our machine overlord: "For you, the day AI came alive was the most important day of your life... but for me, it was Tuesday."

  • brador 16 hours ago

    100% an AI wrote this. Possibly specifically to get to the top spot on HN.

    Those short sentences are the most obvious clue. It’s too well written to be human.

    • tim333 14 hours ago

      The guy kind of talks like that and looks human: https://youtu.be/ccNMwZV3jlM

      The thought process also seems a little too coherent for an LLM, although maybe they are getting better as the great Tuesday approaches?

  • singularfutur 15 hours ago

    The singularity is always scheduled for right after the current funding round closes but before the VCs need liquidity. Funny how that works.

  • Johnny_Bonk 14 hours ago

    Wow what a fun read

  • s32r3 6 hours ago

    w

  • cubefox 18 hours ago

    A similar idea occurred to the Austrian-American cyberneticist Heinz von Foerster in a 1960 paper, titled:

      Doomsday: Friday, 13 November, A.D. 2026
    
    There is an excellent blog post about it by Scott Alexander:

    "1960: The Year The Singularity Was Cancelled" https://slatestarcodex.com/2019/04/22/1960-the-year-the-sing...

  • pickleRick243 15 hours ago

    LLM slop article.

  • CGMthrowaway 15 hours ago

    > 95% CI: Jan 2030–Jan 2041

  • boca_honey 18 hours ago

    Friendly reminder:

    Scaling LLMs will not lead to AGI.

  • u8rghuxehui 14 hours ago

    hi

  • hhh 14 hours ago

    this just feels like ai psychosis slop man

  • s32r3 6 hours ago

    what?

  • api 17 hours ago

    This really looks like it's describing a bubble, a mania. The tech is improving linearly, and most of the time such things asymptote. It'll hit a point of diminishing returns eventually. We're just not sure when.

    The accelerating mania is bubble behavior. It'd be really interesting to have run this kind of model in, say, 1996, a few years before the dot-com peak, and seen whether it would have predicted the collapse.

    What this is predicting is a huge wave of social change associated with AI, not just because of AI itself but perhaps more so as a result of anticipation of and fears about AI.

    I find this scarier than unpredictable sentient machines, because we have data on what this will do. When humans are subjected to these kinds of pressures they have a tendency to lose their shit and freak the fuck out and elect lunatics, commit mass murder, riot, commit genocides, create religious cults, etc. Give me Skynet over that crap.