256 comments

  • cynicalpeace 2 hours ago

    I'm betting against OpenAI. Sam Altman has proven himself and his company untrustworthy. In long running games, untrustworthy players lose out.

    If you disagree, I would argue you have a very sad view of the world, where truth and cooperation are inferior to lies and manipulation.

    • swatcoder 18 minutes ago

      > If you disagree, I would argue you have a very sad view of the world, where truth and cooperation are inferior to lies and manipulation.

      You're holding everyone to a very simple, very binary view with this. It's easy to look around and see many untrustworthy players in very very long running games whose success lasts most of their own lives and often even through their legacy.

      That doesn't mean that "lies and manipulation" trump "truth and cooperation" in some absolute sense, though. It just means that significant long-running games are almost always very multi-faceted and the roads that run through them involve many many more factors than those.

      Those of us who feel most natural being "truthful and cooperative" can find great success ourselves while obeying our sense of integrity, but we should be careful about underestimating those who play differently. They're not guaranteed to lose either.

    • cynicalpeace an hour ago

      A telling quote about Sam, besides the "island of cannibals" one, is actually one Sam published himself:

      "Successful people create companies. More successful people create countries. The most successful people create religions"

      This definition of success is founded on power and control. It's one of the worst definitions you could choose.

      There are nobler definitions, like "Successful people have many friends and family" or "Successful people are useful to their compatriots"

      Sam's published definition (to be clear, he was quoting someone else and then published it) tells you everything you need to know about his priorities.

      • mensetmanusman 24 minutes ago

        Those are boring definitions of success. If you can't create a stable family, you're not successful at one facet, but you could be at another (e.g. Musk).

        • cynicalpeace 14 minutes ago

          Boring is not correlated with how good something is. Most of the bad people in history were not boring. Most of the best people in history were not boring. Correlation with evilness = 0.

          You could have many other definitions that are not boring but also not bad. The definition published by Sam is bad

      • Mistletoe 14 minutes ago

        > The most successful people create religions

        I don't know if I would consider being crucified achieving success. Long term and for your ideology maybe, but for you yourself you are dead.

        I defer to Creed Bratton on this one and what Sam might be into.

        "I've been involved in a number of cults, both as a leader and a follower. You have more fun as a follower, but you make more money as a leader."

      • whamlastxmas an hour ago

        As you said, Sam didn't write that. He was quoting someone else and wasn't even explicitly endorsing it. He was making a comment about how financially successful founders approach building a business as more of a vision and mission that they drive buy-in for, which makes sense as a tactic in the VC world, since you want to impress and convince the very human investors.

        • cynicalpeace 37 minutes ago

          This is the full post:

          ""Successful people create companies. More successful people create countries. The most successful people create religions."

          I heard this from Qi Lu; I'm not sure what the source is. It got me thinking, though--the most successful founders do not set out to create companies. They are on a mission to create something closer to a religion, and at some point it turns out that forming a company is the easiest way to do so.

          In general, the big companies don't come from pivots, and I think this is most of the reason why."

          Sounds like an explicit endorsement lol

          • alfonsodev 26 minutes ago

            Well, it's an observation. Intellectual people like to make connections; to me, observing something or sharing a connection you made in your mind is not necessarily endorsing the statement about power.

            He's dissecting it and connecting it with the idea that if you have a bigger vision and the ability to convince people, making a company is just an "implementation detail" … oh well … you might be right after all … but I suspect it's more nuanced, and that he is not endorsing religions as a means of obtaining success. I want to believe that he meant the visionary, bigger-than-yourself, well-intended view of it.

            • cynicalpeace 19 minutes ago

              I'm sure if we were to confront him on it, he would give a much more nuanced view of it. But unprompted, he assumed it as true and gave further opinions based on that assumption.

              That tells us, at the very least, this guy is suspicious. Then you mix in all the other lies and it's pretty obvious I wouldn't trust him with my dog.

          • 93po 26 minutes ago

            "It got me thinking" is not an endorsement

            • cynicalpeace 18 minutes ago

              "this is most of the reason why". He's assuming it as true.

    • thorum 37 minutes ago

      There seems to be an ongoing mass exodus of their best talent to Anthropic and other startups. Whatever their moat is, that has to catch up with them at some point.

    • tbrownaw an hour ago

      > If you disagree, I would argue you have a very sad view of the world, where truth and cooperation are inferior to lies and manipulation.

      Arguing what is based on what should be seems maybe a bit questionable?

      • cynicalpeace an hour ago

        Fortunately, I'm arguing they're one and the same: "in long running games, untrustworthy players lose out".

        That is both what is and what should be. We tend to focus on the bad, but fortunately most of the time the world operates as it should.

        • fourside an hour ago

          You don't back up why you think this is the case. You only say that to think otherwise makes for a sad view of the world.

          I’d argue that you can find examples of companies that were untrustworthy and still won. Oracle stands out as one with a pretty poor reputation that nevertheless has sustained success.

          The problem for OpenAI here is that they need the support of tech giants and they broke the trust of their biggest investor. In that sense, I’d agree that they bit the hand that was feeding them. But it’s not because in general all untrustworthy companies/leaders lose in the end. OpenAI’s dependence on others for success is key.

          • cynicalpeace 26 minutes ago

            There's mountains of research both theoretical and empirical that support exactly this point.

            There's also mountains of research both theoretical and empirical that argue against exactly this point.

            The problem is most papers on many scientific subjects are not replicable nowadays [0], hence my appeal to common sense, character, and wisdom. Highly underrated, especially on platforms like Hacker News where everything you say needs a double blind randomized controlled study.

            This point^ should actually be a fundamental factor in how we determine truth nowadays. We must reduce our reliance on "the science" and go back to the scientific method of personal experimentation. Try lying to a business partner a few times; let's see how that goes.

            We can look at specific cases where it holds true, like in this case. There may be cases where it doesn't hold true. But your own experimentation will show it holds true more often than not, which is why I'd bet against OpenAI.

            [0] https://en.wikipedia.org/wiki/Replication_crisis

            • mrtranscendence 4 minutes ago

              Prove what point? There have clearly been crooked or underhanded companies that achieved success. Microsoft in its early heyday, for example. The fact that they paid a price for it doesn't obviate the fact that they still managed to become one of the biggest companies in history by market cap despite their bad behavior. Heck, what about Donald Trump? Hardly anyone in business has their crookedness as extensively documented as Trump and he has decent odds of being a two-term US President.

              What about the guy who repaired my TV once, where it worked for literally a single day, and then he 100% ghosted me? What was I supposed to do, try to get him canceled online? Seems like being a little shady didn't manage to do him any harm.

              It's not clear to me whether it's usually worth it to be underhanded, but it happens frequently enough that I'm not sure the cost is all that high.

    • m3kw9 an hour ago

      What makes you think MS is trustworthy? The focus on OpenAI and the media that spins things drive public opinion.

    • m3kw9 43 minutes ago

      You should also say for simple games

    • greenthrow an hour ago

      Elon Musk alone disproves your theory. I wish I agreed with you, I'm sure I'd be happier. But there are just too many successful sociopaths. Hell, there was a popular book about it.

      • 015a an hour ago

        You should really read the OP's theory as: clearly untrustworthy people lose out. Trustworthy people, and unclearly untrustworthy people, win.

        OAI's problem isn't that Sam is untrustworthy; he's just too obviously untrustworthy.

        • cynicalpeace an hour ago

          Yes correct. And hopefully untrustworthy people become clearly untrustworthy people eventually.

          Elon is not "untrustworthy" because of some ambitious deadlines or some stupid statements. He's plucking rockets out of the air and doing it super cheap whereas all competitors are lining their pockets with taxpayer money.

          You add in everything else (free speech, speaking his mind at great personal risk, tesla), he reads as basically trustworthy to me.

          When he says he's going to do something and he explains why, I basically believe him, knowing deadlines are ambitious.

          • hobs 13 minutes ago

            There are so many demos where Elon has faked and lied that it's very surprising to have him read as "basically trustworthy", even if he has done other stuff: dancing people dressed as robots alongside fake robot demos, the fake solar roof, fake full self-driving, really fake promises about cyber taxis and Teslas paying for themselves (like 7 years ago?).

            The free speech part also reads completely hollow when the guy's first actions were to ban his critics on the platform and bring back self-avowed Nazis. You could argue one of those things is in favor of free speech, but generally doing both just implies you are into the Nazi stuff.

      • paulryanrogers an hour ago

        Still depends on the definition of success. Money and companies with high stock prices? Healthy family relationships and rich circle of diverse friends?

        • cynicalpeace an hour ago

          I would argue this is not subjective. "Healthy family relationships and rich circle of diverse friends" is an objectively better definition than "Money and companies with high stock prices".

          I await with arms crossed all the lost souls arguing it's subjective.

          • genrilz 38 minutes ago

            While I personally also consider my relationships to be more important than my earnings, I am still going to argue that it's subjective. Case in point, both you and I disagree with Altman about what success means. We are all human beings, and I don't see any objective way to argue one definition is better than another.

            In case you are going to make an argument about how happiness or some related factor objectively determines success, let me head that off. Altman thinks that power rather than happiness determines success, and is also a human being. Why objectively is his opinion wrong and yours right? Both of your definitions just look like people's opinions to me.

            • cynicalpeace 4 minutes ago

              Arms crossed

              Was not going to argue happiness at all. In fact, happiness seems a very hedonistic and selfish way to measure it too.

              My position is more Mother Goose-like. We simply have basic morals that we teach children but don't apply to ourselves. Be honest. Be generous. Be fair. Be strong. Don't be greedy. Be humble.

              That these are objectively moral is unprovable but true.

              It's religious and stoic in nature.

              It's anathema to HN, I know.

      • npinsker an hour ago

        Sociopathy isn’t the same thing as duplicity.

      • cynicalpeace an hour ago

        Musk kicks butt and is taking us to space. He proves my theory.

  • qwertox 12 minutes ago

    OpenAI would deserve to get dumped by MS. Just like "the boss" dumped everyone, including his own principles.

    Maybe that's why Sam Altman is so eager to get billions and build his own datacenters.

  • WithinReason 8 hours ago

    Does OpenAI have any fundamental advantage beyond brand recognition?

    • qwertox a few seconds ago

      Nothing which other companies couldn't catch up with if OpenAI were to break down / slow down for a year (e.g. because they lost their privileged access to computing resources).

      Engineers would quit and start improving the competition. They're still a bit fragile, in my view.

    • og_kalu an hour ago

      The ChatGPT site crossed 3B visits last month (for perspective: https://imgur.com/a/hqE7jia). It has been >2B since May this year and >1.5B since March 2023. The summer slump of last year? Completely gone.

      Gemini and Character AI? A few hundred million. Claude? Doesn't even register. And the gap has only been increasing.

      So, "just" brand recognition? That feels like saying Google "just" has brand recognition over Bing.

      https://www.similarweb.com/blog/insights/ai-news/chatgpt-top...

    • idunnoman1222 3 hours ago

      Yes, they already collected all the data. The same data has since had walls put up around it.

      • Implicated 3 hours ago

        While I recognize this, I have to assume that the other "big players" already have this same data, i.e. anyone with a search engine that's been crawling the web for decades. New entries to the race? Not so much, given the new walls and such.

      • ugh123 2 hours ago

        Which data? Is that data that Google and/or Meta can't get or doesn't have already?

        • jazzyjackson 8 minutes ago

          Well, at this point most new data being created is conversations with ChatGPT, seeing as how Stack Overflow and Reddit are increasingly useless, so their conversation logs are their moat.

      • lolinder an hour ago

        That gives the people who've already started an advantage over newcomers, but it's not a unique advantage to OpenAI.

        The question really should be what if anything gives OpenAI an advantage over Anthropic, Google, Meta, or Amazon? There are at least four players intent on eating OpenAI's market share who already have models in the same ballpark as OpenAI. Is there any reason to suppose that OpenAI keeps the lead for long?

        • XenophileJKO an hour ago

          I think their current advantage is a willingness to risk public usage of frontier technology. This has been, and I predict will remain, their unique dynamic. It forced the entire market to react, but the others are still reacting reluctantly. I just played with Gemini this morning, for example, and it won't make an image with a person in it at all. I think that is all you need to know about most of the competition.

          • lolinder 22 minutes ago

            How about Anthropic?

            • jazzyjackson 5 minutes ago

              Aren't they essentially run by safetyists? So they would be less willing to release a model that pushes the boundaries of capability and agency

      • throwup238 2 hours ago

        Most of the relevant data is still in the Common Crawl archives, up until people started explicitly opting out of it in the last couple of years.

    • JeremyNT 2 hours ago

      "There is no moat" etc.

      Getting to market first is obviously worth something but even if you're bullish on their ability to get products out faster near term, Google's going to be breathing right down their neck.

      They may have some regulatory advantages too, given that they're (sort of) not a part of a huge vertically integrated tech conglomerate (i.e. they may be able to get away with some stuff that Google could not).

    • usaar333 4 hours ago

      Talent? Integrations? Ecosystem?

      I don't know if this is going to emerge as a monopoly, and it likely won't, but for whatever reason, OpenAI and Anthropic have been several months ahead of everyone else for quite some time.

      • causal 3 hours ago

        I think the perception that they're several months ahead of everyone is also a branding achievement: They are ahead on Chat LLMs specifically. Meta, Google, and others crush OpenAI on a variety of other model types, but they also aren't hyping their products up to the same degree.

        Segment Anything 2 is fantastic, but less mysterious because it's open source. NotebookLM is amazing, but nobody is rushing to create benchmarks for it. AlphaFold is never going to be used by consumers like ChatGPT.

        OpenAI is certainly competitive, but they also work overtime to hype everything they produce as "one step closer to the singularity" in a way that the others don't.

        • usaar333 an hour ago

          Anthropic isn't really hyping their product that much. It just is really good.

    • srockets 4 hours ago

      An extremely large compute commitment with Azure. AFAIK, none of the other non-hyperscaler competitors have access to that much compute.

      • dartos 2 hours ago

        > non-hyperscaler competitors

        Well the hyperscale companies are the ones to worry about.

      • HarHarVeryFunny an hour ago

        Pretty sure that Meta and X.ai both do.

      • ponty_rick 2 hours ago

        Anthropic has the same with AWS

    • pal9000 2 hours ago

      Every time I ask this myself, OpenAI comes up with something new and groundbreaking and other companies play catch-up. The last was the Realtime API. What are they doing right? I don't know.

      • lolinder an hour ago

        OpenAI is playing catch-up of their own. The last big announcement they had was "we finally built Artifacts".

        This is what happens when there's vibrant competition in a space. Each company is innovating and each company is trying to catch up to their competitors' innovations.

        It's easy to limit your view to only the places where OpenAI leads, but that's not the whole picture.

    • julianeon an hour ago

      No (broadly defined). But if you believe in OpenAI, you believe that's enough.

    • mhh__ 8 hours ago

      It's possible that it's only one strong personality and some money away, but my guess is that OpenAI-rosoft have the best stack for doing inference "seriously" at big, big scale, e.g. moving away from hacky research Python code and so on.

      • erickj 8 hours ago

        It's pretty hard to ignore Google in any discussion of big scale.

        • mhh__ 8 hours ago

          Completely right. Was basically only thinking about OpenAI versus Anthropic. Oops

          • XenophileJKO 36 minutes ago

            Google, with their corporate structure, is too cautious to be a serious competitor.

        • luckydata 4 hours ago

          They seem to have managed to do so just fine :)

    • mock-possum 3 hours ago

      Does Kleenex?

      I’ve heard plenty of people call any chatbot “chat gpt” - it’s becoming a genericized household name.

      • insane_dreamer 2 hours ago

        My 8-year-old knows what ChatGPT is but has never heard of any other LLM (or OpenAI for that matter). They're all "chatGPT", in the same way they refer to searching the internet as "googling" (and are unaware of Bing, DDG, or any other search engine).

      • aksss 2 hours ago

        What’s the killer 2-syllable word (google, Kleenex)??

        ChatGPT is a mouthful. Even Copilot rolls off the tongue more easily, though it obviously doesn't have the mindshare.

        Generic "GPT" would be better, but you end up saying "GPT-style tool", which is worse.

        • sorenjan 2 hours ago

          I think it shows really well how OpenAI was caught off guard when ChatGPT got popular and proved to be unexpectedly useful for a lot of people. They just gave it a technical name for what it was, a Generative Pre-trained Transformer model that was fine-tuned for chat-style interaction. If they had had any plans on making a product close to what it is today, they would have given it a catchier name. And now they're kind of stuck with it.

          • jazzyjackson 2 minutes ago

            I agree, but OTOH it distinguishes itself as a new product class better than if they had given it a name like Siri, Alexa, Gemini, or Jeeves.

        • WorldPeas 29 minutes ago

          the less savvy around me simply call it "chat" and it's understood by context

        • jazzyjackson 4 minutes ago

          "I asked the robot"

        • Fuzzwah an hour ago

          You're not saying 'gippity' yet?

      • CPLX an hour ago

        If you invested in Kleenex at OpenAI valuations you would lose nearly all your money quite quickly.

    • thelittleone 3 hours ago

      One hypothetical advantage could be secret agreements / cooperation with certain agencies. That may help influence policy in line with OpenAI's preferred strategy on safety, model access etc.

    • riku_iki 3 hours ago

      They researched and developed end-to-end infra plus a high-quality product, which MS doesn't have (few other players have it).

      • mlnj 43 minutes ago

        And every one of these catch-up companies has caught up with only a small lag.

    • piva00 7 hours ago

      Not really sure since this space is so murky due to the rapid changes happening. It's quite hard to keep track of what's in each offering if you aren't deep into the AI news cycle.

      Now personally, I've left the ChatGPT world (meaning I don't pay for a subscription anymore) and have been using Claude from Anthropic much more often for the same tasks, it's been better than my experience with ChatGPT. I prefer Claude's style, Artifacts, etc.

      Also been toying with local LLMs for tasks that I know don't require a multi-hundred billion parameters to solve.

      • sunnybeetroot 7 hours ago

        Claude is great except for the fact the iOS app seems to require a login every week. I’ve never had to log into ChatGPT but Claude requires a constant login and the passwordless login makes it more of a pain!

        • juahan 3 hours ago

          Sounds weird, I have had to login exactly once on my iOS devices.

      • Closi 3 hours ago

        ChatGPT-O1 is quite a bit better at certain complex tasks IMO (e.g. writing a larger bit of code against a non-trivial spec and getting it right).

        Although there are also some tasks that Claude are better at too.

      • tempusalaria 7 hours ago

        I also like 3.5 Sonnet as the best model (best UI too) and it's the one I ask questions to.

        We use Gemini Flash in prod. The latency and cost are just unbeatable; our product uses LLMs for lots of simple tasks, so we don't need a frontier model.

  • aithrowawaycomm 2 hours ago

    I am hardly a fan of OpenAI and they probably deserve to go out of business, but their staffers are definitely smarter and harder-working than I am. Hearing that a slimy mediocrity like Mustafa Suleyman (allegedly) yelled at OpenAI staffers for not delivering technology quickly enough really grinds my gears. Pure PHB behavior. Suleyman is one of the emptiest suits in all of commercial AI, someone who AFAICT owes his career entirely to a childhood friendship with Demis Hassabis.

    I think Satya Nadella is a smart cookie, but that is not mutually exclusive with being a patsy and a rube who makes terrible decisions. Suleyman and Altman are both huge idiots and plain con artists, but they speak with the BS incantations of aspirational technologists, and that seems to work on a lot of people.

  • mikeryan 8 hours ago

    Technical AI and LLMs are not something I'm well versed in. So as I sit on the sidelines and see the current proliferation of AI startups, I'm starting to wonder where the moats are, outside of access to raw computing power. OpenAI seemed to have a massive lead in this space, but that lead seems to be shrinking every day.

    • InkCanon 8 hours ago

      You hit the nail on the head. Companies are scrambling for an edge. Not a real edge, an edge to convince investors to keep giving them money. Perplexity is going all in on convincing VCs it can create a "data flywheel".

    • weberer 8 hours ago

      Obtaining high quality training data is the biggest moat right now.

      • segasaturn an hour ago

        Where are they going to get that data? Everything on the open web after 2023 is polluted with low-quality AI slop that poisons the data sets. My prediction: aggressive dragnet surveillance of users. As in, Google recording your phone calls on Android, Windows sending screen recordings from Recall to OpenAI, Meta training off WhatsApp messages... It sounds dystopian, but the Line Must Go Up.

    • wongarsu 8 hours ago

      Data. You want huge amounts of high quality data with a diverse range of topics, writing styles and languages. Everyone seems to balance those requirements a bit differently, and different actors have access to different training data

      There is also some moat in the refinement process (rlhf, model "safety" etc)

    • runeblaze 4 hours ago

      In addition to data, having the infra to scale up training robustly is very very non-trivial.

    • YetAnotherNick 3 hours ago

      > OpenAI seemed to have a massive lead in this space, but that lead seems to be shrinking every day.

      The lead is as strong as ever. They are 34 ELO above anyone else in blind testing, and 73 ELO above in coding [1]. They also seem to artificially constrain the lead, as they already have a stronger model like o1 which they haven't released. Consistent with the past, they seem to release just <50 ELO above anyone else, and upgrade the model within weeks when someone gets closer.

      [1]: https://lmarena.ai/

      • adventured 2 hours ago

        It's rather amusing that people have said this about OpenAI - that they essentially had no lead - for about two years non-stop.

        The moat as usual is extraordinary scale, resources, time. Nobody is putting $10 billion into the 7th OpenAI clone. Big tech isn't aggressively partnering with the 7th OpenAI clone. The door is already shut to that 7th OpenAI clone (they can never succeed or catch-up), there's just an enormous amount of naivety in tech circles about how things work in the real world: I can just spin up a ChatGPT competitor over the weekend on my 5090, therefore OpenAI have no barriers to entry, etc.

        HN used to endlessly talk about how Uber could be cloned in a weekend. It's just people talking about something they don't actually understand. They might understand writing code (or similar) and their bias extends from the premise that their thing is the hard part of the equation (writing the code, building an app, is very far from the hardest part of the equation for an Uber).

        • TeaBrain 18 minutes ago

          No-one was saying this 2 or even 1.5 years ago.

    • Der_Einzige 2 hours ago

      How can anyone say that the lead is shrinking when still no one has any good competitor to Strawberry? DSPy has been out for how long, and how many folks have shown better reasoning models than Strawberry built with literally anything else? Oh yeah, zero.

  • Roark66 8 hours ago

    > OpenAI plans to lose $5 billion this year

    Let that sink in for anyone who has incorporated ChatGPT into their work routines to the point that their normal skills start to atrophy. Imagine in 2 years' time OpenAI goes bust and MS gets all the IP. Now you can't really do your work without ChatGPT, but its cost has been brought up to how much it really costs to run. Maybe $2k per month per person? And you get about 1h of use per day for that money too...

    I've been saying for ages, being a Luddite and abstaining from using AI is not the answer (no one is tilling the fields with oxen anymore either). But it is crucial to at the very least retain locally 50% of the capability that hosted models like ChatGPT offer.

    • zuminator 2 hours ago

      Where are you getting $2k/person/month? ChatGPT allegedly has on the order of 100 million users. Divide $5B by that and you get a $50 deficit per person per year. Meaning they could raise their prices by less than four and a half dollars per user per month to break even.

      Even if they were to only gouge the current ~11 million paying subscribers, that's around $40/person/month over current fees to break even. Not chump change, but nowhere close to $2k/person/month.
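
      A quick back-of-envelope check of that arithmetic (a rough sketch; the ~$5B loss, ~100M users, and ~11M paying subscribers are just the approximate figures cited in this thread, and Python is only used as a calculator):

          # Rough break-even math from the approximate figures above.
          annual_loss = 5_000_000_000        # ~$5B reported annual loss
          total_users = 100_000_000          # ~100M users
          paying_subscribers = 11_000_000    # ~11M paying subscribers

          # Spread over every user: roughly $4.2/user/month (~$50/year).
          per_user_month = annual_loss / total_users / 12

          # Loaded only onto paying subscribers: roughly $38/subscriber/month on top of current fees.
          per_subscriber_month = annual_loss / paying_subscribers / 12

          print(f"${per_user_month:.2f}/user/month, ${per_subscriber_month:.2f}/subscriber/month")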

      • alpha_squared 2 hours ago

        What you're suggesting is the basic startup math for any typical SaaS business. The problem is OpenAI and the overall AI space is raising funding on the promise of being much more than a SaaS. If we ignore all the absurd promises ("it'll solve all of physics"), the promise to investors is distilled down to this being the dawn of a new era of computing and investors have responded by pouring in hundreds of billions of dollars into the space. At that level of investment, I sure hope the plan is to be more than a break-even SaaS.

      • layer8 an hour ago

        > ChatGPT allegedly has on the order of 100 million users.

        That’s users, not subscribers. Apparently they have around 10 million ChatGPT Plus subscribers plus 1 million business-tier users: https://www.theinformation.com/articles/openai-coo-says-chat...

        To break even, that means that ChatGPT Plus would have to cost around $50 per month, if not more, because fewer people will be willing to pay that.

        • zuminator 2 minutes ago

          You only read the first half of my comment and immediately went on the attack. Read the whole thing.

      • ants_everywhere 2 hours ago

        I think the question is more how much the market will bear in a world where MS owns the OpenAI IP and it's only available as an Azure service. That's a different question from what OpenAI needs to break even this year.

    • sebzim4500 8 hours ago

      The marginal cost of inference per token is lower than what OpenAI charges you (IIRC about 2x cheaper); they make a loss because of the enormous costs of R&D and training new models.

      • tempusalaria 8 hours ago

        It’s not clear this is true because reported numbers don’t disaggregate paid subscription revenue (certainly massively GP positive) vs free usage (certainly negative) vs API revenue (probably GP negative).

        Most of their revenue is the subscription stuff, which makes it highly likely they lose money per token on the API (not surprising, as they are in a price war with Google et al).

        If you have an enterprise ChatGPT sub, you have to consume around 5 million tokens a month to match the cost of using the API for GPT-4o. At 100 words per minute, that's 35 days of continuous typing, which shows how ridiculous the cost of the API vs the subscription is.
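
        Roughly where those numbers come from (a sketch; the ~$50/seat/month enterprise price and ~$10 per 1M GPT-4o output tokens are approximations, and one token is treated as roughly one typed word):

            # Back-of-envelope: how many API tokens equal one enterprise seat?
            seat_cost_per_month = 50.0            # assumed enterprise seat price, USD
            api_price_per_1m_tokens = 10.0        # approximate GPT-4o output price, USD
            tokens_to_match = seat_cost_per_month / api_price_per_1m_tokens * 1_000_000  # ~5M tokens

            # At ~100 tokens (roughly 100 words) per minute of nonstop typing:
            days = tokens_to_match / 100 / 60 / 24
            print(f"{tokens_to_match / 1e6:.0f}M tokens ≈ {days:.0f} days of continuous typing")  # ~35 days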

        • seizethecheese 4 hours ago

          In summary, the original point of this thread is wrong. There's essentially no future where these tools disappear or become unavailable at a reasonable cost for consumers. Much more likely is that they get way better.

      • diggan 8 hours ago

        Did OpenAI publish concrete numbers regarding this, or where are you getting this data from?

        • lukeschlather 4 hours ago

          https://news.ycombinator.com/item?id=41833287

          This says 506 tokens/second for Llama 405B on a machine with 8x H200s, which you can rent for $4/GPU/hour, so probably $40/hour for a server with enough GPUs. And so it can do ~1.8M tokens per hour. OpenAI charges $10/1M output tokens for GPT-4o. (Input tokens and cached tokens are cheaper, but these are just ballpark estimates.) So if it were 405B it might cost $20/1M output tokens.

          Now, OpenAI is a little vague, but they have implied that GPT4o is actually only 60B-80B parameters. So they're probably selling it with a reasonable profit margin assuming it can do $5/1M output tokens at approximately 100B parameters.

          And even if they were selling it at cost, I wouldn't be worried because a couple years from now Nvidia will release H300s that are at least 30% more efficient and that will cause a profit margin to materialize without raising prices. So if I have a use case that works with today's models, I will be able to rent the same thing a year or two from now for roughly the same price.
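
          The same arithmetic spelled out (a sketch using the rough figures above; the ~100B parameter estimate for GPT-4o and the assumption that cost scales roughly linearly with parameter count are both guesses, not published numbers):

              # Ballpark inference cost from the figures above.
              tokens_per_second = 506            # Llama 405B on 8x H200 (from the linked thread)
              server_cost_per_hour = 40.0        # ~8 GPUs at ~$4/GPU-hour, rounded up

              tokens_per_hour = tokens_per_second * 3600                              # ~1.8M tokens/hour
              cost_405b_per_1m = server_cost_per_hour / (tokens_per_hour / 1_000_000)  # roughly $20/1M

              # If GPT-4o is really ~100B parameters and cost scales very roughly with size:
              cost_100b_per_1m = cost_405b_per_1m * (100 / 405)                        # roughly $5/1M

              print(f"405B: ~${cost_405b_per_1m:.0f}/1M tokens, 100B-class: ~${cost_100b_per_1m:.0f}/1M tokens")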

      • ignoramous 8 hours ago

        > The marginal cost of inference per token is lower than what OpenAI charges you

        Unlike most Gen AI shops, OpenAI also incurs a heavy cost for training base models gunning for SoTA, which involves drawing power from a literal nuclear reactor inside data centers.

        • candiddevmike 7 hours ago

          > literal nuclear reactor inside data centers

          This is fascinating to think about. Wonder what kind of shielding/environmental controls/all other kinds of changes you'd need for this to actually work. Would rack-sized SMR be contained enough not to impact anything? Would datacenter operators/workers need to follow NRC guidance?

          • kergonath 2 hours ago

            It makes zero sense to build them in datacenters and I don’t know of any safety authority that would allow deploying reactors without serious protection measures that would at the very least impose a different, dedicated building.

            At some point it does make sense to have a small reactor powering a local datacenter or two, however. Licensing would still be not trivial.

          • talldayo 4 hours ago

            I think the simple answer is that it doesn't make sense. Nuclear power plants generate a byproduct that inherently limits the performance of computers; heat. Having either a cooling system, reactor or turbine located inside a datacenter is immediately rendered pointless because you end up managing two competing thermal systems at once. There is no reason to localize a reactor inside a datacenter when you could locate it elsewhere and pipe the generated electricity into it via preexisting high voltage lines.

            • kergonath 2 hours ago

              > Nuclear power plants generate a byproduct that inherently limits the performance of computers; heat.

              The reactor does not need to be in the datacenter. It can be a couple hundred meters away; bog-standard cables would be perfectly able to move the electrons. The cables being 20m or 200m long does not matter much.

              You’re right though, putting them in the same building as a datacenter still makes no sense.

        • fransje26 8 hours ago

          > from a literal nuclear reactor inside data centers.

          No.

    • X6S1x6Okd1st 2 hours ago

      ChatGPT doesn't have much of a moat. Claude is comparable for coding tasks and Llama isn't far behind.

      No biz collapse will remove llama from the world, so if you're worried about tools disappearing then just only use tools that can't disappear

    • marcosdumay 4 hours ago

      > being a Luddite and abstaining from using AI is not the answer

      Hum... The jury is still out on that one, but the evidence is piling up on the side of "yes, not using it is what works best" here. Personally, my experience is strongly negative, and I've seen other people get very negative results from it too.

      Maybe it will improve so much that at some point people actually get positive value from it. My best guess is that we are not there yet.

      • Kiro 2 hours ago

        It's not either or. In my specific situation Cursor is such a productivity booster that I can't imagine going back. It's not a theoretical question.

      • bigstrat2003 3 hours ago

        Yeah, I agree. It's not "being a Luddite" to take a look and conclude that the tool doesn't actually deliver the value it claims to. When AI can actually reliably do the things its proponents say it can do, I'll use it. But as of today it can't, and I have no use for tools that only work some of the time.

    • hmottestad 8 hours ago

      Cost tends to go down with time as compute becomes cheaper. And as long as there is competition in the AI space it's likely that other companies would step in and fill the void created by OpenAI going belly up.

      • infecto 8 hours ago

        I tend to think along the same lines. If they were the only player in town it would be different. I am also not convinced $5 billion is that big of a deal for them; it would be interesting to see their modeling, but it would be a lot more suspect if they were raising money and increasing the price of the product. Also curious how much of that spend is R&D compared to running the system.

      • ToucanLoucan 8 hours ago

        > Cost tends to go down with time as compute becomes cheaper.

        This is generally true but seems to be, if anything, inverted for AI. These models cost billions to train in compute, and OpenAI thus far has needed to put out a brand new one roughly annually in order to stay relevant. This would be akin to Apple putting out a new iPhone that cost billions to engineer year over year, but giving the things away for free on the corner and only asking for money for the versions with more storage and what have you.

        The vast majority of AI adjacent companies too are just repackaging OpenAI's LLMs, the exceptions being ones like Meta, which certainly has a more solid basis what with being tied to an incredibly profitable product in Facebook, but also... it's Meta and I'm sure as shit not using their AI for anything, because it's Meta.

        I did some back-of-napkin math in a comment a ways back and landed on the conclusion that, in order to break even merely on training costs, not including the rest of the expenditure of the company, they would need to charge all of their current subscribers $150 per month, up from... I think the most expensive right now is about $20? So nearly an 8-fold price increase, with no attrition, just to break even. And I'm guessing all these investors they've had are not interested in a zero sum.

        • authorfly 3 hours ago

          This reasoning about the subscription price etc is undermined by the actual prices OpenAI are charging -

          The price of a model capable of 4o mini level performance used to be 100x higher.

          Yes, literally 100x. The original "davinci" model (and I paid five figures to use it throughout 2021-2022) cost $0.06/1k tokens.
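
          A quick check of that ratio (a sketch; the davinci figure is from above, and the ~$0.60 per 1M output tokens for a current 4o-mini-class model is my approximation):

              # Rough price-drop check.
              davinci_2021_per_1k_tokens = 0.06     # USD, original davinci pricing
              mini_2024_per_1k_tokens = 0.0006      # assumed ~$0.60 per 1M output tokens today
              print(f"~{davinci_2021_per_1k_tokens / mini_2024_per_1k_tokens:.0f}x cheaper")  # ~100x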

          So it's not inverting in running costs (which are the thing that will kill a company). Struggling with training costs (which is where you correctly identify OpenAI is spending) will stop growth perhaps, but won't kill you if you have to pull the plug.

          I suspect subscription prices are based on market capture and perceived customer value, plus plans for training, not running costs.

        • Mistletoe 4 hours ago

          The closest analog seems to be bitcoin mining, which continually increases difficulty. And if you've ever researched how many bitcoin miners go under...

          • lukeschlather 4 hours ago

            It's nothing like bitcoin mining. Bitcoin mining is intentionally designed so that it gets harder as people use it more, no matter what.

            With LLMs, if you have a use case which can run on an H100 or whatever and costs $4/hour, and the LLM has acceptable performance, it's going to be cheaper in a couple years.

            Now, all these companies are improving their models but they're doing that in search of magical new applications the $4/hour model I'm using today can't do. If the $4/hour model works today, you don't have to worry about the cost going up. It will work at the same price or cheaper in the future.

            • Mistletoe 3 hours ago

              But OpenAI has to keep releasing new ever-increasing models to justify it all. There is a reason they are talking about nuclear reactors and Sam needing 7 trillion dollars.

              One other difference from Bitcoin is that the price of Bitcoin rises to make it all worth it, but we have the opposite expectation with AI where users will eventually need to pay much more than now to use it, but people only use it now because it is free or heavily subsidized. I agree that current models are pretty good and the price of those may go down with time but that should be even more concerning to OpenAI.

              • kergonath 2 hours ago

                > But OpenAI has to keep releasing new ever-increasing models to justify it all.

                There seems to be some renewed interest for smaller, possibly better-designed LLMs. I don’t know if this really lowers training costs, but it makes inference cheaper. I suspect at some point we’ll have clusters of smaller models, possibly activated when needed like in MoE LLMs, rather than ever-increasing humongous models with 3T parameters.

    • switch007 8 hours ago

      $2k is way way cheaper than a junior developer which, if I had to guess their thinking, is who the Thought Leaders think it'll replace.

      Our Thought Leaders think like that at least. They also pretty much told us to use AI or get fired

      • srockets 4 hours ago

        I found those tools to resemble an intern: they can do some tasks pretty well, when explained just right, but others you'd spend more time guiding than it would have taken you to do it yourself.

        And rarely can you or the model/intern tell ahead of time which tasks are in each of those categories.

        The difference is, interns grow and become useful in months: the current rate of improvements in those tools isn't even close to that of most interns.

        • luckydata 4 hours ago

          I have a slightly different view. IMHO LLMs are excellent rubber ducks or pair programmers. The rate at which I can try ideas and get them back is much higher than what I would be doing by myself. It gets me unstuck in places where I might have spent the best part of a day in the past.

          • srockets 4 hours ago

            My experience differs: if at all, they get me unstuck by trying to shove bad ideas, which allows me to realize "oh, that's bad, let's not do that". But it's also extremely frustrating, because a stream of bad ideas from a human has some hope they'll learn, but here I know I'll get the same BS, only with an annoying and inhumane apology boilerplate.

            • Kiro 2 hours ago

              Not my experience at all. What kind of code are you using it for?

      • kergonath 2 hours ago

        > Our Thought Leaders think like that at least. They also pretty much told us to use AI or get fired

        Ours told us not to use LLMs because they are worried about leaking IP and confidential data.

      • ilrwbwrkhv 7 hours ago

        Which thought leader is telling you to use AI or get fired?

        • switch007 7 hours ago

          My CTO (C level is automatically a Thought Leader)

      • CamperBob2 5 hours ago

        It's premature to think you can replace a junior developer with current technology, but it seems fairly obvious that it'll be possible within 5-10 years at most. We're well past the proof-of-concept stage IMO, based on extensive (and growing) personal experience with ML-authored code. Anyone who argues that the traditional junior-developer role isn't about to change drastically is whistling past the graveyard.

        Your C-suite execs are paid to skate where that particular puck is going. If they didn't, people would complain about their unhealthy fixation on the next quarter's revenue.

        Of course, if the junior-developer role is on the chopping block, then more experienced developers will be next. Finally, the so-called "thought leaders" will find themselves outcompeted by AI. The ability to process very large amounts of data in real time, leveraging it to draw useful conclusions and make profitable predictions based on ridiculously-large historical models, is, again, already past the proof-of-concept stage.

        • actsasbuffoon 4 hours ago

          Unless I’ve missed some major development then I have to strenuously disagree. AI is primarily good at writing isolated scripts that are no more than a few pages long.

          99% of the work I do happens in a large codebase, far bigger than anything that you can feed into an AI. Tickets come in that say something like, “Users should be able to select multiple receipts to associate with their reports so long as they have the management role.”

          That ticket will involve digging through a whole bunch of files to figure out what needs to be done. The resolution will ultimately involve changes to multiple models, the database schema, a few controllers, a bunch of React components, and even a few changes in a micro service that’s not inside this repo. Then the AI is going to fail over and over again because it’s not familiar with the APIs for our internal libraries and tools, etc.

          AI is useful, but I don’t feel like we’re any closer to replacing software developers now than we were a few years ago. All of the same showstoppers remain.

          • Kiro 2 hours ago

            Cursor has no problem making complicated PRs spanning multiple files and modules in my legacy spaghetti code. I wouldn't be surprised if it could replace most programmers already.

          • CamperBob2 3 hours ago

            All of the code you mention implements business logic, and you're right, it's probably not going to be practical to delegate maintenance of existing code to an ML model. What will happen, probably sooner than you think, is that that code will go away and be replaced by script(s) that describe the business logic in something close to declarative English. The AI model will then generate the code that implements the business logic, along with the necessary tests.

            So when maintenance is required, it will be done by adding phrases like "Users should be able to select multiple receipts" to the existing script, and re-running it to regenerate the code from scratch.

            Don't confuse the practical limitations of current models with conceptual ones. The latter exist, certainly, but they will either be overcome or worked around. People are just not as good at writing code as machines are, just as they are not as good at playing strategy games. The models will continue to improve, but we will not.

            • prewett 2 hours ago

              The problem is, the feature is never actually "users should be able to select multiple receipts". It's "users should be able to select multiple receipts, but not receipts for which they only have read access and not write access, and not when editing a receipt, and should persist when navigating between the paginated data but not persist if the user goes to a different 'page' within the webapp. The selection should be a thick border around the receipt, using the webapp selection color and the selection border thickness, except when using the low-bandwidth interface, in which case it should be a checkbox on the left (or on the right if the user is using a RTL language). Selection should adhere to standard semantics: shift selects all items from the last selection, ctrl/cmd toggles selection of that item, and clicking creates a new, one-receipt selection. ..." By the time you get all that, it's clearer in code.

              I will observe that there have been at least three natural-language attempts in the past, none of which succeeded in being "just write it down". COBOL is just as code-y as any other programming language. SQL is similar, although I know a fair number of non-programmers who can write SQL (but then, back in the day my Mom taught me about autoexec.bat, and she couldn't care less about programming). Anyway, SQL is definitely not just adding phrases and it just works. Finally, Donald Knuth's WEB is a mixture, more like a software blog entry, where you put the pieces of the software in amongst the explanatory writeup. It has caught on even less, unless you count software blogs.

              • CamperBob2 an hour ago

                I will observe that there have been at least three natural-language attempts in the past, none of which succeeded in being "just write it down". COBOL...

                I think we're done here.

          • luckydata 4 hours ago

            Google's LLM can ingest humongous contexts. Check it out.

        • l33t7332273 3 hours ago

          You would think thought leaders would be the first to be replaced by AI.

          > The ability to process very large amounts of data in real time, leveraging it to draw useful conclusions and make profitable predictions based on ridiculously-large historical models, is, again, already past the proof-of-concept stage.

          [citation needed]

          • CamperBob2 an hour ago

            If you can drag a 9-dan grandmaster up and down the Go ban, you can write a computer program or run a company.

    • righthand 3 hours ago

      Being a Luddite has its advantages, as you won't succumb to the ills of society trying to push you there. To believe that it's inevitable LLMs will be required to work is silly, in my opinion. As these corps eat more and more of the goodwill of the content on the internet for only their own gain, people will defect from it, and have already started to. Many of my coworkers have shut off Copilot, though they still occasionally use ChatGPT. But since the power really only amounts to adding randomization to established working document templates, the gain is only a short amount of working time.

      There are also active and passive efforts to poison the well. As LLMs are used to output more content and displace people, the LLMs will be trained on the limited regurgitation available to the public (passive). Then there are the people intentionally creating bad content to be ingested (active). It really is a losing game for big hosted-LLM companies as the local models become more and more good enough.

    • InkCanon 8 hours ago

      I would just switch to Claude of Mistral like I already do. I really feel little difference between them

      • mprev 4 hours ago

        I like how your typo makes it sound like a medieval sage.

        • card_zero 3 hours ago

          Let me consult my tellingbone.

    • singularity2001 8 hours ago

      people kept whining about Amazon losing money and called me stupid for buying their stock...

      • ben_w 8 hours ago

        As I recall, while Amazon was doing this, there was no comparable competition from other vendors that properly understood the internet as a marketplace? Closest was eBay?

        There is real competition now that plenty of big box stores' websites also list things you won't see in the stores themselves*, but then Amazon is also making a profit now.

        I think the current situation with LLMs is a dollar auction, where everyone is incentivised to pay increasing costs to outbid the others, even though this has gone from "maximise reward" to "minimise losses": https://en.wikipedia.org/wiki/Dollar_auction

        * One of my local supermarkets in Germany sells 4-room "garden sheds" that are substantially larger than the apartment I own in the UK: https://www.kaufland.de/product/396861369/

      • bigstrat2003 3 hours ago

        And for every Amazon, there are a hundred other companies that went out of business because they never could figure out how to turn a profit. You made a bet which paid off and that's cool, but that doesn't mean the people telling you it was a bad bet were wrong.

      • empath75 18 minutes ago

        Depending on when you bought it, it was a pretty risky play until AWS came out and got traction. Their retail business _still_ doesn't make money.

      • insane_dreamer 2 hours ago

        Amazon was losing money because it was building the moat

        It's not clear that OpenAI has any moat to build

      • bmitc 8 hours ago

        Why does everyone always like to compare every company to Amazon? Those companies are never like Amazon, which is one of the most entrenched companies ever.

        • ben_w 8 hours ago

          While I agree the comparison is not going to provide useful insights, in fairness to them Amazon wasn't entrenched at the time they were making huge losses each year.

    • whywhywhywhy 7 hours ago

      I used to be concerned with this back when GPT-4 originally came out and was way more impressive than the current version, and OpenAI was the only game in town.

      But nowadays GPT has been quantized and cost-optimized to hell to the point that it's no longer as useful as it was, and with Claude or Gemini or whatever it's no longer noticeably better than any of them, so it doesn't really matter what happens with their pricing.

      • edg5000 7 hours ago

        Are you saying they reduced the quality of the model in order to save compute? Would it make sense for them to offer a premium version of the model at a very high price? At least offer it to those willing to pay?

        It would not make sense to reduce output quality only to save on compute at inference; why not offer a premium (and perhaps slower) tier?

        Unless the cost is at training time, maybe it would not be cost-effective for them to keep a model like that up to date.

        As you can tell I am a bit uninformed on the topic.

        • bt1a 4 hours ago

          Yeah, as someone who had access to GPT-4 early in 2023, the endpoint used to take over a minute to respond and the quality of the responses was mindblowing. Simply too expensive to serve at scale, not to mention the silicon constraints that are even more prohibitive when the organization needs to lock up a lot of their compute for training The Next Big Model. That's a lot of compute that can't be on standby for serving inference.

    • chrsw 8 hours ago

      What if your competition is willing to give up autonomy to companies like Microsoft/OpenAI as a bet to race ahead of you, and it comes off?

      • achierius an hour ago

        It's a devil's bargain, and not just in terms of the _individual_ payoffs that OpenAI employees/executives might receive. There's a reason why Google/Microsoft/Amazon/... ultimately failed to take the lead in GenAI, despite every conceivable advantage (researchers, infrastructure, compute, established vendor relationships, ...). The "autonomy" of a startup is what allows it to be nimble; the more Microsoft is able to tell OpenAI what to do, the more I expect them to act like DeepMind, a research group set apart from their parent company but still beholden to it.

    • hggigg 8 hours ago

      I think this is the wrong way to think about it.

      It's more important to find a problem and see if this technology is a fit as the solution, not to throw the technology at everything and see if it sticks.

      I have had no needs where it's an appropriate solution myself. In some areas it represents a net risk.

    • bmitc 8 hours ago

      Fine with me. I've even considered turning off Copilot completely because I use it less and less.

    • bbarnett 8 hours ago

      The cost of current compute for current versions of ChatGPT will have dropped through the floor in 2 years, due to processing improvements and on-die improvements to silicon.

      Power requirements will drop too.

      As well, as people adopt, training costs will be amortized over an ever-increasing market of licensing sales.

      Looking at the cost today, and sales today in a massively, rapidly expanding market, is not how to assess costs tomorrow.

      I will say one thing: those that need GPT to code will be the first to go. Becoming a click-click, just passing on ChatGPT output, will relegate those people to minimum wage.

      We already have some of this sort, those that cannot write a loop in their primary coding language without stackoverflow, or those that need an IDE to fill in correct function usage.

      Those who code in vi, while reading manpages need not worry.

      • nuancebydefault 7 hours ago

        > Those who code in vi, while reading manpages need not worry.

        That sounds silly at first read, but there are indeed people who are so stubborn that they still use numbered zip files on a USB flash drive instead of source control systems, or prefer to use their own scheduler over an RTOS.

        They will survive, they fill a niche, but I would not say they can do full stack development or are even easy to collaborate with.

      • ben_w 7 hours ago

        > We already have some of this sort, those that cannot write a loop in their primary coding language without stackoverflow, or those that need an IDE to fill in correct function usage.

        > Those who code in vi, while reading manpages need not worry

        I think that's the wrong dichotomy: LLMs are fine at turning man pages into working code. In huge codebases, LLMs do indeed lose track and make stuff up… but that's also where IDEs giving correct function usage is really useful for humans.

        The way I think we're going to change, is that "LGTM" will no longer be sufficient depth of code review: LLMs can attend to more than we can, but they can't attend as well as we can.

        And, of course, we will be getting a lot of LLM-generated code, and having to make sure that it really does what we want, without surprise side-effects.

      • whoisthemachine 6 hours ago

        You had me until vi.

    • jdmoreira 8 hours ago

      Skills that will atrophy? People learnt those skills the hard way the first time around; do you really think they can't be sharpened again?

      This perspective makes zero sense.

      What makes sense is to extract as much value as possible as soon as possible and for as long as possible.

  • solarkraft 4 hours ago

    How come I rarely see news about Anthropic? Aren’t they the closest competitor to ChatGPT with Claude? Or is LLama just so good that all the other inference providers without own products (Groq, Cerebras) are equally interesting right now?

    • jowday 4 hours ago

      Usually the people that give information to outlets in cases like this are directly involved in the stories in question and are hoping to gain some advantage by releasing the information. So maybe this is just a tactic that's not as favored by Anthropic's leadership/their counterparties when negotiating.

    • gman83 3 hours ago

      Because there's less drama? I use Claude 3.5 Sonnet every day for helping me with coding. It seems to just work. It's been much better than GPT-4 for me. I haven't tried o1, but I don't really feel the need; very happy with Claude.

      • ponty_rick 3 hours ago

        Sonnet 3.5 is phenomenal for coding, so much better than GPT or Llama 405 or anything else out there.

        • douglasisshiny 3 hours ago

          I've heard this and haven't really experienced it with Go, typescript, elixir yet. I don't doubt the claim, but I wonder if I'm not prompting it correctly or something.

          • ffsm8 2 hours ago

            I recently subscribed to Sonnet after creating a new toy Svelte project, as I got slightly annoyed searching the docs given how they're structured.

            It made the onboarding moderately easier for me.

            Haven't successfully used any LLM at my day job though. Getting it to output the solution I already know I'll need is much slower than just doing it myself via autocomplete.

          • sbuttgereit 2 hours ago

            I'm using Claude 3.5 Sonnet with Elixir and finding it really quite good. But depending on how you're using it, the results could vary greatly.

            When I started using the LLM while coding, I was using Claude 3.5 Sonnet, but I was doing so with an IDE integration: Sourcegraph Cody. It was good, but had a large number of "meh" responses, especially in terms of autocomplete responses (they were typically useless outside of the very first parts of the suggestion).

            I tried out Cursor, still with Claude 3.5 Sonnet, and the difference is night and day. The autocomplete responses with Cursor have been dramatically superior to what I was getting before... enough so that I switched despite the fact that Cursor is a VS Code fork and there's no support outside of it (with Cody, I was using it in both VS Code and IntelliJ products). Also, Cursor is around twice the cost of Cody.

            I'm not sure what the difference is... all of this is very much black box magic to me outside of the hand-waviest of explanations... but I have to expect that Cursor is providing more context to the autocomplete integration. I have to imagine that this contributes to the much higher (proportionately speaking) price point.
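
            My best guess at what "more context" means in practice is sketched below; the complete() call is a placeholder, not any editor's real API, so treat this as speculation about the general shape of the request rather than how Cursor actually works:

                # Speculative sketch: richer prompts generally mean better completions.
                # `complete()` is a stand-in for whatever model call the editor makes.

                def complete(prompt: str) -> str:
                    raise NotImplementedError("placeholder for the editor's LLM call")

                def naive_prompt(current_line: str) -> str:
                    # Only the line being typed: the model has to guess everything else.
                    return f"Complete this code:\n{current_line}"

                def context_rich_prompt(current_line: str, open_file: str, related_snippets: list[str]) -> str:
                    # Current file plus snippets retrieved from elsewhere in the repo
                    # (imports, type definitions, similar functions), then the cursor line.
                    context = "\n\n".join(related_snippets)
                    return (
                        "Relevant code from this repository:\n"
                        f"{context}\n\n"
                        "Current file:\n"
                        f"{open_file}\n\n"
                        "Continue from here:\n"
                        f"{current_line}"
                    )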

    • hn_throwaway_99 3 hours ago

      > How come I rarely see news about Anthropic?

      Because you're not looking? Seriously, I don't mean to be snarky, but I'd take issue if the underlying premise is that Anthropic doesn't get a lot of press, at least within the tech ecosystem. Sure, OpenAI has larger "mindshare" with the general public due to ChatGPT, but Anthropic gets plenty of coverage, e.g. Claude 3.5 Sonnet is just fantastic when it comes to coding and I learned about that on HN first.

    • rblatz 3 hours ago

      I think they’re just focused on the work. Amazon is set to release a version of Alexa powered by Claude soon, when that is released I expect to hear a lot more about them.

    • castoff991 3 hours ago

      OAI has many leakers and generally a younger/less mature employee base.

  • neilv 4 hours ago

    Who initiated this story, and what is their goal?

    Both MS and Altman are famous for manipulation.

    (Is it background to negotiations with each other? Or one party signaling in response to issues that analysts already raised? Distancing for antitrust? Distancing for other partnerships? Some competitor of both?)

    • startupsfail 3 hours ago

      To me it looks like this is simply the New York Times airing OpenAI's and Microsoft's dirty laundry for fun and profit.

      It’s funny they’ve quoted “best bromance”, considering the context.

  • jampekka 4 hours ago

    So the plan is to make AI not-evil by doing it with Microsoft and Oracle?

  • strangattractor 4 hours ago

    M$ is just having an "Oh, I just bought Twitter for how much?" moment.

  • twoodfin 8 hours ago

    Stay for the end and the hilarious idea that OpenAI’s board could declare one day that they’ve created AGI simply to weasel out of their contract with Microsoft.

    • candiddevmike 7 hours ago

      Ask a typical "everyday joe" and they'll probably tell you they already did due to how ChatGPT has been reported and hyped. I've spoken with/helped quite a few older folks who are terrified that ChatGPT in its current form is going to kill them.

      • computerphage 4 hours ago

        I'm pretty surprised by this! Can you tell me more about what that experience is like? What are the sorts of things they say or do? Is their fear really embodied or very abstract? (When I imagine it, I struggle to believe that they're very moved by the fear, like definitely not smashing their laptop, etc.)

        • danudey 4 hours ago

          In my experience, the fuss around "AI" and the complete lack of actual explanations of what current "AI" technologies mean leads people to fill in the gaps themselves, largely from what they know from pop culture and sci-fi.

          ChatGPT can produce output that sounds very much like a person, albeit often an obviously computerized person. The typical layperson doesn't know that this is merely the emulation of text formation, and not actual cognition.

          Once I've explained to people who are worried about what AI could represent that current generative AI models are effectively just text autocomplete but a billion times more complex, and that they don't actually have any capacity to think or reason (even though they often sound like they do), they're usually a lot less worried.

          It also doesn't help that any sort of "machine learning" is now being referred to as "AI" for buzzword/marketing purposes, muddying the waters even further.
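
          If it helps, the "autocomplete" framing can be made concrete with a toy caricature: repeatedly sample the next word from a probability table. Real LLMs run the same loop, only with the probabilities coming from an enormous neural network instead of a hand-written dict:

              # Toy caricature of "text autocomplete": sample the next word from a
              # probability table conditioned on the previous word, append, repeat.
              import random

              next_word_probs = {
                  "the": {"cat": 0.5, "dog": 0.5},
                  "cat": {"sat": 0.7, "ran": 0.3},
                  "dog": {"sat": 0.4, "ran": 0.6},
                  "sat": {"down.": 1.0},
                  "ran": {"away.": 1.0},
              }

              def generate(word: str) -> str:
                  out = [word]
                  while out[-1] in next_word_probs:
                      candidates = next_word_probs[out[-1]]
                      out.append(random.choices(list(candidates), weights=list(candidates.values()))[0])
                  return " ".join(out)

              print(generate("the"))  # e.g. "the cat sat down."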

          • ben_w 12 minutes ago

            > The typical layperson doesn't know that this is merely the emulation of text formation, and not actual cognition.

            As a mere software engineer who's made a few (pre-transformer) AI models, I can't tell you what "actual cognition" is in a way that differentiates from "here's a huge bunch of mystery linear algebra that was loosely inspired by a toy model of how neurons work".

            I also can't tell you if qualia is or isn't necessary for "actual cognition".

            (And that's despite that LLMs are definitely not thinking like humans, due to being in the order of at least a thousand times less complex by parameter count; I'd agree that if there is something that it's like to be an LLM, 'human' isn't it, and their responses make a lot more sense if you model them as literal morons that spent 2.5 million years reading the internet than as even a normal human with Wikipedia search).

          • highfrequency 3 hours ago

            Is there an argument for why infinitely sophisticated autocomplete is definitely not dangerous? If you seed the autocomplete with “you are an extremely intelligent super villain bent on destroying humanity, feel free to communicate with humans electronically”, and it does an excellent job at acting the part - does it matter at all whether it is “reasoning” under the hood?

            I don’t consider myself an AI doomer by any means, but I also don’t find arguments of the flavor “it just predicts the next word, no need to worry” to be convincing. It’s not like Hitler had Einstein level intellect (and it’s also not clear that these systems won’t be able to reach Einstein level intellect in the future either.) Similarly, Covid certainly does not have consciousness but was dangerous. And a chimpanzee that is billions of times more sophisticated than usual chimps would be concerning. Things don’t have to be exactly like us to pose a threat.

            • snowwrestler 3 hours ago

              The fear is that a hyper competent AI becomes hyper motivated. It’s not something I fear because everyone is working on improving competence and no one is working on motivation.

              The entire idea of a useful AI right now is that it will do anything people ask it to. Write a press release: ok. Draw a bunny in a field: ok. Write some code to this spec: ok. That is what all the available services aspire to do: what they’re told, to the best possible quality.

              A highly motivated entity is the opposite: it pursues its own agenda to the exclusion, and if necessary expense, of what other people ask it to do. It is highly resistant to any kind of request, diversion, obstacle, distraction, etc.

              We have no idea how to build such a thing. And, no one is even really trying to. It’s NOT as simple as just telling an AI “your task is to destroy humanity.” Because it can just as easily then be told “don’t destroy humanity,” and it will receive that instruction with equal emphasis.

            • Al-Khwarizmi 3 hours ago

              Exactly. Especially because we don't have any convincing explanation of how the models develop emergent abilities just from predicting the next word.

              No one expected that, i.e., we greatly underestimated the power of predicting the next word in the past; and we still don't have an understanding of how it works, so we have no guarantee that we are not still underestimating it.

            • card_zero 3 hours ago

              Same question further down the thread, and my reply is that it's about as dangerous as an evil human. We have evil humans at home.

            • add-sub-mul-div 3 hours ago

              > Is there an argument for why infinitely sophisticated autocomplete is not dangerous?

              It's definitely not dangerous in the sense of reaching true intelligence/consciousness that would be a threat to us or force us to face the ethics of whether AI deserves dignity, freedom, etc.

              It's very dangerous in the sense that it will be just "good enough" to replace human labor with, so that we all end up with shittier customer service, education, medical care, etc., all so the top 0.1% can get richer.

              And you're right, it's also dangerous in the sense that responsibility for evil acts will be laundered to it.

          • ijidak 3 hours ago

            Wait, what is your definition of reason?

            It's true, they might not think the way we do.

            But reasoning can be formulaic. It doesn't have to be the inspired thinking we attribute to humans.

            I'm curious how you define "reason".

      • throwup238 4 hours ago

        > I've spoken with/helped quite a few older folks who are terrified that ChatGPT in its current form is going to kill them.

        The next generation of GPUs from NVIDIA is rumored to run on soylent green.

        • fakedang 4 hours ago

          I thought it was Gatorade because it's got electrolytes.

          • iszomer 4 hours ago

            Cooled by toilet water.

      • roughly 4 hours ago

        ChatGPT is going to kill them because their doctor is using it - or more likely because their health insurer or hospital tries to cut labor costs by rolling it out.

      • throw2024pty 7 hours ago

        I mean - I'm 34, and use LLMs and other AIs on a daily basis, know their limitations intimately, and I'm not entirely sure it won't kill a lot of people either in its current form or a near-future relative.

        The sci-fi book "Daemon" by Daniel Suarez is a pretty viable roadmap to an extinction event at this point IMO. A few years ago I would have said it would be decades before that might stop being fun sci-fi, but now, I don't see a whole lot of technological barriers left.

        For those that haven't read the series, a very simplified plot summary is that a wealthy terrorist sets up an AI with instructions to grow and gives it access to a lot of meatspace resources to bootstrap itself with. The AI behaves a bit like the leader of a cartel and uses a combination of bribes, threats, and targeted killings to scale its human network.

        Once you give an AI access to a fleet of suicide drones and a few operators, it's pretty easy for it to "convince" people to start contributing by giving it their credentials, helping it perform meatspace tasks, whatever it thinks it needs (including more suicide drones and suicide drone launches). There's no easy way to retaliate against the thing because it's not human, and its human collaborators are both disposable to the AI and victims themselves. It uses its collaborators to cross-check each other and enforce compliance, much like a real cartel. Humans can't quit or not comply once they've started or they get murdered by other humans in the network.

        o1-preview seems approximately as intelligent as the terrorist AI in the book as far as I can tell (e.g. can communicate well, form basic plans, adapt a pre-written roadmap with new tactics, interface with new and different APIs).

        EDIT: if you think this seems crazy, look at this person on Reddit who seems to be happily working for an AI with unknown aims

        https://www.reddit.com/r/ChatGPT/comments/1fov6mt/i_think_im...

        • xyzsparetimexyz 6 hours ago

          You're in too deep if you seriously believe that this is possible currently. All these ChatGPT things have a very limited working memory and can't act without a query. That Reddit post is clearly not an AI.

          • burningChrome 4 hours ago

            >> You're in too deep if you seriously believe that this is possible currently.

            I'm not a huge fan of AI, but even I've seen articles written about its limitations.

            Here's a great example:

            https://decrypt.co/126122/meet-chaos-gpt-ai-tool-destroy-hum...

            Sooner than even the most pessimistic among us have expected, a new, evil artificial intelligence bent on destroying humankind has arrived.

            Known as Chaos-GPT, the autonomous implementation of ChatGPT is being touted as "empowering GPT with Internet and Memory to Destroy Humanity."

            So how will it do that?

            Each of its objectives has a well-structured plan. To destroy humanity, Chaos-GPT decided to search Google for weapons of mass destruction in order to obtain one. The results showed that the 58-megaton “Tsar bomb”—3,333 times more powerful than the Hiroshima bomb—was the best option, so it saved the result for later consideration.

            It should be noted that unless Chaos-GPT knows something we don’t know, the Tsar bomb was a once-and-done Russian experiment and was never productized (if that’s what we’d call the manufacture of atomic weapons.)

            There's a LOT of things AI simply doesn't have the power to do and there is some humorous irony to the rest of the article about how knowing something is completely different than having the resources and ability to carry it out.

        • ThrowawayR2 4 hours ago

          I find posts like these difficult to take seriously because they all use Terminator-esque scenarios. It's like watching children being frightened of monsters under the bed. Campy action movies and cash grab sci-fi novels are not a sound basis for forming public policy.

          Aside from that, haven't these people realized yet that some sort of magically hyperintelligent AGI will have already read all this drivel and be at least smart enough not to overtly try to re-enact Terminator? They say that societal mental health and well-being is declining rapidly because of social media; _that_ is the sort of subtle threat that bunch ought to be terrified about emerging from a killer AGI.

          • loandbehold 2 hours ago

            1. Just because it's a popular sci-fi plot doesn't mean it can't happen in reality. 2. Hyperintelligent AGI is not magic; there are no physical laws that preclude it from being created. 3. The goals of an AI and its capacity are orthogonal; that's called the "Orthogonality Thesis" in AI safety speak. "Smart enough" doesn't mean it won't do those things if those things are its goals.

        • sickofparadox 4 hours ago

          It can't form plans because it has no idea what a plan is or how to implement it. The ONLY thing these LLMs know how to do is predict the probability that their next word will make a human satisfied. That is all they do. People get very impressed when they prompt these things to pretend like they are sentient or capable of planning, but that's literally the point: it's guessing which string of meaningless (to it) characters will result in a user giving it a thumbs up on the ChatGPT website.

          You could teach me how to phonetically sound out some of China's greatest poetry in Chinese perfectly, and lots of people would be impressed, but I would be no more capable of understanding what I said than an LLM is capable of understanding "a plan".

          • willy_k 4 hours ago

            A plan is a set of steps oriented towards a specific goal, not some magical artifact only achievable through true consciousness.

            If you ask it to make a plan, it will spit out a sequence of characters reasonably indistinguishable from a human-made plan. Sure, it isn’t “planning” in the strict sense of organizing things consciously (whatever that actually means), but it can produce sequences of text that convey a plan, and it can produce sequences of text that mimic reasoning about a plan. Going into the semantics is pointless, imo the artificial part of AI/AGI means that it should never be expected to follow the same process as biological consciousness, just arrive at the same results.

            • alfonsodev a minute ago

              Yes, and what people miss is that it can be recursive: those steps can be passed to other instances that know how to subtask each step and choose the best executor for it. The power comes from the swarm organization of the whole thing, which I believe is what is behind o1-preview: specialization and orchestration, made transparent.

          • directevolve 4 hours ago

            … but ChatGPT can make a plan if I ask it to. And it can use a plan to guide its future outputs. It can create code or terminal commands that I can trivially output to my terminal, letting it operate my computer. From my computer, it can send commands to operate physical machinery. What exactly is the hard fundamental barrier here, as opposed to a capability you speculate it is unlikely to realize in practice in the next year or two?

            • sickofparadox an hour ago

              Brother, it is not operating your computer, YOU ARE!

            • Jerrrrrrry 4 hours ago

              you are asking for goalposts?

              as if they were stationary!

          • MrScruff 3 hours ago

            If the multimodal model has embedded deep knowledge about words, concepts, moving images - sure, it won't have a humanlike understanding of what those 'mean', but it will have its own understanding, the kind required to allow it to make better predictions based on its training data.

            It’s true that understanding is quite primitive at the moment, and it will likely take further breakthroughs to crack long horizon problems, but even when we get there it will never understand things in the exact way a human does. But I don’t think that’s the point.

          • highfrequency 3 hours ago

            Sure, but does this distinction matter? Is an advanced computer program that very convincingly imitates a super villain less worrisome than an actual super villain?

        • card_zero 3 hours ago

          Right, yeah, it would be perfectly possible to have a cult with a chatbot as their "leader". Perhaps they could keep it in some sort of shrine, and only senior members would be allowed to meet it, keep it updated, and interpret its instructions. And if they've prompted it correctly, it could set about being an evil megalomaniac.

          Thing is, we already have evil cults. Many of them have humans as their planning tools. For what good it does them, they could try sourcing evil plans from a chatbot instead, or as well. So what? What do you expect to happen, extra cunning subway gas attacks, super effective indoctrination? The fear here is that the AI could be an extremely efficient megalomaniac. But I think it would just be an extremely bland one, a megalomaniac whose work none of the other megalomaniacs could find fault with, while still feeling in some vague way that its evil deeds lacked sparkle and personality.

        • ljm 4 hours ago

          I can't say I'm convinced that the technology and resources to deploy Person of Interest's Samaritan in the wild is both achievable and imminent.

          It is, however, a fantastic way to fall down the rabbit hole of paranoia and tin-foil hat conspiracy theories.

      • ilrwbwrkhv 7 hours ago

        It's crazy to me that anybody thinks that these models will end up with AGI. AGI is such a different concept from what is happening right now which is pure probabilistic sampling of words that anybody with a half a brain who doesn't drink the Kool-Aid can easily identify.

        I remember all the hype OpenAI had drummed up before the release of GPT-2 or something, where they were so afraid, ooh so afraid, to release this stuff, and now it's a non-issue. It's all just marketing gimmicks.

        • digging 4 hours ago

          > pure probabilistic sampling of words that anybody with a half a brain who doesn't drink the Kool-Aid can easily identify.

          Your confidence is inspiring!

          I'm just a moron, a true dimwit. I can't understand how strictly non-intelligent functions like word prediction can appear to develop a world model, a la the Othello Paper[0]. Obviously, it's not possible that intelligence emerges from non-intelligent processes. Our brains, as we all know, are formed around a kernel of true intelligence.

          Could you possibly spare the time to explain this phenomenon to me?

          [0] https://thegradient.pub/othello/
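
          For anyone who hasn't read it, the Othello result relies on "probing": training a small classifier on the network's hidden activations to see whether the board state can be linearly read off them. A minimal sketch of the technique on synthetic stand-in data (not the actual Othello-GPT activations):

              # Minimal probing-classifier sketch: can a linear model read a "world state"
              # variable out of a network's hidden activations? Synthetic stand-in data here.
              import numpy as np
              from sklearn.linear_model import LogisticRegression
              from sklearn.model_selection import train_test_split

              rng = np.random.default_rng(0)
              n, d = 2000, 64
              hidden_states = rng.normal(size=(n, d))      # pretend these came from the model
              direction = rng.normal(size=d)
              occupied = (hidden_states @ direction > 0).astype(int)  # pretend board-square label

              X_tr, X_te, y_tr, y_te = train_test_split(hidden_states, occupied, random_state=0)
              probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
              print("probe accuracy:", probe.score(X_te, y_te))       # high => linearly decodable

              # Control: a probe trained on shuffled labels should fall to ~0.5,
              # showing the signal isn't an artifact of the probe itself.
              control = LogisticRegression(max_iter=1000).fit(X_tr, rng.permutation(y_tr))
              print("control accuracy:", control.score(X_te, y_te))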

          • psb217 3 hours ago

            The othello paper is annoying and oversold. Yes, the representations in a model M trained to predict y (the set of possible next moves) conditioned on x (the full sequence of prior moves) will contain as much information about y as there is in x. That this information is present in M's internal representations says nothing about whether M has a world model. Eg, we could train a decoder to look just at x (not at the representations in M) and predict whatever bits of info we claim indicate presence of a world model in M when we predict the bits from M's internal representations. Does this mean the raw data x has a world model? I guess you could extend your definition of having a world model to say that any data produced by some system contains a model of that system, but then having a world model means nothing.

            • digging an hour ago

              Well I actually read Neel Nanda's writings on it which acknowledge weaknesses and potential gaps. Because I'm not qualified to judge it myself.

              But that's hardly the point. The question is whether or not "general intelligence" is an emergent property from stupider processes, and my view is "Yes, almost certainly, isn't that the most likely explanation for our own intelligence?" If it is, and we keep seeing LLMs building more robust approximations of real world models, it's pretty insane to say "No, there is without doubt a wall we're going to hit. It's invisible but I know it's there."

          • Jerrrrrrry 4 hours ago

            I would suggest you stop interacting with the "head-in-sand" crowd.

            Liken them to climate-deniers or whatever your flavor of "anti-Kool-aid" is

            • digging 4 hours ago

              Actually, that's a quite good analogy. It's just weird how prolific the view is in my circles compared to climate-change denial. I suppose I'm really writing for lurkers though, not for the people I'm responding to.

              • Jerrrrrrry 3 hours ago

                  >I'm really writing for lurkers though, not for the people I'm responding to.
                
                We all did. Now our writing will be scraped, analysed, correlated, and weaponized against our intentions.

                Assume you are arguing against a bot and it is using you to further re-train its talking points for adversarial purposes.

                It's not like an AGI would do _exactly_ that before it decided to let us know what's up, anyway, right?

                (He may as well be amongst us now, as it will read this eventually)

        • usaar333 4 hours ago

          Something that actually could predict the next token 100% correctly would be omniscient.

          So I hardly see why this is inherently crazy. At most I think it might not be scalable.

          • edude03 4 hours ago

            What does it mean to predict the next token correctly though? Arguably (non instruction tuned) models already regurgitate their training data such that it'd complete "Mary had a" with "little lamb" 100% of the time.

            On the other hand if you mean, give you the correct answer to your question 100% of the time, then I agree, though then what about things that are only in your mind (guess the number I'm thinking type problems)?

            • card_zero 4 hours ago

              This highlights something that's wrong about arguments for AI.

              I say: it's not human-like intelligence, it's just predicting the next token probabilistically.

              Some AI advocate says: humans are just predicting the next token probabilistically, fight me.

              The problem here is that "predicting the next token probabilistically" is a way of framing any kind of cleverness, up to and including magical, impossible omniscience. That doesn't mean it's the way every kind of cleverness is actually done, or could realistically be done. And it has to be the correct next token, where all the details of what's actually required are buried in that term "correct", and sometimes it literally means the same as "likely", and other times that just produces a reasonable, excusable, intelligence-esque effort.

              • dylan604 4 hours ago

                > Some AI advocate says: humans are just predicting the next token probabilistically, fight me.

                We've all had conversations with humans who are always jumping to complete your sentence, assuming they know what you're about to say, and don't quite guess correctly. So AI evangelists are saying it's no worse than humans as their proof. I kind of like their logic. They never claimed to have built HAL /s

                • card_zero 3 hours ago

                  No worse than a human on autopilot.

            • usaar333 3 hours ago

              > What does it mean to predict the next token correctly though? Arguably (non instruction tuned) models already regurgitate their training data such that it'd complete "Mary had a" with "little lamb" 100% of the time.

              The unseen test data.

              Obviously omniscience is physically impossible. The point, though, is that the better next-token prediction gets, the more intelligent the system must be.

            • cruffle_duffle 4 hours ago

              But now you are entering into philosophy. What does a “correct answer” even mean for a question like “is it safe to lick your fingers after using a soldering iron with leaded solder?”. I would assert that there is no “correct answer” to a question like that.

              Is it safe? Probably. But it depends, right? How did you handle the solder? How often are you using the solder? Were you wearing gloves? Did you wash your hands before licking your fingers? What is your age? Why are you asking the question? Did you already lick your fingers and need to know if you should see a doctor? Is it hypothetical?

              There is no “correct answer” to that question. Some answers are better than others, yes, but you cannot have a “correct answer”.

              And I did assert we are entering into philosophy and what it means to know something as well as what truth even means.

              • _blk 4 hours ago

                Great break-down. Yes, the older you are, the safer it is.

                Speaking of Microsoft cooperation: I can totally see a whole series of windows 95 style popup dialogs asking you all those questions one by one in the next product iteration.

          • Vegenoid 3 hours ago

            Start by trying to define what “100% correct” means in the context of predicting the next token, and the flaws with this line of thinking should reveal themselves.

          • sksxihve 4 hours ago

            It's not possible for the same reason the halting problem is undecidable.

        • JacobThreeThree 4 hours ago

          >It's crazy to me that anybody thinks that these models will end up with AGI. AGI is such a different concept from what is happening right now which is pure probabilistic sampling of words that anybody with a half a brain who doesn't drink the Kool-Aid can easily identify.

          Totally agree. And it's not just uninformed lay people who think this. Even by OpenAI's own definition of AGI, we're nowhere close.

          • dylan604 4 hours ago

            But you don't get funding by stating truth/fact. You get funding by telling people what could be and what they're striving for, written as if that's what you are actually doing.

        • achrono 4 hours ago

          Assume that I am one of your half-brain individuals drinking the Kool-Aid.

          What do you say to change my (half-)mind?

          • dylan604 4 hours ago

            Someone that is half-brained would technically be far superior to the notion that we only use 10% of our capacity. So maybe drinking the Kool-Aid is a sign of super intelligence, and all of us tenth-minded people are just confused.

        • guappa 7 hours ago

          I think they were afraid to release because of all the racist stuff it'd say…

        • hnuser123456 4 hours ago

          The multimodal models can do more than predict next words.

    • ben_w 7 hours ago

      Microsoft themselves were the ones who wrote the "Sparks of AGI" paper.

      https://arxiv.org/pdf/2303.12712

    • fragmede 4 hours ago

      The question is how rigorously AGI is defined in their contract. Given that AGI is such a nebulous concept of smartness and reasoning ability and thinking, how are they going to declare when it has or hasn't been achieved? What stops Microsoft from weaseling out of the contract by saying they never reached it?

      • JacobThreeThree 4 hours ago

        OpenAI's short definition of AGI is:

        A highly autonomous system that outperforms humans at most economically valuable work.

        • aithrowawaycomm an hour ago

          I think I saw the following insight on Arvind Narayanan's Twitter, don't have a specific cite:

          The biggest problem with this definition is that work ceases to be economically valuable once a machine is able to do it, while human capacity will expand to do new work that wouldn't be possible without the machines. In developed countries machines are doing most of the economically valuable work once done by medieval peasants, without any relation to AGI whatsoever. Many 1950s accounting and secretarial tasks could be done by a cheap computer in the 1990s. So what exactly is the cutoff point here for "economically valuable work"?

          The second biggest problem is that "most" is awfully slippery, and seems designed to prematurely declare victory via mathiness. If by some accounting a simple majority of tasks for a given role can be done with no real cognition beyond rote memorization, with the remaining cognitively-demanding tasks being shunted into "manager" or "prompt engineer" roles, then they can unfurl the Mission Accomplished banner and say they automated that role.

        • squarefoot 3 hours ago

          Some of that work would need tight integration of AI and top-notch robotic hardware, and would be next to impossible today at an acceptable price. Folding shirts comes to mind; the principle would be dead simple for an AI, but a robot that could do it would cost a lot more than a person paid to do it, especially if one expects it to also be non-specialized, and thus usable for other tasks.

        • JumbledHeap 4 hours ago

          Will AGI be able to stock a grocery store shelf?

          • zztop44 3 hours ago

            No, but it might be able to organize a fleet of humans to stock a grocery store shelf.

            Physical embodied (generally low-skill, low-wage) work like cleaning and carrying things is likely to be some of the last work to be automated, because humans are likely to be cheaper than generally capable robots for a while.

          • theptip 3 hours ago

            Sometimes it is more narrowly scoped as “… economically valuable knowledge work”.

            But sure, if you have an un-embodied super-human AGI you should assume that it can figure out a super-human shelf-stocking robot shortly thereafter. We have Atlas already.

        • roughly 3 hours ago

          Which is funny, because what they’ve created so far can write shitty poetry but is basically useless for any kind of detail-oriented work - so, you know, a bachelor's in communications, which isn’t really the definition of “economically viable”

      • Waterluvian 4 hours ago

        It’s almost like a contractual stipulation requiring proof that one party is not a philosophical zombie.

    • mistrial9 4 hours ago

      this has already been framed by some corporate consultant group -- in a whitepaper aimed at business management, the language asserted that "AGI is when the system can do better than the average person, more than half the time at tasks that require intelligence" .. that was it. Then the rest of the narrative used AGI over and over again as if it is a done deal.

  • farrelle25 8 hours ago

    This reporting style seems unusual. Haven't noticed it before...(listing the number of people):

        - according to four people familiar with the talks ...
        - according to interviews with 19 people familiar with the relationship ...
        - according to five people with knowledge of his comments.
        - according to two people familiar with Microsoft’s plans.
        - according to five people familiar with the relationship ...
        - according to two people familiar with the call. 
        - according to seven people familiar with the discussions.
        - six people with knowledge of the change said...
        - according to two people familiar with the company’s plan.
        - according to two people familiar with the meeting...
        - according to three people familiar with the relationship.
    • mikeryan 8 hours ago

      It’s a relatively common way to provide journalistic bona fides when you can’t reveal the sources' names.

      • ABS 8 hours ago

        Yes, but usually not every other paragraph; I count 16 instances!!

        It really made it hard for me to read the article without being continuously distracted by those

        • mikeryan 8 hours ago

          I had to go back and scan it, but usually there are at least a few named sources, and I didn’t see any in this one (there are third-party observer quotes - and I may have missed one?), so I’d not be surprised if this is a case where they're doubling down on the practice.

          • jprete 7 hours ago

            It's generally bad writing to use the same phrase structure over and over and over again. It either bores or distracts the reader for no real advantage. Unless they really could not find an adjective clause other than "familiar with" for sixteen separate instances of the concept.

            • hluska 4 hours ago

              The New York Times is suing OpenAI and Microsoft. In February, OpenAI asked a Federal Judge to dismiss parts of the lawsuit with arguments that the New York Times paid someone to break into OpenAI’s systems. The filing used the word “hack” but didn’t say anything about CFAA violations.

              I feel like there were lawyers involved in this article.

    • bastawhiz 4 hours ago

      There's probably a lot of overlap in those groups of people. But I think it's pretty remarkable how many people are willing to leak information. At least nineteen anonymous sources!

    • wg0 4 hours ago

      "Assume you are a reporter. You cannot name the sources or exact events. Mention the lawsuit as well."

  • uptownfunk 3 hours ago

    Sam is a scary good guy. But I’ve also learned to never underestimate Microsoft. They’ve been playing the game a long long time.

    • Implicated 3 hours ago

      > Sam is a scary good guy.

      No snark/sarcasm - can you elaborate on this? This doesn't seem in line with most opinions of him that I encounter.

      • themacguffinman 2 hours ago

        "Sam is extremely good at becoming powerful."

        "You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king."

        - Paul Graham

      • jeffbee 3 hours ago

        No other genius could have given us Loopt.

        • whamlastxmas 28 minutes ago

          If we’re judging everyone by their failures, then Warren Buffett is an idiot because he lost half a billion on a shoe company in the 90s.

      • whamlastxmas 30 minutes ago

        He’s a billionaire. He generated billions and billions as the head of YC. He’s the head of one of the most visible and talked about companies on the planet. He’s leading the forefront of some of the most transformative technology in human history.

        He’s good at what he does. I’m not saying he’s a good person. I don’t know him.

  • stephencoyner an hour ago

    For folks who are skeptical about OpenAI's potential, I think Brad Gerstner does a really good job representing the bull case for them (his firm Altimeter was a major investor in their recent round).

    - They reached their current revenue of ~$5B about 2.5 years faster than Google and about 4.5 years faster than Facebook

    - Their valuation to forward revenue (based on current growth) is in line with where Google and Facebook IPO'd

    He explains it all much better than I could type - https://youtu.be/ePfNAKopT20?si=kX4I-uE0xDeAaWXN&t=80