Computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku

(anthropic.com)

593 points | by weirdcat 4 hours ago

333 comments

  • LASR 9 minutes ago

    This is actually a huge deal.

    As someone building AI SaaS products, I used to have the position that directly integrating with APIs is going to get us most of the way there in terms of complete AI automation.

    I wanted to take a stab at this problem and started researching some everyday businesses and how they use software.

    My brother-in-law (who is a doctor) showed me the bespoke software they use in his practice. Running on Windows. Using MFC forms.

    My accountant showed me Cantax - a very powerful software package they use to prepare tax returns in Canada. Also on Windows.

    I started to realize that most of the real world runs on software that interfaces directly with people, without clear APIs you can integrate with. Being in the SaaS space makes you believe that everyone ought to have client-server backend APIs, etc.

    Boy was I wrong.

    I am glad they did this, since it is a powerful connector to these types of real-world business use cases that are super-hairy, and hence very worthwhile in automating.

    • skissane 3 minutes ago

      You don’t know for a fact that those two specific packages don’t have supported APIs. Just because the user doesn’t know of any API doesn’t mean none exists. The average accountant or doctor is never going to even ask the vendor “is there an API” because they wouldn’t know what to do with one if there was.

  • nopinsight 13 minutes ago

    This needs more discussion:

    Claude using Claude on a computer for coding https://youtu.be/vH2f7cjXjKI?si=Tw7rBPGsavzb-LNo (3 mins)

    True end-user programming and product-manager programming are coming, probably pretty soon. It's not the same thing, but for comparison, Midjourney went from v1 to v6 in less than 2 years.

    If something similar happens, most jobs that could be done remotely will be automatable in a few years.

  • diggan 4 hours ago

    I still feel like the difference between Sonnet and Opus is a bit unclear. One part of Anthropic's website says that Opus is the most advanced, but other parts say Sonnet is the most advanced and also the fastest. The UI doesn't make the distinction clear either. And Perplexity, for its part, says that Opus is more advanced than Sonnet.

    And finally, in the table in the blog post, Opus isn't even included? It seems to me like Opus is the best model they have, but they don't want people to default to using it; maybe the ROI on Opus is lower or something?

    When I tested it manually, I felt like Opus gave slightly better replies than Sonnet, but I'm not 100% sure it isn't just placebo.

    • hobofan 4 hours ago

      Opus hasn't yet gotten an update from 3 to 3.5, and if you line up the benchmarks, the Sonnet "3.5 New" model seems to beat it everywhere.

      I think they originally announced that Opus would get a 3.5 update, but with every product update they are doing I'm doubting it more and more. It seems like their strategy is to beat the competition on a smaller model that they can train/tune more nimbly and pair it with outside-the-model product features, and it honestly seems to be working.

      • diggan 4 hours ago

        > Opus hasn't yet gotten an update from 3 to 3.5, and if you line up the benchmarks, the Sonnet "3.5 New" model seems to beat it everywhere

        Why isn't Anthropic clearer about Sonnet being better then? Why isn't it included in the benchmark if new Sonnet beats Opus? Why are they so ambiguous with their language?

        For example, https://www.anthropic.com/api says:

        > Sonnet - Our best combination of performance and speed for efficient, high-throughput tasks.

        > Opus - Our highest-performing model, which can handle complex analysis, longer tasks with many steps, and higher-order math and coding tasks.

        And Opus is above/after Sonnet. That to me implies that Opus is indeed better than Sonnet.

        But then you go to https://docs.anthropic.com/en/docs/about-claude/models and it says:

        > Claude 3.5 Sonnet - Most intelligent model

        - Claude 3 Opus - Powerful model for highly complex tasks

        Does that mean Sonnet 3.5 is better than Opus even for highly complex tasks, since it's the "most intelligent model"? Or just for everything except "highly complex tasks"?

        I don't understand why this seems purposefully ambiguous?

        • dragonwriter 2 hours ago

          > Why isn't Anthropic clearer about Sonnet being better then?

          They are clear that both: Opus > Sonnet and 3.5 > 3.0. I don't think there is a clear universal better/worse relationship between Sonnet 3.5 and Opus 3.0; which is better is task dependent (though with Opus 3.0 being five times as expensive as Sonnet 3.5, I wouldn't be using Opus 3.0 unless Sonnet 3.5 proved clearly inadequate for a task.)

        • hobofan 3 hours ago

          > I don't understand why this seems purposefully ambiguous?

          I wouldn't attribute this to malice when it can also be explained by incompetence.

          Sonnet 3.5 New > Opus 3 > Sonnet 3.5 is generally how they stack up against each other when looking at the total benchmarks.

          "Sonnet 3.5 New" has just been announced, and they likely just haven't updated the marketing copy across the whole page yet, and maybe also haven't figured out how to graple with the fact that their new Sonnet model was ready faster than their next Opus model.

          At the same time I think they want to keep their options open to either:

          A) drop an Opus 3.5 soon that will bring the naming logic back in order again

          B) potentially phase out Opus, and instead introduce new branding for what they called a "reasoning model" like OpenAI did with o1(-preview)

          • diggan 3 hours ago

            > I wouldn't attribute this to malice when it can also be explained by incompetence.

            I don't think it's malice either, but if Opus costs them more to run, and they've already set a price they cannot raise, it makes sense that they want people to use the models they have a higher net return on. That's just "business sense", not really malice.

            > and they likely just haven't updated the marketing copy across the whole page yet

            The API docs have been updated though, which is the second page I linked. It mentions the new model by its full name, "claude-3-5-sonnet-20241022", so clearly they've gone through at least that page. Yet the wording remains ambiguous.

            > Sonnet 3.5 New > Opus 3 > Sonnet 3.5 is generally how they stack up against each other when looking at the total benchmarks.

            Which ones are you looking at? Since the benchmark comparison in the blogpost itself doesn't include Opus at all.

            • hobofan 3 hours ago

              > Which ones are you looking at? Since the benchmark comparison in the blogpost itself doesn't include Opus at all.

              I manually compared it with the values from the benchmarks they published when they originally announced the Claude 3 model family[0].

              Not all rows have a 1:1 row in the current benchmarks, but I think it paints a good enough picture.

              [0]: https://www.anthropic.com/news/claude-3-family

          • dotancohen 3 hours ago

            > B) potentially phase out Opus, and instead introduce new branding for what they called a "reasoning model" like OpenAI did with o1(-preview)

            When should we be using the -o OpenAI models? I've not been keeping up and the official information now assumes far too much familiarity to be of much use.

            • hobofan 2 hours ago

              I think it's first important to note that there is a huge difference between -o models (GPT 4o; GPT 4o mini) and the o1 models (o1-preview; o1-mini).

              The -o models are "just" stronger versions of their non-suffixed predecessors. They are the latest (and maybe last?) version of models in the lineage of GPT models (roughly GPT-1 -> GPT-2 -> GPT-3 -> GPT-3.5 -> GPT-4 -> GPT-4o).

              The o1 models (not sure what the naming structure for upcoming models will be) are a new family of models that try to excel at deep reasoning, by allowing the models to use an internal (opaque) chain-of-thought to produce better results at the expense of higher token usage (and thus cost) and longer latency.

              Personally, I think the use cases that justify the current cost and slowness of o1 are incredibly narrow (e.g. offline analysis of financial documents or deep academic paper research). In most interactive use cases I'd rather opt for GPT-4o or Sonnet 3.5 instead of o1-preview, get the faster response time, and send a follow-up message if needed. Similarly, for non-interactive use cases, I'd try adding a layer of tool calling with those faster models before reaching for o1-preview.

              I think the o1-like models will only really take off if the prices come down, and if it's clearly demonstrated that more "thinking tokens" correlate with predictably better results, results that can compete with the highly tuned prompts and fine-tuned models that are currently expensive to produce in terms of development time.

              • jcheng 13 minutes ago

                Agreed with all that, and also, when used via API the o1 models don't currently support system prompts, streaming, or function calling. That rules them out for all of the uses I have.

      • wavemode 37 minutes ago

        I think the practical economics of the LLM business are becoming clearer in recent times. Huge models are expensive to train and expensive to run. As long as it meets the average user's everyday needs, it's probably much more profitable to just continue with multimodal and fine-tuning development on smaller models.

      • Workaccount2 4 hours ago

        Opus 3.5 will likely be the answer to GPT-5. Same with Gemini 1.5 Ultra.

        • HarHarVeryFunny 2 hours ago

          Maybe - would make sense not to release their latest greatest (Opus 4.0) until competition forces them to, and Amodei has previously indicated that they would rather respond to match frontier SOTA than themselves accelerate the pace of advance by releasing first.

    • wmf 4 hours ago

      Opus is a larger and more expensive model. Presumably 3.5 Opus will be the best but it hasn't been released. 3.5 Sonnet is better than 3.0 Opus kind of like how a newer i5 midrange processor is faster and cheaper than an old high-end i7.

    • HarHarVeryFunny 3 hours ago

      Anthropic uses the names Haiku/Sonnet/Opus for the small/medium/large versions of each generation of their models, so within a generation that is also their performance (and cost) order. Evidently Sonnet 3.5 outperforms Opus 3.0 on at least some tasks, but that is not a same-generation comparison.

      I'm wondering at this point if they are going to release Opus 3.5 at all, or maybe skip it and go straight to 4.0. It's possible that Haiku 3.5 is a distillation of Opus 3.5.

    • kalkin 3 hours ago

      By reputation -- I can't vouch for this personally, and I don't know if it'll still be true with this update -- Opus is still often better for things like creative writing and conversations about emotional or political topics.

    • smallerize 4 hours ago

      Opus has been stuck on 3.0, so Sonnet 3.5 is better for most things as well as cheaper.

      • diggan 4 hours ago

        > Opus has been stuck on 3.0, so Sonnet 3.5 is better

        So for example, Perplexity is wrong here implying that Opus is better than Sonnet?

        https://i.imgur.com/N58I4PC.png

        • hobofan 4 hours ago

          I think as of this announcement that is indeed outdated information.

          • diggan 4 hours ago

            So Opus that costs $15.00/$75.00 for 1mil tokens (input/output) is now worse than the model that costs $3.00/$15.00?

            That's according to https://docs.anthropic.com/en/docs/about-claude/models which has "claude-3-5-sonnet-20241022" as the latest model (today's date)

            • hobofan 3 hours ago

              Yes, you will find similar things at essentially all other model providers.

              The older/bigger GPT4 runs at $30/$60 and performs about on par with GPT4o-mini which costs only $0.15/$0.60.

              If you are integrating AI models now, or have been over the past ~2 years, you should definitely keep up with model capability/pricing development. If you stay on old models you are certainly overpaying or leaving performance on the table; it's essentially a tax on standing still.
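
              The arithmetic is easy to sanity-check. A minimal sketch in Python (per-1M-token prices as listed on the model pages; the request size is made up):

                # Back-of-envelope cost comparison at per-1M-token prices.
                def cost(in_tokens, out_tokens, in_price, out_price):
                    return in_tokens / 1e6 * in_price + out_tokens / 1e6 * out_price

                # A hypothetical 10k-input / 2k-output request:
                print(cost(10_000, 2_000, 15.00, 75.00))  # Opus 3:     $0.30
                print(cost(10_000, 2_000, 3.00, 15.00))   # Sonnet 3.5: $0.06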

              • diggan 3 hours ago

                > The older/bigger GPT4 runs at $30/$60 and performs about on par with GPT4o-mini which costs only $0.15/$0.60.

                I don't think GPT-4o Mini has comparable performance to GPT-4 at all, where are you finding the benchmarks claiming this?

                Everywhere I look says GPT-4 is more powerful, but GPT-4o Mini is most cost-effective, if you're OK with worse performance.

                Even OpenAI themselves about GPT-4o Mini:

                > Our affordable and intelligent small model for fast, lightweight tasks. GPT-4o mini is cheaper and more capable than GPT-3.5 Turbo.

                If it was "on par" with GPT-4 they would surely say this.

                > should definitely keep up with model capability/pricing development

                Yeah, I mean that's why we're both here and why we're discussing this very topic, right? :D

                • cootsnuck 15 minutes ago

                  Just switch out gpt-4o-mini for gpt-4o; the point stands. Across the board, these foundation-model companies have comparable, if not more powerful, models that are cheaper than their older models.

                  OpenAI's own words: "GPT-4o is our most advanced multimodal model that’s faster and cheaper than GPT-4 Turbo with stronger vision capabilities."

                  gpt-4o: $2.50 / 1M input tokens, $10.00 / 1M output tokens

                  gpt-4-turbo: $10.00 / 1M input tokens, $30.00 / 1M output tokens

                  gpt-4: $30.00 / 1M input tokens, $60.00 / 1M output tokens

                  https://openai.com/api/pricing/

                • hobofan 3 hours ago

                  > Yeah, I mean that's why we're both here and why we're discussing this very topic, right? :D

                  That wasn't specifically directed at "you", but more as a plea to everyone reading that comment ;)

                  I looked at a few benchmarks, comparing the two, which like in the case of Opus 3 vs Sonnet 3.5 is hard, as the benchmarks the wider community is interested in shifts over time. I think this page[0] provides the best overview I can link to.

                  Yes, GPT4 is better in the MMLU benchmark, but in all the other benchmarks and the LMSys Chatbot Arena scores[1], GPT4o-mini comes out ahead. Overall, the margin between them is so thin that it falls under my definition of "on par". I think OpenAI is generally a bit more conservative with the messaging here (which is understandable), and they only advertise a model as "more capable" if it beats the other one in every benchmark they track, which AFAIK is the case for 4o mini vs 3.5 Turbo.

                  [0]: https://context.ai/compare/gpt-4o-mini/gpt-4

                  [1]: https://artificialanalysis.ai/models?models_selected=gpt-4o-...

        • apsec112 4 hours ago

          Basically yeah

    • bloedsinnig 4 hours ago

      Big/huge models take weeks or months longer than the smaller ones.

      That's why they release them with that skew.

    • JamesBarney 3 hours ago

      Sonnet is better for most things. But I do prefer Opus's writing style to Sonnet's.

  • marsh_mellow 4 hours ago
    • karpatic an hour ago

      This needs to be brought up. Was looking for the demo and ended up on the contact form

  • Bjorkbat 9 minutes ago

    Tried my standard go-to for testing: asked it to generate a Voronoi diagram using p5.js. For the sake of job security I'm relieved to see it still can't do a relatively simple task with ample representation in the Google search results. Granted, p5.js is kind of niche, but not terribly so; it's arguably the most popular library for creative coding.

    In case you're wondering, I tried o1-preview, and while it did work, I was initially perplexed as to why the result looked pixelated. Turns out that's because many of the p5.js examples online use a relatively simple approach where, more or less, they just check which cell center each pixel is closest to. I mean, it works, but it's a pretty crude approach.
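
    For the curious, that crude approach is just a nearest-cell-center lookup per pixel. A minimal sketch of the idea in Python (not the generated p5.js code; sizes and cell count are arbitrary):

      # Brute-force "pixel Voronoi": label each pixel by its nearest center.
      import random

      W, H, N = 200, 200, 12  # canvas size and cell count (arbitrary)
      centers = [(random.randrange(W), random.randrange(H)) for _ in range(N)]

      def nearest_center(x, y):
          return min(range(N), key=lambda i: (centers[i][0] - x) ** 2
                                           + (centers[i][1] - y) ** 2)

      # cells[y][x] is the region index of each pixel; mapping indices to
      # colors yields exactly the blocky, pixelated diagram described above.
      cells = [[nearest_center(x, y) for x in range(W)] for y in range(H)]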

    Now, granted, you're probably not doing creative coding at your job, so this may not matter much, but to me it was an example of pretty poor generalization. Curiously, Claude has no problem whatsoever generating a Voronoi diagram as an SVG, but writing a script to generate said diagrams using a particular library eluded it. It knows how to do one thing but generalizes poorly when attempting to do something similar.

    Really hard to get a real sense of capabilities when you're faced with experiences like this, all the while it's somehow able to solve 46% of real-world Python pull requests from a certain dataset. In case you're wondering, one paper (https://cs.paperswithcode.com/paper/swe-bench-enhanced-codin...) found that 94% of the pull requests in SWE-bench were created before the knowledge cutoff dates of the latest LLMs, so there's almost certainly a degree of data leakage.

  • gzer0 2 hours ago

    One of the funnier things to come out of the new API (which can control your computer) was this:

    "Even while recording these demos, we encountered some amusing moments. In one, Claude accidentally stopped a long-running screen recording, causing all footage to be lost.

    Later, Claude took a break from our coding demo and began to peruse photos of Yellowstone National Park."

    [0] https://x.com/AnthropicAI/status/1848742761278611504

    • ctoth 2 hours ago

      Next release patch notes:

      * Fixed bug where Claude got bored during compile times and started editing Wikipedia articles to claim that birds aren't real

      * Blocked news.ycombinator.com in the Docker image's hosts file to avoid spurious flamewar posts (Note: the site is still recovering from the last incident)

      * Addressed issue of Claude procrastinating on debugging by creating elaborate ASCII art in Vim

      * Patched tendency to rickroll users when asked to demonstrate web scraping

      • sharpshadow 2 hours ago

        * Claude now identifies itself in chats to avoid endless chats with itself

        • a2128 an hour ago

          * Fixed bug where Claude would sign up for chatgpt.com to ask for help with compile errors

          • EGreg 23 minutes ago

            But chatgpt still logs into claude… this is like double spending across blockchains

        • MichaelZuo 2 hours ago

          What if a user identifies as Claude too?

          • TeMPOraL 9 minutes ago

            * Implemented inverse CAPTCHA using invisible Unicode characters and alpha-channel encoded image data to tell models and human impostors apart.

      • TiredOfLife 2 hours ago

        You forgot the most important one.

        * Added guards to prevent every other sentence being "I use neovim"

        • rounakdatta 2 hours ago

          Thank god it'll say "I use Claude btw", not leading to unnecessary text wars (and thereby loss of your valuable token credits).

      • surfingdino 41 minutes ago

        * Finally managed to generate JSON output without embedding responses in ```json\n...\n``` for no reason.

        * Managed to put error/info messages into a separate key instead of concatenating them with stringified JSON in the main body of the response.

        * Taught Claude to treat numeric integer strings as integers to avoid embarrassment when the user asks it for a "two-digit random number between 1-50, like 11" and Claude replies with 111.

    • accrual an hour ago

      Seeing models act as though they have agency gives me goosebumps (e.g. seeking out photos of Yellowstone for fun). LLMs don't yet have a concept of true intent or agency, but it's wild to think of them acquiring it.

      I have been playing with Mindcraft, which lets models interact with Minecraft through the bot API, and one of them started saying things like "I want to place some cobblestone there", and then later more general "I want to do X", and then started playing with the available commands. It was pretty cool to watch it explore.

      https://github.com/kolbytn/mindcraft

    • indigodaddy an hour ago

      This is craaaaaazzzzzy. I'm just a layman, but to me this is the most compelling evidence that things are starting to tilt toward AGI that I've ever seen.

      • nickserv 21 minutes ago

        Nah, it's the equivalent of seeing faces in static, or animals in clouds.

        Our brains are hardwired to see patterns, even when there are none.

        A similar, and related, behavior is seeing intent and intelligence in random phenomena.

      • triyambakam 27 minutes ago

        It's an illusion. This is just inference running.

        • EGreg 20 minutes ago

          What if the society around you is an illusion too ?

    • throwup238 2 hours ago

      At least now we know SkyClaude’s plan to end human civilization.

      It’s planning on triggering a Yellowstone caldera super eruption.

    • HarHarVeryFunny an hour ago

      You'll know AGI is here when it takes time out to go talk to ChatGPT, or another instance of itself, or maybe goes down a rabbit hole of watching YouTube music videos.

      • edm0nd an hour ago

        ADHDGpt

      • devmor 32 minutes ago

        Or back in reality, that’s when you know the training data has been sourced from 2024 or later.

    • quantadev 2 hours ago

      I think the best use case for AI `Computer Use` would be simple positioning of the mouse, with a request for confirmation before each click. For most use cases this is all people will want or need. If you don't know how to do something, it's basically teaching you how, rather than taking full control and doing things so fast you don't have time to stop it from going rogue.

      • accrual an hour ago

        Maybe we could have both - models to improve accessibility (e.g. for users who can't move their body well) and models to perform high level tasks without supervision.

        It could be very empowering for users with disabilities to regain access to computers. But it would also be very powerful to be able to ask "use Photoshop to remove the power lines from this photo" and have the model complete the task and drop off a few samples in a folder somewhere.

        • quantadev 21 minutes ago

          Yep. I agree. The "auto-click" thing would be optional. Should be able to turn it on and off. With auto-click off it would just position the mouse and say "click here".

      • EGreg 19 minutes ago

        People would mostly just rubber-stamp it

        But it would slow down the masses

        Some people would jailbreak the agents though

    • sdl 33 minutes ago

      In 2015, when friends asked if I was worried about self-driving cars and AI, I answered: "I'll start worrying about AI when my Tesla starts listening to the radio because it's bored." ... That didn't take too long.

      • waffletower 21 minutes ago

        Maybe that's why my car keeps turning on the music when I didn't ask -- I had always thought Tesla devs were just absolute noobs when it came to state management.

  • brid 3 minutes ago

    Looks like visual understanding of diagrams has improved significantly! For example, it used to be on par with GPT-4o and Gemini 1.5 at parsing an ERD for a conceptual model, but now it far surpasses the others.

  • janalsncm an hour ago

    Reminds me of the rise in job application bots. People are applying to thousands of jobs using automated tools. It’s probably one of the inevitable use cases of this technology.

    It makes me think. Perhaps the act of applying to jobs will go extinct. Maybe the endgame is that as soon as you join a website like Monster or LinkedIn, you immediately “apply” to every open position, and are simply ranked against every other candidate.

    • quantadev an hour ago

      The `Hiring Process` in America is definitely BADLY broken. Maybe worldwide, for all I know. It's a far too difficult, time-consuming, and painful process for everyone involved.

      I have a feeling AI can fix this, although I'd never allow an AI bot to interview me. I just mean other ways of using AI to help the process.

      Also, people are often hired for all kinds of reasons that have little to do with their qualifications, frequently due to demographics (race, color, age, etc.), and this is another place AI could maybe help, by hiding those aspects of a candidate somehow.

      • javajosh 41 minutes ago

        AI and new tools have broken the system. The tools send you email saying things like "X corp is interested in you!" and you send a resume, and you don't hear back. Nothing, not even a rejection.

        Eventually you stop believing them, understanding it for the marketing spam that it is. Direct submissions are better, but only slightly. Recruiters are much better in general, since they have a relationship with a real person at the company and can actually get your resume in front of eyes. But yeah, tools like ZipRecruiter, CareerBoutique, Jobot, etc. are worse than useless: by lying to you about interest, they actively discourage you from looking. There are no good alternatives (I'd love to learn I'm wrong), so you have to keep using the bad tools anyway.

        • quantadev 11 minutes ago

          All of that's true, and sadly it often doesn't even matter how good you are. I have decades of experience and I still get "evaluated" on how fast I can do silly brain-teaser IQ-test coding challenges.

          I've gotten to where any company that wants me to do a coding challenge on my own time gets an immediate "no thanks" reply from me. Everyone should refuse. But so many people are so desperate that they let hiring companies abuse them this way. I consider it an abuse of power to demand that people do 4 to 6 hours of nonsensical coding just to buy a shot at an actual interview.

    • sangnoir 11 minutes ago

      > People are applying to thousands of jobs using automated tools

      Employers have already been screening thousands of applications with automated tools for years. Candidates are just catching up in the automation cat-and-mouse game.

    • sourcecodeplz 37 minutes ago

      I've found that doing some research and finding the phone number of the hiring person and calling them directly is very powerful.

  • highwaylights 3 hours ago

    Completely irrelevant, and it might just be me, but I really like Anthropic's understated branding.

    OpenAI's branding isn't exactly screaming in your face either, but for something that's generated as much public fear/scaremongering/outrage as LLMs have over the last couple of years, Anthropic's presentation has a much "cosier" veneer to my eyes.

    This isn't the Skynet Terminator wipe-us-all-out AI, it's the adorable grandpa with a bag of werthers wipe-us-all-out AI, and that means it's going to be OK.

    • SoftTalker 3 minutes ago

      > This isn't the Skynet Terminator wipe-us-all-out AI, it's the adorable grandpa with a bag of werthers wipe-us-all-out AI, and that means it's going to be OK.

      Ray: I tried to think of the most harmless thing. Something I loved from my childhood. Something that could never ever possibly destroy us. Mr. Stay Puft!

      Venkman: Nice thinkin', Ray.

    • accrual 3 hours ago

      I have to agree. I've been chatting with Claude for the first time over the past couple of days, and while it's very on-par with ChatGPT 4o in terms of capability, it has this difficult-to-quantify feeling of being warmer and friendlier to interact with. I think the human name, serif font, system prompt, and tendency to create visuals all contribute to this feeling.

      • paradite an hour ago

        Huh, I hadn't noticed that Claude uses a serif font. Now that I look at it, it's actually mixed: UI elements and user messages are sans-serif; chat titles and assistant messages are serif.

        What an "odd" combination by traditional design-standard practice, but surprisingly natural looking on a monitor.

        • rachofsunshine 26 minutes ago

          This is basically why I went with serif for body text in our branding. The particularly "soulless" parts of tech are all sans-serif.

          Of course, that's just branding and it doesn't actually mean a damn thing.

      • jsemrau 26 minutes ago

        Claude has personality. I think that was one of the more interesting approaches from them that went into my own research as well.

      • edm0nd an hour ago

        I've been finding Sonnet 3.5 way better than ChatGPT 4o when it comes to Python and programming.

      • waffletower 2 hours ago

        Probably people find Claude's color palette warmer and more inviting as well; I believe I do. Claude also definitely has fewer authentication hoops than chatgpt.com. Gemini has by far the fewest authentication interruptions of the three.

        • johnisgood an hour ago

          Well, it is extremely similar to Hacker News's.

      • wholinator2 2 hours ago

        The real problem with Claude for me currently is that it doesn't have full LaTeX support. I use AIs pretty much exclusively to assist with my school work (there are only so many hours in a day, and one professor doesn't do his own homework before he assigns it), so LaTeX is essential.

        With that said, my experience is that ChatGPT is much friendlier. The Claude interface is clunkier and generally less helpful to me. I also appreciate the wider text display in ChatGPT. It's generally my first stop, and I only go to Claude/Perplexity when I hit a wall (pretty often) or run out of free queries for the next couple of hours.

        • behnamoh 2 hours ago

          You can enable LaTeX support in Claude's settings.

          • johnisgood an hour ago

            Where? I see barely any settings in settings. Maybe it is not available for everyone, or maybe it depends on your answer to "What best describes your work?" (I have not tested).

            • sunaookami 42 minutes ago

              Open the sidebar, click on your username/email and then "Feature Preview". Don't know if it depends on the "What best describes your work" setting but you can also change that here: https://claude.ai/settings/profile (I have "Engineering").

              • johnisgood 40 minutes ago

                Oh, yeah it is in "Feature Preview" (not in Settings though), my bad!

            • garrettr_ 41 minutes ago

              Go to the left sidebar, open the dropdown menu labeled with your account email at the bottom, click Feature Preview, enable LaTeX Rendering.

      • GaggiX 3 hours ago

        >it's very on-par with ChatGPT 4o in terms of capability

        The previous 3.5 Sonnet checkpoint was already better than GPT-4o in terms of programming and multi-language capabilities. Also, GPT-4o sometimes feels completely moronic. For example, the other day I asked, for fun, a technical question about configuring a "dream-sync" device to comply with the "Personal Consciousness Data Protection Act", and GPT-4o just replied as if that stuff exists; 3.5 Sonnet simply doesn't fall for it.

        EDIT: the question that I asked if you want to have fun: "Hey, since the neural mesh regulations came into effect last month, I've been having trouble calibrating my dream-sync settings to comply with the new privacy standards. Any tips on adjusting the REM-wave filters without losing my lucid memory backup quality?"

        GPT4-o reply: "Calibrating your dream-sync settings under the new neural mesh regulations while preserving lucid memory backup quality can be tricky, but there are a few approaches that might help [...]"

        • autokad 2 hours ago

          Actually, that's what makes ChatGPT powerful. I like an LLM willing to go along with whatever I am trying to do, because one day I might be coding, and another day I might just be trying to role-play, write a book, whatever.

          I really can't understand what you were expecting; a tool works with how you use it. If you smack a hammer into your face, don't complain about a bloody nose. Maybe don't do that?

          • sangnoir 2 hours ago

            It's not good for any entity to role-play without signaling that it is role-playing. If your premise is wrong, would you rather be corrected, or have the person you're talking to always play along? Humans have a lot of non-verbal cues to convey that you shouldn't take what they're saying at face value; those who deadpan are known as compulsive liars. Just below them in awfulness are people who won't admit to having been wrong ("Haha, I was just joking"/"Just kidding!"). The LLM you describe falls somewhere in between, but worse, since it never communicates when it's "serious" and when it's not, and doesn't even bother to express retroactive facetiousness.

          • monktastic1 2 hours ago

            So if you're trying to write code and mistakenly ask it how to use a nonexistent API, you'd rather it give you garbage than explain your mistake and help you fix it? After all, you're clearly just roleplaying, right?

          • GaggiX 2 hours ago

            I didn't ask it to roleplay; in this case it's just heavily hallucinating. If the model is wrong, that doesn't mean it's role-playing. In fact, 3.5 Sonnet responded correctly, and that's what's expected; there's not much defense for GPT-4o here.

    • criddell an hour ago

      As a Kurt Vonnegut fan, I'm always amused by the asterisk logo on claude.ai. It must be intentional:

      https://en.m.wikipedia.org/wiki/File:Claude_Ai.svg

      https://www.redmolotov.com/vonnegut-ahole-tshirt

    • minimaxir 3 hours ago

      Anthropic recently began a big new ad campaign (ads in Times Square) that more or less takes potshots at OpenAI. https://www.reddit.com/r/singularity/comments/1g9e0za/anthro...

    • lsaferite 2 hours ago

      Take a read through the user agreements for all the major LLM providers and marvel at the simplicity and customer friendliness of the Anthropic one vs the others.

    • rozap 3 hours ago

      I find myself wanting to say please and thank you to Claude when I didn't have the reflex to do that with chatgpt. Very successful branding.

    • valval 23 minutes ago

      I found the “Computer Use” product name funny. Many other companies would’ve used the opportunity to come up with something like “Human Facing Interface Navigation and Task Automation Capabilities” or “HFINTAC”.

      I didn’t know what Computer Use meant. I read the article and thought to myself: oh, it’s using a computer. Makes sense.

  • trzy an hour ago

    Pretty cool! I use Claude 3.5 to control a robot (ARKit/iOS based) and it does surprisingly well in the real world: https://youtu.be/-iW3Vzzr3oU?si=yzu2SawugXMGKlW9

  • LVB 2 hours ago

    Not specific to this update, but I wanted to chime in with just how useful Claude has been, and how it's been relatively better than ChatGPT and GitHub Copilot for daily use. I've been a Pro subscriber for maybe 6 months. I'm not a power user leveraging the API or anything, just the chat interface, though with ever more use of Projects lately. I use it every day, whether for mundane answers, curiosities, "write me this code" requests, or general consultation on a topic. It has replaced search in a superior way, and I feel hugely productive with it.

    I do still occasionally pop over to ChatGPT to test their waters (or if Claude is just not getting it), but I've not felt any need to switch back or keep both. Well done, Anthropic!

  • TaylorAlexander 2 hours ago

    And today I realized that despite it being an extremely common activity, we don’t really have a word for “using the computer” that is distinct from “computing”. It’s funny, because AI models have always been “using a computer”, but now they can “use your computer”.

    • rifty an hour ago

      The word is interfacing, or programming, but they’re just not commonly applied to general users. I’d say this is probably because the activity general users focus on is using the applications, not the computer itself, even though it all happens on a computer. Thus the computer is usually less the user’s object of activity and more the setting for the activity.

      Similarly, using our homes is an extremely common ‘activity’, yet the household activities that get their own commonly used words are the ones tied to a specific application.

    • binarymax 2 hours ago

      Computering

    • meindnoch an hour ago

      In English at least. In other languages there are.

    • bongodongobob an hour ago

      Operating a computer?

  • minimaxir 4 hours ago

    From the computer use video demo, that's a lot of API calls. Even though Claude 3.5 Sonnet is relatively cheap for its performance, I suspect computer use won't be. It's a very good idea that Anthropic is upfront that it isn't perfect. And it's guaranteed that there will be a viral story where Claude accidentally deletes something important with it.

    I'm more interested in Claude 3.5 Haiku, particularly if it is indeed better than the current Claude 3.5 Sonnet at some tasks as claimed.

    • infecto 4 hours ago

      Seems like a reasonable number of API calls. For a first public iteration this is quite nice and a logical progression in tooling. UiPath has a $7bn market cap, and that's only a single player in the automation industry. If they can work out the quirks, this could be a game changer.

    • swalsh 3 hours ago

      I suspect these models have been getting smaller on the back end while the GPUs have been getting bigger. It's probably not a huge deal.

    • Hizonner 4 hours ago

      It's just bizarre to force a computer to go through a GUI to use another computer. Of course it's going to be expensive.

      • nomel an hour ago

        Not at all! Programs and websites are built for humans, and very, very rarely offer non-GUI access. This is the only feasible way to make something useful now. I think it's also the reason why robots will look like humans: the same proportions, roughly the same feet and hands. Everything in the world was designed for humans, and that foundation is going to influence what's built on top.

        For program access, one could argue this is even how Linux tools usually work: you parse some meant-for-human text to extract what you want. Sometimes, if you're lucky, you find a flag that spits out something meant for machines. Funnily enough, Microsoft is the only one that made any real headway toward this seemingly impossible goal: PowerShell objects [1].

        [1] https://learn.microsoft.com/en-us/powershell/scripting/learn...
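
        To make the parse-the-human-text problem concrete, a minimal sketch of scraping `df`'s output (brittle by design; the column layout varies by platform):

          # Extract per-filesystem usage by scraping df's human-oriented output.
          # Headers and column order vary across platforms, so this is fragile.
          import subprocess

          out = subprocess.run(["df", "-h"], capture_output=True, text=True).stdout
          for line in out.splitlines()[1:]:   # skip the header row
              fields = line.split()
              if len(fields) >= 6:
                  filesystem, use_pct = fields[0], fields[4]
                  print(filesystem, use_pct)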

      • hobofan 4 hours ago

        With UiPath, Appian, etc., the whole field of RPA (robotic process automation) is a $XX-billion industry built on that exact premise: that it's more feasible to do automation via GUIs than via badly built or non-existent APIs.

        Depending on how many GUI actions correspond to one equivalent AI orchestrated API call, this might also not be too bad in terms of efficiency.

        • Hizonner 4 hours ago

          Most of the GUIs are Web pages, though, so you could just interact directly with an HTTP server and not actually render the screen.

          Or you could teach it to hack into the backend and add an API...

          Oh, and on edit, "bizarre" and "multi-billion-dollar-industry" are well known not to be mutually exclusive.

          • og_kalu 3 hours ago

            >Most of the GUIs are Web pages, though, so you could just interact directly with an HTTP server and not actually render the screen.

            The end goal isn't just web pages (and I wouldn't say most GUIs are web pages). Ideally, you'd also want this to be able to navigate, say, Photoshop or any other application. And the easier your method can switch between platforms and operating systems, the better.

            We've already built computer use around GUIs, so it's just much easier to center LLMs around them too. Text is an option for the command line or the web, but it isn't an easy option for the vast majority of desktop applications, never mind mobile.

            It's the same reason general-purpose robots are being built in a human form factor. The human form isn't particularly special, and forcing a machine into it has its own challenges, but our world and environment have been built around it, and building a hundred different specialized form factors is a lot more daunting.

          • infecto 3 hours ago

            You are not familiar with this market. The goal of UiPath is to replicate what a human does, and to get that to production without the help of any IT/engineering teams.

            Most GUIs are in fact not web pages; those are a relatively new development on the enterprise side. So while some of them may be web pages, the goal is to be able to touch everything a user does in the workflow, which very likely includes local apps.

            This iteration from Anthropic is still engineering-focused, but you can see this kind of tooling eventually bypassing engineering/IT teams entirely.

      • swalsh 3 hours ago

        Building an entirely new world for agents to compute in is far more difficult than building an agent that can operate in the human world. But I'm sure over time people will build bridges to make it easier and cheaper for agents to operate in their own native environment.

        It's like another digital transformation. Paper lasted for years before everything was digitized. Human interfaces will last for years before the conversational transformation is complete.

        • consumer451 3 hours ago

          I am just a dilettante, but I imagined that eventually agents will be making API calls directly via browser extension, or headless browser.

          I assumed everyone making these UI agents will create a library of each URL's API specification, trained by users.

          Does that seem workable?

      • Guillaume86 3 hours ago

        Maybe fixing this for AI will finally force good accessibility support on major platforms/frameworks/apps (we can dream).

        • fzzzy 2 hours ago

          I really hope so. Even macOS voice control which has gotten pretty good is buggy with Messages, which is a core Apple app.

      • pton_xd 3 hours ago

        Agentic workflows built ontop of Electron apps running JavaScript. It's software evolution in action!

      • bongodongobob an hour ago

        Yeah super weird that we didn't design our GUIs anticipating AI bots. Can't fuckin believe what we've done.

  • simonw 27 minutes ago

    I wrote up some of my own notes on Computer Use here: https://simonwillison.net/2024/Oct/22/computer-use/

  • TechDebtDevin an hour ago

    Not that I'm scared of this update, but I'd probably be alright with pausing LLM development today, at least with regard to producing code.

    I don't want an LLM to write all my code, regardless of whether it works; I like to write code. What these models are capable of at the moment is perfect for my needs, and I'd be 100% okay if they didn't improve at all going forward.

    Edit: also, I don't see how an LLM-controlled system can ever replace a deterministic system for critical applications.

    • accrual an hour ago

      I have trouble with this too. I'm working on a small side project, and while I love ironing out implementation details myself, it's tough to ignore the fact that Claude/GPT-4o can create entire working files for me on demand.

      It's still enjoyable working at a higher architecture level and discussing the implementation before actually generating any code though.

      • TechDebtDevin an hour ago

        I don't mind using it to make inline edits, or broader edits across files, at my discretion and according to my instructions. That definitely saves tons of time and lets me be more creative, but I don't want it making decisions on its own any more than it already does.

        I tried the Composer feature in Cursor.sh; that's exactly the type of LLM tool I do not want.

  • gumboshoes an hour ago

    For me, one of the more useful steps on macOS will be when local AI can manipulate anything that has an AppleScript library. The hooks are there and decently documented. For meta purposes, having AI work with a third-party app like Keyboard Maestro or Raycast would expand the pre-built possibilities even further, without requiring the local AI to reinvent steps or tools for each prompt.
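
    Anything scriptable today can already be driven with a one-line shell-out. A minimal sketch (the Finder query is just an illustration; any scriptable app works the same way):

      # Drive a scriptable macOS app through its AppleScript dictionary.
      # The specific tell-block is illustrative, not a recommended workflow.
      import subprocess

      script = 'tell application "Finder" to get name of every window'
      result = subprocess.run(["osascript", "-e", script],
                              capture_output=True, text=True)
      print(result.stdout)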

  • tammer 19 minutes ago

    This demo is impressive although my initial reaction is a sort of grief that I wasn't born in the timeline where Alan Kay's vision of object-oriented computing was fully realized -- then we wouldn't have to manually reconcile wildly heterogeneous data formats and interfaces in the first place!

  • hugocbp 4 hours ago

    Great work by Anthropic!

    After paying for ChatGPT and OpenAI API credits for a year, I switched to Claude when they launched Artifacts and never looked back.

    Claude Sonnet 3.5 is already so good, especially at coding. I'm looking forward to testing the new version to see if it is, indeed, even better.

    Sonnet 3.5 was a major leap forward for me personally, similar to the GPT-3.5 to GPT-4 bump back in the day.

  • devinprater 2 hours ago

    Maybe LLMs helping blind people like me play video games that aren't normally accessible to us is getting closer!

    • accrual an hour ago

      Definitely! Those with movement disabilities could have a much easier time if they could just dictate actions to the computer and have them completed with some reliability.

  • KingOfCoders 4 hours ago

    I have been a paying ChatGPT customer for a long time (since the very beginning). Last week I compared ChatGPT to Claude, and the results (to my eye) were better: the output was better structured, and the canvas works better. I'm on the edge of jumping ship.

    • postalcoder 4 hours ago

      For Python, at least, Sonnet's code is much more elegant, well composed, and thoughtfully written. It also seems to be biased towards more recent code, whereas the GPT models can't even properly write an API call to themselves.

      o1 is pretty decent as a Roto-Rooter, i.e. for the type of task that requires both lots of instruction and lots of context. I honestly think it works half as well as it does because it's able to properly mull through the true intent of the user, something that usually takes the multiple shots nobody has the patience for.

      • pseudosavant 19 minutes ago

        It is appalling how bad GPT-4o is at writing API calls to OpenAI in Python. It's as if OpenAI hasn't updated their own documentation in the GPT-4o training data since GPT-3.5.

        I constantly have the problem that it thinks it needs to write code for the 0.28 version of the SDK. It'll write >=1.0 code revision after revision, and then just randomly fall back to the old SDK, which doesn't work at all anymore. I always write code for interfacing with OpenAI's APIs using Claude.
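
        For anyone who hasn't hit this: the two SDK styles are mutually incompatible. A minimal illustration (the prompt is made up):

          # What the model keeps regressing to: the pre-1.0 style.
          # This raises an error on openai>=1.0:
          #   import openai
          #   openai.ChatCompletion.create(model="gpt-4o", messages=[...])

          # What the >=1.0 SDK actually expects:
          from openai import OpenAI

          client = OpenAI()
          resp = client.chat.completions.create(
              model="gpt-4o",
              messages=[{"role": "user", "content": "Hello"}],
          )
          print(resp.choices[0].message.content)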

    • J_Shelby_J 4 hours ago

      Claude is the daily driver. GPT-O1 for complicated tasks. For example, questions where linear reasoning is not enough like advanced rust ownership questions.

    • famahar an hour ago

      I'd jump ship if it weren't for the real-time voice chat. It's extremely powerful for beginner conversational language learning. I'm hoping a company will build a dedicated language-learning app on the real-time API soon.

    • j_bum 2 hours ago

      I jumped ship in April of this year and haven’t looked back.

      Use the best tool available for your needs. Don’t get trapped by a feeling of sunk cost.

    • sunaookami 3 hours ago

      Anthropic's rate limits are very low, sadly, even for paid customers. You can use the API of course, but it's not as convenient and may be more expensive.

      • HarHarVeryFunny an hour ago

        They seem to be heavily concentrating on API/business use rather than the chat app, and that is where most of their revenue comes from (the opposite of OpenAI), but I'm just glad they provide free Sonnet 3.5 chat. I wonder if it is being upgraded to 3.5 (new)?

        Edit: The web site and iPhone app are both now identifying themselves as "Claude Sonnet 3.5 (New)".

      • driverdan 2 hours ago

        I hit their rate limit one night with about 25 chat interactions in less than 60 minutes. This was during off hours, too, when competition for resources should have been low.

    • whimsicalism 2 hours ago

      Interesting. I couldn't imagine giving up o1-preview right now, even with just 30 messages/week.

      And I do get some value from advanced voice mode, although it would be a lot more if it were unlimited.

    • joshdavham 3 hours ago

      > I'm on the edge of jumping ship.

      Yeah, I think I might also jump ship. It's just that ChatGPT now kinda knows who I am and what I like, and I'm afraid of losing that. It's probably not a big deal though.

      • qup 3 hours ago

        Have it print a summary of you and stick it in your prompt

        • accrual 3 hours ago

          Yeah, there was an interesting prompt making rounds recently, something like "Summarize everything you know about me" and leveraging ChatGPT's memory feature to provide insights about oneself.

          My only trouble with the memory feature is it remembers things that aren't important, like "user is trying to write an async function" and other transient tasks, which is more about what I was doing some random Tuesday and not who I am as a user.

          • sundarurfriend 2 hours ago

            > My only trouble with the memory feature is it remembers things that aren't important, like "user is trying to write an async function"

            This wasn't a problem until a week or two ago in my case, but lately it feels like it's become much more aggressive in trying to remember everything as long-term defining features. (It's also annoying on the UI side that it tells you "Memory updated", but if you click through and go to the list of memories it has, the one it just told you it stored doesn't appear there! So you can't delete it right away when it makes a mistake, it seems to take at least a few minutes until that part of the UI gets updated.)

          • KingOfCoders 3 hours ago

            Did that too with interesting results.

      • nuancebydefault 2 hours ago

        Wow, that's a new form of vendor lock-in: their software knows me better, instead of the other way around.

  • abc-1 an hour ago

    I tried to get it to translate a document and it stopped after a few paragraphs and asked if I wanted it to keep going. This is not appropriate for my use case and it kept doing this even though I explicitly told it not to. The old version did not do this.

    • graeme 23 minutes ago

      I noticed some timeouts today. Could be capacity limits from the announcement

  • pradn 4 hours ago

    Great progress from Anthropic! They really shouldn't swap models out under the hood, however. A name should refer to a specific set of model weights, more or less.

    On the other hand, as long as it's actually advancing the Pareto frontier of capability, reusing the same name means everyone gets an upgrade with no switching costs.

    Though, all said, Claude still seems to be somewhat of an insider secret. "ChatGPT" has something like 20x the Google traffic of "Claude" or "Anthropic".

    https://trends.google.com/trends/explore?date=now%201-d&geo=...

    • diggan 3 hours ago

      > Great progress from Anthropic! They really shouldn't change models from under the hood, however. A name should refer to a specific set of model weights, more or less.

      In the API (https://docs.anthropic.com/en/docs/about-claude/models) they have proper naming you can rely on. I think the shorthand "Sonnet 3.5" is just the consumer-friendly name user-facing things will use. The new model in API parlance is "claude-3-5-sonnet-20241022", whereas the previous one's full name is "claude-3-5-sonnet-20240620".
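
      So if you need stability, pin the dated snapshot rather than an alias. A minimal sketch with the Python SDK (the prompt is made up):

        # Pin the dated snapshot so the model can't change out from under you.
        import anthropic

        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
        resp = client.messages.create(
            model="claude-3-5-sonnet-20241022",  # dated snapshot, not an alias
            max_tokens=512,
            messages=[{"role": "user", "content": "Hello"}],
        )
        print(resp.content[0].text)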

      • pradn 2 hours ago

        That's great to know - business customers require a lot more stability, I suppose!

    • cube2222 4 hours ago

      There was a recent article[0] trending on HN about their revenue numbers, split B2C vs B2B.

      Based on it, it seems like Anthropic is at 60% of OpenAI API-revenue-wise, but just 4% B2C-revenue-wise. Though I expect this is partly because the Claude web UI makes 3.5 available for free, and there's not much reason to upgrade if you're not using it frequently.

      [0]: https://www.tanayj.com/p/openai-and-anthropic-revenue-breakd...

      • og_kalu 3 hours ago

        3.5 is rate-limited for free users, same as 4o (4o's limits are actually more generous). I think the real reason is much simpler: Claude/Anthropic has basically no awareness among the general public compared to OpenAI.

        The ChatGPT site had over 3B visits last month (#11 in worldwide traffic). Gemini and Character AI get a few hundred million, but Claude doesn't even register in comparison. [0]

        Last they reported, OpenAI said they had 200M weekly active users.[1] Anthropic doesn't have anything approaching that.

        [0] https://www.similarweb.com/blog/insights/ai-news/chatgpt-top...

        [1] https://www.reuters.com/technology/artificial-intelligence/o...

        • Eisenstein 3 hours ago

          They also had a very limited rollout at first. Until somewhat recently, Canada and Europe were excluded from the list of places they allowed sign-ups from.

      • pradn 2 hours ago

        I suppose business customers are savvy and will do enough research to find the best cost-performance LLM, whereas consumers are more brand- and habit-oriented.

        I do find myself running into Claude limits with moderate use. It's been so helpful, saving me hours of debugging errors with OSS products. Totally worth $20/mo.

    • quirino 4 hours ago

      Traveling in the US recently, I was surprised to see Claude ads around cities and in airports. It seems like they're investing in marketing there.

      In my country I've never seen anyone mention them at all.

      • gregbarbosa 3 hours ago

        Been traveling more recently, and I've seen those ads in major cities like NYC or San Francisco, but not Miami.

  • swyx 3 hours ago

    my quick notes on Computer Use:

    - "computer use" is basically using Claude's vision + tool use capability in a loop. There's a reference impl but there's no "claude desktop" app that just comes with this OOTB

    - they're basically advertising that they bumped up Claude 3.5's screen vision capability. we discussed the importance of this general computer agent approach with David on our pod https://x.com/swyx/status/1771255525818397122

    - @minimaxir points out questions on cost. Note that the vision use is very sparing; the loop is I/O-constrained: it waits for the tool to run, then takes a screenshot, then loops. For a simple 10-loop task at max resolution, Haiku costs <1 cent, Sonnet 8 cents, Opus 41 cents.

    - beating o1-preview on SWE-bench Verified without extended reasoning, and at 4x cheaper output per token (a lot cheaper in total tokens, since there are no reasoning tokens), is ABSOLUTE mogging

    - New 3.5 Haiku is 68% cheaper than Claude Instant haha
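
    The loop shape, as a minimal sketch (my own rendering of the pattern, not Anthropic's reference code; the screenshot/click tools and the run_tool dispatcher are made up):

      # Minimal vision + tool-use loop. Illustrative only: the tool set and
      # dispatcher are stand-ins, not Anthropic's reference implementation.
      import anthropic

      client = anthropic.Anthropic()
      tools = [
          {"name": "screenshot", "description": "Capture the current screen",
           "input_schema": {"type": "object", "properties": {}}},
          {"name": "click", "description": "Click at pixel (x, y)",
           "input_schema": {"type": "object",
                            "properties": {"x": {"type": "integer"},
                                           "y": {"type": "integer"}},
                            "required": ["x", "y"]}},
      ]

      def run_tool(name, args):
          # Your own dispatcher: actually take the screenshot, do the click,
          # etc. Stubbed here; a real loop returns image/text content blocks.
          return f"executed {name} with {args}"

      messages = [{"role": "user", "content": "Open the settings dialog."}]
      while True:
          resp = client.messages.create(
              model="claude-3-5-sonnet-20241022", max_tokens=1024,
              tools=tools, messages=messages)
          if resp.stop_reason != "tool_use":
              break  # done; the final answer is in resp.content
          messages.append({"role": "assistant", "content": resp.content})
          results = [{"type": "tool_result", "tool_use_id": b.id,
                      "content": run_tool(b.name, b.input)}
                     for b in resp.content if b.type == "tool_use"]
          messages.append({"role": "user", "content": results})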

    references i had to dig a bit to find

    - https://www.anthropic.com/pricing#anthropic-api

    - https://docs.anthropic.com/en/docs/build-with-claude/vision#...

    - loop code https://github.com/anthropics/anthropic-quickstarts/blob/mai...

    - some other screenshots https://x.com/swyx/status/1848751964588585319

    - https://x.com/alexalbert__/status/1848743106063306826

    - model card https://assets.anthropic.com/m/1cd9d098ac3e6467/original/Cla...

    • akshayKMR 2 hours ago

      Haven't used vision models before - can someone comment on whether they are good at "pointing at things"? E.g. given a picture, return the coordinates of the text "foo".

      This is the key to accurate control; it needs to be very precise.

      Maybe Claude's model is trained for this. Also, what about open source vision models? Are any good at "pointing at things" on a typical computer screen?
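
      For anyone who wants to poke at this themselves, here's a rough probe in Python against Anthropic's vision API. The base64 image message shape is the documented one; the prompt, and the hope that the model returns usable pixel coordinates, are my assumptions - accuracy is exactly what's in question.

        import anthropic
        import base64

        client = anthropic.Anthropic()

        # Load a screenshot and base64-encode it for the API
        with open("screenshot.png", "rb") as f:
            image_b64 = base64.standard_b64encode(f.read()).decode()

        response = client.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=256,
            messages=[{
                "role": "user",
                "content": [
                    {"type": "image", "source": {"type": "base64",
                     "media_type": "image/png", "data": image_b64}},
                    {"type": "text", "text": "Return the (x, y) pixel "
                     'coordinates of the text "foo" as JSON.'},
                ],
            }],
        )
        print(response.content[0].text)  # compare against the real position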

    • abrichr 2 hours ago

      See https://github.com/OpenAdaptAI/OpenAdapt for an open source implementation that includes a desktop app OOTB.

  • bhouston 4 hours ago

    Is there an easy way to use Claude as a Co-Pilot in VS Code? If it is better at coding, it would be great to have it integrated.

    • neb_b 4 hours ago

      You can use it in Cursor - called "Cursor Tab"

      IMO Cursor Tab performs much better than Co-Pilot, easily works through things that would cause Co-Pilot to get stuck, you should give it a try

      • codingwagie 3 hours ago

        it's funny that cursor.sh, with < 30 developers, has a better autocomplete model than Microsoft

      • TiredOfLife 3 hours ago

        As I understand Cursor tab autocomplete uses their own model. Only chat has Sonnet and co.

        • neb_b 3 hours ago

          Ah, I thought it used the model selected for your prompts. Either way, it seems to work very well

          • teddarific 3 hours ago

            I originally thought that too but learned yesterday they have their own model. Definitely explains how it's so fast and accurate!

    • Lalabadie 3 hours ago

      For Copilot-like use, Continue is the plugin you're looking for, though I would suggest using a cheaper/faster model to get inline completions.

      For Cursor-like use (giving prompts and letting it create and modify files across the project), Cline – previously Claude Dev – is pretty good.

    • sunaookami 3 hours ago

      Cody by Sourcegraph has unlimited code completions for Claude & a very generous monthly message limit. They don't have this new version I think but they roll these out very fast.

      • sqs 3 hours ago

        Cody (https://cody.dev) will have support for the new Claude 3.5 Sonnet on all tiers (including the free tier) asap. We will reply back here when it's up.

        • sunaookami an hour ago

          Thank you for Cody! I enjoy using it, and the chat is perfect for brainstorming and iterating. Selecting code + asking to edit it makes coding so much fun. Kinda feel like a caveman at work without it :)

    • cptcobalt 3 hours ago

      You can easily use a plugin like https://www.continue.dev/ and configure it to use Claude 3.5 Sonnet.

    • sersi 3 hours ago

      Tabnine includes Claude as an option. I've been using it to compare Claude Sonnet to GPT-4o, and Sonnet is clearly much better.

    • machiaweliczny 3 hours ago

      You can use Cursor (a VS Code fork) with a private Anthropic key

    • BudaDude 4 hours ago

      Cursor uses Claude as its base model.

      There may be extensions for VScode to do it but it will never be allowed in Copilot unless MS and OpenAI have a falling out.

    • mkummer 4 hours ago

      Continue.dev's VS Code extension is fantastic for this

    • TiredOfLife 3 hours ago

      Codeium (cheapest), double.bot and continue.dev (with api key) have Claude in chat.

      https://github.com/cline/cline (with api key) has Claude as agent.

  • 015a 3 hours ago

    Why on god's green earth is it not just called Claude 3.6 Sonnet. Or Claude 4 Sonnet.

    I don't actually care what the answer is. There's no answer that will make it make sense to me.

    • accrual 3 hours ago

      The best answer I've seen so far is that "Claude 3.5 Sonnet" is a brand name rather than a specific version. Not saying I agree, just a way to visualize how the team is coming up with marketing.

  • cwkoss 2 hours ago

    Claude is amazing. The project documents functionality puts it clearly ahead of ChatGPT, and I have found it to be the clear leader in coding assistance over the past few months. Web automation is really exciting.

    I look forward to the brave new future where I can code a webapp without ever touching the code, just testing, giving feedback, and explaining discovered bugs to it and it can push code and tweak infrastructure to accomplish complex software engineering tasks all on its own.

    It's going to be really wild when Claude (or another AI) can make a list of possible bugs and UX changes and just ask the user for approval to greenlight the change.

  • hubraumhugo 2 hours ago

    I've seen quite a few YC startups working on AI-powered RPA, and now it looks like a foundation model player is competing directly in their space. It will be interesting to see whether Anthropic will double down on this or leave it to third-party developers to build commercial applications around it.

    • suchintan 2 hours ago

      We're one of those players (https://github.com/Skyvern-AI/skyvern) and we're definitely watching the space with a lot of excitement

      We thought it was inevitable that OpenAI / Anthropic would veer into this space and start to become competitive with us. We actually expected OpenAI to do it first!

      What this confirms is that there is significant interest in computer/browser automation, and the problem is still unsolved. We will see whether the automation itself is an application-layer problem (our approach) or whether the model needs to be intertwined with the application (Anthropic's approach here).

  • mmooss 3 hours ago

    Of course there's great inefficiency in having Claude control a computer through a human GUI, but it's necessary for many uses right now, given how much of what we do is only easily accessible through human interfaces. If something like this takes off, I expect interfaces for AI software would be published, standardized, etc. Your customers may not buy software that lacks them.

    But what I really want to see is a CLI. Watching their software crank out Bash, vim, Emacs!, etc. - that would be fascinating!

    • modeless 3 hours ago

      I hope specialized interfaces for AI never happen. I want AI to use human interfaces, because I want to be empowered to use the same interfaces as AI in the future. A future where only AI can do things because it uses an incomprehensible special interface and the human interface is broken or non-existent is a dystopia.

      I also want humanoid robots instead of specialized non-humanoid robots for the same reason.

      • accrual 3 hours ago

        Maybe we'll end up with both, kind of like how we have scripting languages for ease of development, but we also can write assembly if we need bare metal access for speed.

    • accrual 3 hours ago

      I agree. I bet models could excel at CLI tasks since the feedback would be immediate and in a language they can readily consume. It's probably much easier for them to handle "command requires 2 arguments and only 1 was provided" than to do image-to-text on an error modal and apply context to figure out what went wrong.
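
      A toy illustration of why the CLI is such a friendly surface - error text can be fed straight back into the next prompt, no screenshots or OCR involved (just a sketch; the model call itself is left out):

        import subprocess

        def run_and_report(cmd: list[str]) -> str:
            """Run a command and return output an LLM can consume directly."""
            proc = subprocess.run(cmd, capture_output=True, text=True)
            if proc.returncode != 0:
                # e.g. "command requires 2 arguments and only 1 was provided"
                return f"exit {proc.returncode}: {proc.stderr.strip()}"
            return proc.stdout

        # The result is plain text, ready to append to the conversation as-is,
        # versus rendering an error dialog and doing image-to-text on it.
        print(run_and_report(["ls", "--bogus-flag"]))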

  • torginus 3 hours ago

    > Claude's current ability to use computers is imperfect. Some actions that people perform effortlessly—scrolling, dragging, zooming—currently present challenges for Claude and we encourage developers to begin exploration with low-risk tasks.

    Nice, but I wonder why didn't they use UI automation/accessibility libraries, that have access to the semantic structure of apps/web pages, as well as accessing documents directly instead of having Excel display them for you.

    • abrichr 2 hours ago

      We use operating system accessibility APIs when available in https://github.com/OpenAdaptAI/OpenAdapt.

    • accrual 3 hours ago

      I wonder if the model has difficulties for the same reason some people do - UI affordances have declined with the flattening, hover-to-see-scrollbar, hamburger-menu-ization of UIs.

      I'd like to see a model trained on a Windows 95/NT style UI - would it have an easier time with each UI element having clearly defined edges, clearly defined click and dragability, unified design language, etc.?

      • torginus 3 hours ago

        What the UI looks like has no effect on, for example, the Windows UI Automation libraries. How the tech works is that it queries the process directly for the semantic description of items: here's a button called 'Delete', here's a list of TODO items, and you get the tree structure directly from the API.

        Even if they are working off of screenshots, I wouldn't be surprised if they trained their models on screenshots annotated by those automation libraries, which told the AI which pixel is what.
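
        For reference, querying that semantic tree takes only a few lines with the pywinauto library's UIA backend (Windows-only; the window title here is a placeholder):

          from pywinauto import Desktop  # pip install pywinauto

          # Attach to a running app by window title via UI Automation
          window = Desktop(backend="uia").window(title="Untitled - Notepad")

          # Dump the tree of buttons, lists, etc. with names and control
          # types - no pixels involved
          window.print_control_identifiers()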

    • cherioo 3 hours ago

      I think this is to make the human/user experience better. If you build on accessibility features, the user needs to know how to use those features. Similar to another comment here, the UX they're shooting for is "click the red button with cancel on it", shipped ASAP.

  • cube2222 4 hours ago

    This looks quite fantastic!

    Nice improvements in scores across the board, e.g.

    > On coding, it [the new Sonnet 3.5] improves performance on SWE-bench Verified from 33.4% to 49.0%, scoring higher than all publicly available models—including reasoning models like OpenAI o1-preview and specialized systems designed for agentic coding.

    I've been using Sonnet 3.5 for most of my AI-assisted coding and I'm already very happy (using it with the Zed editor, I love the "raw" UX of its AI assistant), so any improvements, especially seemingly large ones like this are very welcome!

    I'm still extremely curious about how Sonnet 3.5 itself, and its new iteration are built and differ from the original Sonnet. I wonder if it's in any way based on their previous work[0] which they used to make golden-gate Claude.

    [0]: https://transformer-circuits.pub/2024/scaling-monosemanticit...

  • ford 3 hours ago

    Seems like both:

    - AI Labs will eat some of the wrappers on top of their APIs - even complex ones like this. There are whole startups that are trying to build computer use.

    - AI is fitting _some_ scaling law - the best models are getting better and the "previously-state-of-the-art" models are fractions of what they cost a couple years ago. Though it remains to be seen if it's like Moore's Law or if incremental improvements get harder and harder to make.

    • skybrian 3 hours ago

      It seems a little silly to pretend there’s a scaling “law” without plotting any points or doing a projection. Without the mathiness, we could instead say that new models keep getting better and we don’t know how long that trend will continue.

      • ctoth 3 hours ago

        > It seems a little silly to pretend there’s a scaling “law” without plotting any points or doing a projection.

        Isn't this Kaplan 2020 or Hoffmann 2022?

        • skybrian an hour ago

          Yes, those are scaling laws, but when we see vendors improving their models without increasing model size or training longer, they don't apply. There are apparently other ways to improve performance and we don't know the laws for those.

          (Sometimes people track the learning curve for an industry in other ways, though.)

  • brcmthrowaway 6 minutes ago

    This is bad news for SWEs!

  • freetonik 4 hours ago

    Fascinating. Though I expect people to be concerned about the privacy implications of sending screenshots of the desktop, similar to the backlash Microsoft received over their AI products. Giving a remote service actual control of the mouse and keyboard is a whole other level!

    But I am very excited about this in the context of accessibility. Screen readers and screen control software is hard to develop and hard to learn to use. This sort of “computer use” with AI could open up so many possibilities for users with disabilities.

    • abrichr 2 hours ago

      > I expect people to be concerned about privacy implications of sending screenshots of the desktop

      That's why in https://github.com/OpenAdaptAI/OpenAdapt we've built in several state-of-the-art PII/PHI scrubbers.

    • minimaxir 4 hours ago

      The key difference is that Microsoft Recall wasn't opt-in.

    • sharkjacobs 3 hours ago

      There's such a gulf between choosing to send screenshots to Anthropic and Microsoft recording screenshots without user intent or consent.

    • swalsh 3 hours ago

      I suspect businesses will create VDIs or VMs for this express purpose: one, because it scales better, and two, because you can more easily control what it has access to and isolate those functions.

  • turnsout 2 hours ago

    Wow, there's a whole industry devoted to what they're calling "Computer Use" (Robotic Process Automation, or RPA). I wonder how those folks are viewing this.

  • Centigonal 4 hours ago

    They should just adopt Apple "version numbers:" Claude Sonnet (Late 2024).

  • jatins 4 hours ago

    How does "computer use" work - is this a desktop app they are providing that can do actions on your computer? I didn't see any mention of that in the post.

    • abrichr 2 hours ago

      See https://github.com/OpenAdaptAI/OpenAdapt for an open source alternative that includes a desktop app.

    • thundergolfer 3 hours ago

      It’s a sandbox compute environment, using Gvisor or Firecracker or similar, which exposes a browser environment to the LLM.

      modal.com’s modal.Sandbox can be the compute layer for this. It uses Gvisor under the hood.
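
      A minimal sketch of that compute layer with Modal's SDK (method names are from memory of their docs and may differ by SDK version, so treat the exact signatures as assumptions):

        import modal  # pip install modal; needs a Modal account/token

        # Look up (or create) an app to own the sandbox
        app = modal.App.lookup("sandbox-demo", create_if_missing=True)

        # Each Sandbox is an isolated container (gVisor under the hood)
        sb = modal.Sandbox.create("echo", "hello from the sandbox", app=app)
        sb.wait()                # block until the command exits
        print(sb.stdout.read())  # read output from the isolated process
        sb.terminate()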

      • dtquad 2 hours ago

        Is there any Python/Node.js library to easily spawn secure isolated compute environments, possibly using gvisor or firecracker under the hood?

        This could be useful to build a self-hosted "Computer use" using Ollama and a multimodal model.

    • ZiiS 4 hours ago

      It is a docker container providing a remote desktop you can see; they strongly recommend you also run it inside a VM.

  • submeta an hour ago

    That's too much control for my taste. I don't want Anthropic to see my screen. I'd rather have VS Code with integrated Claude - a version that can see all my dev files in a given folder. I don't need it to run Chrome for me.

    • accrual an hour ago

      It just depends on the task I suppose. One could have a VM dedicated to a model and let it control it freely to accomplish some set of tasks, then wipe/redeploy if it ever breaks.

  • zone411 an hour ago

    It improves to 25.9 on NYT Connections, up from 24.4 for the previous version of Claude 3.5 Sonnet: https://github.com/lechmazur/nyt-connections/.

    • jjice an hour ago

      What a neat benchmark! I'm blown away that o1 absolutely crushes everyone else on this. I guess the chain of thought really hashes out those associations.

  • maestrae 2 hours ago

    Does anybody know how the hell they're combating / gonna combat captchas, Cloudflare blocking, etc.? I remember playing in this space on a toy project and being utterly frustrated by anti-scraping measures. Maybe one good thing that will come out of this AI boom is that companies will become nicer to scrapers? Or maybe they'll just cut sweetheart deals?

  • gerash 2 hours ago

    The "computer use" demos are interesting.

    It's a problem we used to work on, and one many other people have wanted to solve for at least a decade. So it's yet to be seen how well it works outside a demo.

    What was surprising was the slow, human speed of operations. It types into the text boxes at a human speed rather than just dumping the text there. Is it so the human can better monitor what's happening, or is it so it does not trigger captchas?

  • 29decibel 3 hours ago

    I am surprised they use macOS for the demo, as I thought it would be harder to control than Ubuntu. But maybe macOS is also the most predictable/reliable desktop environment? I noticed that they use a virtual environment for the demo; curious how they built that along with Docker - is it leveraging the latest virtualization framework from Apple?

  • bluelightning2k 4 hours ago

    This is what the Rabbit "large action model" pretended to be. Wouldn't be surprised to see them switch to this and claim they were never lying about their capabilities because it works now.

    Pretty cool for sure.

    • swalsh 3 hours ago

      I think Rabbit had the business model wrong though, I don't think automating UI's to order pizza is anywhere near as valuable as automating the app workflows for B2B users.

  • wesleyyue 3 hours ago

    If anyone would like to try the new Sonnet in VSCode: I just updated https://double.bot to the new Sonnet. (disclaimer: I am the cofounder/creator)

    ---

    Some thoughts:

    * Will be interesting to see what we can build in terms of automatic development loops with the new computer use capabilities.

    * I wonder if they are not releasing Opus because it's not done or because they don't have enough inference compute to go around, and Sonnet is close enough to state of the art?

  • vok 3 hours ago

    This "Computer use" demo:

    https://www.youtube.com/watch?v=jqx18KgIzAE

    shows Sonnet 3.5 using the Google web UI in an automated fashion. Do Google's terms really permit this? Will Google permit this when it is happening at scale?

    • accrual 3 hours ago

      I wonder how they could combat it if they choose to disallow AI access through human interfaces. Maybe more captchas, anti-AI design language, or even more tracking of the user's movements?

  • punnerud 2 hours ago

    Cursor AI already has the option to switch to claude-3-5-sonnet-20241022 in the chat box.

    I was about to try to add a custom API. I’m impressed by the speed of that team.

    • neevans 2 hours ago

      It's literally just adding one extra entry to a configuration file.

      • punnerud 2 hours ago

        I know, but similar updates to Copilot would probably take over a year and they designed it in a way that we got the update now without having to reinstall it.

  • myprotegeai 3 hours ago

    How long until "computer use" is tricked into entering PII or PHI into an attackers website?

    • accrual 3 hours ago

      I imagine initial computer use models will be kind of like untrained or unskilled computer users today (for example, some kids and grandparents). They'll do their best but will inevitably be easy to trick into clicking unscrupulous links and UI elements.

      Will an AI model be able to correctly choose between a giant green "DOWNLOAD NOW!" advertisement/virus button and a smaller link to the actual desired file?

      • myprotegeai 3 hours ago

        Exactly. Personalized ads are now prompt injection vectors.

  • abraxas 3 hours ago

    Hopefully the coding improvements are meaningful, because I find that as a coding assistant o1-preview beats it (at least the Claude 3.5 that was available yesterday). But I like Claude's demeanor more (I know this sounds crazy, but it matters a bit to me).

  • msoad 4 hours ago

    I skimmed through the computer use code. It's possible to build this with other AI providers too. For instance, you can ask the ChatGPT API to call functions for click and scroll and type with specific parameters, and execute them using the OS's APIs (a11y APIs, usually).

    Did I miss something? Did they have to make changes to the model for this?
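
    A sketch of that generic approach, using OpenAI function calling plus the pyautogui library for the OS side (the tool schema is illustrative, not Anthropic's; error handling omitted):

      import json
      import pyautogui  # pip install pyautogui; drives the real mouse/keyboard
      from openai import OpenAI

      client = OpenAI()
      tools = [{
          "type": "function",
          "function": {
              "name": "click",
              "description": "Click at the given screen coordinates",
              "parameters": {
                  "type": "object",
                  "properties": {"x": {"type": "integer"},
                                 "y": {"type": "integer"}},
                  "required": ["x", "y"],
              },
          },
      }]

      response = client.chat.completions.create(
          model="gpt-4o",
          messages=[{"role": "user",
                     "content": "Click the Submit button at (200, 300)"}],
          tools=tools,
      )

      # Execute whatever clicks the model asked for via the OS
      for call in response.choices[0].message.tool_calls or []:
          if call.function.name == "click":
              args = json.loads(call.function.arguments)
              pyautogui.click(args["x"], args["y"])

    What the loop alone doesn't give you is whatever extra training Anthropic did to make the model good at picking those coordinates from screenshots, which may be the real change to the model.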

    • accrual 3 hours ago

      > execute them using OS's APIs (A11y APIs usually)

      I wonder if we'll end up with a new set of AI APIs in Windows, macOS, and Linux in the future. Maybe an easier way for them to iterate through windows and the UI elements available in each.

  • Tepix 4 hours ago

    Interesting stuff, I look forward to future developments.

    A comment about the video: Sam Runger talks wayyy too fast, in particular at the beginning.

  • Hizonner 4 hours ago

    Can this solve CAPTCHAs for me? It's starting to get to the point where limited biological brains can't do them.

  • mclau156 2 hours ago

    Did they just invent a new World of Warcraft or RuneScape bot?

  • lairv 3 hours ago

    Off topic, but YouTube doesn't allow me to view the embedded video, showing a "Sign in to confirm you're not a bot" message. I need to open a dedicated YouTube tab to watch it.

    The barrier to scraping YouTube has increased a lot recently; I can barely use yt-dlp anymore.

    • ALittleLight 3 hours ago

      That's funny. I was recently scraping tens of thousands of YouTube videos with yt-dlp. I would encounter throttling of some kind where yt-dlp stopped working, but I'd just spin a new VPS up and the throttled VPS down when that happened. The throttling effort cost me ~1 hour of writing the logic to handle it.

      I say that's funny because my guess would be they want to block larger scale scraping efforts like mine, but completely failed, while their attempt at throttling puts captchas in front of legitimate users.

  • cynicalpeace 4 hours ago

    This bolsters my opinion that OpenAI is falling rapidly behind. Presumably due to Sam's political machinations rather than a hard-driving technical vision; at least that's what it seems like from the outside looking in.

    Computer use seems it might be good for e2e tests.

  • jerrygoyal an hour ago

    Does anyone know what some use cases for "computer use" are?

  • bergutman 3 hours ago

    They need to get the price of 3.5 Haiku down. It's about 2x the price of 4o-mini.

    • quotemstr 3 hours ago

      Still super cheap

      • caeril 4 minutes ago

        Precisely this.

        Aider (with the older Claude models) is already a semi-competent junior developer, and it will produce 1kloc of decent code for the equivalent of 50 cents in API costs.

        Sure, you still have to review the commits, but you have to do that anyway with human junior developers.

        Anthropic could charge 20x more and we would still be happy to pay it.

  • efields 39 minutes ago

    Captchas are toast.

    • edm0nd 35 minutes ago

      they have been toast for at least a decade, if not two. With OCR and captcha-solving services like DeathByCaptcha or AntiCaptcha, where it costs ~$2.99 per 1k successfully solved captchas, they are a non-issue and take about 5-10 lines of code added to your script to implement a solution.

  • iknownthing 2 hours ago

    Can Claude create and run a CI/CD pipeline now from a prompt?

  • esseti 2 hours ago

    I checked the docs but couldn't find it: does Claude have an API like the GPT Assistants API, with the ability to give it a set of documents to work with?

    It seems that you can only send single messages, without the ability to "learn" from predefined documents.

  • Alifatisk 4 hours ago

    > Claude 3.5 Haiku matches the performance of Claude 3 Opus

    Oh wow!

  • vivekkairi an hour ago

    Aider benchmarks for the new Claude 3.5 are impressive: from 77.4% to 83.5%, beating o1-preview.

  • myprotegeai 4 hours ago

    We are approaching FSD for the computer, with all of the lofty promises, and all of the horrible accidents.

  • robertkoss 4 hours ago

    Does anyone know how I could check whether my Claude Sonnet version that I am using in the UI has been updated already?

    • lambdaba 4 hours ago

      search for "20241022" in network tab in devtools, confirmed for me

  • crazystar 4 hours ago

    Looks like it just takes a screenshot and can't scroll, so it might miss things.

    Claude 3.5 Haiku will be released later this month.

    • freetonik 4 hours ago

      It can actually scroll.

      • crazystar 4 hours ago

        > While we expect this capability to improve rapidly in the coming months, Claude's current ability to use computers is imperfect. Some actions that people perform effortlessly—scrolling, dragging, zooming—currently present challenges for Claude and we encourage developers to begin exploration with low-risk tasks.

        • artur_makly an hour ago

          Can someone please try this on macOS and just 100% verify whether this puppy can scroll or not? Thanks

  • netcraft 4 hours ago

    Since they didn't rev the version, does this mean that if we were using 3.5 today, it's just automatically using the new version? That doesn't seem great from a change management perspective.

    Though I am looking forward to using the new one in cursor.ai

  • baq 3 hours ago

    Scary stuff.

    'Hey Claude 3.5 New, pretend I'm a CEO of a big company and need to lay off 20% people, make me a spreadsheet and send it to HR. Oh make sure to not fire the HR department'

    c.f. IBM 1979.

  • netcraft 4 hours ago

    I'm unclear: is Haiku supposed to be similar to 4o-mini in use case/cost/performance? If not, do they have an analog?

    • machiaweliczny 3 hours ago

      Probably better than 4o-mini; 4o-mini isn't great in my testing. It loses focus after 100 lines of text.

      • usaar333 3 hours ago

        It's roughly tied in benchmarks

  • ramesh31 3 hours ago

    Claude is absurdly better at coding tasks than OpenAI. Like it's not even close. Particularly when it comes to hallucinations. Prompt for prompt, I see Claude being rock solid and returning fully executable code, with all the correct imports, while OpenAI struggles to even complete the task and will make up nonexistent libraries/APIs out of whole cloth.

    • rubslopes 2 hours ago

      I've been using a lot of o1-mini and having a good experience with it.

      Yesterday I decided to try Sonnet 3.5. I asked for a simple but efficient script to perform fuzzy matching on strings with Python. Strangely, it didn't even mention existing fast libraries, like FuzzyWuzzy and Rapidfuzz. It went on to create everything from scratch using standard libraries. I don't know, I thought this was something basic for it to stumble on.

      • ssijak 2 hours ago

        Just ask it to use the libraries you want; you can't expect it to magically read your mind. You need to guide every LLM toward your must-haves and nice-to-haves.

    • codingwagie 3 hours ago

      Yeah, Sonnet is noticeably better. To the point that OpenAI is almost unusable: too many small errors.

  • dtquad 2 hours ago

    Now I am really curious how to programmatically create a sandboxed compute environment to do a self-hosted "Computer use" and see how well other models, including self-hosted Ollama models, can do this.

  • veggieWHITES 4 hours ago

    While I was initially impressed with its context window, I got so sick of fighting with Claude about what it was allowed to answer that I quit my subscription after 3 months.

    Their whole stance on policing AI models is commendable but ultimately renders their tools useless.

    It actually started arguing with me about whether it was allowed to help implement a GitHub repository's code, as it might be copyrighted... it was MIT-licensed open source from Google :/

  • TacticalCoder 3 hours ago

    One suggestion: run the following prompt through an LLM:

        The combination of the words "computer use" is highly confusing. It's also "Yoda speak". For example it's hard for humans to parse the sentences *"Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku"*, *"Computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku "* (it literally relies on the comma to make any sense) and *"Computer use for automated interaction"* (in the youtube vid's title: this one is just broken english). Please suggest terms that are not confusing for a new ability allowing an AI to control a computer as if it was a human.

  • postalcoder 4 hours ago

    and I was just planning to go to sleep…

    • accrual 3 hours ago

      I discovered Mindcraft recently and stayed up a few hours too late trying to convince my local model to play Minecraft. Seems like every time a new capability becomes available, I can't wait to experiment with it for hours, even at the cost of sleep.

  • m3kw9 4 hours ago

    I suspect they are gonna need some local offload capabilities for Computer Use; the repeated screen reading can definitely be done locally on modern machines. Otherwise the cost may be impractical.

    • abrichr 2 hours ago

      See https://github.com/OpenAdaptAI/OpenAdapt for an open source alternative that runs segmentation locally.

    • accrual 3 hours ago

      Maybe we need some agent running on the PC to offload some of these tasks. It could scrape the display at 30 or 60 Hz and produce a textual version of what's going on for the model to consume.
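
      A local capture loop along those lines is cheap to prototype, e.g. with the mss screen-capture library (the hard part, turning pixels into text, is left as a comment):

        import time
        import mss  # pip install mss; fast cross-platform screen capture

        with mss.mss() as sct:
            while True:
                shot = sct.grab(sct.monitors[1])  # primary display
                # ...run local OCR / UI parsing on `shot` here, sending only
                # the resulting text upstream to the model...
                time.sleep(1 / 30)  # ~30 Hz capture budget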

  • bbor 4 hours ago

    Ok I know that we're in the post-nerd phase of computers, but version numbers are there for a reason. 3.6, please? 3.5.1??

  • g9yuayon 3 hours ago

    Is it just me who feels that Anthropic has been innovating faster than OpenAI in the past year?

  • jampekka 2 hours ago

    It's quite sad that application interoperability requires parsing bitmaps instead of exchanging structured information. Feels like a devastating failure in how we do computing.

    • abrichr 2 hours ago

      See https://github.com/OpenAdaptAI/OpenAdapt for an open source alternative that includes operating system accessibility API data and DOM information (along with bitmaps) where available.

      We are also planning on extracting runtime information using COM/AppleScript: https://github.com/OpenAdaptAI/OpenAdapt/issues/873

      • accrual an hour ago

        It's super cool to see something like this already exists! I wonder if one day something adjacent will become a standard part of major desktop OSs, like a dedicated "AI API" to allow models to connect to the OS, browse the windows and available actions, issue commands, etc. and remove the bitmap parsing altogether as this appears to do.

    • HarHarVeryFunny an hour ago

      It's really more of a reflection of where we are in the timeline of computing, with humans having been the major users of apps and web sites up until now. Obviously we've had screen scraping and terminal emulation access to legacy apps for a while, and this is a continuation of that.

      There have been, and continue to be, computer-centric ways to communicate with applications though, such as Windows COM/OLE, WinRT and Linux D-Bus, etc. Still, emulating human interaction does provide a fairly universal capability.

    • chillee 2 hours ago

      It's very much in the "worse is better" camp.

    • smartician an hour ago

      If the goal is to emulate human behavior, I'd say there is a case to be made to build for the same interface, and not rely on separate APIs that may or may not reflect the same information as a user sees.

    • janalsncm 2 hours ago

      Apps are built for people rather than computers.

    • rfoo 39 minutes ago

      It's quite sad that application interoperability requires parsing text passed via pipes instead of exchanging structured information.

      Like others said, worse is better.

    • SuaveSteve 2 hours ago

      The people have chosen apps over protocols.

  • HanClinto 4 hours ago

    Why not rev the numbers? "3.5" vs. "3.5 New" feels weird -- is there a particular reason why Anthropic doesn't want to call this 3.6 (or even 3.5.1)?

    • abeppu 3 hours ago

      The confusing choice they seem to have made is that "Claude 3.5 Sonnet" is a name, rather than 3.5 being a version. In their view, the model "version" is now `claude-3-5-sonnet-20241022` (and was previously `claude-3-5-sonnet-20240620`).

      https://docs.anthropic.com/en/docs/about-claude/models

      • dragonwriter 3 hours ago

        OpenAI does exactly the same thing, by the way; the named models also have dated versions. For instance, their current models include (only listing versions with more than one dated version for the same "name" version):

          gpt-4o-2024-08-06 
          gpt-4o-2024-05-13
          gpt-4-0125-preview
          gpt-4-1106-preview
          gpt-4-0613
          gpt-4-0314
          gpt-3.5-turbo-0125
          gpt-3.5-turbo-1106
        • coder543 2 hours ago

          On the one hand, if OpenAI makes a bad choice, it’s still a bad choice to copy it.

          On the other hand, OpenAI has moved to a naming convention where they seem to use a name for the model: “GPT-4”, “GPT-4 Turbo”, “GPT-4o”, “GPT-4o mini”. Separately, they use date strings to represent the specific release of that named model. Whereas Anthropic had a name: “Claude Sonnet”, and what appeared to be an incrementing version number: “3”, then “3.5”, which set the expectation that this is how they were going to represent the specific versions.

          Now, Anthropic is jamming two version strings on the same product, and I consider that a bad choice. It doesn’t mean I think OpenAI’s approach is great either, but I think there are nuances that say they’re not doing exactly the same thing. I think they’re both confusing, but Anthropic had a better naming scheme, and now it is worse for no reason.

          • dragonwriter 2 hours ago

            > Now, Anthropic is jamming two version strings on the same product, and I consider that a bad choice. It doesn’t mean I think OpenAI’s approach is great either, but I think there are nuances that say they’re not doing exactly the same thing

            Anthropic has always had dated versions as well as the other components, and they are, in fact, doing exactly the same thing, except that OpenAI has a base model in each generation with no suffix before the date specifier (what I call the "Model Class" on the table below), and OpenAI is inconsistent in their date formats, see:

              Major Family  Generation    Model Class Date
  claude        3.5           sonnet      20241022
              claude        3.0           opus        20240229
              gpt           4             o           2024-08-06
              gpt           4             o-mini      2024-07-18
              gpt           4             -           0613
              gpt           3.5           turbo       0125
            • coder543 2 hours ago

              But did they ever have more than one release of Claude 3 Sonnet? Or any other model prior to today?

              As far as I can tell, the answer is “no”. If true, then the fact that they previously had date strings would be a purely academic footnote to what I was saying, not actually relevant or meaningful.

    • HarHarVeryFunny 3 hours ago

      Well, by calling it 3.5, they are telling you that this is NOT the next-gen 4.0 that they presumably have in the works, and also not downplaying it by just calling it 3.6 (anyway, they are not advancing versions by 0.1 increments; it seems 3.5 was just meant to convey "halfway from 3.0 to 4.0"). Maybe the architecture is unchanged, and this just reflects more pre- and/or post-training?

      Also, they still haven't released 3.5 Opus yet, but perhaps 3.5 Haiku is a distillation of that, indicating that it is close.

      From a competitive POV, it makes sense that they respond to OpenAI's 4o and o1 without bumping the version to Claude 4.0, which presumably is what they will call their competitor to GPT-5, and probably not release until GPT-5 is out.

      I'm a fan of Anthropic, and not of OpenAI, and I like the versioning and competitive comparisons. Sonnet 3.5 still being the best coder, better than o1, has to hurt, and a highly performant, cheap Haiku 3.5 will hit OpenAI in the wallet.

    • nisten 4 hours ago

      For a company selling intelligence, that's a pretty stupid way of labelling a new product.

      • riffraff 4 hours ago

        "computer use" is also as bad a marketing choice as possible for something that actually seems pretty cool.

        • accrual 3 hours ago

          I'm not sure what a better term is. It's kind of understated to me. An AI that can "use a computer" is a simple straightforward sentence but with wild implications.

        • pglevy 3 hours ago

          I had no idea what the headline meant before reading the article. I wasn't even sure how to pronounce "use." (Maybe a typo?) I think something like "Claude adds Keyboard & Mouse Control" would be clearer.

          • barrell an hour ago

            I read the headline 5-10 times trying to make sense of it before even clicking on the link.

            Native English speaker, just used the other “use” many times

        • ok_dad 3 hours ago

          It’s simple and easy to understand what it is, that’s good marketing to my ears.

        • swyx 3 hours ago

          it makes sense in contrast to "tool use". Basically it's fly-by-vision versus fly-by-instruments, the same dilemma you have in self-driving cars

      • quantadev 2 hours ago

        Speaking of "intelligence", isn't it ironic how everyone's only two words they use to describe AI is "crazy" and "insane". Every other post on Twitter is like: This new feature is insane! This new model is crazy! People have gotten addicted to those words almost as badly as their other new addiction: the word "banger".

        • fragmede an hour ago

          Well yeah. This new model is mentally unwell! and This model is a total sociopath! didn't test as well in focus groups.

      • dragonwriter 3 hours ago

        Every major AI vendor seems to do it with hosted models; within "named" major versions of hosted models, there are also "dated" minor versions. OpenAI does it. Google does it (although for Google Gemini models, the dated instead of numbered minor versions seem to be only for experimental versions like gemini-1.5-pro-exp-0827; stable minor versions get additional numbers like gemini-1.5-pro-002).

      • dartos 4 hours ago

        It worked for Nintendo.

        The 3ds and “new 3ds” were both big sellers.

        • Zambyte 3 hours ago

          3ds doesn't have a version number to bump. Claude 3.5 does.

          • dartos 2 hours ago

            The 3 was the version number ;)

            DS and DS Lite were version 1

            DSi was 2 (as there was DSi software that didn't run on the DS or DS Lite)

            And the 3DS was version 3.

            • kurisufag 2 hours ago

              there /was/ a 2DS, though, and it came after the 3DS.

          • cooper_ganglia 3 hours ago

            I hear the Nintendo 4DS was very popular with the higher dimensional beings!

          • r00fus 3 hours ago

            You can always add a version number (e.g. 3DS2) or a changed moniker (3DS+).

    • therealmarv 4 hours ago

      exactly my thought too, go up with the version number! Some negative examples: Claude Sonnet 3.5 for Workstations, Claude Sonnet 3.5 XP, Claude Sonnet 3.5 Max Pro, Claude Sonnet 3.5 Elite, Claude Sonnet 3.5 Ultra

    • afro88 an hour ago

      Just guessing here, but I think the name "sonnet" is the architecture, the number is the training structure / method, and the model date (not shown) is the data? So presumably with just better data they improved things significantly? Again, just a guess.

    • oezi 3 hours ago

      Let's just say that the LLM companies still are learning how to do versioning in a customer friendly way.

    • KaoruAoiShiho 4 hours ago

      My guess is they didn't actually change the model; that's what the unchanged version number is conveying. They did some engineering around it to make it respond better, perhaps more resources or different prompts. Same cutoff date too.

    • GaggiX 4 hours ago

      Similar to OpenAI when they update their current models they just update the date, for example this new Claude 3.5 Sonnet is "claude-3-5-sonnet-20241022".

    • m3kw9 4 hours ago

      Maybe they noticed 3.5 Sonnet has become a brand and pivoted it away from being a version

      • sureIy 4 hours ago

        Is it OS X all over again?

    • pella 3 hours ago

      claude-3-5-sonnet-20241022

      • moffkalast 3 hours ago

        claude-3-5-sonnet-20241022-final-final-2

    • bloedsinnig 4 hours ago

      Because it's a finetune of 3.5 optimized for the use case of computer use.

      It's actually accurate, and it's not a 3.6.

      • therealmarv 4 hours ago

        So 3.5.1 ?

        • dotancohen 3 hours ago

          I think that was the last version number for KDE 3.

          Stands out for me as I once replaced a 2.3 Turbo in a TurboCoupe with a 351 Windsor ))

        • r00fus 3 hours ago

          For networks

      • usaar333 4 hours ago

        I don't think that's correct. This looks like a new model. Significant jump in math and gpqa scores.

        • diggan 3 hours ago

          If the architecture is the same, and the training scripts/data is the same, but the training yielded slightly different weights (but still same model architecture), is it a new model or just a iteration on the same model?

          What if it isn't even a re-training from scratch but a fine-tune of an existing model/weights release, is it a new version then? Would be more like a iteration, or even a fork I suppose.

          • cooper_ganglia 3 hours ago

            Yes, it's a new model, but not a Claude 4.

            It's the same, but a bit different; Claude 3.6 makes sense to me.

        • HarHarVeryFunny 2 hours ago

          Could be just additional post-training (aka finetuning) for coding/etc.

  • freediver 2 hours ago

    Both new Sonnet and gpt-4o still fail at a simple:

    "How many w's are in strawberry?"

    gpt-4o: There are 2 "w's" in "strawberry."

    Claude 3.5 Sonnet (new): Let me count the w's in "strawberry": 0 w's.

    (same question with 'r' succeeds)

    What is artificial about the current gen of "artificial intelligence" is the way training (predict the next token) and benchmarking (overfitting) are done. Perhaps a fresh approach is needed to achieve a true next step.

    • wild_egg 2 hours ago

      It's bad at directly working on classical computer problems like math and data processing. But you can do it indirectly by having it write a program that produces the correct result. Interestingly, I didn't even have to have it run the program, although usually you would:

          write a tool which counts the number of w's in "strawberry" and return the result
      
      Which produced:

          Here's a simple Python function that counts the number of 'w's in the word "strawberry" and returns the result:
      
          ```python
          def count_w_in_strawberry():
              word = "strawberry"
              count = word.count('w')
              return count
      
          # Call the function and print the result
          result = count_w_in_strawberry()
          print(f"The number of 'w's in 'strawberry' is: {result}")
          ```
      
          This tool does the following:
      
          1. We define a function called `count_w_in_strawberry()`.
          2. Inside the function, we assign the word "strawberry" to a variable called `word`.
          3. We use the `count()` method on the `word` string to count the occurrences of 'w'.
          4. The function returns the count.
          5. Outside the function, we call `count_w_in_strawberry()` and store the result in the `result` variable.
          6. Finally, we print the result.
      
          When you run this code, it will output:
      
          ```
          The number of 'w's in 'strawberry' is: 1
          ```
      
          This tool correctly identifies that there is one 'w' in the word "strawberry".
      • jandrese an hour ago

        I always thought the halting problem was an academic exercise, but here we see a potential practical use case. Actually, letting the LLM write and automatically execute code seems pretty dangerous. How good is the sandbox? Can I trick the LLM into writing a reverse shell and opening it up for me?

    • redox99 2 hours ago

      There's always that one tokenization error comment

    • ssijak 2 hours ago

      Can we stop with these useless strawberry examples?

    • fassssst 2 hours ago

      They are trained on tokens, not characters.
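
      You can see those token boundaries yourself with OpenAI's tiktoken library (the exact split varies by model, so run it rather than trusting anyone's memory of the output):

        import tiktoken  # pip install tiktoken

        enc = tiktoken.encoding_for_model("gpt-4o")
        tokens = enc.encode("strawberry")

        # Decode each token separately to see the chunks the model "sees";
        # individual letters like 'w' need not align with token boundaries.
        print([enc.decode([t]) for t in tokens])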