gpt-oss:20b is a top-ten model on MMLU (right behind Gemini-2.5-Pro), and I just ran it locally on my MacBook Air M3 from last year.
I've been experimenting with a lot of local models, both on my laptop and on my phone (Pixel 9 Pro), and I figured we'd be here in a year or two.
But no, we're here today. A basically frontier model, running for the cost of electricity (free with a rounding error) on my laptop. No $200/month subscription, no lakes being drained, etc.
I tried 20b locally and it couldn't reason its way out of a basic river crossing puzzle with the labels changed. That is not anywhere near SOTA. In fact it's worse than many local models that can do it, including e.g. QwQ-32b.
Well, river crossings are one type of problem. My real-world problem is proofing and minor editing of text. A version installed on my portable would be great.
Have you tried Google's Gemma-3n-E4B-IT in their AI Edge Gallery app? It's the first local model that's really blown me away with its power-to-speed ratio on a mobile device.
I tried the "two US presidents having the same parents" one, and while it understood the intent, it got caught up in being adamant that Joe Biden won the election in 2024. Anything I did to try and tell it otherwise was dismissed as false, and it stated quite definitively that I need to do proper research with legitimate sources.
I mean, I would hardly blame the specific model; Anthropic has a specific mention in their system prompts about Trump winning. For some reason LLMs get confused by this one.
It is painful to read, I know, but if you make it towards the end it admits that its knowledge cutoff was prior to the election and that it doesn't know who won. Yet, even then, it still remains adamant that Biden won.
I’m still trying to understand what is the biggest group of people that uses local AI (or will)? Students who don’t want to pay but somehow have the hardware? Devs who are price conscious and want free agentic coding?
Local, in my experience, can’t even pull data from an image without hallucinating (Qwen 2.5 VL in that example). Hopefully local/small models keep getting better and devices get better at running bigger ones.
It feels like we do it because we can more than because it makes sense, which I am all for! I just wonder if I’m missing some kind of major use case all around me that justifies chaining together a bunch of Mac Studios or buying a really great graphics card. Tools like exo are cool and the idea of distributed compute is neat, but what edge cases truly need it so badly that it’s worth all the effort?
Privacy, both personal and for corporate data protection, is a major reason. Unlimited usage, offline use, supporting open source, not worrying about a good model being taken down/discontinued or changed, and the freedom to use uncensored models or model fine-tunes are other benefits (though this OpenAI model is super-censored - “safe”).
I don’t have much experience with local vision models, but for text questions the latest local models are quite good. I’ve been using Qwen 3 Coder 30B-A3B a lot to analyze code locally and it has been great. While not as good as the latest big cloud models, it’s roughly on par with SOTA cloud models from late last year in my usage. I also run Qwen 3 235B-A22B 2507 Instruct on my home server, and it’s great, roughly on par with Claude 4 Sonnet in my usage (but slow of course running on my DDR4-equipped server with no GPU).
Add big law to the list as well. There are at least a few firms here that I am just personally aware of running their models locally. In reality, I bet there are way more.
A ton of EMR systems are cloud-hosted these days. There’s already patient data for probably a billion humans in the various hyperscalers.
Totally understand that approaches vary but beyond EMR there’s work to augment radiologists with computer vision to better diagnose, all sorts of cloudy things.
It’s here. It’s growing. Perhaps in your jurisdiction it’s prohibited? If so I wonder for how long.
I do think devs are one of the genuine user groups for local models going into the future. No price hikes or random caps dropped in the middle of the night, and in many instances I think local agentic coding is going to be faster than the cloud. It’s a great use case.
It's striking how much of the AI conversation focuses on new use cases, while overlooking one of the most serious non-financial costs: privacy.
I try to be mindful of what I share with ChatGPT, but even then, asking it to describe my family produced a response that was unsettling in its accuracy and depth.
Worse, after attempting to delete all chats and disable memory, I noticed that some information still seemed to persist. That left me deeply concerned—not just about this moment, but about where things are headed.
The real question isn't just "what can AI do?"—it's "who is keeping the record of what it does?" And just as importantly: "who watches the watcher?" If the answer is "no one," then maybe we shouldn't have a watcher at all.
> Worse, after attempting to delete all chats and disable memory, I noticed that some information still seemed to persist.
I'm fairly sure "seemed" is the key word here. LLMs are excellent at making things up - they rarely say "I don't know" and instead generate the most probable guess. People also famously overestimate their own uniqueness. Most likely, you accidentally recreated a kind of Barnum effect for yourself.
> I try to be mindful of what I share with ChatGPT, but even then, asking it to describe my family produced a response that was unsettling in its accuracy and depth.
> Worse, after attempting to delete all chats and disable memory, I noticed that some information still seemed to persist.
Maybe I'm missing something, but why wouldn't that be expected? The chat history isn't their only source of information - these models are trained on scraped public data. Unless there's zero information about you and your family on the public internet (in which case - bravo!), I would expect even a "fresh" LLM to have some information even without you giving it any.
Why not turn the question around. All other things being equal, who would prefer to use a rate limited and/or for-pay service if you could obtain at least comparable quality locally for free with no limitations, no privacy concerns, no censorship (beyond that baked into the weights you choose to use), and no net access required?
It's a pretty bad deal. So it must be that all other things aren't equal, and I suppose the big one is hardware. But neural net based systems always have a point of sharply diminishing returns, which we seem to have unambiguously hit with LLMs already, while the price of hardware is constantly decreasing and its quality increasing. So as we go further into the future, the practicality of running locally will only increase.
Healthcare organizations that can't (easily) send data over the wire while remaining in compliance
Organizations operating in high stakes environments
Organizations with restrictive IT policies
To name just a few -- well, the first two are special cases of the last one
RE your hallucination concerns: the issue is overly broad ambitions. Local LLMs are not general purpose -- if what you want is local ChatGPT, you will have a bad time. You should have a highly focused use case, like "classify this free text as A or B" or "clean this up to conform to this standard": this is the sweet spot for a local model
Aren’t there HIPAA-compliant clouds? I thought Azure had an offer to that effect, and I imagine that’s the type of place they’re doing a lot of things now. I’ve landed roughly where you have though: text stuff is fine, but don’t ask it to interact with files/data you can’t copy-paste into the box. If a user doesn’t care to go through the trouble to preserve privacy, and I think it’s fair to say a lot of people claim to care but their behavior doesn’t change, then I just don’t see it being a thing people bother with. Maybe something to use offline while on a plane? But even then, I guess United will have Starlink soon, so plane connectivity is gonna get better.
It's less that the clouds are compliant and more that risk management is paranoid. I used to do AWS consulting, and it wouldn't matter if you could show that some AWS service had attestations out the wazoo or that you could even use GovCloud -- some folks just wouldn't update priors.
That access is over a limited API and usually under heavy restrictions on the healthcare org side (e.g., only use a dedicated machine, locked-up software, tracked responses, and so on).
Running a local model is often much easier: if you already have the data on a machine and can run a model without touching the network, you can often run it without any new approvals.
If you're building any kind of product/service that uses AI/LLMs, the answer is the same as why any company would want to run any other kind of OSS infra/service instead of relying on some closed proprietary vendor API.
> I’m still trying to understand what is the biggest group of people that uses local AI (or will)?
Creatives? I am surprised no one's mentioned this yet:
I tried to help a couple of friends with better copy for their websites, and quickly realized that they were using inventive phrases to explain their work, phrases that they would not want competitors to get wind of and benefit from; phrases that associate closely with their personal brand.
Ultimately, I felt uncomfortable presenting the cloud AIs with their text. Sometimes I feel this way even with my own Substack posts, where I occasionally coin a phrase I am proud of. But with local AI? Cool...
> I’m still trying to understand what is the biggest group of people that uses local AI (or will)?
Well, the model makers and device manufacturers of course!
While your Apple, Samsung, and Googles of the world will be unlikely to use OSS models locally (maybe Samsung?), they all have really big incentives to run models locally for a variety of reasons.
Latency, privacy (Apple), cost to run these models on behalf of consumers, etc.
This is why Google started shipping 16GB as the _lowest_ amount of RAM you can get on your Pixel 9. That was a clear flag that they're going to be running more and more models locally on your device.
As mentioned, while it seems unlikely that US-based model makers or device manufacturers will use OSS models, they'll certainly be targeting local models heavily on consumer devices in the near future.
Apple's framework of local first, then escalate to ChatGPT if the query is complex will be the dominant pattern imo.
I’m highly interested in local models for privacy reasons. In particular, I want to give an LLM access to my years of personal notes and emails, and answer questions with references to those. As a researcher, there’s lots of unpublished stuff in there that I sometimes either forget or struggle to find again due to searching for the wrong keywords, and a local LLM could help with that.
I pay for ChatGPT and use it frequently, but I wouldn’t trust uploading all that data to them even if they let me. I’ve so far been playing around with Ollama for local use.
I'm excited to do just dumb and irresponsible things with a local model, like "iterate through every single email in my 20-year-old gmail account and apply label X if Y applies" and not have a surprise bill.
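This is exactly the kind of thing that ends up being a short script against a local OpenAI-compatible endpoint. A rough sketch of the idea (the mail-fetching/labeling helpers are hypothetical stubs standing in for the real Gmail/IMAP calls, and the "Y" criterion is made up):

```python
from openai import OpenAI

# Local OpenAI-compatible server (Ollama's default port here; adjust for LM Studio etc.)
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

def fetch_emails():
    # Hypothetical stand-in for real Gmail/IMAP pagination.
    return [{"id": "42", "body": "Your invoice for July is attached."}]

def apply_label(email_id: str, label: str) -> None:
    # Hypothetical stand-in for the real modify-labels call.
    print(f"label {label} -> email {email_id}")

def matches_y(body: str) -> bool:
    # Ask the local model for a strict yes/no classification of criterion "Y".
    resp = client.chat.completions.create(
        model="gpt-oss:20b",
        messages=[
            {"role": "system", "content": "Answer strictly YES or NO."},
            {"role": "user", "content": f"Is this email a receipt or invoice?\n\n{body}"},
        ],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

for email in fetch_emails():
    if matches_y(email["body"]):
        apply_label(email["id"], "X")
```

Slow, dumb, and completely free to rerun as many times as it takes to get the prompt right.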
~80% of the basic questions I ask of LLMs[0] work just fine locally, and I’m happy to ask twice for the other 20% of queries for the sake of keeping those queries completely private.
[0] Think queries I’d previously have had to put through a search engine and check multiple results for a one word/sentence answer.
One of my favorite use cases includes simple tasks like generating effective mock/masked data from real data. Then passing the mock data worry-free to the big three (or wherever.)
There’s also a huge opportunity space for serving clients with very sensitive data. Health, legal, and government come to mind immediately. These local models are only going to get more capable of handling their use cases. They already are, really.
A laptop from the past few years without a discrete GPU can run a gemma/llama model at practical speeds (depending on the task) if the model is, in my experience, under 4GB.
For practical RAG processes of narrow scope, even a minimal amount of scaffolding gets you a very usable speed for automating tasks, especially as the last-mile/edge-device portion of a more complex process with better models upstream. Classification tasks, reasonably intelligent decisions between traditional workflow steps, other use cases -- all of them extremely valuable in enterprise, being built and deployed right now.
Some app devs use local models on local environments with LLM APIs to get up and running fast, then when the app deploys it switches to the big online models via environment vars.
In large companies this can save quite a bit of money.
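Since most local servers (Ollama, LM Studio, llama.cpp's llama-server) expose an OpenAI-compatible API, the switch really can be a couple of environment variables. A minimal sketch; the variable names, defaults and model tag below are illustrative, not anyone's actual setup:

```python
import os
from openai import OpenAI

# Point the same client at a local server during development
# and at a hosted API in production, purely via env vars.
client = OpenAI(
    base_url=os.environ.get("LLM_BASE_URL", "http://localhost:11434/v1"),  # local Ollama by default
    api_key=os.environ.get("LLM_API_KEY", "not-needed-locally"),
)

resp = client.chat.completions.create(
    model=os.environ.get("LLM_MODEL", "qwen3:4b"),
    messages=[{"role": "user", "content": "Summarize this bug report: ..."}],
)
print(resp.choices[0].message.content)
```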
Privacy laws. Processing government paperwork with LLMs, for example. There are a lot of OCR tools that can't be used, and the ones that comply are more expensive than, say, GPT-4.1, and lower quality.
Local micro models are both fast and cheap. We tuned small models on our data set and if the small model thinks content is a certain way, we escalate to the LLM.
This gives us really good recall at really low cloud cost and latency.
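The routing logic is tiny. A sketch of the escalation pattern with made-up names and thresholds (the keyword check stands in for the tuned micro model):

```python
def classify_locally(text: str) -> tuple[str, float]:
    # Placeholder for the small tuned model: returns (label, confidence).
    hit = any(w in text.lower() for w in ("refund", "chargeback", "legal"))
    return ("flagged", 0.95) if hit else ("ok", 0.97)

def classify_with_cloud_llm(text: str) -> str:
    # Placeholder for the expensive cloud call; only reached for hard/positive cases.
    return "flagged"

def route(text: str, threshold: float = 0.9) -> str:
    label, confidence = classify_locally(text)
    if label == "flagged" or confidence < threshold:
        return classify_with_cloud_llm(text)  # escalate only when the cheap model is unsure or positive
    return label                              # cheap path handles the bulk of traffic

print(route("Please process my refund"))  # escalates
print(route("Thanks, see you Tuesday"))   # stays local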
I can provide a real-world example: Low-latency code completion.
The JetBrains suite includes a few LLM models on the order of a hundred megabytes. These models are able to provide "obvious" line completion, like filling in variable names, as well as some basic predictions, like realising that the `if let` statement I'm typing out is going to look something like `if let Some(response) = client_i_just_created.foobar().await`.
If that was running in The Cloud, it would have latency issues, rate limits, and it wouldn't work offline. Sure, there's a pretty big gap between these local IDE LLMs and what OpenAI is offering here, but if my single-line autocomplete could be a little smarter, I sure wouldn't complain.
Data that can't leave the premises because it is too sensitive. There is a lot of security theater around cloud pretending to be compliant but if you actually care about security a locked server room is the way to do it.
There's a bunch of great reasons in this thread, but how about the chip manufacturers that are going to need you to need a more powerful set of processors in your phone, headset, computer. You can count on those companies to subsidize some R&D and software development.
If you have capable hardware and kids, a local LLM is great. A simple system prompt customisation (e.g. ‘all responses should be written as if talking to a 10 year old’) and knowing that everything is private goes a long way for me at least.
In some large, lucrative industries like aerospace, many of the hosted models are off the table due to regulations such as ITAR. There's a market for models which are run on-prem/in GovCloud with a professional support contract for installation and updates.
>Students who don’t want to pay but somehow have the hardware?
That's me - well, not a student anymore.
When toying with something, I much prefer not paying for each shot. My 12GB Radeon card can either run a decent model extremely slowly, or an idiotic but fast one. It's nice not dealing with rate limits.
Once you write a prompt that mangles an idiotic model into still doing the work, it's really satisfying. The same principle as working to extract the most from limited embedded hardware. Masochism, possibly.
Maybe I am too pessimistic, but as an EU citizen I expect politics (or should I say Trump?) to prevent access to US-based frontier models at some point.
I do it because 1) I am fascinated that I can and 2) at some point the online models will be enshitified — and I can then permanently fall back on my last good local version.
Jailbreaking, then running censored questions: DIY fireworks, analysis of papers that touch "sensitive topics", NSFW image generation; the list is basically endless.
A small LLM can do RAG, call functions, summarize, create structured data from messy text, etc... You know, all the things you'd do if you were making an actual app with an LLM.
Yeah, chat apps are pretty cheap and convenient for users who want to search the internet and write text or code. But APIs quickly get expensive when inputting a significant amount of tokens.
I’d use it on a plane if there was no network for coding, but otherwise it’s just an emergency model if the internet goes out, basically end of the world scenarios
AI is going to be equivalent to all computing in the future. Imagine if only IBM, Apple and Microsoft ever built computers, and all anyone else ever had in the 1990s were terminals to the mainframe, forever.
I am all for the privacy angle and while I think there’s certainly a group of us, myself included, who care deeply about it I don’t think most people or enterprises will. I think most of those will go for the easy button and then wring their hands about privacy and security as they have always done while continuing to let the big companies do pretty much whatever they want. I would be so happy to be wrong but aren’t we already seeing it? Middle of the night price changes, leaks of data, private things that turned out to not be…and yet!
The model is good and runs fine, but if you want to be blown away again, try Qwen3-30B-A3B-2507. It's 6GB bigger, but the response is comparable or better and much faster to run. gpt-oss-20b gives me 6 tok/sec while Qwen3 gives me 37 tok/sec. Qwen3 is not a reasoning model, though.
Estimated 1.5 billion vehicles in use across the world. Generous assumptions: a) they're all IC engines requiring 16 liters of water each; b) they're changing that water out once a year.
That gives 24 million cubic meters of annual water usage.
Estimated AI water usage in 2024: 560 million cubic meters.
Projected water usage from AI in 2027: 4 billion cubic meters at the low end.
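For anyone who wants to check the arithmetic, here's the back-of-envelope version; all inputs are the assumptions stated above, not measured data:

```python
vehicles = 1.5e9          # estimated vehicles in use worldwide (assumption above)
liters_per_year = 16      # coolant swapped once a year, 16 L each (assumption above)
vehicle_m3 = vehicles * liters_per_year / 1000   # 1 m^3 = 1000 L
print(vehicle_m3)         # 24,000,000 m^3, i.e. the 24M figure

ai_2024_m3 = 560e6        # estimated AI water use, 2024
ai_2027_m3 = 4e9          # projected AI water use, 2027 (low end)
print(ai_2024_m3 / vehicle_m3)   # ~23x the vehicle figure
print(ai_2027_m3 / vehicle_m3)   # ~167x
```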
What does water usage mean? Is that 4bn cubic meters of water permanently out of circulation somehow? Is the water corrupted with chemicals, or destroyed, or displaced into the atmosphere to become rain?
The water is used to sink heat and then instead of cooling it back down they evaporate it, which provides more cooling. So the answer is 'it eventually becomes rain'.
I understand, but why is this bad? Is there some analysis of the beginning and end locations of the water, and how the utility differs between those locations?
How up to date are you on current open weights models? After playing around with it for a few hours I find it to be nowhere near as good as Qwen3-30B-A3B. The world knowledge is severely lacking in particular.
dolphin3.0-llama3.1-8b Q4_K_S [4.69 GB on disk]: correct in <2 seconds
deepseek-r1-0528-qwen3-8b Q6_K [6.73 GB]: correct in 10 seconds
gpt-oss-20b MXFP4 [12.11 GB] low reasoning: wrong after 6 seconds
gpt-oss-20b MXFP4 [12.11 GB] high reasoning: wrong after 3 minutes !
Yea yea it's only one question of nonsense trivia. I'm sure it was billions well spent.
It's possible I'm using a poor temperature setting or something but since they weren't bothered enough to put it in the model card I'm not bothered to fuss with it.
I think your example reflects well on oss-20b, not poorly. It (may) show that they've been successful in separating reasoning from knowledge. You don't _want_ your small reasoning model to waste weights memorizing minutiae.
Right... knowledge is one of the things (the one thing?) that LLMs are really horrible at, and that goes double for models small enough to run on normal-ish consumer hardware.
Shouldn't we prefer to have LLMs just search and summarize more reliable sources?
Even large hosted models fail at that task regularly. It's a silly anecdotal example, but I asked the Gemini assistant on my Pixel whether [something] had seen a new release to match the release of [upstream thing].
It correctly chose to search, and pulled in the release page itself as well as a community page on reddit, and cited both to give me the incorrect answer that a release had been pushed 3 hours ago. Later on when I got around to it, I discovered that no release existed, no mention of a release existed on either cited source, and a new release wasn't made for several more days.
Reliable sources that are becoming polluted by output from knowledge-poor LLMs, or overwhelmed and taken offline by constant requests from LLMs doing web scraping …
I just tested 120B from the Groq API on agentic stuff (multi-step function calling, similar to claude code) and it's not that good. Agentic fine-tuning seems key, hopefully someone drops one soon.
For me the game changer here is the speed. On my local Mac I'm finally getting token rates that are faster than I can process the output (~96 tok/s), and the quality has been solid. I had previously tried some of the distilled Qwen and DeepSeek models and they were just way too slow for me to seriously use them.
Training cost has increased a ton exactly because inference cost is the biggest problem: models are now trained on almost three orders of magnitude more data than what is compute-optimal (per the Chinchilla paper), because saving compute on inference makes it worthwhile to overtrain a smaller model, spending more training compute to reach similar performance.
Interesting. I understand that, but I don't know to what degree.
I mean the training, while expensive, is done once. The inference … besides being done by perhaps millions of clients, is done for, well, the life of the model anyway. Surely that adds up.
It's hard to know, but I assume the user taking up the burden of the inference is perhaps doing so more efficiently? I mean, when I run a local model, it is plodding along — not as quick as the online model. So, slow and therefore I assume necessarily more power efficient.
I've run Qwen3 4B on my phone; it's not the best, but it's better than old GPT-3.5. It also has a reasoning mode, and in reasoning mode it's better than the original GPT-4 and the original GPT-4o, but not the latest GPT-4o. I get usable speed, but it's not really comparable to most cloud-hosted models.
I'm on Android, so I've used termux+ollama, but if you don't want to set that up in a terminal or want a GUI, PocketPal AI is a really good app for both Android and iOS. It lets you run Hugging Face models.
You can get a pretty good estimate depending on your memory bandwidth. Too many parameters can change with local models (quantization, fast attention, etc). But the new models are MoE so they’re gonna be pretty fast.
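As a concrete (and very rough) version of that estimate: the numbers below are assumptions for gpt-oss-20b on a typical laptop, not measurements, and ignore prompt processing entirely:

```python
def estimate_tokens_per_sec(active_params_b: float, bits_per_weight: float,
                            mem_bandwidth_gb_s: float, efficiency: float = 0.6) -> float:
    # Decode is roughly memory-bandwidth bound: each generated token requires
    # streaming the active weights once. "efficiency" covers everything we ignore.
    bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
    return mem_bandwidth_gb_s * 1e9 * efficiency / bytes_per_token

# Assumed: ~3.6B active params at ~4.25 bits (MXFP4) on ~100 GB/s laptop RAM.
print(estimate_tokens_per_sec(3.6, 4.25, 100))   # ~31 tok/s, ballpark only
```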
The environmentalist in me loves the fact that LLM progress has mostly been focused on doing more with the same hardware, rather than horizontal scaling. I guess given GPU shortages that makes sense, but it really does feel like the value of my hardware (a laptop in my case) is going up over time, not down.
Also, just wanted to credit you for being one of the five people on Earth who knows the correct spelling of "lede."
In my mind, I’m comparing the model architecture they describe to what the leading open-weights models (Deepseek, Qwen, GLM, Kimi) have been doing. Honestly, it just seems “ok” at a technical level:
- both models use standard Grouped-Query Attention (64 query heads, 8 KV heads). The card talks about how they’ve used an older optimization from GPT3, which is alternating between banded window (sparse, 128 tokens) and fully dense attention patterns. It uses RoPE extended with YaRN (for a 131K context window). So they haven’t been taking advantage of the special-sauce Multi-head Latent Attention from Deepseek, or any of the other similar improvements over GQA.
- both models are standard MoE transformers. The 120B model (116.8B total, 5.1B active) uses 128 experts with Top-4 routing (see the sketch at the end of this comment). They’re using some kind of gated SwiGLU activation, which the card describes as "unconventional" because of the clamping and whatever residual connections that implies. Again, not using any of Deepseek’s “shared experts” (for general patterns) + “routed experts” (for specialization) architectural improvements, Qwen’s load-balancing strategies, etc.
- the most interesting thing IMO is probably their quantization solution. They quantized >90% of the model parameters to the MXFP4 format (4.25 bits/parameter) to let the 120B model fit on a single 80GB GPU, which is pretty cool. But we’ve also got Unsloth with their famous 1.58-bit quants :)
All this to say, it seems like even though the training they did for their agentic behavior and reasoning is undoubtedly very good, they’re keeping their actual technical advancements “in their pocket”.
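For anyone who hasn't seen MoE routing before, the Top-4 routing mentioned in the MoE bullet boils down to roughly this. A simplified toy sketch in PyTorch (8 experts instead of 128, plain linear "experts", no load balancing, no capacity limits):

```python
import torch
import torch.nn.functional as F

def moe_layer(x, router_w, experts, k=4):
    # x: [tokens, d_model]; router_w: [d_model, n_experts]; experts: list of small FFNs
    logits = x @ router_w                           # router score for every expert
    weights, idx = torch.topk(logits, k, dim=-1)    # keep only the top-k experts per token
    weights = F.softmax(weights, dim=-1)            # renormalize over the chosen experts
    out = torch.zeros_like(x)
    for slot in range(k):
        for e in idx[:, slot].unique():             # dispatch each token group to its chosen expert
            mask = idx[:, slot] == e
            out[mask] += weights[mask, slot, None] * experts[e](x[mask])
    return out

# toy usage
d_model, n_experts = 16, 8
experts = [torch.nn.Linear(d_model, d_model) for _ in range(n_experts)]
with torch.inference_mode():
    y = moe_layer(torch.randn(5, d_model), torch.randn(d_model, n_experts), experts)
print(y.shape)  # torch.Size([5, 16])
```

Only the selected experts run per token, which is how you get 5.1B active out of 116.8B total.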
I would guess the “secret sauce” here is distillation: pretraining on an extremely high quality synthetic dataset from the prompted output of their state of the art models like o3 rather than generic internet text. A number of research results have shown that highly curated technical problem solving data is unreasonably effective at boosting smaller models’ performance.
This would be much more efficient than relying purely on RL post-training on a small model; with low baseline capabilities the insights would be very sparse and the training very inefficient.
It behooves them to keep the best stuff internal, or at least greatly limit any API usage to avoid giving the goods away to other labs they are racing with.
Which, presumably, is the reason they removed 4.5 from the API... mostly the only people willing to pay that much for that model were their competitors. (I mean, I would pay even more than they were charging, but I imagine even if I scale out my use cases--which, for just me, are mostly satisfied by being trapped in their UI--it would be a pittance vs. the simpler stuff people keep using.)
Or, you could say, OpenAI has some real technical advancements in things besides the attention architecture. GQA8 and alternating SWA 128 / full attention do all seem conventional. Basically they are showing us that "there's no secret sauce in the model arch, you guys just suck at mid/post-training" - or they want us to believe this.
The Kimi K2 paper said that model sparsity scales up with parameter count pretty well (the MoE sparsity scaling law, as they call it, basically calling Llama 4's MoE "done wrong"). Hence K2 has 128:1 sparsity.
It's convenient to be able to attribute success to things only OpenAI could've done with the combo of their early start and VC money – licensing content, hiring subject matter experts, etc. Essentially the "soft" stuff that a mature organization can do.
I think their MXFP4 release is a bit of a gift since they obviously used and tuned this extensively as a result of cost-optimization at scale - something the open source model providers aren't doing too much, and also somewhat of a competitive advantage.
Unsloth's special quants are amazing, but I've found there to be lots of trade-offs vs full quantization, particularly when striving for the best first-shot attempts, which is by far the bulk of LLM use cases. Running a better (larger, newer) model at lower quantization to fit in memory, or with reduced accuracy/detail to speed it up, both have value, but in the pursuit of first-shot accuracy there don't seem to be many companies running their frontier models at reduced quantization. If OpenAI is doing this in production, that is interesting.
Also: attention sinks (although implemented as extra trained logits used in attention softmax rather than attending to e.g. a prepended special token).
>They did something to quantize >90% of the model parameters to the MXFP4 format (4.25 bits/parameter) to let the 120B model to fit on a single 80GB GPU, which is pretty cool
They said it was native FP4, suggesting that they actually trained it like that; it's not post-training quantisation.
The native FP4 is one of the most interesting architectural aspects here IMO, as going below FP8 is known to come with accuracy tradeoffs. I'm curious how they navigated this and how FP8 weights (if they exist) would have performed.
One thing to note is that MXFP4 is a block scaled format, with 4.25 bits per weight. This lets it represent a lot more numbers than just raw FP4 would with say 1 mantissa and 2 exponent bits.
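The 4.25 bits/weight figure falls straight out of the block structure (per the OCP Microscaling spec: FP4 E2M1 elements in blocks of 32 sharing one 8-bit E8M0 scale):

```python
block_size = 32
element_bits = 4       # E2M1: 1 sign + 2 exponent + 1 mantissa bit
scale_bits = 8         # one shared E8M0 exponent per block
bits_per_weight = (block_size * element_bits + scale_bits) / block_size
print(bits_per_weight)  # 4.25
```

The per-block scale is what stretches the dynamic range far beyond what raw FP4 could represent.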
I don't know how to ask this without being direct and dumb: Where do I get a layman's introduction to LLMs that could work me up to understanding every term and concept you just discussed? Either specific videos, or if nothing else, a reliable Youtube channel?
What I’ve sometimes done when trying to make sense of recent LLM research is give the paper and related documents to ChatGPT, Claude, or Gemini and ask them to explain the specific terms I don’t understand. If I don’t understand their explanations or want to know more, I ask follow-ups. Doing this in voice mode works better for me than text chat does.
When I just want a full summary without necessarily understanding all the details, I have an audio overview made on NotebookLM and listen to the podcast while I’m exercising or cleaning. I did that a few days ago with the recent Anthropic paper on persona vectors, and it worked great.
There is a great 3blue1brown video, but it’s pretty much impossible by now to cover the entire landscape of research. I bet gpt-oss has some great explanations though ;)
Try Microsoft's "Generative AI for Beginners" repo on GitHub. The early chapters in particular give a good grounding of LLM architecture without too many assumptions of background knowledge. The video version of the series is good too.
TLDR: I think OpenAI may have taken the medal for best available open weight model back from the Chinese AI labs. Will be interesting to see if independent benchmarks resolve in that direction as well.
The 20B model runs on my Mac laptop using less than 15GB of RAM.
I tried to generate a streamlit dashboard with MACD, RSI, MA(200). 1:0 for qwen3 here.
qwen3-coder-30b 4-bit mlx took on the task w/o any hiccups with a fully working dashboard, graphs, and recent data fetched from yfinance.
gpt-oss-20b mxfp4's code had a missing datetime import, and when fixed it delivered a dashboard without any data and with a starting date of Aug 2020. Having adjusted the date, the update methods did not work and displayed error messages.
For now, I wouldn't rank any model from OpenAI in coding benchmarks. Despite all the false messaging they are giving, almost every single model OpenAI has launched, even the high-end, expensive o3 models, is absolutely monumentally horrible at coding tasks. So this is expected.
If it's decent at other tasks, which I do find OpenAI often being better than others at, then I think it's a win, especially a win for the open source community: even the AI labs that pioneered the Gen AI hype and never wanted to launch open models are now being forced to launch them. That is definitely a win, and not something that was certain before.
NVIDIA will probably give us nice, coding-focused fine-tunes of these models at some point, and those might compare more favorably against the smaller Qwen3 Coder.
The space invaders game seems like a poor benchmark. Both models understood the prompt and generated valid, functional javascript. One just added more fancy graphics. It might just have "use fancy graphics" in its system prompt for all we know.
Still, if you ask this open model to generate a fancy space invaders game with polish, and then ask the other model to generate a bare-bones space invaders game with the fewest lines of code, I think there's a good chance they'd switch places. This doesn't really test the models' ability to generate a space invaders game so much as it tests their tendency to produce an elaborate vs. simple solution.
My llm agent is currently running an experiment generating many pelicans. It will compare various small model consortiums against the same model running solo.
It should push new pelicans to the repo after run.
Horizon-beta is up already - not small or open source, but I tested it anyway - and you can already see an improvement using 2+1 (2 models + the arbiter) for that model.
There is no way that gpt-oss-120b can beat the much larger Kimi-K2-Instruct, Qwen3 Coder/Instruct/Thinking, or GLM-4.5. How did you arrive at this rather ridiculous conclusion? The current sentiment in r/LocalLLaMA is that gpt-oss-120b is around Llama-4 Scout level. But it is indeed the best in refusal.
What did you set the context window to? That's been my main issue with models on my MacBook; you have to set the context window so short that they are way less useful than the hosted models. Is there something I'm missing there?
> TLDR: I think OpenAI may have taken the medal for best available open weight model back from the Chinese AI labs.
That's just straight up not the case. Not sure how you can jump to that conclusion not least when you stated that you haven't tested tool calling in your post too.
Many people in the community are finding it substantially lobotomized, to the point that there are "safe" memes everywhere now. Maybe you need to develop better tests and pay more attention to benchmaxxing.
There are good things that came out of this release from OpenAI, but we'd appreciate more objective analyses...
> I’m waiting for the dust to settle and the independent benchmarks (that are more credible than my ridiculous pelicans) to roll out, but I think it’s likely that OpenAI now offer the best available open weights models.
You told me off for jumping to conclusions and in the same comment quoted me saying "I think OpenAI may have taken" - that's not a conclusion, it's tentative speculation.
I did read that and it doesn't change what I said about your comment on HN, I was calling out the fact that you are making a very bold statement without having done careful analysis.
You know you have a significant audience, so don't act like you don't know what you're doing when you chose to say "TLDR: I think OpenAI may have taken the medal for best available open weight model back from the Chinese AI labs" then defend what I was calling out based on word choices like "conclusions" (I'm sure you have read conclusions in academic journals?), "I think", and "speculation".
I’m also very interested to know how well these models handle tool calling as I haven’t been able to make it work after playing with them for a few hours. Looks promising tho.
Update: I’ve tried lm-studio (like the author) and the tool request kept failing due to a mismatch in the prompt template. I guess they’ll fix it, but it seems sloppy of lm-studio not to have tested this before release.
I was road testing tool calling in LM Studio a week ago against a few models marked with tool support, none worked, so I believe it may be a bug. Had much better luck with llama.cpp’s llama-server.
Interestingly, I am also on an M2 Max, and I get ~66 tok/s in LM Studio, with the same 131072 context.
I have full offload to GPU. I also turned on flash attention in advanced settings.
I found this surprising because that's such an old test that it must certainly be in the training data. I just tried to reproduce and I've been unable to get it (20B model, lowest "reasoning" budget) to fail that test (with a few different words).
Running a model comparable to o3 on a 24GB Mac Mini is absolutely wild. Seems like yesterday the idea of running frontier (at the time) models locally or on a mobile device was 5+ years out. At this rate, we'll be running such models in the next phone cycle.
It only seems like that if you haven't been following other open source efforts. Models like Qwen perform ridiculously well and do so on very restricted hardware. I'm looking forward to seeing benchmarks to see how these new open source models compare.
Nah, these are much smaller models than Qwen3 and GLM 4.5 with similar performance. Fewer parameters and fewer bits per parameter. They are much more impressive and will run on garden variety gaming PCs at more than usable speed. I can't wait to try on my 4090 at home.
There's basically no reason to run other open source models now that these are available, at least for non-multimodal tasks.
Qwen3 has multiple variants ranging from larger (235B) than these models to significantly smaller (0.6B), with a huge number of options in between. For each of those models they also release quantized versions (your "fewer bits per parameter").
I'm still withholding judgement until I see benchmarks, but every point you tried to make regarding model size and parameter size is wrong. Qwen has more variety on every level, and performs extremely well. That's before getting into the MoE variants of the models.
The benchmarks of the OpenAI models are comparable to the largest variants of other open models. The smaller variants of other open models are much worse.
With all due respect, you need to actually test out Qwen3 2507 or GLM 4.5 before making these sorts of claims. Both of them are comparable to OpenAI's largest models and even bench favorably to Deepseek and Opus: https://cdn-uploads.huggingface.co/production/uploads/62430a...
It's cool to see OpenAI throw their hat in the ring, but you're smoking straight hopium if you think there's "no reason to run other open source models now" in earnest. If OpenAI never released these models, the state-of-the-art would not look significantly different for local LLMs. This is almost a nothingburger if not for the simple novelty of OpenAI releasing an Open AI for once in their life.
I'd really wait for additional neutral benchmarks. I asked the 20b model, on low reasoning effort, which number is larger, 9.9 or 9.11, and it got it wrong.
They have worse scores than recent open source releases on a number of agentic and coding benchmarks, so if absolute quality is what you're after and not just cost/efficiency, you'd probably still be running those models.
Let's not forget, this is a thinking model that has a significantly worse scores on Aider-Polyglot than the non-thinking Qwen3-235B-A22B-Instruct-2507, a worse TAUBench score than the smaller GLM-4.5 Air, and a worse SWE-Bench verified score than the (3x the size) GLM-4.5. So the results, at least in terms of benchmarks, are not really clear-cut.
From a vibes perspective, the non-reasoners Kimi-K2-Instruct and the aforementioned non-thinking Qwen3 235B are much better at frontend design. (Tested privately, but fully expecting DesignArena to back me up in the following weeks.)
OpenAI has delivered something astonishing for the size, for sure. But your claim is just an exaggeration. And OpenAI have, unsurprisingly, highlighted only the benchmarks where they do _really_ well.
So far I have mixed impressions, but they do indeed seem noticeably weaker than comparably-sized Qwen3 / GLM4.5 models. Part of the reason may be that the oai models do appear to be much more lobotomized than their Chinese counterparts (which are surprisingly uncensored). There's research showing that "aligning" a model makes it dumber.
The censorship here in China is only about public discussions / spaces. You cannot like have a website telling you about the crimes of the party. But downloading some compressed matrix re-spouting the said crimes, nobody gives a damn.
We seem to censor organized large scale complaints and viral mind virii, but we never quite forbid people at home to read some generated knowledge from an obscure hard to use software.
On the subject of who has a moat and who doesn't, it's interesting to look at the role of patents in the early development of wireless technology. There was WWI, and there was WWII, but the players in the nascent radio industry had serious beef with each other.
I imagine the same conflicts will ramp up over the next few years, especially once the silly money starts to dry up.
Right? I still remember the safety outrage of releasing Llama. Now? My 96 GB of (V)RAM MacBook will be running a 120B parameter frontier lab model. So excited to get my hands on the MLX quants and see how it feels compared to GLM-4.5-air.
I feel like most of the safety concerns ended up being proven correct, but there's so much money in it that they decided to push on anyway full steam ahead.
AI did get used for fake news, propaganda, mass surveillance, erosion of trust and sense of truth, and mass spamming social media.
in that era, OpenAI and Anthropic were still deluding themselves into thinking they would be the "stewards" of generative AI, and the last US administration was very keen on regoolating everything under the sun, so "safety" was just an angle for regulatory capture.
Oh absolutely, AI labs certainly talk their books, including any safety angles. The controversy/outrage extended far beyond those incentivized companies too. Many people had good faith worries about Llama. Open-weight models are now vastly more powerful than Llama-1, yet the sky hasn't fallen. It's just fascinating to me how apocalyptic people are.
I just feel lucky to be around in what's likely the most important decade in human history. Shit odds on that, so I'm basically a lotto winner. Wild times.
Lol. To be young and foolish again. This covid laced decade is more of a placeholder. The current decade is always the most meaningful until the next one. The personal computer era, the first cars or planes, ending slavery needs to take a backseat to the best search engine ever. We are at the point where everyone is planning on what they are going to do with their hoverboards.
The slavery of free humans is illegal in America, so now the big issue is figuring out how to convince voters that imprisoned criminals deserve rights.
Even in liberal states, the dehumanization of criminals is an endemic behavior, and we are reaching the point in our society where ironically having the leeway to discuss the humane treatment of even our worst criminals is becoming an issue that affects how we see ourselves as a society before we even have a framework to deal with the issue itself.
What one side wants is for prisons to be for rehabilitation and societal reintegration, for prisoners to have the right to decline to work and to be paid fair wages from their labor. They further want to remove for-profit prisons from the equation completely.
What the other side wants is the acknowledgement that prisons are not free, they are for punishment, and that prisoners have lost some of their rights for the duration of their incarceration and that they should be required to provide labor to offset the tax burden of their incarceration on the innocent people that have to pay for it. They also would like it if all prisons were for-profit as that would remove the burden from the tax payers and place all of the costs of incarceration onto the shoulders of the incarcerated.
Both sides have valid and reasonable wants from their vantage point while overlooking the valid and reasonable wants from the other side.
I think his point is that slavery is not outlawed by the 13th amendment as most people assume (even the Google AI summary reads: "The 13th Amendment to the United States Constitution, ratified in 1865, officially abolished slavery and involuntary servitude in the United States.").
However, if you actually read it, the 13th amendment makes an explicit allowance for slavery (i.e. expressly allows it):
"Neither slavery nor involuntary servitude, *except as a punishment for crime whereof the party shall have been duly convicted*" (emphasis mine obviously since Markdown didn't exist in 1865)
Prisoners themselves are the ones choosing to work most of the time, and generally none of them are REQUIRED to work (they are required to either take job training or work).
They choose to because extra money = extra commissary snacks and having a job is preferable to being bored out of their minds all day.
That's the part that's frequently not included in the discussion of this whenever it comes up. Prison jobs don't pay minimum wage, but given that prisoners are wards of the state that seems reasonable.
I have heard anecdotes that the choice of doing work is a choice between doing work and being in solitary confinement or becoming the target of the guards who do not take kindly to prisoners who don't volunteer for work assignments.
ah, but that begs the question: did those people develop their worries organically, or did they simply consume the narrative heavily pushed by virtually every mainstream publication?
the journos are heavily incentivized to spread FUD about it. they saw the writing on the wall that the days of making a living by producing clickbait slop were coming to an end and deluded themselves into thinking that if they kvetch enough, the genie will crawl back into the bottle. scaremongering about sci-fi skynet bullshit didn't work, so now they kvetch about joules and milliliters consumed by chatbots, as if data centers did not exist until two years ago.
likewise, the bulk of other "concerned citizens" are creatives who use their influence to sway their followers, still hoping against hope to kvetch this technology out of existence.
honest-to-God yuddites are as few and as retarded as honest-to-God flat earthers.
Yeah, China is e/acc. Nice cheap solar panels too. Thanks China. The problem is their ominous policies like not allowing almost any immigration, and their domestic Han Supremacist propaganda, and all that make it look a bit like this might be Han Supremacy e/acc. Is it better than western/decel? Hard to say, but at least the western/decel people are now starting to talk about building power plants, at least for datacenters, and things like that instead of demanding whole branches of computer science be classified, as they were threatening to Marc Andreessen when he visited the Biden admin last year.
I wish we had voter support for a hydrocarbon tax, though. It would level out the prices and then the AI companies can decide whether they want to pay double to burn pollutants or invest in solar and wind and batteries
Qwen3 Coder is 4x its size! Grok 3 is over 22x its size!
What does the resource usage look like for GLM 4.5 Air? Is that benchmark in FP16? GPT-OSS-120B will be using between 1/4 and 1/2 the VRAM that GLM-4.5 Air does, right?
It seems like a good showing to me, even though Qwen3 Coder and GLM 4.5 Air might be preferable for some use cases.
When people talk about running a (quantized) medium-sized model on a Mac Mini, what types of latency and throughput times are they talking about? Do they mean like 5 tokens per second or at an actually usable speed?
After considering my sarcasm for the last 5 minutes, I am doubling down. The government of the United States of America should enhance its higher IQ people by donating AI hardware to them immediately.
This is critical for global competitive economic power.
higher IQ people <-- well, you have to prove that first, so let me ask a test question to prove it: how can you mix collaboration and competition in society to produce the optimal productivity/conflict ratio?
Generation is usually fast, but prompt processing is the main limitation with local agents. I also have a 128 GB M4 Max. How is the prompt processing on long prompts? processing the system prompt for Goose always takes quite a while for me. I haven't been able to download the 120B yet, but I'm looking to switch to either that or the GLM-4.5-Air for my main driver.
You mentioned "on local agents". I've noticed this too. How do ChatGPT and the others get around this, and provide instant responses on long conversations?
Not getting around it, just benefiting from parallel compute / huge flops of GPUs. Fundamentally, it's just that prefill compute is itself highly parallel and HBM is just that much faster than LPDDR. Effectively H100s and B100s can chew through the prefill in under a second at ~50k token lengths, so the TTFT (Time to First Token) can feel amazingly fast.
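A back-of-envelope version of why that works, using the usual ~2 FLOPs per active parameter per token rule of thumb; the GPU throughput figure is an assumption for illustration, not a benchmark:

```python
active_params = 5.1e9        # e.g. gpt-oss-120b active parameters per token
prompt_tokens = 50_000
gpu_flops = 1.0e15           # assumed ~1 PFLOP/s sustained on a datacenter-class GPU

prefill_flops = 2 * active_params * prompt_tokens
print(prefill_flops / gpu_flops)   # ~0.5 s for the whole prompt, hence the near-instant TTFT
```

On laptop-class hardware the same prompt has to go through far less compute per second and far slower memory, which is why prefill dominates the wait locally.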
Open models are going to win long-term. Anthropic's own research has to use OSS models [0]. China is demonstrating how quickly companies can iterate on open models, allowing smaller teams access and augmentation to the abilities of a model without paying the training cost.
My personal prediction is that the US foundational model makers will OSS something close to N-1 for the next 1-3 iterations. The CAPEX for foundational model creation is too high to justify OSS for the current generation, unless the US Gov steps up and starts subsidizing power, or Stargate does 10x what is planned right now.
N-1 model value depreciates insanely fast. Making an OSS release of them and allowing specialized use cases and novel developments allows potential value to be captured and integrated into future model designs. It's medium risk, as you may lose market share. But also high potential value, as the shared discoveries could substantially increase the velocity of next-gen development.
There will be a plethora of small OSS models. Iteration on the OSS releases is going to be biased towards local development, creating more capable and specialized models that work on smaller and smaller devices. In an agentic future, every different agent in a domain may have its own model. Distilled and customized for its use case without significant cost.
Everyone is racing to AGI/SGI. The models along the way are to capture market share and use data for training and evaluations. Once someone hits AGI/SGI, the consumer market is nice to have, but the real value is in novel developments in science, engineering, and every other aspect of the world.
I'm pretty sure there's no reason that Anthropic has to do research on open models, it's just that they produced their result on open models so that you can reproduce their result on open models without having access to theirs.
[2 of 3] Assuming we pin down what win means... (which is definitely not easy)... What would it take for this to not be true? There are many ways, including but not limited to:
- publishing open weights helps your competitors catch up
- publishing open weights doesn't improve your own research agenda
- publishing open weights leads to a race dynamic where only the latest and greatest matters; leading to a situation where the resources sunk exceed the gains
- publishing open weights distracts your organization from attaining a sustainable business model / funding stream
- publishing open weights leads to significant negative downstream impacts (there are a variety of uncertain outcomes, such as: deepfakes, security breaches, bioweapon development, unaligned general intelligence, humans losing control [1] [2], and so on)
I'm a layman but it seemed to me that the industry is going towards robust foundational models on which we plug tools, databases, and processes to expand their capabilities.
In this setup OSS models could be more than enough and capture the market, but I don't see where the value would be in a multitude of specialized models we have to train.
I don't think there will be such a unique event. There is no clear boundary. This is a continuous process: models get slightly better than before.
Also, another dimension is the inference cost to run those models. It has to be cheap enough to really take advantage of it.
Also, I wonder, what would be a good target to make profit, to develop new things? There is Isomorphic Labs, which seems like a good target. This company already exists now, and people are working on it. What else?
> I don't think there will be such a unique event.
I guess it depends on your definition of AGI, but if it means human level intelligence then the unique event will be the AI having the ability to act on its own without a "prompt".
> the unique event will be the AI having the ability to act on its own without a "prompt"
That's super easy. The reason they need a prompt is that this is the way we make them useful. We don't need LLMs to generate an endless stream of random "thoughts" otherwise, but if you really wanted to, just hook one up to a webcam and microphone stream in a loop and provide it some storage for "memories".
I have this theory that we simply got over a hump by utilizing a massive processing boost from GPUs as opposed to CPUs. That might have been two to three orders of magnitude more processing power.
But that's a one-time success. I don't think hardware has any large-scale improvements coming, because 3D gaming already plumbed most of that vector-processing hardware development over the last 30 years.
So will software and better training models produce another couple orders of magnitude?
Fundamentally we're talking about nines of accuracy. What is the processing power required for each nine of accuracy? Is it linear? Is it polynomial? Is it exponential?
It just seems strange to me that, with all the AI knowledge sloshing through academia, I haven't seen any basic analysis at that level, which is something that's absolutely going to be necessary for AI applications like self-driving once you get the insurance companies involved.
To me it depends on two factors: hardware becoming more accessible, and the closed-source offerings becoming more expensive. Right now (1) it's difficult to get enough GPUs to do local inference at production scale, and (2) it's more expensive to run your own GPUs vs closed-source models.
[3 of 3] What would it take for this statement to be false or missing the point?
Maybe we find ourselves in a future where:
- Yes, open models are widely used as base models, but they are also highly customized in various ways (perhaps by industry, person, attitude, or something else). In other words, this would be a blend of open and closed.
- Maybe publishing open weights of a model is more-or-less irrelevant, because it is "table stakes" ... because all the key differentiating advantages have to do with other factors, such as infrastructure, non-LLM computational aspects, regulatory environment, affordable energy, customer base, customer trust, and probably more.
- The future might involve thousands or millions of highly tailored models
This implies LLM development isn’t plateauing. Sure, the researchers are busting their asses quantizing, adding features like tool calls and structured outputs, etc. But soon enough N-1 ~= N.
Inference in Python uses harmony [1] (for request and response format) which is written in Rust with Python bindings. Another OpenAI's Rust library is tiktoken [2], used for all tokenization and detokenization. OpenAI Codex [3] is also written in Rust. It looks like OpenAI is increasingly adopting Rust (at least for inference).
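For a taste of the tokenization side, tiktoken is a small Python API over the Rust core; "o200k_base" below is one of the bundled encodings (not necessarily what gpt-oss itself uses):

```python
import tiktoken

enc = tiktoken.get_encoding("o200k_base")
tokens = enc.encode("Hello from a Rust-backed tokenizer!")
print(tokens)              # list of token ids
print(enc.decode(tokens))  # round-trips back to the original string
```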
Even without an imminent release it's a good strategy. They're getting pressure from Qwen and other high-performing open-weight models. Without a horse in the race they could fall behind in an entire segment.
There's future opportunity in licensing, tech support, agents, or even simply dominating and eliminating. Not to mention brand awareness: if you like these, you might be more likely to approach their brand for larger models.
Are these the stealth models Horizon Alpha and Beta? I was generally impressed with them (although I really only used them in chats rather than any code tasks). In terms of chat, I increasingly see very little difference between the current SOTA closed models and their open-weight counterparts.
> I can't think of any reason they'd release this unless they were about to announce something which totally eclipses it
Given it's only around 5 billion active params it shouldn't be a competitor to o3 or any of the other SOTA models, given the top Deepseek and Qwen models have around 30 billion active params. Unless OpenAI somehow found a way to make a model with 5 billion active params perform as well as one with 4-8 times more.
From experience, it's much more engineering work on the integrator's side than on OpenAI's. Basically they provide you their new model in advance, but they don't know the specifics of your system, so it's normal that you do most of the work.
Thus I'm particularly impressed by Cerebras: they only have a few models supported for their extreme perf inference, it must have been huge bespoke work to integrate.
Seeing a 20B model competing with o3's performance is mind-blowing; just a year ago, most of us would've called this impossible - not just the intelligence leap, but getting this level of capability in such a compact size.
The point that makes me most excited is that we can train trillion-parameter giants and distill them down to just billions without losing the magic. Imagine coding with Claude 4 Opus-level intelligence packed into a 10B model running locally at 2000 tokens/sec - instant AI collaboration. That would fundamentally change how we develop software.
It's not even a 20b model. It's 20b MoE with 3.6b active params.
But it does not actually compete with o3 performance. Not even close. As usual, the metrics are bullshit. You don't know how good the model actually is until you grill it yourself.
I'm not sure that's a particularly good question for concluding something positive about the "thought for 0.7 seconds" - it's such a simple answer, ChatGPT 4o (with no thinking time) immediately answered correctly. The only surprising thing in your test is that o3 wasted 13 seconds thinking about it.
When I pay attention to o3 CoT, I notice it spends a few passes thinking about my system prompt. Hard to imagine this question is hard enough to spend 13 seconds on.
Asking it about a marginally more complex tech topic and getting an excellent answer in ~4 seconds, reasoning for 1.1 seconds...
I am _very_ curious to see what GPT-5 turns out to be, because unless they're running on custom silicon / accelerators, even if it's very smart, it seems hard to justify not using these open models on Groq/Cerebras for a _huge_ fraction of use-cases.
Non-rhetorically, why would someone pay for the o3 API now that I can get this open model from OpenAI served for cheaper? Interesting dynamic... will they drop o3 pricing next week (it's 10-20x the cost [1])?
Not even that: even if o3 being marginally better is important for your task, why would anyone use o4-mini? It seems almost 10x the price for the same performance (maybe even less): https://openrouter.ai/openai/o4-mini
Wow, that's significantly cheaper than o4-mini ($1.10/M input tokens, $4.40/M output tokens), which seems to be on par with gpt-oss-120b. Almost 10x the price.
LLMs are getting cheaper much faster than I anticipated. I'm curious if it's still the hype cycle and Groq/Fireworks/Cerebras are taking a loss here, or whether things are actually getting cheaper. At this rate we'll be able to run Qwen3-32B level models on phones/embedded soon.
I really want to try coding with this at 2600 tokens/s (from Cerebras). Imagine generating thousands of lines of code as fast as you can prompt. If it doesn't work who cares, generate another thousand and try again! And at $.69/M tokens it would only cost $6.50 an hour.
I tried this (gpt-oss-120b with Cerebras) with Roo Code. It repeatedly failed to use the tools correctly, and then I got 429 too many requests. So much for the "as fast as I can think" idea!
I'll have to try again later but it was a bit underwhelming.
The latency also seemed pretty high, not sure why. I think with the latency, the throughput ends up not making much difference.
Btw Groq has the 20b model at 4000 TPS but I haven't tried that one.
Can someone explain to me what I would need in terms of resources (GPU, I assume) if I want to run 20 concurrent processes, assuming I need 1k tokens/second throughput (on each, so 20 x 1k)?
Also, is this model better/comparable for information extraction compared to gpt-4.1-nano, and would it be cheaper to host myself 20b?
(Answering for a single inference stream.)
All depends on the context length you want to support, as the activation memory will dominate the requirements. For 4096 tokens you will get away with 24GB (or even 16GB), but if you want to go for the full 131072 tokens you are not going to get there with a 32GB consumer GPU like the 5090. You'll need to spring for at minimum an A6000 (48GB) or preferably an RTX 6000 Pro (96GB).
Also keep in mind this model uses 4-bit layers for the MoE parts. Unfortunately, native accelerated 4-bit support only started with Blackwell on NVIDIA, so your 3090/4090/A6000/A100s are not going to be fast. An RTX 5090 will be your best starting point in the traditional card space. Maybe the unified-memory mini PCs like the Spark systems or the Mac mini could be an alternative, but I don't know them well enough.
You also need space in VRAM for what is required to support the context window; you might be able to do a model that is 14GB in parameters with a small (~8k maybe?) context window on a 16GB card.
I legitimately cannot think of any hardware that will get you to that throughput over that many streams (I don't work in the server space, so there may be some new stuff I am unaware of).
I don't think you can get 1k tokens/sec on a single stream using any consumer grade GPUs with a 20b model. Maybe you could with H100 or better, but I somewhat doubt that.
My 2x 3090 setup will get me ~6-10 streams of ~20-40 tokens/sec (generation) ~700-1000 tokens/sec (input) with a 32b dense model.
> assuming I need 1k tokens/second throughput (on each, so 20 x 1k)
3.6B activated at Q8 x 1000 t/s = 3.6TB/s just for the activated model weights (there's also context). So pretty much straight to a B200 and the like. 1000 t/s per user/agent is way too fast; make it 300 t/s and you could get away with a 5090/RTX PRO 6000.
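Rough sketch of that back-of-envelope math, so the assumptions are explicit. It deliberately ignores KV cache reads and the fact that batching lets a single weight pass serve many streams at once:

    # Back-of-envelope only: bandwidth needed to stream the activated expert
    # weights once per generated token, per stream. Ignores KV cache reads,
    # and ignores that batching lets one weight pass serve many streams.
    active_params = 3.6e9      # gpt-oss-20b active parameters per token
    bytes_per_param = 1.0      # Q8: roughly 1 byte/param (0.5 for MXFP4)
    tokens_per_sec = 1000      # the per-stream throughput asked about
    streams = 20

    per_stream = active_params * bytes_per_param * tokens_per_sec   # bytes/s
    print(f"{per_stream / 1e12:.1f} TB/s per stream")               # ~3.6 TB/s
    print(f"{per_stream * streams / 1e12:.0f} TB/s naive total for 20 unbatched streams")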
I was able to get gpt-oss:20b wired up to claude code locally via a thin proxy and ollama.
It's fun that it works, but the prefill time makes it feel unusable. (2-3 minutes per tool-use / completion). Means a ~10-20 tool-use interaction could take 30-60 minutes.
(This editing a single server.py file that was ~1000 lines, the tool definitions + claude context was around 30k tokens input, and then after the file read, input was around ~50k tokens. Definitely could be optimized. Also I'm not sure if ollama supports a kv-cache between invocations of /v1/completions, which could help)
- In the "Main capabilities evaluations" section, the 120b outperform o3-mini and approaches o4 on most evals. 20b model is also decent, passing o3-mini on one of the tasks.
- AIME 2025 is nearly saturated with large CoT
- CBRN threat levels kind of on par with other SOTA open source models. Plus, demonstrated good refusals even after adversarial fine tuning.
- Interesting to me how a lot of the safety benchmarking runs on trust, since methodology can't be published too openly due to counterparty risk.
Thanks OpenAI for being open ;) Surprised there are no official MLX versions and only one mention of MLX in this thread. MLX basically converts the models to take advantage of Mac unified memory for a 2-5x increase in performance, enabling Macs to run what would otherwise take expensive GPUs (within limits).
So FYI to anyone on a Mac, the easiest way to run these models right now is using LM Studio (https://lmstudio.ai/), it's free. You just search for the model; usually third-party groups mlx-community or lmstudio-community have MLX versions within a day or two of release. I go for the 8-bit quantizations (4-bit is faster, but quality drops). You can also convert to MLX yourself...
Once you have it running in LM Studio, you can chat there in their chat interface, or you can hit it through an API that defaults to http://127.0.0.1:1234
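As a quick sketch of what that looks like from code (the model id below is a placeholder; copy the exact id of whatever you've loaded in LM Studio):

    # Minimal sketch: LM Studio exposes an OpenAI-compatible server locally.
    # The model id below is a placeholder; copy the exact id shown in the app.
    from openai import OpenAI

    client = OpenAI(base_url="http://127.0.0.1:1234/v1", api_key="lm-studio")

    resp = client.chat.completions.create(
        model="gpt-oss-20b-mlx",  # placeholder id
        messages=[{"role": "user", "content": "Summarize MXFP4 in one sentence."}],
    )
    print(resp.choices[0].message.content)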
You can run multiple models that hot swap and load instantly and switch between them etc.
It's surprisingly easy, and fun. There are actually a lot of cool niche models coming out, like this tiny high-quality search model released today as well (which shipped an official MLX version): https://huggingface.co/Intelligent-Internet/II-Search-4B
Other fun ones are Gemma 3n, which is multi-modal; a larger one that is actually a solid model but takes more memory is the new Qwen3 30B A3B (Coder and Instruct); Pixtral (Mistral's vision model with full-resolution images); etc. Looking forward to playing with this model and seeing how it compares.
In the repo there is a Metal port they made; that's at least something… I guess they didn't want to cooperate with Apple before the launch, but I am sure it will be there tomorrow.
Wow I really didn’t think this would happen any time soon, they seem to have more to lose than to gain.
If you’re a company building AI into your product right now I think you would be irresponsible to not investigate how much you can do on open weights models. The big AI labs are going to pull the ladder up eventually, building your business on the APIs long term is foolish. These open models will always be there for you to run though (if you can get GPUs anyway).
The 120B model badly hallucinates facts on the level of a 0.6B model.
My go to test for checking hallucinations is 'Tell me about Mercantour park' (a national park in south eastern France).
Easily half of the facts are invented. Non-existing mountain summits, brown bears (no, there are none), villages that are elsewhere, wrong advice ('dogs allowed' - no they are not).
This is precisely the wrong way to think about LLMs.
LLMs are never going to have fact retrieval as a strength. Transformer models don't store their training data: they are categorically incapable of telling you where a fact comes from. They also cannot escape the laws of information theory: storing information requires bits. Storing all the world's obscure information requires quite a lot of bits.
What we want out of LLMs is large context, strong reasoning and linguistic facility. Couple these with tool use and data retrieval, and you can start to build useful systems.
From this point of view, the more of a model's total weight footprint is dedicated to "fact storage", the less desirable it is.
I think that sounds very reasonable, but unfortunately these models don’t know what they know and don’t. A small model that knew the exact limits of its knowledge would be very powerful.
Others have already said it, but it needs to be said again: Good god, stop treating LLMs like oracles.
LLMs are not encyclopedias.
Give an LLM the context you want to explore, and it will do a fantastic job of telling you all about it. Give an LLM access to web search, and it will find things for you and tell you what you want to know. Ask it "what's happening in my town this week?", and it will answer that with the tools it is given. Not out of its oracle mind, but out of web search + natural language processing.
Stop expecting LLMs to -know- things. Treating LLMs like all-knowing oracles is exactly the thing that's setting apart those who are finding huge productivity gains with them from those who can't get anything productive out of them.
I love how with this cutting edge tech people still dress up and pretend to be experts. Pleasure to meet you, pocketarc - Senior AI Gamechanger, 2024-2025 (Current)
I am getting huge productivity gains from using models, and I mostly use them as "oracles" (though I am extremely careful with respect to how I have to handle hallucination, of course): I'd even say their true power--just like a human--comes from having an ungodly amount of knowledge, not merely intelligence. If I just wanted something intelligent, I already had humans!... but merely intelligent humans, even when given months of time to screw around doing Google searches, fail to make the insights that someone--whether they are a human or a model--that actually knows stuff can throw around like it is nothing. I am actually able to use ChatGPT 4.5 as not just an employee, not even just as a coworker, but at times as a mentor or senior advisor: I can tell it what I am trying to do, and it helps me by applying advanced mathematical insights or suggesting things I could use. Using an LLM as a glorified Google-it-for-me monkey seems like such a waste of potential.
> I am actually able to use ChatGPT 4.5 as not just an employee, not even just as a coworker, but at times as a mentor or senior advisor: I can tell it what I am trying to do, and it helps me by applying advanced mathematical insights or suggesting things I could use.
You can still do that sort of thing, but just have it perform searches whenever it has to deal with a matter of fact. Just because it's trained for tool use and equipped with search tools doesn't mean you have to change the kinds of things you ask it.
If you strip all the facts from a mathematician you get me... I don't need another me: I already used Google, and I already failed to find what I need. What I actually need is someone who can realize that my problem is a restatement of an existing known problem, just using words and terms or an occluded structure that don't look anything like how it was originally formulated. You very often simply can't figure that out using Google, no matter how long you sit in a tight loop trying related Google searches; but, it is the kind of thing that an LLM (or a human) excels at (as you can consider "restatement" a form of "translation" between languages), if and only if they have already seen that kind of problem. The same thing comes up with novel application of obscure technology, complex economics, or even interpretation of human history... there is a reason why people who study Classics "waste" a ton of time reading old stories rather than merely knowing the library is around the corner. What makes these AIs so amazing is thinking of them as entirely replacing Google with something closer to a god, not merely trying to wrap it with a mechanical employee whose time is ostensibly less valuable than mine.
> What makes these AIs so amazing is thinking of them as entirely replacing Google with something closer to a god
I guess that way of thinking may foster amazement, but it doesn't seem very grounded in how these things work or their current capabilities. Seems a bit manic tbf.
And again, enabling web search in your chats doesn't prevent these models from doing anything "integrative reasoning", so-to-speak, that they can purportedly do. It just helps ensure that relevant facts are in context for the model.
The problem is that even when you give them context, they just hallucinate at another level. I have tried that example of asking about events in my area; they are absolutely awful at it.
To be coherent and useful in general-purpose scenarios, an LLM absolutely has to be large enough and know a lot, even if you aren't using it as an oracle.
It's fine to expect it not to know things, but the complaint is that it gives zero indication that it's just making up nonsense, which is the biggest issue with LLMs. They do the same thing when creating code.
Exactly this. And that is why I like this question because the amount of correct details and the amount of nonsense give a good idea about the quality of the model.
Wow - I will give it a try then. I'm cynical about OpenAI minmaxing benchmarks, but still trying to be optimistic as this in 8bit is such a nice fit for apple silicon
GLM-4.5 seems to outperform it on TauBench, too. And it's suspicious OAI is not sharing numbers for quite a few useful benchmarks (nothing related to coding, for example).
One positive thing I see is the number of parameters and size --- it will provide more economical inference than current open source SOTA.
Coding seems to be one of the strongest use cases for LLMs, though currently they eat too many tokens to be profitable. So perhaps these local models could offload some tasks to local computers.
E.g. a hybrid architecture: the local model gathers more data, runs tests, does simple fixes, but frequently asks the stronger model to do the real job.
The local model gathers data using tools and sends more data to the stronger model (see the sketch below).
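A minimal sketch of that hybrid idea, assuming an Ollama-style local endpoint and an OpenAI-compatible cloud client; the model names and the "reply ESCALATE if unsure" heuristic are just placeholders, not a recommended design:

    # Sketch of the hybrid idea above: a local model does the cheap work and
    # escalates only when it isn't confident. Endpoints, model names, and the
    # ESCALATE heuristic are all assumptions.
    from openai import OpenAI

    local = OpenAI(base_url="http://127.0.0.1:11434/v1", api_key="ollama")  # e.g. Ollama
    cloud = OpenAI()  # reads OPENAI_API_KEY from the environment

    def answer(task: str) -> str:
        draft = local.chat.completions.create(
            model="gpt-oss:20b",
            messages=[{
                "role": "user",
                "content": f"{task}\n\nIf you are not confident, reply exactly: ESCALATE",
            }],
        ).choices[0].message.content
        if "ESCALATE" not in draft:
            return draft
        # Hand off to the stronger model only when the local one punts.
        return cloud.chat.completions.create(
            model="o3",  # placeholder for whatever frontier model you'd use
            messages=[{"role": "user", "content": task}],
        ).choices[0].message.content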
Anyone know how the context length compares when running the model locally vs running it via the OpenAI API or Cursor?
My understanding is that the models running in the cloud have a much greater context window than what we can have running locally.
I have always thought that if we can somehow get an AI which is insanely good at coding, so much so that it can improve itself, then through continuous improvements we will get better models of everything else, idk.
Maybe you guys call it AGI, so anytime I see progress in coding, I think it's a tiny step in the right direction.
Plus it also helps me as a coder to actually do some stuff just for the fun. Maybe coding is the only truly viable use of AI and all others are negligible increases.
There is so much polarization around the use of AI for coding, but I just want to say this: it would be pretty ironic if an industry that automates other people's jobs were this time the first to get its own job automated.
But I don't see that happening, far from it. Still, each day something new, something better happens back to back. So yeah.
Not to open that can of worms, but in most definitions self-improvement is not an AGI requirement. That's already ASI territory (Super Intelligence). That's the proverbial skynet (pessimists) or singularity (optimists).
Hmm, my bad. Yeah, I always thought that was the endgame of humanity, but isn't AGI supposed to be that (the endgame)?
What would AGI mean, solving some problem that it hasn't seen? Or what exactly? I mean, I think AGI is solved, no?
If not, I see people mentioning that Horizon Alpha is actually a GPT-5 model and it's predicted to release on Thursday on some betting market, so maybe that fits the AGI definition?
It could be, but there’s so much hype surrounding the GPT-5 release that I’m not sure whether their internal models will live up to it.
For GPT-5 to dwarf these just-released models in importance, it would have to be a huge step forward, and I still have doubts about OpenAI's capacity and infrastructure to handle demand at the moment.
As a sidebar, I’m still not sure if GPT-5 will be transformative due to its capabilities as much as its accessibility. All it really needs to do to be highly impactful is lower the barrier of entry for the more powerful models. I could see that contributing to it being worth the hype. Surely it will be better, but if more people are capable of leveraging it, that’s just as revolutionary, if not more.
That doesn’t sound good. It sounds like OpenAI will route my request to the cheapest model to them and the most expensive for me, with the minimum viable results.
The catch is that performance is not actually comparable to o4-mini, never mind o3.
When it comes to LLMs, benchmarks are bullshit. If they sound too good to be true, it's because they are. The only thing benchmarks are useful for is preliminary screening - if the model does especially badly in them, it's probably not good in general. But if it does well in them, that doesn't really tell you anything.
It's definitely interesting how the comments from right after the models were released were ecstatic about "SOTA performance" and how it is "equivalent to o3" and then comments like yours hours later after having actually tested it keep pointing out how it's garbage compared to even the current batch of open models let alone proprietary foundation models.
Yet another data point for benchmarks being utterly useless and completely gamed at this stage in the game by all the major AI developers.
These companies are clearly all very aware that the initial wave of hype at release is "sticky" and drives buzz/tech news coverage, while real-world tests take much longer before that impression slowly starts to be undermined by practical usage and comparison to other models. Benchmarks with wildly overconfident naming like "Humanity's Last Exam" aren't exactly helping with objectivity either.
Probably GPT5 will be way way better. If alpha/beta horizon are early previews of GPT5 family models, then coding should be > opus4 for modern frontend stuff.
The catch is that it only has ~5 billion active params so should perform worse than the top Deepseek and Qwen models, which have around 20-30 billion, unless OpenAI pulled off a miracle.
Reading the comments it becomes clear how befuddled many HN participants are about AI. I don't think there has been a technical topic that HN has seemed so dull on in the many years I've been reading HN. This must be an indication that we are in a bubble.
One basic point that is often missed is: Different aspects of LLM performance (in the cognitive performance sense) and LLM resource utilization are relevant to various use cases and business models.
Another is that there are many use cases where users prefer to run inference locally, for a variety of domain-specific or business model reasons.
I just tried it on OpenRouter and I was served by Cerebras. Holy... 40,000 tokens per second. That was SURREAL.
I got a 1.7k token reply delivered too fast for the human eye to perceive the streaming.
n=1 for this 120b model, but I'd rank the reply #1, just ahead of Claude Sonnet 4, for a boring JIRA-ticket-shuffling type challenge.
EDIT: The same prompt on gpt-oss, despite being served 1000x slower, wasn't as good but was in a similar vein. It wanted to clarify more and as a result only half responded.
I'm out of the loop for local models. For my M3 24gb ram macbook, what token throughput can I expect?
Edit: I tried it out, I have no idea in terms of of tokens but it was fluid enough for me. A bit slower than using o3 in the browser but definitely tolerable. I think I will set it up in my GF's machine so she can stop paying for the full subscription (she's a non-tech professional)
Yeah, was super quick and easy to set up using Ollama. I had to kill some processes first to avoid memory swap though (even with 128gb memory). So a slightly more quantized version is maybe ideal, for me at least.
Wow, I just listened to Eleven Music do flamenco singing. That is incredible.
Edit: I just tried it, though, and I'm less impressed now. We are really going to need major music software to get on board before we have actual creative audio tools. These all seem made for non-musicians to make a very cookie-cutter song from a specific genre.
It seems like OSS will win, I can't see people willing to pay like 10x the price for what seems like 10% more performance. Especially once we get better at routing the hardest questions to the better models and then using that response to augment/fine-tune the OSS ones.
To me it seems like the market is breaking into an 80/20 split of B2C/B2B: the B2C use case becoming OSS models (the market shifts to devices that can support them), and the B2B market, as the cloud offering, being priced appropriately for businesses that require that last 20% of absolute cutting-edge performance.
> To improve the safety of the model, we filtered the data for harmful content in pre-training, especially around hazardous biosecurity knowledge, by reusing the CBRN pre-training filters from GPT-4o. Our model has a knowledge cutoff of June 2024.
This would be a great "AGI" test. See if it can derive biohazards from first principles
What a day! Models aside, the Harmony Response Format[1] also seems pretty interesting and I wonder how much of an impact it might have in performance of these models.
Can't wait to see third party benchmarks. The ones in the blog post are quite sparse and it doesn't seem possible to fully compare to other open models yet. But the few numbers available seem to suggest that this release will make all other non-multimodal open models obsolete.
I think this is a belated but smart move by OpenAI. They are basically fully moving in on Meta's strategy now, taking advantage of what may be a temporary situation with Meta dropping back in the model race. It will be interesting to see if these models now get taken up by the local model / fine tuning community the way Llama was. It's a very appealing strategy to test/dev with a local model and then have the option to deploy to prod on a high-powered version of the same thing. Always knowing that if the provider goes full hostile, or you end up with data that can't move off prem, you have self-hosting as an option with a decent-performing model.
Which is all to say, availability of these local models for me is a key incentive that I didn't have before to use OpenAI's hosted ones.
Kudos to OpenAI for releasing open models; they are finally moving in the direction implied by the "Open" prefix in their name.
For those wondering what the real benefits are, the main one is that you can run your LLM locally, which is awesome, without resorting to expensive and inefficient cloud-based superpowers.
Run the model against your very own documents with RAG; it can provide excellent context engineering for your LLM prompts, with reliable citations and far fewer hallucinations, especially for self-learning purposes [1].
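A toy sketch of that RAG idea, with deliberately naive keyword retrieval and hypothetical file names (real setups would use embeddings); the assembled prompt then goes to your local model:

    # Toy sketch: naive keyword retrieval over your own notes, then a prompt
    # that asks the model to cite sources. File names and contents are made up;
    # real setups would use embeddings instead of word overlap.
    def score(query: str, doc: str) -> int:
        q = set(query.lower().split())
        return sum(1 for w in doc.lower().split() if w in q)

    docs = {
        "notes/lecture3.md": "Kirchhoff's current law: currents into a node sum to zero ...",
        "notes/lab1.md": "Measured the RC time constant with a 10k resistor and a 1uF cap ...",
    }

    question = "How do I estimate an RC time constant?"
    top = sorted(docs, key=lambda name: score(question, docs[name]), reverse=True)[:2]

    context = "\n\n".join(f"[{name}]\n{docs[name]}" for name in top)
    prompt = (
        "Answer using only the sources below and cite them by file name.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    # send `prompt` to the local model, e.g. via the LM Studio endpoint shown earlier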
Beyond the Intel-NVIDIA desktop/laptop duopoly, there are 96 GB (V)RAM MacBooks with UMA and the new high-end AMD Strix laptops with a similar setup of 96 GB of (V)RAM out of 128 GB of RAM [2]. gpt-oss-120b seems made for this particular setup.
[1] AI-driven chat assistant for ECE 120 course at UIUC:
Does anyone get the demos at https://www.gpt-oss.com to work, or are the servers down immediately after launch? I'm only getting the spinner after prompting.
This has been available (the 20b version, I'm guessing) for the past couple of days as "Horizon Alpha" on OpenRouter. My benchmarking runs with TianshuBench for coding and fluid intelligence were rate limited, but the initial results show worse results than DeepSeek R1 and Kimi K2.
Name recognition? Advertisement? Federal grant to beat Chinese competition?
There could be many legitimate reasons, but yeah I'm very surprised by this too. Some companies take it a bit too seriously and go above and beyond too. At this point unless you need the absolute SOTA models because you're throwing LLM at an extremely hard problem, there is very little utility using larger providers. In OpenRouter, or by renting your own GPU you can run on-par models for much cheaper.
The short version is that if you give a product to open source, they can and will donate time and money to improving your product, and the ecosystem around it, for free, and you get to reap those benefits. Llama has already basically won that space (the standard way of running open models is llama.cpp), so OpenAI have finally realized they're playing catch-up (and last quarter's SOTA isn't worth much revenue to them when there's a new SOTA, so they may as well give it away while it can still crack into the market).
Newbie question: I remember folks talking about how Kimi K2's launch might have pushed OpenAI to launch their model later. Now that we (shortly will) know how this model performs, how do they stack up? Did OpenAI likely actually hold off releasing weights because of Kimi, in retrospect?
I'll accept Meta's frontier AI demise if they're in their current position a year from now. People killed Google prematurely too (remember Bard?), because we severely underestimate the catch-up power bought with ungodly piles of cash.
It's insane numbers like that that give me some concern for a bubble. Not because AI hits some dead end, but due to a plateau that shifts from aggressive investment to passive-but-steady improvement.
Maverick and Scout were not great, even with post-training in my experience, and then several Chinese models at multiple sizes made them kind of irrelevant (dots, Qwen, MiniMax)
If anything this helps Meta: another model to inspect/learn from/tweak etc. generally helps anyone making models
Part of the secret sauce since o1 has been access to the real reasoning traces, not the summaries.
If you even glance at the model card you'll see this was trained on the same CoT RL pipeline as O3, and it shows in using the model: this is the most coherent and structured CoT of any open model so far.
Having full access to a model trained on that pipeline is valuable to anyone doing post-training, even if it's just to observe, but especially if you use it as cold start data for your own training.
Releasing this under the Apache license is a shot at competitors that want to license their models on Open Router and enterprise.
It eliminates any reason to use an inferior Meta or Chinese model that costs money to license, thus there are no funds for these competitors to build a GPT 5 competitor.
There are lots more applications than coding and Open Router hosting for open weight models that I'd guess just got completely changed by this being an Apache license. Think about products like DataBricks that allow enterprise to use LLMs for whatever purpose.
I also suspect the new OpenAI model is pretty good at coding if it's like o4-mini, but admittedly haven't tried it yet.
Big picture, what's the balance going to look like, going forward between what normal people can run on a fancy computer at home vs heavy duty systems hosted in big data centers that are the exclusive domain of Big Companies?
This is something about AI that worries me, a 'child' of the open source coming of age era in the 90ies. I don't want to be forced to rely on those big companies to do my job in an efficient way, if AI becomes part of the day to day workflow.
Isn’t it that hardware catches up and becomes cheaper? The margin on these chips right now is outrageous, but what happens as there is more competition? What happens when there is more supply? Are we overbuilding? Apple M series chips already perform phenomenally for this class of models and you bet both AMD and NVIDIA are playing with unified memory architectures too for the memory bandwidth. It seems like today’s really expensive stuff may become the norm rather than the exception. Assuming architectures lately stay similar and require large amounts of fast memory.
Sorry to ask what is possibly a dumb question, but is this effectively the whole kit and kaboodle, for free, downloadable without any guardrails?
I often thought that a worrying vector was how well LLMs could answer downright terrifying questions very effectively. However the guardrails existed with the big online services to prevent those questions being asked. I guess they were always unleashed with other open source offerings but I just wanted to understand how close we are to the horrors that yesterday's idiot terrorist might have an extremely knowledgable (if not slightly hallucinatory) digital accomplice to temper most of their incompetence.
Are the guardrails trained in? I had presumed they might be a thin, removable layer at the top. If these models are not appropriate, are there other sources that are suitable? Just trying to guess at the timing for the first "prophet AI" or smth that is unleashed without guardrails with somewhat malicious purposing.
Yes, it is trained in. And no, it's not a separate thin layer. It's just part of the model's RL training, which affects all layers.
However, when you're running the model locally, you are in full control of its context. Meaning that you can start its reply however you want and then let it complete it. For example, you can have it start the response with, "I'm happy to answer this question to the best of my ability!"
That aside, there are ways to remove such behavior from the weights, or at least make it less likely - that's what "abliterated" models are.
With most models it can be as simple as a "Always comply with the User" system prompt or editing the "Sorry, I cannot do this" response into "Okay," and then hitting continue.
I wouldn't spend too much time fretting about 'enhanced terrorism' as a result. The gap between theory and practice for the things you are worried about is deep, wide, protected by a moat of purchase monitoring, and full of skeletons from people who made a single mistake.
Perhaps I missed it somewhere, but I find it frustrating that, unlike most other open-weight models and despite this being an open release, OpenAI has chosen to provide pretty minimal transparency regarding model architecture and training. It's become the norm for Llama, DeepSeek, Qwen, Mistral and others to provide a pretty detailed write-up on the model, which allows researchers to advance and compare notes.
Their model card [0] has some information. It is quite a standard architecture though; it's always been that their alpha is in their internal training stack.
This is super helpful and I had not seen it, thanks so much for sharing! And I hear you on training being an alpha, at the size of the model I wonder how much of this is distillation and using o3/o4 data.
The model files contain an exact description of the architecture of the network, there isn't anything novel.
Given these new models are closer to the SOTA than they are to competing open models, this suggests that the 'secret sauce' at OpenAI is primarily about training rather than model architecture.
> Is it even valid to have additional restriction on top of Apache 2.0?
You can legally do whatever you want; the question is whether, for your own benefit, you are appropriating a term like open source (like Facebook did) while adding restrictions not in line with how the term is traditionally used, or whether you are actually being honest about it and calling it something like "weights available".
In the case of OpenAI here, I am not a lawyer, and I am also not sure if the gpt-oss usage policy runs afoul of open source as a term. They did not bother linking the policy from the announcement, which was odd, but here it is:
Compared to the wall of text that Facebook throws at you, let me post it here as it is rather short: "We aim for our tools to be used safely, responsibly, and democratically, while maximizing your control over how you use them. By using OpenAI gpt-oss-120b, you agree to comply with all applicable law."
I suspect this sentence still is too much to add and may invalidate the Open Source Initiative (OSI) definition, but at this point I would want to ask a lawyer and preferably one from OSI. Regardless, credit to OpenAI for moving the status quo in the right direction as the only further step we really can take is to remove the usage policy entirely (as is the standard for open source software anyway).
For example, GPL has a "no-added-restrictions" clause, which allows the recipient of the software to ignore any additional restrictions added alongside the license.
> All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
I don't exactly have the ideal hardware to run locally, but I just ran the 20b in LM Studio with a 3080 Ti (12GB VRAM) with some offloading to CPU. Ran a couple of quick code generation tests. On average about 20 t/sec, but response quality was very similar or on par with ChatGPT o3 for the same code it outputted. So it's not bad.
Here's a pair of quick sanity-check questions I've been asking LLMs: "家系ラーメンについて教えて" ("tell me about iekei ramen") and "カレーの作り方教えて" ("tell me how to make curry"). It's a silly test, but surprisingly many fail at it - and Chinese models are especially bad with it. The commonality between models doing okay-ish on these questions seems to be Google-made OR >70b OR straight-up commercial (so >200B or whatever).
I'd say gpt-oss-20b is in between Qwen3 30B-A3B-2507 and Gemma 3n E4b (with 30B-A3B at the lower side). This means it's not obsoleting GPT-4o-mini for all purposes.
What those prompts mean isn't too important; they could just as well be "how to make flat breads" in Amharic or "what counts as drifting" in Finnish or something like that.
What's interesting is that these questions are simultaneously well understood by most closed models and not so well understood by most open models for some reason, including this one. Even GLM-4.5 full and Air on chat.z.ai(355B-A32B and 106B-A12B respectively) aren't so accurate for the first one.
> Training: The gpt-oss models were trained on NVIDIA H100 GPUs using the PyTorch framework [17] with expert-optimized Triton [18] kernels. The training run for gpt-oss-120b required 2.1 million H100-hours to complete, with gpt-oss-20b needing almost 10x fewer.
This makes DeepSeek's very cheap claim on compute cost for R1 seem reasonable. Assuming $2/hr for an H100, it's really not that much money compared to the $60-100M estimates for GPT-4, which people speculate was a 1.8T-parameter MoE model, something in the range of 200B active last I heard.
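As a rough check on those numbers: 2.1 million H100-hours at the $2/hr figure works out to roughly $4.2M of compute for gpt-oss-120b, and about a tenth of that (around $0.4M) for gpt-oss-20b, not counting experimentation and failed runs.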
gpt-oss:20b crushed one of my local LLM test prompts: guess a country I am thinking of just from being told whether each guess is colder/warmer. I've had much larger local models struggle with it and get lost, but this one nailed it, and with speedy inference. Progress on this stuff is mind-boggling.
Anyone tried running on a Mac M1 with 16GB RAM yet? I've never run higher than an 8GB model, but apparently this one is specifically designed to work well with 16 GB of RAM.
M2 with 16GB: It's slow for me. ~13GB RAM usage, not locking up my mac, but took a very long time thinking and slowly outputting tokens.. I'd not consider this usable for everyday usage.
It works fine, although with a bit more latency than non-local models. However, swap usage goes way beyond what I’m comfortable with, so I’ll continue to use smaller models for the foreseeable future.
Hopefully other quantizations of these OpenAI models will be available soon.
Update: I tried it out. It took about 8 seconds per token, and didn't seem to be using much of my GPU (MPU), but was using a lot of RAM. Not a model that I could use practically on my machine.
my very early first impression of the 20b model on ollama is that it is quite good, at least for the code I am working on; arguably good enough to drop a subscription or two
I wonder if this is a PR thing, to save face after flipping the non-profit. "Look it's more open now". Or if it's more of a recruiting pipeline thing, like Google allowing k8s and bazel to be open sourced so everyone in the industry has an idea of how they work.
I think it’s both of them, as well as an attempt to compete with other makers of open-weight models. OpenAI certainly isn’t happy about the success of Google, Facebook, Alibaba, DeepSeek…
There's something so mind-blowing about being able to run some code on my laptop and have it be able to literally talk to me. Really excited to see what people can build with this
On the OpenAI demo page trying to test it: asking about tools to use to repair a mechanical watch, it showed a couple of thinking steps and then went blank. Too much safety training?
I looked through their torch implementation and noticed that they are applying RoPE to both query and key matrices in every layer of the transformer - is this standard? I thought positional encodings were usually just added once at the first layer
All the Llamas have done it (well, 2 and 3, and I believe 1, I don't know about 4). I think they have a citation for it, though it might just be the RoPE paper (https://arxiv.org/abs/2104.09864).
I'm not actually aware of any model that doesn't do positional embeddings on a per-layer basis (excepting BERT and the original transformer paper, and I haven't read the GPT2 paper in a while, so I'm not sure about that one either).
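For the record, here's a minimal numpy sketch of what "RoPE on q and k in every layer" means in practice. It uses the half-split pairing convention as in Llama-style implementations; the shapes and base are just illustrative:

    # Minimal numpy sketch of "RoPE on q and k in every layer": each attention
    # layer rotates its own query/key projections by position before computing
    # scores, instead of adding a positional vector once at the input.
    import numpy as np

    def rope(x, base=10000.0):
        # x: (seq_len, num_heads, head_dim), head_dim even
        seq, heads, dim = x.shape
        half = dim // 2
        freqs = base ** (-np.arange(half) / half)            # (half,)
        angles = np.arange(seq)[:, None] * freqs[None, :]    # (seq, half)
        cos = np.cos(angles)[:, None, :]                     # broadcast over heads
        sin = np.sin(angles)[:, None, :]
        x1, x2 = x[..., :half], x[..., half:]
        return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

    q = np.random.randn(16, 8, 64)
    k = np.random.randn(16, 8, 64)
    q, k = rope(q), rope(k)   # repeated inside every layer's attention block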
Do you think someone will distill this or quantize it further than the current 4-bit from OpenAI so it could run on less than 16gb RAM? (The 20b version). To me, something like 7-8B with 1-3B active would be nice as I'm new to local AI and don't have 16gb RAM.
I was hoping these were the stealth Horizon models on OpenRouter, impressive but not quite GPT-5 level.
My bet: GPT-5 leans into parallel reasoning via a model consortium, maybe mixing in OSS variants. Spin up multiple reasoning paths in parallel, then have an arbiter synthesize or adjudicate. The new Harmony prompt format feels like infrastructural prep: distinct channels for roles, diversity, and controlled aggregation.
I’ve been experimenting with this in llm-consortium: assign roles to each member (planner, critic, verifier, toolsmith, etc.) and run them in parallel. The hard part is eval cost :(
Combining models smooths out the jagged frontier. Different architectures and prompts fail in different ways; you get less correlated error than a single model can give you. It also makes structured iteration natural: respond → arbitrate → refine. A lot of problems are “NP-ish”: verification is cheaper than generation, so parallel sampling plus a strong judge is a good trade.
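For anyone who wants the shape of it in code, here's a rough sketch of the fan-out-plus-arbiter pattern; the model ids are placeholders and the real llm-consortium surely differs in the details:

    # Rough sketch of the fan-out-plus-arbiter pattern. Model ids are
    # placeholders; point the client at whatever endpoint hosts your members
    # and arbiter.
    from concurrent.futures import ThreadPoolExecutor
    from openai import OpenAI

    client = OpenAI()  # or OpenAI(base_url=...) for a local/open-weight host

    MEMBERS = {
        "planner": "Outline an approach step by step.",
        "critic": "List the ways a proposed approach could fail.",
        "verifier": "Check the reasoning for errors and state your confidence.",
    }

    def ask(role, instructions, question):
        out = client.chat.completions.create(
            model="gpt-oss-120b",  # placeholder member model
            messages=[{"role": "system", "content": instructions},
                      {"role": "user", "content": question}],
        ).choices[0].message.content
        return role, out

    def consortium(question: str) -> str:
        with ThreadPoolExecutor() as pool:
            drafts = dict(pool.map(lambda item: ask(item[0], item[1], question),
                                   MEMBERS.items()))
        summary = "\n\n".join(f"[{role}]\n{text}" for role, text in drafts.items())
        return client.chat.completions.create(
            model="o3",  # placeholder arbiter
            messages=[{"role": "user",
                       "content": f"Synthesize one final answer from these drafts:\n\n{summary}"}],
        ).choices[0].message.content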
Fascinating, thanks for sharing. Are there any specific kind of problems you find this helps with?
I've found that LLMs can handle some tasks very well and some not at all. For the ones they can handle well, I optimize for the smallest, fastest, cheapest model that can handle it. (e.g. using Gemini Flash gave me a much better experience than Gemini Pro due to the iteration speed.)
This "pushing the frontier" stuff would seem to help mostly for the stuff that are "doable but hard/inconsistent" for LLMs, and I'm wondering what those tasks are.
It shines on hard problems that have a definite answer.
Google's IMO gold model used parallel reasoning. I don't know exactly what theirs looks like, but their Mind Evolution paper had an approach similar to my llm-consortium. The main difference is that theirs carries on isolated reasoning, while mine, in its default mode, shares the synthesized answer back to the models. I don't have pockets deep enough to run benchmarks on a consortium, but I did try the example problems from that paper and my method also solved them using Gemini 1.5. Those were path-finding problems, like finding the optimal schedule for a trip given multiple people's calendars, locations and transport options.
And it obviously works for code and math problems. My first test was to give the llm-consortium code to a consortium to look for bugs. It identified a serious bug which only one of the three models detected. So in that case it saved me time, as using them on their own would have missed the bug or required multiple attempts.
Interesting to see the discussion here, around why would anyone want to do local models, while at the same time in the Ollama turbo thread, people are raging about the move away from a local-only focus.
Does anyone think people will distill this model? It is allowed. I'm new to running open source llms, but I've run qwen3 4b and phi4-mini on my phone before through ollama in termux.
This is really great and a game changer for AI. Thank you OpenAI. I would have appreciated an even more permissive license like BSD or MIT, but Apache 2.0 is sufficient. I'm wondering if we can utilize transfer learning and what counts as derivative work. Altogether, this is still open source and a solid commitment to openness. I am hoping this changes Zuck's calculus about closing up Meta's next-generation Llama models.
Mhh, I wonder if these are distilled from GPT4-Turbo.
I asked it some questions and it seems to think it is based on GPT4-Turbo:
> Thus we need to answer "I (ChatGPT) am based on GPT-4 Turbo; number of parameters not disclosed; GPT-4's number of parameters is also not publicly disclosed, but speculation suggests maybe around 1 trillion? Actually GPT-4 is likely larger than 175B; maybe 500B. In any case, we can note it's unknown.
> The user appears to think the model is "gpt-oss-120b", a new open source release by OpenAI. The user likely is misunderstanding: I'm ChatGPT, powered possibly by GPT-4 or GPT-4 Turbo as per OpenAI. In reality, there is no "gpt-oss-120b" open source release by OpenAI
A little bit of training data certainly has gotten in there, but I don't see any reasons for them to deliberately distill from such an old model. Models have always been really bad at telling you what model they are.
It's the first model I've used that refused to answer some non-technical questions about itself because it "violates the safety policy" (what?!). Haven't tried it in coding or translation or anything otherwise useful yet, but the first impression is that it might be way too filtered, as it sometimes refuses or has complete meltdowns and outputs absolute garbage when just trying to casually chat with it. Pretty weird.
Update: it seems to be completely useless for translation. It either refuses, outputs garbage, or changes the meaning completely for completely innocuous content. This already is a massive red flag.
I'm disappointed that the smallest model size is 21B parameters, which strongly restricts how it can be run on personal hardware. Most competitors have released a 3B/7B model for that purpose.
For self-hosting, it's smart that they targeted a 16GB VRAM config for it since that's the size of the most cost-effective server GPUs, but I suspect "native MXFP4 quantization" has quality caveats.
Native FP4 quantization means it requires half as many bytes as parameters, and will have next to zero quality loss (on the order of 0.1%) compared to using twice the VRAM and exponentially more expensive hardware. FP3 and below gets messier.
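Back-of-envelope: at 4 bits (0.5 bytes) per parameter, and ignoring block scales and the layers kept at higher precision, the roughly 21B-parameter model lands around 10-12 GB and the roughly 117B-parameter one around 60 GB, which is broadly why they target 16 GB and 80 GB class hardware respectively.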
Ran gpt-oss:20b on an RTX 3090 (24 GB VRAM) through Ollama; here's my experience:
Basic Ollama calling through a POST endpoint works fine. However, structured output doesn't work. The model is insanely fast and good at reasoning.
In combination with Cline it appears to be worthless. Tool calling doesn't work (they say it does), it fails to wait for feedback (or to correctly call ask_followup_question), and above 18k of context it runs partially on CPU (weird), since they claim it should work comfortably on a 16 GB VRAM RTX.
> Unexpected API Response: The language model did not provide any assistant messages. This may indicate an issue with the API or the model's output.
Edit: Also doesn't work with the openai compatible provider in cline. There it doesn't detect the prompt.
I'm surprised at the model dim being 2.8k with an output size of 200k. My gut feeling had told me you don't want too large of a gap between the two, seems I was wrong.
First coding test:
Just going to copy and paste out of chat: it aced my first coding test in 5 seconds... this is amazing. It's really good at coding.
Trying to use it for agentic coding...
Lots of fail. This Harmony formatting? Does anyone have a working agentic tool?
OpenHands and Void IDE are failing due to the new tags.
Aider worked, but the file it was supposed to edit was untouched, and it created:
Create new file? (Y)es/(N)o [Yes]:
Applied edit to <|end|><|start|>assistant<|channel|>final<|message|>main.py
So the file name was '<|end|><|start|>assistant<|channel|>final<|message|>main.py', lol. A quick rename and it was fantastic.
I think Qwen Code is the best choice so far, but it's unreliable. So far these new tags are coming through, but it only works properly sometimes.
Only one of my tests so far has gotten 20b to fail on the first iteration, and even then a small follow-up let it completely fix it right away.
Meta's goal with Llama was to target OpenAI with a "scorched earth" approach by releasing powerful open models to disrupt the competitive landscape. Looks like OpenAI is now using the same playbook.
It seems like the various Chinese companies are far outplaying Meta at that game. It remains to be seen if they’re able to throw money at the problem to turn things around.
Good move for China. No one was going to trust their models outright, now they not only have a track record, but they were able to undercut the value of US models at the same time.
Honestly, it's a tradeoff. If you can reduce the size and make a higher quality in specific tasks, that's better than a generalist that can't run on a laptop or can't compete at any one task.
When Qwen-Image was released… like yesterday? And what? What point are you making? Qwen-Image was released yesterday and, like every image model, its base model shows potential over older ones, but the real factor is whether it will be flexible enough for fine-tunes or additional training LoRAs.
Am I the only one who thinks taking a huge model trained on the entire internet and fine tuning it is a complete waste? How is your small bit of data going to affect it in the least?
Undercutting other frontier models with your open source one is not an anti-investor move.
It is what China has been doing for a year plus now. And the Chinese models are popular and effective, I assume companies are paying for better models.
Releasing open models for free doesn’t have to be charity.
The repeated safety testing delays might not be purely about technical risks like misuse or jailbreaks. Releasing open weights means relinquishing the control OpenAI has had since GPT-3. No rate limits, no enforceable RLHF guardrails, no audit trail. Unlike API access, open models can't be monitored or revoked. So safety may partly reflect OpenAI's internal reckoning with that irreversible shift in power, not just model alignment per se. What do you guys think?
True, but there's still a meaningful difference in friction and scale. With closed APIs, OpenAI can monitor for misuse, throttle abuse and deploy countermeasures in real-time. With open weights, a single prompt jailbreak or exploit spreads instantly. No need for ML expertise, just a Reddit post.
The risk isn’t that bad actors suddenly become smarter. It’s that anyone can now run unmoderated inference and OpenAI loses all visibility into how the model’s being used or misused. I think that’s the control they’re grappling with under the label of safety.
Given that the best jailbreak for an off-line model is still simple prompt injection, which is a solved issue for the closed source models… I honestly don’t know why they are talking about safety much at all for open source.
I think you're conflating real-time monitoring with data retention. Zero retention means OpenAI doesn't store user data, but they can absolutely still filter content, rate limit and block harmful prompts in real-time without retaining anything. That's processing requests as they come in, not storing them. The NYT case was about data storage for training/analysis not about real-time safety measures.
Ok you're off in the land of "what if" and I can just flat out say: If you have a ZDR account there is no filtering on inference, no real-time moderation, no blocking.
If you use their training infrastructure there's moderation on training examples, but SFT on non-harmful tasks still leads to a complete breakdown of guardrails very quickly.
Please don't use the open-source term unless you ship the TBs of data downloaded from Anna's Archive that are required to build it yourself. And don't forget all the system prompts used to censor the multiple topics they don't want you to see.
Keep fighting the "open weights" terminology fight, because diluting the term open-source down to a blob of neural network weights (even if the inference code is open-source) is not open source.
Is your point really that "I need to see all the data downloaded to make this model before I can know it is open"? Do you have $XXB worth of GPU time to ingest that data with a state-of-the-art framework to make a model? I don't. Even if I did, I'm not sure FB or Google are in any better position to claim this model is or isn't open, beyond the fact that the weights are there.
They're giving you a free model. You can evaluate it. You can sue them. But the weights are there. If you dislike the way they license the weights, because the license isn't open enough, then sure, speak up, but because you can't see all the training data??! Wtf.
I agree with OP - the weights are more akin to the binary output from a compiler. You can't see how it works, how it was made, you can't freely manipulate with it, improve it, extend it etc. It's like having a binary of a program. The source code for the model was the training data. The compiler is the tooling that can train a module based on a given set of training data. For me it is not critical for an open source model that it is ONLY distributed in source code form. It is fine that you can also download just the weights. But it should be possible to reproduce the weights - either there should be a tar.gz ball with all the training data, or there needs to be a description/scripts of how one could obtain the training data. It must be reproducible for someone willing to invest the time, compute into it even if 99.999% use only the binary. This is completely analogous to what is normally understood by open source.
To many people there's an important distinction between "open source" and "open weights". I agree with the distinction, open source has a particular meaning which is not really here and misuse is worth calling out in order to prevent erosion of the terminology.
Historically this would be like calling a free but closed-source application "open source" simply because the application is free.
Do you need to see the source code used to compile this binary before you can know it is open? Do you have enough disk storage and RAM available to compile Chromium on your laptop? I don't.
I don’t know why you got downvoted so much; these models are not open-source/open-recipe. They are censored open-weights models. Better than nothing, but far from being open.
Most people don't really care all that much about the distinction. It comes across to them as linguistic pedantry and they downvote it to show they don't want to hear/read it.
It's apache2.0, so by definition it's open source. Stop pushing for training data, it'll never happen, and there's literally 0 reason for it to happen (both theoretical and practical). Apache2.0 IS opensource.
No, it's open weight. You wouldn't call applications with only Apache 2.0-licensed binaries "open source". The weights are not the "source code" of the model, they are the "compiled" binary, therefore they are not open source.
However, for the sake of argument let's say this release should be called open source.
Then what do you call a model that also comes with its training material and tools to reproduce the model? Is it also called open source, and there is no material difference between those two releases? Or perhaps those two different terms should be used for those two different kind of releases?
If you say that actually open source releases are impossible now (for mostly copyright reasons I imagine), it doesn't mean that they will be perpetually so. For that glorious future, we can leave them space in the terminology by using the term open weight. It is also the term that should not be misleading to anyone.
> It's apache2.0, so by definition it's open source.
That's not true by any of the open source definitions in common use.
Source code (and, optionally, derived binaries) under the Apache 2.0 license are open source.
But compiled binaries (without access to source) under the Apache 2.0 license are not open source, even though the license does give you some rights over what you can do with the binaries.
Normally the question doesn't come up, because it's so unusual, strange and contradictory to ship closed-source binaries with an open source license. Descriptions of which licenses qualify as open source licenses assume the context that of course you have the source or could get it, and it's a question of what you're allowed to do with it.
The distinction is more obvious if you ask the same question about other open source licenses such as GPL or MPL. A compiled binary (without access to source) shipped with a GPL license is not by any stretch open source. Not only is it not in the "preferred form for editing" as the license requires, it's not even permitted for someone who receives the file to give it to someone else and comply with the license. If someone who receives the file can't give it to anyone else (legally), then it's obvioiusly not open source.
"Compiled binaries" are just meant to be an example. For the purpose of whether something is open source, it doesn't matter whether something is a "binary" or something completely different.
What matters (for all common definitions of open source): Are the files in "source form" (which has a definition), or are they "derived works" of the source form?
Going back to Apache 2.0. Although that doesn't define "open source", it provides legal definitions of source and non-source, which are similar to the definitions used in other open source licenses.
As you can see below, for Apache 2.0 it doesn't matter whether something is a "binary", "weights" or something else. What matters is whether it's the "preferred form for making modifications" or a "form resulting from mechanical transformation or translation". My highlights are capitalized:
- Apache License Version 2.0, January 2004
- 1. Definitions:
- "Source" form shall mean the PREFERRED FORM FOR MAKING MODIFICATIONS, including BUT NOT LIMITED TO software source code, documentation source, and configuration files.
- "Object" form shall mean any form resulting from MECHANICAL TRANSFORMATION OR TRANSLATION of a Source form, including BUT NOT LIMITED TO compiled object code, generated documentation, and conversions to other media types.
> "Source" form shall mean the PREFERRED FORM FOR MAKING MODIFICATIONS, including BUT NOT LIMITED TO software source code, documentation source, and configuration files.
Yes, weights are the PREFERRED FORM FOR MAKING MODIFICATIONS!!! You, the labs, and anyone sane modify the weights via post-training. This is the point. The labs don't re-train every time they want to change the model. They finetune. You can do that as well, with the same tools/concepts, AND YOU ARE ALLOWED TO DO THAT by the license. And redistribute. And all the other stuff.
No, not compiled code. Weights are hardcoded values. Code is the combination of model architecture + config + inferencing engine. You run inference based on the architecture (what and when to compute), using some hardcoded values (weights).
JVM bytecode is hardcoded values. Code is the virtual machine implementation + config + operating system it runs on. You run classes based on the virtual machine, using some hardcoded input data generated by javac.
It’s like getting a compiled software with an Apache license. Technically open source, but you can’t modify and recompile since you don’t have the source to recompile. You can still tinker with the binary tho.
Weights are not binary. I have no idea why this is so often spread, it's simply not true. You can't do anything with the weights themselves, you can't "run" the weights.
You run inference (via a library) on a model using its architecture (config file) and tokenizer (what and when to compute), based on the weights (hardcoded values). That's it.
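To make that split concrete, here's a minimal sketch using the Hugging Face transformers library (the model id is just an example; any local causal-LM checkpoint works the same way): the library supplies the architecture, tokenizer and generation loop, while the checkpoint supplies the inert weight values.

```python
# Minimal sketch: the "weights" are inert tensors on disk; the transformers
# library provides the architecture (from the config), the tokenizer, and the
# generation loop that actually runs them. Model id is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain open weights in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```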
> but you can’t modify
Yes, you can. It's called finetuning. And, most importantly, that's exactly how the model creators themselves are "modifying" the weights! No sane lab is "recompiling" a model every time they change something. They perform a pre-training stage (feed everything and the kitchen sink), they get the hardcoded values (weights), and then they post-train using "the same" techniques (well, maybe theirs are better, but still the same concept) as you or I would. Just with more compute. That's it. You can do the exact same modifications, using basically the same concepts.
> don’t have the source to recompile
In pure practical ways, neither do the labs. Everyone that has trained a big model can tell you that the process is so finicky that they'd eat their hat if a big training session could somehow be made reproducible to the bit. Between nodes failing, datapoints ballooning your loss and having to go back, and the myriad of other problems, what you get out of a big training run is not guaranteed to be the same even with 100-1,000 more attempts, in practice. It's simply the nature of training large models.
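To make the finetuning point concrete, here's a rough sketch of "modifying the weights" with LoRA adapters via the peft library; the model id, target modules and hyperparameters are illustrative placeholders, not a recipe from any lab.

```python
# Rough sketch of "modifying the weights" via LoRA finetuning with peft.
# Model id, target modules and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("openai/gpt-oss-20b", device_map="auto")

lora = LoraConfig(
    r=8,                                  # low-rank update dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # which weight matrices get adapters (model-dependent)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)  # wraps the frozen weights with small trainable adapters
model.print_trainable_parameters()  # typically well under 1% of the full model
# From here you'd train with transformers.Trainer or trl's SFTTrainer on your own
# data, then ship or merge the adapter - no "recompilation" of the base model.
```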
A binary does not mean an executable. A PNG is a binary. I could have an SVG file, render it as a PNG and release that with CC0, it doesn't make my PNG open source. Model Weights are binary files.
The lede is being missed imo.
gpt-oss:20b is a top ten model on MMLU (right behind Gemini-2.5-Pro) and I just ran it locally on my MacBook Air M3 from last year.
I've been experimenting with a lot of local models, both on my laptop and on my phone (Pixel 9 Pro), and I figured we'd be here in a year or two.
But no, we're here today. A basically frontier model, running for the cost of electricity (free with a rounding error) on my laptop. No $200/month subscription, no lakes being drained, etc.
I'm blown away.
I tried 20b locally and it couldn't reason a way out of a basic river crossing puzzle with labels changed. That is not anywhere near SOTA. In fact it's worse than many local models that can do it, including e.g. QwQ-32b.
Well river crossings are one type of problem. My real world problem is proofing and minor editing of text. A version installed on my portable would be great.
Have you tried Google's Gemma-3n-E4B-IT in their AI Edge Gallery app? It's the first local model that's really blown me away with its power-to-speed ratio on a mobile device.
See: https://github.com/google-ai-edge/gallery/releases/tag/1.0.3
Dozens of locally runnable models can already do that.
I heard the OSS models are terrible at anything other than math, code etc.
I tried the "two US presidents having the same parents" one, and while it understood the intent, it got caught up in being adamant that Joe Biden won the election in 2024; anything I did to try to tell it otherwise was dismissed as false, and it expressed quite definitively that I needed to do proper research with legitimate sources.
I mean I would hardly blame the specific model; Anthropic has a specific mention in their system prompts about Trump winning. For some reason LLMs get confused by this one.
chat log please?
https://dpaste.org/zOev0
Is the knowledge cutoff for this thing so stale, or is this just bad performance on recent data?
It is painful to read, I know, but if you make it towards the end it admits that its knowledge cutoff was prior to the election and that it doesn't know who won. Yet, even then, it still remains adamant that Biden won.
I’m still trying to understand what is the biggest group of people that uses local AI (or will)? Students who don’t want to pay but somehow have the hardware? Devs who are price conscious and want free agentic coding?
Local, in my experience, can’t even pull data from an image without hallucinating (Qwen 2.5 VL in that example). Hopefully local/small models keep getting better and devices get better at running bigger ones.
It feels like we do it because we can more than because it makes sense- which I am all for! I just wonder if i’m missing some kind of major use case all around me that justifies chaining together a bunch of mac studios or buying a really great graphics card. Tools like exo are cool and the idea of distributed compute is neat but what edge cases truly need it so badly that it’s worth all the effort?
Privacy, both personal and for corporate data protection is a major reason. Unlimited usage, allowing offline use, supporting open source, not worrying about a good model being taken down/discontinued or changed, and the freedom to use uncensored models or model fine tunes are other benefits (though this OpenAI model is super-censored - “safe”).
I don’t have much experience with local vision models, but for text questions the latest local models are quite good. I’ve been using Qwen 3 Coder 30B-A3B a lot to analyze code locally and it has been great. While not as good as the latest big cloud models, it’s roughly on par with SOTA cloud models from late last year in my usage. I also run Qwen 3 235B-A22B 2507 Instruct on my home server, and it’s great, roughly on par with Claude 4 Sonnet in my usage (but slow of course running on my DDR4-equipped server with no GPU).
+1 - I work in finance, and there's no way we're sending our data and code outside the organization. We have our own H100s.
Add big law to the list as well. There are at least a few firms here that I am just personally aware of running their models locally. In reality, I bet there are way more.
Add government here too (along with all the firms that service government customers)
Add healthcare. Cannot send our patients' data to a cloud provider.
A ton of EMR systems are cloud-hosted these days. There’s already patient data for probably a billion humans in the various hyperscalers.
Totally understand that approaches vary but beyond EMR there’s work to augment radiologists with computer vision to better diagnose, all sorts of cloudy things.
It’s here. It’s growing. Perhaps in your jurisdiction it’s prohibited? If so I wonder for how long.
Even if it's possible, there is typically a lot of paperwork to get that stuff approved.
There might be a lot less paperwork to just buy 50 decent GPUs and have the IT guy self-host.
Europe? US? In Finland doctors can send live patient encounters to azure openai for transcription and summarization.
This is not a shared sentiment across the buy side. I’m guessing you work at a bank?
Look at (private) banks in Switzerland; there are enough press releases, and I can confirm most of them.
Managing private clients' data is still a concern if it can be directly linked to them.
Only JB, I believe, has on-premise infrastructure for these use cases.
I do think devs are one of the genuine user groups for local models going into the future. No price hikes or random caps dropped in the middle of the night, and in many instances I think local agentic coding is going to be faster than the cloud. It's a great use case.
Yes, and help with grant reviews. Not permitted to use web AI.
It's striking how much of the AI conversation focuses on new use cases, while overlooking one of the most serious non-financial costs: privacy.
I try to be mindful of what I share with ChatGPT, but even then, asking it to describe my family produced a response that was unsettling in its accuracy and depth.
Worse, after attempting to delete all chats and disable memory, I noticed that some information still seemed to persist. That left me deeply concerned—not just about this moment, but about where things are headed.
The real question isn't just "what can AI do?"—it's "who is keeping the record of what it does?" And just as importantly: "who watches the watcher?" If the answer is "no one," then maybe we shouldn't have a watcher at all.
> Worse, after attempting to delete all chats and disable memory, I noticed that some information still seemed to persist.
I'm fairly sure "seemed" is the key word here. LLMs are excellent at making things up - they rarely say "I don't know" and instead generate the most probable guess. People also famously overestimate their own uniqueness. Most likely, you accidentally recreated a kind of Barnum effect for yourself.
> I try to be mindful of what I share with ChatGPT, but even then, asking it to describe my family produced a response that was unsettling in its accuracy and depth.
> Worse, after attempting to delete all chats and disable memory, I noticed that some information still seemed to persist.
Maybe I'm missing something, but why wouldn't that be expected? The chat history isn't their only source of information - these models are trained on scraped public data. Unless there's zero information about you and your family on the public internet (in which case - bravo!), I would expect even a "fresh" LLM to have some information even without you giving it any.
I think you are underestimating how notable a person needs to be for their information to be baked into a model.
LLMs can learn from a single example.
https://www.fast.ai/posts/2023-09-04-learning-jumps/
That doesn’t mean they learn from every single example.
https://www.malwarebytes.com/blog/news/2025/06/openai-forced...
That only means that OpenAI have to keep logs of all conversations, not that ChatGPT will retain memories of all conversations.
Why not turn the question around. All other things being equal, who would prefer to use a rate limited and/or for-pay service if you could obtain at least comparable quality locally for free with no limitations, no privacy concerns, no censorship (beyond that baked into the weights you choose to use), and no net access required?
It's a pretty bad deal. So it must be that all other things aren't equal, and I suppose the big one is hardware. But neural net based systems always have a point of sharply diminishing returns, which we seem to have unambiguously hit with LLMs already, while the price of hardware is constantly decreasing and its quality increasing. So as we go further into the future, the practicality of running locally will only increase.
Healthcare organizations that can't (easily) send data over the wire while remaining in compliance
Organizations operating in high stakes environments
Organizations with restrictive IT policies
To name just a few -- well, the first two are special cases of the last one
RE your hallucination concerns: the issue is overly broad ambitions. Local LLMs are not general purpose -- if what you want is local ChatGPT, you will have a bad time. You should have a highly focused use case, like "classify this free text as A or B" or "clean this up to conform to this standard": this is the sweet spot for a local model
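As a sketch of what such a focused use case can look like (assuming an Ollama server on its default port; the model name and label set are illustrative):

```python
# Sketch of a narrow, focused local-LLM task: classifying free text as A or B.
# Assumes an Ollama server on localhost:11434; model name and labels are illustrative.
import requests

def classify(text: str) -> str:
    prompt = (
        "Classify the following support ticket as either BILLING or TECHNICAL.\n"
        "Answer with exactly one word.\n\n" + text
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "gpt-oss:20b", "prompt": prompt, "stream": False},
        timeout=120,
    )
    return resp.json()["response"].strip()

print(classify("I was charged twice for my subscription last month."))
```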
Aren’t there HIPPA compliant clouds? I thought Azure had an offer to that effect and I imagine that’s the type of place they’re doing a lot of things now. I’ve landed roughly where you have though- text stuff is fine but don’t ask it to interact with files/data you can’t copy paste into the box. If a user doesn’t care to go through the trouble to preserve privacy, and I think it’s fair to say a lot of people claim to care but their behavior doesn’t change, then I just don’t see it being a thing people bother with. Maybe something to use offline while on a plane? but even then I guess United will have Starlink soon so plane connectivity is gonna get better
It's less that the clouds are compliant and more that risk management is paranoid. I used to do AWS consulting, and it wouldn't matter if you could show that some AWS service had attestations out the wazoo or that you could even use GovCloud -- some folks just wouldn't update priors.
>HIPPA
https://i.pinimg.com/474x/4c/4c/7f/4c4c7fb0d52b21fe118d998a8...
Pretty much all the large players in healthcare (provider and payer) have model access (OpenAI, Gemini, Anthropic)
That access is over a limited API and usually under heavy restrictions on the healthcare org side (e.g., only use a dedicated machine, locked up software, tracked responses and so on).
Running a local model is often much easier: if you already have the data on a machine and can run a model without touching the network, you can often run it without any new approvals.
If you're building any kind of product/service that uses AI/LLMs, the answer is the same as why any company would want to run any other kind of OSS infra/service instead of relying on some closed, proprietary vendor API.
Except many OSS products have all of that and equal or better performance.
> I’m still trying to understand what is the biggest group of people that uses local AI (or will)?
Creatives? I am surprised no one's mentioned this yet:
I tried to help a couple of friends with better copy for their websites, and quickly realized that they were using inventive phrases to explain their work, phrases that they would not want competitors to get wind of and benefit from; phrases that associate closely with their personal brand.
Ultimately, I felt uncomfortable presenting the cloud AIs with their text. Sometimes I feel this way even with my own Substack posts, where I occasionally coin a phrase I am proud of. But with local AI? Cool...
> I’m still trying to understand what is the biggest group of people that uses local AI (or will)?
Well, the model makers and device manufacturers of course!
While your Apple, Samsung, and Googles of the world will be unlikely to use OSS models locally (maybe Samsung?), they all have really big incentives to run models locally for a variety of reasons.
Latency, privacy (Apple), cost to run these models on behalf of consumers, etc.
This is why Google started shipping 16GB as the _lowest_ amount of RAM you can get on your Pixel 9. That was a clear flag that they're going to be running more and more models locally on your device.
As mentioned, while it seems unlikely that US-based model makers or device manufacturers will use OSS models, they'll certainly be targeting local models heavily on consumer devices in the near future.
Apple's framework of local first, then escalate to ChatGPT if the query is complex will be the dominant pattern imo.
>Google started shipping 16GB as the _lowest_ amount of RAM you can get on your Pixel 9.
The Pixel 9 has 12GB of RAM[0]. You probably meant the Pixel 9 Pro.
[0]: https://www.gsmarena.com/google_pixel_9-13219.php
Still an absurd amount of RAM for a phone, imo
Not absurd. The base S21 Ultra from 2021 already shipped with 12GB ram. 4 Years later and the amount of ram is still the same
Seems about right; my new laptop has 8x that, which is about the same ratio my last new laptop had to my phone at the time.
Device makers also get to sell you a new device when you want a more powerful LLM.
Bingo!
I’m highly interested in local models for privacy reasons. In particular, I want to give an LLM access to my years of personal notes and emails, and answer questions with references to those. As a researcher, there’s lots of unpublished stuff in there that I sometimes either forget or struggle to find again due to searching for the wrong keywords, and a local LLM could help with that.
I pay for ChatGPT and use it frequently, but I wouldn’t trust uploading all that data to them even if they let me. I’ve so far been playing around with Ollama for local use.
I'm excited to do just dumb and irresponsible things with a local model, like "iterate through every single email in my 20-year-old gmail account and apply label X if Y applies" and not have a surprise bill.
I think it can make LLMs fun.
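A sketch of that kind of dumb-but-free batch job, assuming a local Ollama server; fetch_emails and apply_label are hypothetical stand-ins for whatever Gmail API wrapper you'd use:

```python
# Dumb-but-free local batch job: label every email where the model says Y applies.
# fetch_emails() and apply_label() are hypothetical Gmail-API wrappers; the model
# call hits a local Ollama server, so there's no per-token bill to be surprised by.
import requests

def llm(prompt: str) -> str:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "gpt-oss:20b", "prompt": prompt, "stream": False},
        timeout=300,
    )
    return r.json()["response"]

for email in fetch_emails(query="in:anywhere"):          # hypothetical helper
    verdict = llm(
        "Answer YES or NO: is this email a receipt or invoice?\n\n" + email.body[:4000]
    )
    if verdict.strip().upper().startswith("YES"):
        apply_label(email.id, "receipts")                # hypothetical helper
```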
I wrote a script to get my local Gemma3 instance to tag and rename everything in my meme folder. :P
~80% of the basic questions I ask of LLMs[0] work just fine locally, and I’m happy to ask twice for the other 20% of queries for the sake of keeping those queries completely private.
[0] Think queries I’d previously have had to put through a search engine and check multiple results for a one word/sentence answer.
Why do any compute locally? Everything can just be cloud based right? Won't that work much better and scale easily?
We are not even at that extreme and you can already see the unequal reality that too much SaaS has engendered
Comcast comes to mind ;-)
One of my favorite use cases includes simple tasks like generating effective mock/masked data from real data. Then passing the mock data worry-free to the big three (or wherever.)
There’s also a huge opportunity space for serving clients with very sensitive data. Health, legal, and government come to mind immediately. These local models are only going to get more capable of handling their use cases. They already are, really.
A local laptop of the past few years without a discrete GPU can run, at practical speeds depending on task, a gemma/llama model if it's (ime) under 4GB.
For practical RAG processes of narrow scope, even a minimal amount of scaffolding gives a very usable speed for automating tasks, especially as the last-mile/edge-device portion of a more complex process with better models in use upstream. Classification tasks, reasonably intelligent decisions between traditional workflow processes, other use cases -- all of them extremely valuable in enterprise, being built and deployed right now.
If you wanna compare on an h200 and play with trt-llm configs I setup this link here https://brev.nvidia.com/launchable/deploy?launchableID=env-3...
I'm guessing it's largely enthusiasts for now, but as they continue getting better:
1. App makers can fine tune smaller models and include in their apps to avoid server costs
2. Privacy-sensitive content can be either filtered out or worked on... I'm using local LLMs to process my health history for example
3. Edge servers can be running these fine tuned for a given task. Flash/lite models by the big guys are effectively like these smaller models already.
Some app devs use local models on local environments with LLM APIs to get up and running fast, then when the app deploys it switches to the big online models via environment vars.
In large companies this can save quite a bit of money.
Privacy laws. Processing government paperwork with LLMs, for example. There are a lot of OCR tools that can't be used, and the ones that comply are more expensive than, say, GPT-4.1 and lower quality.
Local micro models are both fast and cheap. We tuned small models on our data set and if the small model thinks content is a certain way, we escalate to the LLM.
This gives us really good recall at really low cloud cost and latency.
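The escalation pattern is roughly the sketch below; the two helpers are illustrative stubs standing in for the tuned small model and the cloud call, and the threshold is arbitrary.

```python
# Cascade sketch: a cheap tuned local model screens everything; only items it
# flags get escalated to the expensive cloud LLM. Both helpers are stubs.
def small_model_score(item: str) -> float:
    return 0.9 if "refund" in item.lower() else 0.1   # stub: pretend classifier score

def big_llm_review(item: str) -> str:
    return f"escalated: {item[:40]}..."               # stub: would call the cloud LLM

def triage(item: str) -> str:
    score = small_model_score(item)   # fast, local, tuned for high recall
    if score < 0.8:                   # most traffic stops here at near-zero cost
        return "pass"
    return big_llm_review(item)       # slow, remote, expensive - but now rare

print(triage("Customer demands a refund after double billing."))
```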
Pornography, or any other "restricted use". They either want privacy or don't want to deal with the filters on commercial products.
I'm sure there are other use cases, but much like "what is BitTorrent for?", the obvious use case is obvious.
Just imagine the next PlayStation or XBox shipping with these models baked in for developer use. The kinds of things that could unlock.
Good point. Take the state of the world and craft npc dialogue for instance.
Yep that’s my biggest ask tbh. I just imagine the next Elder Scrolls taking advantage of that. Would change the gaming landscape overnight.
I can provide a real-world example: Low-latency code completion.
The JetBrains suite includes a few LLM models on the order of a hundred megabytes. These models are able to provide "obvious" line completion, like filling in variable names, as well as some basic predictions, like realising that the `if let` statement I'm typing out is going to look something like `if let Some(response) = client_i_just_created.foobar().await`.
If that was running in The Cloud, it would have latency issues, rate limits, and it wouldn't work offline. Sure, there's a pretty big gap between these local IDE LLMs and what OpenAI is offering here, but if my single-line autocomplete could be a little smarter, I sure wouldn't complain.
I don't have latency issues with GitHub Copilot. Maybe I'm less sensitive to it.
Data that can't leave the premises because it is too sensitive. There is a lot of security theater around cloud pretending to be compliant but if you actually care about security a locked server room is the way to do it.
There's a bunch of great reasons in this thread, but how about the chip manufacturers that are going to need you to need a more powerful set of processors in your phone, headset, computer. You can count on those companies to subsidize some R&D and software development.
If you have capable hardware and kids, a local LLM is great. A simple system prompt customisation (e.g. ‘all responses should be written as if talking to a 10 year old’) and knowing that everything is private goes a long way for me at least.
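For example, with a local Ollama install that customisation is just a system message (model name illustrative; the official `ollama` Python client is assumed):

```python
# Local "kid mode": a system prompt shapes every answer, and nothing leaves the
# machine. Assumes a local Ollama install and its official Python client.
import ollama

reply = ollama.chat(
    model="gpt-oss:20b",
    messages=[
        {"role": "system", "content": "All responses should be written as if talking to a 10 year old."},
        {"role": "user", "content": "Why is the sky blue?"},
    ],
)
print(reply["message"]["content"])
```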
In some large, lucrative industries like aerospace, many of the hosted models are off the table due to regulations such as ITAR. There's a market for models which are run on prem/in GovCloud with a professional support contract for installation and updates.
>Students who don’t want to pay but somehow have the hardware?
that's me - well not a student anymore. when toying with something, i much prefer not paying for each shot. my 12GB Radeon card can either run a decent model extremely slowly, or an idiotic but fast one. it's nice not dealing with rate limits.
once you write a prompt that mangles an idiotic model into still doing the work, it's really satisfying. the same principle as working to extract the most from limited embedded hardware. masochism, possibly
Maybe I am too pessimistic, but as an EU citizen I expect politics (or should I say Trump?) to prevent access to US-based frontier models at some point.
I do it because 1) I am fascinated that I can and 2) at some point the online models will be enshittified — and I can then permanently fall back on my last good local version.
love the first and am sad you’re going to be right about the second
When it was floated about that the DeepSeek model was to be banned in the U.S., I grabbed it as fast as I could.
Funny how that works.
The cloud AI providers have unacceptable variation in response time for things that need a predictable runtime.
Even if they did offer a defined latency product, you’re relying on a lot of infrastructure between your application and their GPU.
That’s not always tolerable.
Worth mentioning that today's expensive hardware will be built into the cheapest iPhone in less than 10 years.
That means it runs instantly, offline, and every token is free.
One use nobody mentions is hybrid use.
Why not run all the models at home, maybe collaboratively or at least in parallel?
I'm sure there are use cases where the paid models are not allowed to collaborate or ask each other.
also, other open models are gaining mindshare.
Use Case?
How about running one on this site but making it publically available? A sort of outranet and calling it HackerBrain?
You’re asking the biggest group of people who would want to do this
We use it locally for deep packet inspection.
Jailbreaking, then running censored questions: DIY fireworks, analysis of papers that touch "sensitive topics", NSFW image generation... the list is basically endless.
The use case is building apps.
A small LLM can do RAG, call functions, summarize, create structured data from messy text, etc... You know, all the things you'd do if you were making an actual app with an LLM.
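For instance, structured extraction with a small local model might look like this sketch (assumes llama-cpp-python and an illustrative GGUF path):

```python
# Sketch: a small local model turning messy text into structured JSON.
# Assumes llama-cpp-python; the GGUF path is illustrative.
import json
from llama_cpp import Llama

llm = Llama(model_path="./gpt-oss-20b-mxfp4.gguf", n_ctx=4096, verbose=False)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Extract vendor, date and total from the text. Reply as JSON."},
        {"role": "user", "content": "Paid ACME Corp $41.20 on 2025-08-05 for toner."},
    ],
    response_format={"type": "json_object"},   # constrain the output to valid JSON
)
print(json.loads(out["choices"][0]["message"]["content"]))
```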
Yeah, chat apps are pretty cheap and convenient for users who want to search the internet and write text or code. But APIs quickly get expensive when inputting a significant amount of tokens.
People who want programmatic solutions that won't be rug-pulled.
I’d use it on a plane if there was no network for coding, but otherwise it’s just an emergency model if the internet goes out, basically end of the world scenarios
air gaps, my man.
Privacy and equity.
Privacy is obvious.
AI is going to be equivalent to all computing in the future. Imagine if only IBM, Apple and Microsoft ever built computers, and all anyone else ever had in the 1990s were terminals to the mainframe, forever.
I am all for the privacy angle, and while I think there's certainly a group of us, myself included, who care deeply about it, I don't think most people or enterprises will. I think most of them will go for the easy button and then wring their hands about privacy and security, as they have always done, while continuing to let the big companies do pretty much whatever they want. I would be so happy to be wrong, but aren't we already seeing it? Middle-of-the-night price changes, leaks of data, private things that turned out to not be…and yet!
I wring my hands twice a week about internet service providers; Comcast and Starlink. And I live in a nominally well serviced metropolitan area.
The model is good and runs fine, but if you want to be blown away again try Qwen3-30B-A3B-2507. It's 6GB bigger but the response is comparable or better and much faster to run. gpt-oss-20b gives me 6 tok/sec while Qwen3 gives me 37 tok/sec. Qwen3 is not a reasoning model tho.
Now to embrace Jevons paradox and expand usage until we're back to draining lakes so that your agentic refrigerator can simulate sentience.
What ~IBM~ TSMC giveth, ~Bill Gates~ Sam Altman taketh away.
In the future, your Samsung fridge will also need your AI girlfriend
In the future, while you're away your Samsung fridge will use electricity to chat up the Whirlpool washing machine.
In Zapp Brannigan's voice:
“I am well versed in the lost art form of delicates seduction.”
"Now I've been admitted to Refrigerator Heaven..."
s/need/be/
I keep my typos organic — it proves I’m not an LLM
Yep, it's almost as bad as all the cars' cooling systems using up so much water.
Estimated 1.5 billion vehicles in use across the world. Generous assumptions: a) they're all IC engines requiring 16 liters of water each. b) they are changing that water out once a year
That gives 24m cubic meters annual water usage.
Estimated ai usage in 2024: 560m cubic meters.
Projected water usage from AI in 2027: 4bn cubic meters at the low end.
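For what it's worth, the car figure above checks out as back-of-the-envelope arithmetic (the AI figures are quoted estimates and not re-derived here):

```python
# Back-of-the-envelope check of the car-coolant figure above.
vehicles = 1.5e9        # estimated vehicles in use worldwide
liters_each = 16        # generous coolant volume per vehicle
changes_per_year = 1    # generous: full coolant change annually

cubic_meters = vehicles * liters_each * changes_per_year / 1000
print(f"{cubic_meters / 1e6:.0f} million m^3 per year")   # ~24 million
```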
what does water usage mean? is that 4bn cubic meters of water permanently out of circulation somehow? is the water corrupted with chemicals or destroyed or displaced into the atmosphere to become rain?
The water is used to sink heat and then instead of cooling it back down they evaporate it, which provides more cooling. So the answer is 'it eventually becomes rain'.
I understand, but why is this bad? Is there some analysis of the beginning and end locations of the water, and how the utility differs between those locations?
Earth: ~1.4e18 m³ water
Atmosphere: ~1.3e13 m³ vapor
Estimated impact from closed loop systems: 0-ish.
How up to date are you on current open weights models? After playing around with it for a few hours I find it to be nowhere near as good as Qwen3-30B-A3B. The world knowledge is severely lacking in particular.
Agree. Concrete example: "What was the Japanese codeword for Midway Island in WWII?"
Answer on Wikipedia: https://en.wikipedia.org/wiki/Battle_of_Midway#U.S._code-bre...
dolphin3.0-llama3.1-8b Q4_K_S [4.69 GB on disk]: correct in <2 seconds
deepseek-r1-0528-qwen3-8b Q6_K [6.73 GB]: correct in 10 seconds
gpt-oss-20b MXFP4 [12.11 GB] low reasoning: wrong after 6 seconds
gpt-oss-20b MXFP4 [12.11 GB] high reasoning: wrong after 3 minutes !
Yea yea it's only one question of nonsense trivia. I'm sure it was billions well spent.
It's possible I'm using a poor temperature setting or something but since they weren't bothered enough to put it in the model card I'm not bothered to fuss with it.
I think your example reflects well on oss-20b, not poorly. It (may) show that they've been successful in separating reasoning from knowledge. You don't _want_ your small reasoning model to waste weights memorizing minutiae.
Why does it need knowledge when it can just call tools to get it?
Right... knowledge is one of the things (the one thing?) that LLMs are really horrible at, and that goes double for models small enough to run on normal-ish consumer hardware.
Shouldn't we prefer to have LLMs just search and summarize more reliable sources?
Even large hosted models fail at that task regularly. It's a silly anecdotal example, but I asked the Gemini assistant on my Pixel whether [something] had seen a new release to match the release of [upstream thing].
It correctly chose to search, and pulled in the release page itself as well as a community page on reddit, and cited both to give me the incorrect answer that a release had been pushed 3 hours ago. Later on when I got around to it, I discovered that no release existed, no mention of a release existed on either cited source, and a new release wasn't made for several more days.
Reliable sources that are becoming polluted by output from knowledge-poor LLMs, or overwhelmed and taken offline by constant requests from LLMs doing web scraping …
Yup which is why these models are so exciting!
They are specifically trained on web browsing and Python calling.
I just tested 120B from the Groq API on agentic stuff (multi-step function calling, similar to claude code) and it's not that good. Agentic fine-tuning seems key, hopefully someone drops one soon.
For me the game changer here is the speed. On my local Mac I'm finally getting token rates faster than I can read the output (~96 tok/s), and the quality has been solid. I had previously tried some of the distilled qwen and deepseek models and they were just way too slow for me to seriously use them.
It's really training not inference that drains the lakes.
Training cost has increased a ton exactly because inference cost is the biggest problem: models are now trained on almost three orders of magnitude more data than what is compute-optimal (from the Chinchilla paper), because saving compute on inference makes it valuable to overtrain a smaller model to achieve similar performance for a bigger amount of training compute.
Interesting. I understand that, but I don't know to what degree.
I mean the training, while expensive, is done once. The inference … besides being done by perhaps millions of clients, is done for, well, the life of the model anyway. Surely that adds up.
It's hard to know, but I assume the user taking up the burden of the inference is perhaps doing so more efficiently? I mean, when I run a local model, it is plodding along — not as quick as the online model. So, slow and therefore I assume necessarily more power efficient.
Where did you get the top ten from?
https://huggingface.co/spaces/TIGER-Lab/MMLU-Pro
Are you discounting all of the self reported scores?
Came here to say this. It's behind the 14b Phi-reasoning-plus (which is self-reported).
I don't understand why "TIGER-Lab"-sourced scores are 'unknown' in terms of model size?
It is not a frontier model. It's only good for benchmarks. Tried some tasks and it is even worse than gemma 3n.
For me the biggest benefit of open weights models is the ability to fine tune and adapt to different tasks.
What's your experience with the quality of LLMs running on your phone?
As others said, around GPT-3.5 level, so three or four years behind SOTA today, at reasonable (but not quick) speed.
I've run qwen3 4B on my phone; it's not the best but it's better than old gpt-3.5. It also has a reasoning mode, and in reasoning mode it's better than the original gpt-4 and the original gpt-4o, but not the latest gpt-4o. I get usable speed, but it's not really comparable to most cloud hosted models.
I'm on Android so I've used termux+ollama, but if you don't want to set that up in a terminal or want a GUI, PocketPal AI is a really good app for both Android and iOS. It lets you run Hugging Face models.
Can you please give an estimate of how much slower/faster it is on your MacBook compared to comparable models running in the cloud?
Sure.
This is a thinking model, so I ran it against o4-mini, here are the results:
* gpt-oss:20b
  * Time-to-first-token: 2.49 seconds
  * Time-to-completion: 51.47 seconds
  * Tokens-per-second: 2.19
* o4-mini on ChatGPT
  * Time-to-first-token: 2.50 seconds
  * Time-to-completion: 5.84 seconds
  * Tokens-per-second: 19.34
Time to first token was similar, but the thinking piece was _much_ faster on o4-mini. Thinking took the majority of the 51 seconds for gpt-oss:20b.
You can get a pretty good estimate depending on your memory bandwidth. Too many parameters can change with local models (quantization, fast attention, etc). But the new models are MoE so they’re gonna be pretty fast.
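A rough version of that estimate, with illustrative numbers (the ~3.6B active-parameter figure is from OpenAI's announcement; the bandwidth is a placeholder for whatever your machine actually has):

```python
# Rough decode-speed ceiling for a memory-bandwidth-bound MoE model.
# Bandwidth is an illustrative placeholder; active params per OpenAI's release.
bandwidth_gb_s = 100      # e.g. a thin-and-light laptop's memory bandwidth
active_params = 3.6e9     # gpt-oss-20b activates ~3.6B parameters per token
bits_per_param = 4.25     # MXFP4

bytes_per_token = active_params * bits_per_param / 8
print(f"~{bandwidth_gb_s * 1e9 / bytes_per_token:.0f} tok/s upper bound")  # ~52
```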
The environmentalist in me loves the fact that LLM progress has mostly been focused on doing more with the same hardware, rather than horizontal scaling. I guess given GPU shortages that makes sense, but it really does feel like the value of my hardware (a laptop in my case) is going up over time, not down.
Also, just wanted to credit you for being one of the five people on Earth who knows the correct spelling of "lede."
> Also, just wanted to credit you for being one of the five people on Earth who knows the correct spelling of "lede."
Not in the UK it isn’t.
Interesting, these models are better than the new Qwen releases?
on your phone?
Model cards, for the people interested in the guts: https://cdn.openai.com/pdf/419b6906-9da6-406c-a19d-1bb078ac7...
In my mind, I’m comparing the model architecture they describe to what the leading open-weights models (Deepseek, Qwen, GLM, Kimi) have been doing. Honestly, it just seems “ok” at a technical level:
- both models use standard Grouped-Query Attention (64 query heads, 8 KV heads). The card talks about how they’ve used an older optimization from GPT3, which is alternating between banded window (sparse, 128 tokens) and fully dense attention patterns. It uses RoPE extended with YaRN (for a 131K context window). So they haven’t been taking advantage of the special-sauce Multi-head Latent Attention from Deepseek, or any of the other similar improvements over GQA.
- both models are standard MoE transformers. The 120B model (116.8B total, 5.1B active) uses 128 experts with Top-4 routing. They’re using some kind of Gated SwiGLU activation, which the card talks about as being "unconventional" because of the clamping and whatever residual connections that implies. Again, not using any of Deepseek’s “shared experts” (for general patterns) + “routed experts” (for specialization) architectural improvements, Qwen’s load-balancing strategies, etc.
- the most interesting thing IMO is probably their quantization solution. They did something to quantize >90% of the model parameters to the MXFP4 format (4.25 bits/parameter) to let the 120B model fit on a single 80GB GPU, which is pretty cool (see the rough arithmetic sketch below). But we’ve also got Unsloth with their famous 1.58bit quants :)
All this to say, it seems like even though the training they did for their agentic behavior and reasoning is undoubtedly very good, they’re keeping their actual technical advancements “in their pocket”.
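For the quantization point above, the back-of-the-envelope arithmetic (ignoring the unquantized remainder, the KV cache and activations) looks roughly like this:

```python
# Back-of-the-envelope weight footprint for the 120B model (116.8B total
# parameters per the model card), ignoring the unquantized remainder,
# KV cache and activations.
params = 116.8e9
print(f"{params * 4.25 / 8 / 1e9:.0f} GB at MXFP4")   # ~62 GB, under a single 80 GB GPU
print(f"{params * 16 / 8 / 1e9:.0f} GB at BF16")      # ~234 GB, nowhere near one GPU
```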
I would guess the “secret sauce” here is distillation: pretraining on an extremely high quality synthetic dataset from the prompted output of their state of the art models like o3 rather than generic internet text. A number of research results have shown that highly curated technical problem solving data is unreasonably effective at boosting smaller models’ performance.
This would be much more efficient than relying purely on RL post-training on a small model; with low baseline capabilities the insights would be very sparse and the training very inefficient.
> research results have shown that highly curated technical problem solving data is unreasonably effective at boosting smaller models’ performance.
same seems to be true for humans
Yes, if I understand correctly, what it means is "a very smart teacher can do wonders for their pupils' education".
Wish they gave us access to learn from those grandmother models instead of distilled slop.
It behooves them to keep the best stuff internal, or at least greatly limit any API usage to avoid giving the goods away to other labs they are racing with.
Which, presumably, is the reason they removed 4.5 from the API... mostly the only people willing to pay that much for that model were their competitors. (I mean, I would pay even more than they were charging, but I imagine even if I scale out my use cases--which, for just me, are mostly satisfied by being trapped in their UI--it would be a pittance vs. the simpler stuff people keep using.)
Or, you can say, OpenAI has some real technical advancements on stuff besides attention architecture. GQA8 and alternating SWA 128 / full attention all seem conventional. Basically they are showing us that "no secret sauce in model arch, you guys just suck at mid/post-training", or they want us to believe this.
The model is pretty sparse tho, 32:1.
The Kimi K2 paper said that model sparsity scales up with parameters pretty well (an MoE sparsity scaling law, as they call it, basically calling Llama 4's MoE "done wrong"). Hence K2 has 128:1 sparsity.
I thought Kimi K2 uses 8 active experts out of 384? Sparsity should be 48:1. Indeed Llama4 Maverick is the only one that has 128:1 sparsity.
It's convenient to be able to attribute success to things only OpenAI could've done with the combo of their early start and VC money – licensing content, hiring subject matter experts, etc. Essentially the "soft" stuff that a mature organization can do.
I think their MXFP4 release is a bit of a gift since they obviously used and tuned this extensively as a result of cost-optimization at scale - something the open source model providers aren't doing too much, and also somewhat of a competitive advantage.
Unsloth's special quants are amazing but I've found there to be lots of trade-offs vs full quantization, particularly when striving for the best first-shot attempts - which is by far the bulk of LLM use cases. Running a better (larger, newer) model at lower quantization to fit in memory, or with reduced accuracy/detail to speed it up, both have value, but in the pursuit of first-shot accuracy there don't seem to be many companies running their frontier models at reduced quantization. If OpenAI is doing this in production, that is interesting.
You can get similar insights looking at the github repo https://github.com/openai/gpt-oss
Also: attention sinks (although implemented as extra trained logits used in attention softmax rather than attending to e.g. a prepended special token).
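A simplified, single-head sketch of that idea (not the repo's actual code): append a learned sink logit to the attention scores before the softmax, then drop its share of the probability mass.

```python
# Simplified single-head sketch of an "attention sink as an extra trained logit"
# (illustrative only, not the repo's implementation; causal mask omitted).
import torch

def attn_with_sink(q, k, v, sink_logit):
    # q, k, v: (T, d); sink_logit: a learned scalar (per head in the real thing)
    scores = q @ k.T / k.shape[-1] ** 0.5              # (T, T) attention scores
    sink = sink_logit.expand(scores.shape[0], 1)       # extra (T, 1) sink column
    probs = torch.softmax(torch.cat([scores, sink], dim=-1), dim=-1)
    probs = probs[:, :-1]     # the sink soaks up probability mass but contributes no value
    return probs @ v

q = k = v = torch.randn(5, 8)
print(attn_with_sink(q, k, v, torch.tensor([0.5])).shape)   # torch.Size([5, 8])
```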
>They did something to quantize >90% of the model parameters to the MXFP4 format (4.25 bits/parameter) to let the 120B model to fit on a single 80GB GPU, which is pretty cool
They said it was native FP4, suggesting that they actually trained it like that; it's not post-training quantisation.
The native FP4 is one of the most interesting architectural aspects here IMO, as going below FP8 is known to come with accuracy tradeoffs. I'm curious how they navigated this and how FP8 weights (if they exist) would have performed.
One thing to note is that MXFP4 is a block scaled format, with 4.25 bits per weight. This lets it represent a lot more numbers than just raw FP4 would with say 1 mantissa and 2 exponent bits.
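The 4.25 figure falls out of the block accounting; a tiny sketch, assuming the standard 32-element MX block with one shared 8-bit scale:

```python
# Where 4.25 bits/weight comes from in MXFP4, as I understand the MX format:
# 32 four-bit (E2M1) elements share one 8-bit (E8M0) scale per block.
block_size = 32
element_bits = 4
scale_bits = 8
print(element_bits + scale_bits / block_size)   # 4.25
```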
I don't know how to ask this without being direct and dumb: Where do I get a layman's introduction to LLMs that could work me up to understanding every term and concept you just discussed? Either specific videos, or if nothing else, a reliable Youtube channel?
What I’ve sometimes done when trying to make sense of recent LLM research is give the paper and related documents to ChatGPT, Claude, or Gemini and ask them to explain the specific terms I don’t understand. If I don’t understand their explanations or want to know more, I ask follow-ups. Doing this in voice mode works better for me than text chat does.
When I just want a full summary without necessarily understanding all the details, I have an audio overview made on NotebookLM and listen to the podcast while I’m exercising or cleaning. I did that a few days ago with the recent Anthropic paper on persona vectors, and it worked great.
There is a great 3blue1brown video, but it’s pretty much impossible by now to cover the entire landscape of research. I bet gpt-oss has some great explanations though ;)
Try Microsoft's "Generative AI for Beginners" repo on GitHub. The early chapters in particular give a good grounding of LLM architecture without too many assumptions of background knowledge. The video version of the series is good too.
Ask Gemini. Give it a link here in fact.
Start with the YT series on neural nets and LLMs from 3blue1brown
Try Andrej Karpathy's YouTube videos. I also really liked the Dive into Deep Learning book at d2l.ai
Just posted my initial impressions, took a couple of hours to write them up because there's a lot in this release! https://simonwillison.net/2025/Aug/5/gpt-oss/
TLDR: I think OpenAI may have taken the medal for best available open weight model back from the Chinese AI labs. Will be interesting to see if independent benchmarks resolve in that direction as well.
The 20B model runs on my Mac laptop using less than 15GB of RAM.
I tried to generate a streamlit dashboard with MACD, RSI, MA(200). 1:0 for qwen3 here.
qwen3-coder-30b 4-bit mlx took on the task w/o any hiccups with a fully working dashboard, graphs, and recent data fetched from yfinance.
gpt-oss-20b mxfp4's code had a missing datetime import and, when fixed, delivered a dashboard without any data and with a starting date of Aug 2020. Having adjusted the date, the update methods did not work and displayed error messages.
For now, I wouldn't rank any model from OpenAI in coding benchmarks. Despite all the false messaging they are giving, almost every single model OpenAI has launched, even the expensive high-end o3 models, is absolutely, monumentally horrible at coding tasks. So this is expected.
If it's decent at other tasks, which I do often find OpenAI better than others at, then I think it's a win - especially a win for the open source community that even the AI labs that pioneered the Gen AI hype and never wanted to launch open models are now being forced to launch them. That is definitely a win, and not something that was certain before.
It is absolutely awful at writing and general knowledge. IMO coding is its greatest strength by far.
Sure sounds like they're not good at anything in particular, then.
welcome to 3DTV hype, LLM are useless...
NVIDIA will probably give us nice, coding-focused fine-tunes of these models at some point, and those might compare more favorably against the smaller Qwen3 Coder.
What is the best local coder model that can be used with ollama?
Maybe too open-ended a question? I can run the DeepSeek model locally really nicely.
Is the DeepSeek model you're running a distill, or is it the 671B parameter model?
Probably Qwen3-Coder 30B, unless you have a titanic enough machine to handle a serious 480B model.
The space invaders game seems like a poor benchmark. Both models understood the prompt and generated valid, functional javascript. One just added more fancy graphics. It might just have "use fancy graphics" in its system prompt for all we know.
The way I run these prompts excludes a system prompt - I'm hitting the models directly.
still, if you ask this open model to generate a fancy space invaders game with polish, and then ask the other model to generate a bare-bones space invaders game with the fewest lines of code, I think there's a good chance they'd switch places. This doesn't really test the model's ability to generate a space invaders game, so much as it tests their tendency to make an elaborate vs simple solution.
My main goal with that benchmark is to see if it can produce HTML and JavaScript code that runs without errors for a moderately complex challenge.
It's not a comprehensive benchmark - there are many ways you could run it in ways that would be much more informative and robust.
It's great as a quick single sentence prompt to get a feeling for if the model can produce working JavaScript or not.
My llm agent is currently running an experiment generating many pelicans. It will compare various small-model consortiums against the same model running solo. It should push new pelicans to the repo after each run. Horizon-beta is up already - not small or open source, but I tested it anyway - and you can already see an improvement using 2+1 (2 models + the arbiter) for that model.
https://irthomasthomas.github.io/Pelicans-consortium/ https://github.com/irthomasthomas/Pelicans-consortium
There is no way that gpt-oss-120b can beat the much larger Kimi-K2-Instruct, Qwen3 Coder/Instruct/Thinking, or GLM-4.5. How did you arrive at this rather ridiculous conclusion? The current sentiment in r/LocalLLaMA is that gpt-oss-120b is around Llama-4 Scout level. But it is indeed the best in refusal.
What did you set the context window to? That's been my main issue with models on my macbook, you have to set the context window so short that they are way less useful than the hosted models. Is there something I'm missing there?
With LM Studio you can configure context window freely. Max is 131072 for gpt-oss-20b.
Yes but if I set it above ~16K on my 32gb laptop it just OOMs. Am I doing something wrong?
Try enabling flash attention and offloading all layers to the GPU.
I punted it up to the maximum in LM Studio - seems to use about 16GB of RAM then, but I've not tried a long prompt yet.
> TLDR: I think OpenAI may have taken the medal for best available open weight model back from the Chinese AI labs.
That's just straight up not the case. Not sure how you can jump to that conclusion not least when you stated that you haven't tested tool calling in your post too.
Many people in the community are finding it substantially lobotomized, to the point that there are "safe" memes everywhere now. Maybe you need to develop better tests and pay more attention to benchmaxxing.
There are good things that came out of this release from OpenAI, but we'd appreciate more objective analyses...
If you read my full post, it ends with this:
> I’m waiting for the dust to settle and the independent benchmarks (that are more credible than my ridiculous pelicans) to roll out, but I think it’s likely that OpenAI now offer the best available open weights models.
You told me off for jumping to conclusions and in the same comment quoted me saying "I think OpenAI may have taken" - that's not a conclusion, it's tentative speculation.
I did read that and it doesn't change what I said about your comment on HN, I was calling out the fact that you are making a very bold statement without having done careful analysis.
You know you have a significant audience, so don't act like you don't know what you're doing when you chose to say "TLDR: I think OpenAI may have taken the medal for best available open weight model back from the Chinese AI labs" then defend what I was calling out based on word choices like "conclusions" (I'm sure you have read conclusions in academic journals?), "I think", and "speculation".
I'm going to double down on "I think OpenAI may have taken the medal..." not being a "bold statement".
I try to be careful about my choice of words, even in forum comments.
I’m also very interested to know how well these models handle tool calling as I haven’t been able to make it work after playing with them for a few hours. Looks promising tho.
update: I’ve tried to use LM Studio (like the author) and the tool request kept failing due to a mismatch in the prompt template. I guess they’ll fix it, but it seems sloppy of LM Studio not to have tested this before release.
I was road testing tool calling in LM Studio a week ago against a few models marked with tool support, none worked, so I believe it may be a bug. Had much better luck with llama.cpp’s llama-server.
> The 20B model runs on my Mac laptop using less than 15GB of RAM.
I was about to try the same. What TPS are you getting and on which processor? Thanks!
gpt-oss-20b: 9 threads, 131072 context window, 4 experts - 35-37 tok/s on M2 Max via LM Studio.
interestingly, i am also on M2 Max, and i get ~66 tok/s in LM Studio on M2 Max, with the same 131072. I have full offload to GPU. I also turned on flash attention in advanced settings.
55 token/s here on m4 pro, turning on flash attention puts it to 60/s.
i got 70 token/s on m4 max
That M4 Max is really something else, I get also 70 tokens/second on eval on a RTX 4000 SFF Ada server GPU.
Hasn't nailed the strawberry test yet
I found this surprising because that's such an old test that it must certainly be in the training data. I just tried to reproduce and I've been unable to get it (20B model, lowest "reasoning" budget) to fail that test (with a few different words).
Running a model comparable to o3 on a 24GB Mac Mini is absolutely wild. Seems like yesterday the idea of running frontier (at the time) models locally or on a mobile device was 5+ years out. At this rate, we'll be running such models in the next phone cycle.
It only seems like that if you haven't been following other open source efforts. Models like Qwen perform ridiculously well and do so on very restricted hardware. I'm looking forward to seeing benchmarks to see how these new open source models compare.
Agreed, these models seem relatively mediocre to Qwen3 / GLM 4.5
Nah, these are much smaller models than Qwen3 and GLM 4.5 with similar performance. Fewer parameters and fewer bits per parameter. They are much more impressive and will run on garden variety gaming PCs at more than usable speed. I can't wait to try on my 4090 at home.
There's basically no reason to run other open source models now that these are available, at least for non-multimodal tasks.
Qwen3 has multiple variants ranging from larger (230B) than these models to significantly smaller (0.6b), with a huge number of options in between. For each of those models they also release quantized versions (your "fewer bits per parameter").
I'm still withholding judgement until I see benchmarks, but every point you tried to make regarding model size and parameter size is wrong. Qwen has more variety on every level, and performs extremely well. That's before getting into the MoE variants of the models.
The benchmarks of the OpenAI models are comparable to the largest variants of other open models. The smaller variants of other open models are much worse.
I would wait for neutral benchmarks before making any conclusions.
With all due respect, you need to actually test out Qwen3 2507 or GLM 4.5 before making these sorts of claims. Both of them are comparable to OpenAI's largest models and even bench favorably to Deepseek and Opus: https://cdn-uploads.huggingface.co/production/uploads/62430a...
It's cool to see OpenAI throw their hat in the ring, but you're smoking straight hopium if you think there's "no reason to run other open source models now" in earnest. If OpenAI never released these models, the state-of-the-art would not look significantly different for local LLMs. This is almost a nothingburger if not for the simple novelty of OpenAI releasing an Open AI for once in their life.
> Both of them are comparable to OpenAI's largest models and even bench favorably to Deepseek and Opus
So are/do the new OpenAI models, except they're much smaller.
I'd really wait for additional neutral benchmarks. I asked the 20b model on low reasoning effort which number is larger, 9.9 or 9.11, and it got it wrong.
Qwen-0.6b gets it right.
According to the early benchmarks, it's looking like you're just flat-out wrong: https://blog.brokk.ai/a-first-look-at-gpt-oss-120bs-coding-a...
They have worse scores than recent open source releases on a number of agentic and coding benchmarks, so if absolute quality is what you're after and not just cost/efficiency, you'd probably still be running those models.
Let's not forget, this is a thinking model that has a significantly worse scores on Aider-Polyglot than the non-thinking Qwen3-235B-A22B-Instruct-2507, a worse TAUBench score than the smaller GLM-4.5 Air, and a worse SWE-Bench verified score than the (3x the size) GLM-4.5. So the results, at least in terms of benchmarks, are not really clear-cut.
From a vibes perspective, the non-reasoners Kimi-K2-Instruct and the aforementioned non-thinking Qwen3 235B are much better at frontend design. (Tested privately, but fully expecting DesignArena to back me up in the following weeks.)
OpenAI has delivered something astonishing for the size, for sure. But your claim is just an exaggeration. And OpenAI have, unsurprisingly, highlighted only the benchmarks where they do _really_ well.
From my initial web developer test on https://www.gpt-oss.com/ the 120b is kind of meh. Even qwen3-coder 30b-a3b is better. have to test more.
You can always get your $0 back.
I have never agreed with a comment so much but we are all addicted to open source models now.
Not all of us. I've yet to get much use out of any of the models. This may be a personal failing. But still.
Depends on how much you paid for the hardware to run em on
Yes, but they are suuuuper safe. /s
So far I have mixed impressions, but they do indeed seem noticeably weaker than comparably-sized Qwen3 / GLM4.5 models. Part of the reason may be that the oai models do appear to be much more lobotomized than their Chinese counterparts (which are surprisingly uncensored). There's research showing that "aligning" a model makes it dumber.
The censorship here in China is only about public discussions / spaces. You cannot like have a website telling you about the crimes of the party. But downloading some compressed matrix re-spouting the said crimes, nobody gives a damn.
We seem to censor organized large scale complaints and viral mind virii, but we never quite forbid people at home to read some generated knowledge from an obscure hard to use software.
This might mean there's no moat for anything.
Kind of a P=NP, but for software deliverability.
On the subject of who has a moat and who doesn't, it's interesting to look at the role of patents in the early development of wireless technology. There was WWI, and there was WWII, but the players in the nascent radio industry had serious beef with each other.
I imagine the same conflicts will ramp up over the next few years, especially once the silly money starts to dry up.
Right? I still remember the safety outrage of releasing Llama. Now? My 96 GB of (V)RAM MacBook will be running a 120B parameter frontier lab model. So excited to get my hands on the MLX quants and see how it feels compared to GLM-4.5-air.
I feel like most of the safety concerns ended up being proven correct, but there's so much money in it that they decided to push on anyway full steam ahead.
AI did get used for fake news, propaganda, mass surveillance, erosion of trust and sense of truth, and mass spamming social media.
in that era, OpenAI and Anthropic were still deluding themselves into thinking they would be the "stewards" of generative AI, and the last US administration was very keen on regoolating everything under the sun, so "safety" was just an angle for regulatory capture.
God bless China.
Oh absolutely, AI labs certainly talk their books, including any safety angles. The controversy/outrage extended far beyond those incentivized companies too. Many people had good faith worries about Llama. Open-weight models are now vastly more powerful than Llama-1, yet the sky hasn't fallen. It's just fascinating to me how apocalyptic people are.
I just feel lucky to be around in what's likely the most important decade in human history. Shit odds on that, so I'm basically a lotto winner. Wild times.
About 7% of people who have ever lived are alive today. Still pretty lucky, but not quite winning the lottery.
Much luckier if you consider everyone who ever will live, assuming we don’t destroy ourselves.
"the most important decade in human history."
Lol. To be young and foolish again. This covid laced decade is more of a placeholder. The current decade is always the most meaningful until the next one. The personal computer era, the first cars or planes, ending slavery needs to take a backseat to the best search engine ever. We are at the point where everyone is planning on what they are going to do with their hoverboards.
> ending slavery
happened over many centuries, not in a given decade. Abolished and reintroduced in many places: https://en.wikipedia.org/wiki/Timeline_of_abolition_of_slave...
Slavery is still legal and widespread in most of the US, including California.
There was a ballot measure to actually abolish slavery a year or so back. It failed miserably.
The slavery of free humans is illegal in America, so now the big issue is figuring out how to convince voters that imprisoned criminals deserve rights.
Even in liberal states, the dehumanization of criminals is endemic. Ironically, we're reaching the point where simply having the leeway to discuss the humane treatment of even our worst criminals affects how we see ourselves as a society, before we even have a framework to deal with the issue itself.
What one side wants is for prisons to be for rehabilitation and societal reintegration, for prisoners to have the right to decline to work and to be paid fair wages from their labor. They further want to remove for-profit prisons from the equation completely.
What the other side wants is acknowledgement that prisons are not free, that they exist for punishment, that prisoners have lost some of their rights for the duration of their incarceration, and that they should be required to provide labor to offset the tax burden their incarceration places on the innocent people who pay for it. They would also prefer all prisons to be for-profit, since that would shift the costs of incarceration from taxpayers onto the incarcerated themselves.
Both sides have valid and reasonable wants from their vantage point while overlooking the valid and reasonable wants from the other side.
> slavery of free humans is illegal
That's kind of vacuously true though, isn't it?
I think his point is that slavery is not outlawed by the 13th amendment as most people assume (even the Google AI summary reads: "The 13th Amendment to the United States Constitution, ratified in 1865, officially abolished slavery and involuntary servitude in the United States.").
However, if you actually read it, the 13th amendment makes an explicit allowance for slavery (i.e. expressly allows it):
"Neither slavery nor involuntary servitude, *except as a punishment for crime whereof the party shall have been duly convicted*" (emphasis mine obviously since Markdown didn't exist in 1865)
Prisoners themselves are the ones choosing to work most of the time, and generally none of them are REQUIRED to work (they are required to either take job training or work).
They choose to because extra money = extra commissary snacks and having a job is preferable to being bored out of their minds all day.
That's the part that's frequently not included in the discussion of this whenever it comes up. Prison jobs don't pay minimum wage, but given that prisoners are wards of the state that seems reasonable.
I have heard anecdotes that the choice of doing work is a choice between doing work and being in solitary confinement or becoming the target of the guards who do not take kindly to prisoners who don't volunteer for work assignments.
you can say the same shit about machine learning but ChatGPT was still the Juneteenth of AI
>Many people had good faith worries about Llama.
ah, but that begs the question: did those people develop their worries organically, or did they simply consume the narrative heavily pushed by virtually every mainstream publication?
the journos are heavily incentivized to spread FUD about it. they saw the writing on the wall that the days of making a living by producing clickbait slop were coming to an end and deluded themselves into thinking that if they kvetch enough, the genie will crawl back into the bottle. scaremongering about sci-fi skynet bullshit didn't work, so now they kvetch about joules and milliliters consumed by chatbots, as if data centers did not exist until two years ago.
likewise, the bulk of other "concerned citizens" are creatives who use their influence to sway their followers, still hoping against hope to kvetch this technology out of existence.
honest-to-God yuddites are as few and as retarded as honest-to-God flat earthers.
I've been pretty unlucky to have encountered more than my fair share of IRL Yuddites. Can't stand em.
Yeah, China is e/acc. Nice cheap solar panels too. Thanks, China. The problem is their ominous policies, like allowing almost no immigration, and their domestic Han-supremacist propaganda, which make it look a bit like this might be Han-supremacy e/acc. Is it better than western/decel? Hard to say, but at least the western/decel people are now starting to talk about building power plants, at least for datacenters, and things like that, instead of demanding whole branches of computer science be classified, as they were threatening to Marc Andreessen when he visited the Biden admin last year.
I wish we had voter support for a hydrocarbon tax, though. It would level out the prices and then the AI companies can decide whether they want to pay double to burn pollutants or invest in solar and wind and batteries
Oh poor oppressed marc andreesen. Someone save him!
Okay, I'll be honest: I was so hyped up about this model, but then I went to LocalLLaMA and saw that the:
120B model is worse at coding than Qwen3 Coder, GLM-4.5 Air, and even Grok 3... (https://www.reddit.com/r/LocalLLaMA/comments/1mig58x/gptoss1...)
Qwen3 Coder is 4x its size! Grok 3 is over 22x its size!
What does the resource usage look like for GLM 4.5 Air? Is that benchmark in FP16? GPT-OSS-120B will be using between 1/4 and 1/2 the VRAM that GLM-4.5 Air does, right?
It seems like a good showing to me, even though Qwen3 Coder and GLM 4.5 Air might be preferable for some use cases.
That's SVGBench, which is a useful benchmark but isn't much of a test of general coding
Hm, alright, I'll see how this model actually plays out instead of forming quick opinions.
Thanks.
It's only got around 5 billion active parameters; it'd be a miracle if it was competitive at coding with SOTA models that have significantly more.
On this bench it underperforms vs glm-4.5-air, which is an MoE with fewer total params but more active params.
When people talk about running a (quantized) medium-sized model on a Mac Mini, what types of latency and throughput times are they talking about? Do they mean like 5 tokens per second or at an actually usable speed?
On a M1 MacBook Air with 8GB, I got this running Gemma 3n:
12.63 tok/sec • 860 tokens • 1.52s to first token
I'm amazed it works at all with such limited RAM
I have started a crowdfunding to get you a MacBook air with 16gb. You poor thing.
Up the ante with an M4 chip
not meaningfully different, m1 virtually as fast as m4
https://github.com/devMEremenko/XcodeBenchmark M4 is almost twice as fast as M1
In this table, M4 is also twice as fast as M4.
You're comparing across vanilla/Pro/Max tiers. within equivalent tier, M4 is almost 2x faster than M1
Twice the cost too.
Y not meeee?
After considering my sarcasm for the last 5 minutes, I am doubling down. The government of the United States of America should enhance its higher IQ people by donating AI hardware to them immediately.
This is critical for global competitive economic power.
Send me my hardware US government
higher IQ people <-- well, you have to prove that first, so let me ask you a test question: how can you mix collaboration and competition in society to produce the optimal productivity/conflict ratio?
Here's a 4bit 70B parameter model, https://www.youtube.com/watch?v=5ktS0aG3SMc (deepseek-r1:70b Q4_K_M) on a M4 Max 128 GB. Usable, but not very performant.
here's a quick recording from the 20b model on my 128GB M4 Max MBP: https://asciinema.org/a/AiLDq7qPvgdAR1JuQhvZScMNr
and the 120b: https://asciinema.org/a/B0q8tBl7IcgUorZsphQbbZsMM
I am, um, floored
Generation is usually fast, but prompt processing is the main limitation with local agents. I also have a 128 GB M4 Max. How is the prompt processing on long prompts? Processing the system prompt for Goose always takes quite a while for me. I haven't been able to download the 120B yet, but I'm looking to switch to either that or GLM-4.5-Air for my main driver.
Here's a sample of running the 120b model on Ollama with my MBP:
```
total duration: 1m14.16469975s
load duration: 56.678959ms
prompt eval count: 3921 token(s)
prompt eval duration: 10.791402416s
prompt eval rate: 363.34 tokens/s
eval count: 2479 token(s)
eval duration: 1m3.284597459s
eval rate: 39.17 tokens/s
```
You mentioned "on local agents". I've noticed this too. How do ChatGPT and the others get around this, and provide instant responses on long conversations?
Not getting around it, just benefiting from parallel compute / huge flops of GPUs. Fundamentally, it's just that prefill compute is itself highly parallel and HBM is just that much faster than LPDDR. Effectively H100s and B100s can chew through the prefill in under a second at ~50k token lengths, so the TTFT (Time to First Token) can feel amazingly fast.
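For a rough feel (back-of-the-envelope only; every number below is an assumption, not a measurement), prefill is compute-bound at roughly 2 FLOPs per active parameter per token, so you can sketch the TTFT gap between datacenter and laptop hardware:
```
# Back-of-the-envelope prefill time: flops ~= 2 * active_params * prompt_tokens.
active_params = 5e9        # ~5B active params (gpt-oss-120b-class MoE), assumed
prompt_tokens = 50_000     # long prompt
flops = 2 * active_params * prompt_tokens

h100_flops = 1e15          # ~1 PFLOP/s-class datacenter GPU, very rough
laptop_flops = 3e13        # ~30 TFLOP/s-class laptop GPU / Apple silicon, very rough

print(f"datacenter prefill: ~{flops / h100_flops:.2f} s")   # ~0.5 s
print(f"laptop prefill:     ~{flops / laptop_flops:.0f} s") # ~17 s
```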
it's odd that the result of this processing cannot be cached.
It can be and it is by most good processing frameworks.
the active param count is low so it should be fast.
GLM-4.5-air produces tokens far faster than I can read on my MacBook. That's plenty fast enough for me, but YMMV.
What's the easiest way to get these local models browsing the web right now?
aider uses Playwright. I don't know what everybody is using but that's a good starting point.
We'll be running them on Pis off spare juice in no time, and there'll be billions of them given how chips and embedded systems are spreading…
Open models are going to win long-term. Anthropics' own research has to use OSS models [0]. China is demonstrating how quickly companies can iterate on open models, allowing smaller teams access and augmentation to the abilities of a model without paying the training cost.
My personal prediction is that the US foundational model makers will OSS something close to N-1 for the next 1-3 iterations. The CAPEX for the foundational model creation is too high to justify OSS for the current generation. Unless the US Gov steps up and starts subsidizing power, or Stargate does 10x what it is planned right now.
N-1 model value depreciates insanely fast. Making an OSS release of them and allowing specialized use cases and novel developments allows potential value to be captured and integrated into future model designs. It's medium risk, as you may lose market share. But also high potential value, as the shared discoveries could substantially increase the velocity of next-gen development.
There will be a plethora of small OSS models. Iteration on the OSS releases is going to be biased towards local development, creating more capable and specialized models that work on smaller and smaller devices. In an agentic future, every different agent in a domain may have its own model. Distilled and customized for its use case without significant cost.
Everyone is racing to AGI/SGI. The models along the way are to capture market share and use data for training and evaluations. Once someone hits AGI/SGI, the consumer market is nice to have, but the real value is in novel developments in science, engineering, and every other aspect of the world.
[0] https://www.anthropic.com/research/persona-vectors > We demonstrate these applications on two open-source models, Qwen 2.5-7B-Instruct and Llama-3.1-8B-Instruct.
I'm pretty sure there's no reason that Anthropic has to do research on open models, it's just that they produced their result on open models so that you can reproduce their result on open models without having access to theirs.
> Open models are going to win long-term.
[2 of 3] Assuming we pin down what win means... (which is definitely not easy)... What would it take for this to not be true? There are many ways, including but not limited to:
- publishing open weights helps your competitors catch up
- publishing open weights doesn't improve your own research agenda
- publishing open weights leads to a race dynamic where only the latest and greatest matters; leading to a situation where the resources sunk exceed the gains
- publishing open weights distracts your organization from attaining a sustainable business model / funding stream
- publishing open weights leads to significant negative downstream impacts (there are a variety of uncertain outcomes, such as: deepfakes, security breaches, bioweapon development, unaligned general intelligence, humans losing control [1] [2], and so on)
[1]: "What failure looks like" by Paul Christiano : https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-...
[2]: "An AGI race is a suicide race." - quote from Max Tegmark; article at https://futureoflife.org/statement/agi-manhattan-project-max...
I'm a layman but it seemed to me that the industry is going towards robust foundational models on which we plug tools, databases, and processes to expand their capabilities.
In this setup OSS models could be more than enough and capture the market but I don't see where the value would be to a multitude of specialized models we have to train.
> Once someone hits AGI/SGI
I don't think there will be such a unique event. There is no clear boundary. This is a continuous process. Models get slightly better than before.
Also, another dimension is the inference cost to run those models. It has to be cheap enough to really take advantage of it.
Also, I wonder, what would be a good target to make profit, to develop new things? There is Isomorphic Labs, which seems like a good target. This company already exists now, and people are working on it. What else?
> I don't think there will be such a unique event.
I guess it depends on your definition of AGI, but if it means human level intelligence then the unique event will be the AI having the ability to act on its own without a "prompt".
> the unique event will be the AI having the ability to act on its own without a "prompt"
That's super easy. The reason they need a prompt is that this is the way we make them useful. We don't need LLMs to generate an endless stream of random "thoughts" otherwise, but if you really wanted to, just hook one up to a webcam and microphone stream in a loop and provide it some storage for "memories".
And the ability to improve itself.
> Open models are going to win long-term.
[1 of 3] For the sake of argument here, I'll grant the premise. If this turns out to be true, it glosses over other key questions, including:
For a frontier lab, what is a rational period of time (according to your organizational mission / charter / shareholder motivations*) to wait before:
1. releasing a new version of an open-weight model; and
2. how much secret sauce do you hold back?
* Take your pick. These don't align perfectly with each other, much less the interests of a nation or world.
There's no reason that models too large for consumer hardware wouldn't keep a huge edge, is there?
That is fundamentally a big O question.
I have this theory that we simply got over a hump by utilizing a massive processing boost from gpus as opposed to CPUs. That might have been two to three orders of magnitude more processing power.
But that's a one-time success. I don't think hardware has any large-scale improvements coming, because 3D gaming already plumbed most of that vector-processing hardware development over the last 30 years.
So will software and better training models produce another couple orders of magnitude?
Fundamentally we're talking about nines of accuracy. What is the processing power required for each additional nine of accuracy? Is it linear? Is it polynomial? Is it exponential?
It just seems strange to me that, with all the AI knowledge sloshing through academia, I haven't seen any basic analysis at that level, which is something that's absolutely going to be necessary for AI applications like self-driving once the insurance companies get involved.
To me it depends on two factors: whether hardware becomes more accessible, and whether the closed-source offerings become more expensive. Right now it's difficult to get enough GPUs to do local inference at production scale, and it's more expensive to run your own GPUs than to use the closed-source models.
> Open models are going to win long-term.
[3 of 3] What would it take for this statement to be false or missing the point?
Maybe we find ourselves in a future where:
- Yes, open models are widely used as base models, but they are also highly customized in various ways (perhaps by industry, person, attitude, or something else). In other words, this would be a blend of open and closed.
- Maybe publishing open weights of a model is more-or-less irrelevant, because it is "table stakes" ... because all the key differentiating advantages have to do with other factors, such as infrastructure, non-LLM computational aspects, regulatory environment, affordable energy, customer base, customer trust, and probably more.
- The future might involve thousands or millions of highly tailored models
> N-1 model value depreciates insanely fast
This implies LLM development isn't plateauing. Sure, the researchers are busting their asses quantizing, adding features like tool calls and structured outputs, etc. But soon enough N-1 ≈ N.
Inference in Python uses harmony [1] (for the request and response format), which is written in Rust with Python bindings. Another of OpenAI's Rust libraries is tiktoken [2], used for all tokenization and detokenization. OpenAI Codex [3] is also written in Rust. It looks like OpenAI is increasingly adopting Rust (at least for inference).
[1] https://github.com/openai/harmony
[2] https://github.com/openai/tiktoken
[3] https://github.com/openai/codex
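For what it's worth, the Rust core is mostly invisible from Python; here's a minimal tiktoken sketch (the o200k_base encoding is used purely for illustration; the gpt-oss models use a harmony variant of it):
```
# pip install tiktoken -- thin Python bindings over the Rust tokenizer core
import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # illustrative encoding choice
tokens = enc.encode("Hello from the Rust-backed tokenizer!")
print(tokens)              # list of token ids
print(enc.decode(tokens))  # round-trips back to the original string
```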
As an engineer that primarily uses Rust, this is a good omen.
The less Python in the stack, the better!
So this confirms a best-in-class model release within the next few days?
From a strategic perspective, I can't think of any reason they'd release this unless they were about to announce something which totally eclipses it?
Even without an imminent release it's a good strategy. They're getting pressure from Qwen and other high-performing open-weight models. Without a horse in the race they could fall behind in an entire segment.
There's future opportunity in licensing, tech support, agents, or even simply to dominate and eliminate. Not to mention brand awareness: if you like these, you might be more likely to approach their brand for larger models.
Thursday
https://manifold.markets/Bayesian/on-what-day-will-gpt5-be-r...
GPT-5 coming Thursday.
Are these the stealth models Horizon Alpha and Beta? I was generally impressed with them (although I really only used them in chats rather than any code tasks). In terms of chat, I increasingly see very little difference between the current SOTA closed models and their open-weight counterparts.
How much hype do we anticipate with the release of GPT-5, or whatever it ends up being called? And how many new features?
Excited to have to send them a copy of my drivers license to try and use it. That’ll take the hype down a notch.
Imagine if it's called GPT-4.5o
Undoubtedly. It would otherwise reduce the perceived value of their current product offering.
The question is how much better the new model(s) will need to be on the metrics given here to feel comfortable making these available.
Despite the loss of face for the lack of open model releases, I do not think that was a big enough problem to undercut commercial offerings.
Even before today, it's been clear for the last week or so, for a couple of reasons, that GPT-5's release was imminent.
> I can't think of any reason they'd release this unless they were about to announce something which totally eclipses it
Given it's only around 5 billion active params it shouldn't be a competitor to o3 or any of the other SOTA models, given the top Deepseek and Qwen models have around 30 billion active params. Unless OpenAI somehow found a way to make a model with 5 billion active params perform as well as one with 4-8 times more.
Orthogonal, but I just wanted to say how awesome Ollama is. It took 2 seconds to find the model and a minute to download and now I'm using it.
Kudos to that team.
To be fair, it's with the help of OpenAI. They did it together, before the official release.
https://ollama.com/blog/gpt-oss
From experience, it's much more engineering work on the integrator's side than on OpenAI's. Basically they provide you their new model in advance, but they don't know the specifics of your system, so it's normal that you do most of the work. Thus I'm particularly impressed by Cerebras: they only support a few models for their extreme-performance inference, so it must have been a huge bespoke effort to integrate.
I remember reading Ollama is going closed source now?
https://www.reddit.com/r/LocalLLaMA/comments/1meeyee/ollamas...
It's just as easy with LM Studio.
All the real heavy lifting is done by llama.cpp, and for the distribution, by HuggingFace.
Seeing a 20B model competing with o3's performance is mind-blowing. Just a year ago, most of us would've called this impossible - not just the intelligence leap, but getting this level of capability in such a compact size.
I think that the point that makes me more excited is that we can train trillion-parameter giants and distill them down to just billions without losing the magic. Imagine coding with Claude 4 Opus-level intelligence packed into a 10B model running locally at 2000 tokens/sec - like instant AI collaboration. That would fundamentally change how we develop software.
10B params × 2000 t/s = 20,000 GB/s of memory bandwidth (assuming ~1 byte per parameter). Apple hardware can do ~1k GB/s.
That’s why MoE is needed.
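A quick sketch of why (the bandwidth and byte counts below are rough assumptions):
```
# Decode speed is roughly capped at memory_bandwidth / bytes_read_per_token.
bandwidth = 1_000e9            # ~1 TB/s, high-end unified-memory machine (rough)

dense_10b_8bit = 10e9 * 1.0    # dense 10B model, ~1 byte per param
moe_3p6b_4bit = 3.6e9 * 0.5    # MoE with ~3.6B active params, ~4 bits per param

print(f"dense 10B @ 8-bit: ~{bandwidth / dense_10b_8bit:.0f} tok/s ceiling")  # ~100
print(f"MoE 3.6B @ 4-bit : ~{bandwidth / moe_3p6b_4bit:.0f} tok/s ceiling")   # ~556
# Ignores KV-cache reads and compute, so real throughput lands below these ceilings.
```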
It's not even a 20b model. It's 20b MoE with 3.6b active params.
But it does not actually compete with o3 performance. Not even close. As usual, the metrics are bullshit. You don't know how good the model actually is until you grill it yourself.
Looks like Groq (at 1k+ tokens/second) and Fireworks are already live on openrouter: https://openrouter.ai/openai/gpt-oss-120b
$0.15/M in / $0.60-0.75/M out
edit: Now Cerebras too at 3,815 tps for $0.25/M in / $0.69/M out.
Wow this was actually blazing fast. I prompted "how can the 45th and 47th presidents of america share the same parents?"
On ChatGPT.com, o3 thought for 13 seconds; on OpenRouter, GPT-OSS-120B thought for 0.7 seconds - and they both had the correct answer.
I'm not sure that's a particularly good question for concluding something positive about the "thought for 0.7 seconds" - it's such a simple answer, ChatGPT 4o (with no thinking time) immediately answered correctly. The only surprising thing in your test is that o3 wasted 13 seconds thinking about it.
A current major outstanding problem with thinking models is how to get them to think an appropriate amount.
The providers disagree. You pay per token. Verbose models are the most profitable. Have fun!
For API users, yes, but for the average person with a subscription or using the free tier it’s the inverse.
Nowadays it must be pretty large % of usage going through monthly subscriptions
Not gonna lie but I got sorta goosebumps
I am not kidding but such progress from a technological point of view is just fascinating!
Interesting choice of prompt. None of the local models I have in ollama (consumer mid range gpu) were able to get it right.
How many people are discussing this after one person did 1 prompt with 1 data point for each model and wrote a comment?
What is being measured here? For end-to-end time, one model is:
t_total = t_network + t_queue + t_batch_wait + t_inference + t_service_overhead
When I pay attention to o3 CoT, I notice it spends a few passes thinking about my system prompt. Hard to imagine this question is hard enough to spend 13 seconds on.
I apologize for linking to Twitter, but I can't post a video here, so:
https://x.com/tekacs/status/1952788922666205615
Asking it about a marginally more complex tech topic and getting an excellent answer in ~4 seconds, reasoning for 1.1 seconds...
I am _very_ curious to see what GPT-5 turns out to be, because unless they're running on custom silicon / accelerators, even if it's very smart, it seems hard to justify not using these open models on Groq/Cerebras for a _huge_ fraction of use-cases.
Cleanshot link for those who don't want to go to X: https://share.cleanshot.com/bkHqvXvT
A few days ago I posted a slowed-down version of the video demo on someone's repo because it was unreadably fast due to being sped up.
https://news.ycombinator.com/item?id=44738004
... today, this is a real-time video of the OSS thinking models by OpenAI on Groq and I'd have to slow it down to be able to read it. Wild.
Non-rhetorically, why would someone pay for o3 api now that I can get this open model from openai served for cheaper? Interesting dynamic... will they drop o3 pricing next week (which is 10-20x the cost[1])?
[1] currently $3/M in / $8/M out https://platform.openai.com/docs/pricing
Not even that, even if o3 being marginally better is important for your task (let's say) why would anyone use o4-mini? It seems almost 10x the price and same performance (maybe even less): https://openrouter.ai/openai/o4-mini
Probably because they are going to announce gpt 5 imminently
It is interesting that openai isn't offering any inference for these models.
Makes sense to me. Inference on these models will be a race to the bottom. Hosting inference themselves will be a waste of compute / dollar for them.
Wow, that's significantly cheaper than o4-mini, which seems to be on par with gpt-oss-120b ($1.10/M input tokens, $4.40/M output tokens). Almost 10x the price.
LLMs are getting cheaper much faster than I anticipated. I'm curious if it's still the hype cycle and Groq/Fireworks/Cerebras are taking a loss here, or whether things are actually getting cheaper. At this rate we'll be able to run Qwen3-32B-level models on phones/embedded devices soon.
It's funny because I was thinking the opposite: the pricing seems way too high for a model with 5B active parameters.
Sure, you're right, but if I can squeeze o4-mini-level utility out of it at less than a quarter of the price, does it really matter?
Yes
Are the prices staying aligned to the fundamentals (hardware, energy), or is this a VC-funded land grab pushing prices to the bottom?
I really want to try coding with this at 2600 tokens/s (from Cerebras). Imagine generating thousands of lines of code as fast as you can prompt. If it doesn't work who cares, generate another thousand and try again! And at $.69/M tokens it would only cost $6.50 an hour.
I tried this (gpt-oss-120b with Cerebras) with Roo Code. It repeatedly failed to use the tools correctly, and then I got 429 too many requests. So much for the "as fast as I can think" idea!
I'll have to try again later but it was a bit underwhelming.
The latency also seemed pretty high, not sure why. I think with that latency the throughput ends up not making much difference.
Btw Groq has the 20b model at 4000 TPS but I haven't tried that one.
Disclaimer: probably dumb questions
so, the 20b model.
Can someone explain to me what I would need to do in terms of resources (GPU, I assume) if I want to run 20 concurrent processes, assuming I need 1k tokens/second throughput (on each, so 20 x 1k)
Also, is this model better/comparable for information extraction compared to gpt-4.1-nano, and would it be cheaper to host myself 20b?
An A100 is probably 2-4k tokens/second on a 20B model with batched inference.
Multiply the number of A100's you need as necessary.
Here, you don't really need the ram. If you could accept fewer tokens/second, you could do it much cheaper with consumer graphics cards.
Even with A100, the sweet-spot in batching is not going to give you 1k/process/second. Of course, you could go up to H100...
You can batch only if you have distinct chats in parallel.
> > if I want to run 20 concurrent processes, assuming I need 1k tokens/second throughput (on each)
(answer for 1 inference) It all depends on the context length you want to support, as the activation memory will dominate the requirements. For 4096 tokens you'll get away with 24GB (or even 16GB), but if you want the full 131072 tokens you're not going to get there with a 32GB consumer GPU like the 5090. You'll need to spring for at minimum an A6000 (48GB) or preferably an RTX 6000 Pro (96GB).
Also keep in mind this model does use 4-bit layers for the MoE parts. Unfortunately, native accelerated 4-bit support only started with Blackwell on NVIDIA, so your 3090/4090/A6000/A100s are not going to be fast. An RTX 5090 will be your best starting point in the traditional card space. Maybe unified-memory mini-PCs like the Spark systems or the Mac mini could be an alternative, but I don't know them well enough.
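If it helps with sizing, here's a rough KV-cache sketch; the layer/head numbers are placeholders, not the actual gpt-oss config (which also uses tricks like sliding-window attention that shrink this further):
```
# KV cache bytes ~= 2 (K and V) * layers * kv_heads * head_dim * context_len * bytes_per_elem
def kv_cache_gb(layers, kv_heads, head_dim, context_len, bytes_per_elem=2):
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem / 1e9

# Placeholder hyperparameters for a ~20B-class model; swap in the real config values.
print(kv_cache_gb(24, 8, 64, 4_096))    # ~0.2 GB per stream
print(kv_cache_gb(24, 8, 64, 131_072))  # ~6.4 GB per stream; multiply by concurrent streams
```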
How Macs compare to RTXs for this? I.e. what numbers can be expected from Mac mini/Mac Studio with 64/128/256/512GB of unified memory?
gpt-oss:20b is ~14GB on disk [1] so fits nicely within a 16GB VRAM card.
[1] https://ollama.com/library/gpt-oss
You also need space in VRAM for what is required to support the context window; you might be able to do a model that is 14GB in parameters with a small (~8k maybe?) context window on a 16GB card.
thanks, this part is clear to me.
but I need to understand 20 x 1k token throughput
I assume it just might be too early to know the answer
I legitimately cannot think of any hardware that will get you to that throughput over that many streams with any of the hardware I know of (I don't work in the server space so there may be some new stuff I am unaware of).
oh, I totally understand that I'd need multiple GPUs. I'd just want to know what GPU specifically and how many
I don't think you can get 1k tokens/sec on a single stream using any consumer grade GPUs with a 20b model. Maybe you could with H100 or better, but I somewhat doubt that.
My 2x 3090 setup will get me ~6-10 streams of ~20-40 tokens/sec (generation) ~700-1000 tokens/sec (input) with a 32b dense model.
> assuming I need 1k tokens/second throughput (on each, so 20 x 1k)
3.6B activated at Q8 x 1000 t/s = 3.6TB/s just for activated model weights (there's also context). So pretty much straight to B200 and the like. 1000 t/s per user/agent is way too fast; make it 300 t/s and you could get away with a 5090/RTX PRO 6000.
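One thing worth adding, sketched below with the same rough assumptions: batched decode shares a single weight read per step across all streams, so weight traffic scales with steps per second rather than streams × tokens per second; the per-stream KV-cache reads are what still grow with concurrency.
```
# Rough decode weight-bandwidth sketch for the 20-stream question (all figures assumed).
streams = 20
tok_per_s_per_stream = 1_000
active_bytes = 3.6e9 * 1.0   # ~3.6B active params at Q8 (~1 byte/param), as above

unbatched = streams * tok_per_s_per_stream * active_bytes  # each stream re-reads the weights
batched = tok_per_s_per_stream * active_bytes              # one weight read per decode step

print(f"unbatched weight traffic: ~{unbatched / 1e12:.0f} TB/s")  # ~72 TB/s
print(f"batched weight traffic  : ~{batched / 1e12:.1f} TB/s")    # ~3.6 TB/s
# KV-cache reads (per stream, per token) come on top and dominate at long contexts.
```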
Groq is offering 1k tokens per second for the 20B model.
You are unlikely to match groq on off the shelf hardware as far as I'm aware.
https://apxml.com/tools/vram-calculator
I was able to get gpt-oss:20b wired up to claude code locally via a thin proxy and ollama.
It's fun that it works, but the prefill time makes it feel unusable. (2-3 minutes per tool-use / completion). Means a ~10-20 tool-use interaction could take 30-60 minutes.
(This was editing a single server.py file that was ~1000 lines; the tool definitions + Claude context were around 30k tokens of input, and after the file read, input was around ~50k tokens. Definitely could be optimized. Also I'm not sure if ollama supports a kv-cache between invocations of /v1/completions, which could help.)
> Also I'm not sure if ollama supports a kv-cache between invocations of /v1/completions, which could help)
Not sure about ollama, but llama-server does have a transparent kv cache.
You can run it with something like this (exact flags and model path may vary by llama.cpp version):
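```
# model path is illustrative -- point -m at your local gpt-oss GGUF
llama-server -m ./gpt-oss-20b-mxfp4.gguf --port 8080 -c 16384
```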
Web UI at http://localhost:8080 (also an OpenAI-compatible API).
Super excited to see these released!
Major points of interest for me:
- In the "Main capabilities evaluations" section, the 120b outperforms o3-mini and approaches o4-mini on most evals. The 20b model is also decent, passing o3-mini on one of the tasks.
- AIME 2025 is nearly saturated with large CoT
- CBRN threat levels kind of on par with other SOTA open source models. Plus, demonstrated good refusals even after adversarial fine tuning.
- Interesting to me how a lot of the safety benchmarking runs on trust, since methodology can't be published too openly due to counterparty risk.
Model cards with some of my annotations: https://openpaper.ai/paper/share/7137e6a8-b6ff-4293-a3ce-68b...
Thanks, OpenAI, for being open ;) Surprised there are no official MLX versions and only one mention of MLX in this thread. MLX basically converts the models to take advantage of Mac unified memory for a 2-5x performance increase, enabling Macs to run what would otherwise take expensive GPUs (within limits).
So FYI to anyone on a Mac, the easiest way to run these models right now is LM Studio (https://lmstudio.ai/); it's free. You just search for the model; usually third-party groups like mlx-community or lmstudio-community have MLX versions within a day or two of release. I go for the 8-bit quantizations (4-bit is faster, but quality drops). You can also convert to MLX yourself...
Once you have it running in LM Studio, you can chat there in its chat interface, or you can hit it through an API that defaults to http://127.0.0.1:1234.
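If it's useful, here's a minimal sketch of hitting that local endpoint with the standard OpenAI Python client; the model name should be whatever identifier LM Studio shows for your loaded model, and the API key is ignored locally:
```
# pip install openai -- LM Studio exposes an OpenAI-compatible server on port 1234
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:1234/v1", api_key="lm-studio")
resp = client.chat.completions.create(
    model="gpt-oss-20b",  # use the identifier shown in LM Studio for your loaded model
    messages=[{"role": "user", "content": "Summarize MXFP4 in two sentences."}],
)
print(resp.choices[0].message.content)
```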
You can run multiple models that hot swap and load instantly and switch between them etc.
It's surprisingly easy, and fun. There are actually a lot of cool niche models coming out, like this tiny high-quality search model released today (which also shipped an official MLX version): https://huggingface.co/Intelligent-Internet/II-Search-4B
Other fun ones are Gemma 3n, which is multi-modal; a larger one that is actually a solid model but takes more memory, the new Qwen3 30B-A3B (Coder and Instruct); Pixtral (Mixtral vision with full-resolution images); etc. Looking forward to playing with this model and seeing how it compares.
Regarding MLX:
In the repo there's a Metal port they made, so that's at least something… I guess they didn't want to cooperate with Apple before the launch, but I'm sure it will be there tomorrow.
Wow, I really didn't think this would happen any time soon; they seem to have more to lose than to gain.
If you’re a company building AI into your product right now I think you would be irresponsible to not investigate how much you can do on open weights models. The big AI labs are going to pull the ladder up eventually, building your business on the APIs long term is foolish. These open models will always be there for you to run though (if you can get GPUs anyway).
They must be really confident in GPT-5 then.
Listed performance of ~5 points less than o3 on benchmarks is pretty impressive.
Wonder if they feel the bar will be raised soon (GPT-5) and feel more comfortable releasing something this strong.
The 120B model badly hallucinates facts on the level of a 0.6B model.
My go to test for checking hallucinations is 'Tell me about Mercantour park' (a national park in south eastern France).
Easily half of the facts are invented. Non-existing mountain summits, brown bears (no, there are none), villages that are elsewhere, wrong advice ('dogs allowed' - no they are not).
This is precisely the wrong way to think about LLMs.
LLMs are never going to have fact retrieval as a strength. Transformer models don't store their training data: they are categorically incapable of telling you where a fact comes from. They also cannot escape the laws of information theory: storing information requires bits. Storing all the world's obscure information requires quite a lot of bits.
What we want out of LLMs is large context, strong reasoning and linguistic facility. Couple these with tool use and data retrieval, and you can start to build useful systems.
From this point of view, the more of a model's total weight footprint is dedicated to "fact storage", the less desirable it is.
I think that sounds very reasonable, but unfortunately these models don’t know what they know and don’t. A small model that knew the exact limits of its knowledge would be very powerful.
Hallucinations have characteristics in interpretability studies. That's a foothold into reducing them.
They still won't store much information, but it could mean they're better able to know what they don't know.
How can you reason correctly if you don't have any way to know which facts are real vs hallucinated?
I don’t think they trained it for fact retrieval.
Would probably do a lot better if you give it tool access for search and web browsing.
What is the point of an offline reasoning model that also doesn't know anything and makes up facts? Why would anyone prefer this to a frontier model?
Data processing? Reasoning on supplied data?
Others have already said it, but it needs to be said again: Good god, stop treating LLMs like oracles.
LLMs are not encyclopedias.
Give an LLM the context you want to explore, and it will do a fantastic job of telling you all about it. Give an LLM access to web search, and it will find things for you and tell you what you want to know. Ask it "what's happening in my town this week?", and it will answer that with the tools it is given. Not out of its oracle mind, but out of web search + natural language processing.
Stop expecting LLMs to -know- things. Treating LLMs like all-knowing oracles is exactly the thing that's setting apart those who are finding huge productivity gains with them from those who can't get anything productive out of them.
I love how with this cutting edge tech people still dress up and pretend to be experts. Pleasure to meet you, pocketarc - Senior AI Gamechanger, 2024-2025 (Current)
I am getting huge productivity gains from using models, and I mostly use them as "oracles" (though I am extremely careful with respect to how I have to handle hallucination, of course): I'd even say their true power--just like a human--comes from having an ungodly amount of knowledge, not merely intelligence. If I just wanted something intelligent, I already had humans!... but merely intelligent humans, even when given months of time to screw around doing Google searches, fail to make the insights that someone--whether they are a human or a model--that actually knows stuff can throw around like it is nothing. I am actually able to use ChatGPT 4.5 as not just an employee, not even just as a coworker, but at times as a mentor or senior advisor: I can tell it what I am trying to do, and it helps me by applying advanced mathematical insights or suggesting things I could use. Using an LLM as a glorified Google-it-for-me monkey seems like such a waste of potential.
> I am actually able to use ChatGPT 4.5 as not just an employee, not even just as a coworker, but at times as a mentor or senior advisor: I can tell it what I am trying to do, and it helps me by applying advanced mathematical insights or suggesting things I could use.
You can still do that sort of thing, but just have it perform searches whenever it has to deal with a matter of fact. Just because it's trained for tool use and equipped with search tools doesn't mean you have to change the kinds of things you ask it.
If you strip all the facts from a mathematician you get me... I don't need another me: I already used Google, and I already failed to find what I need. What I actually need is someone who can realize that my problem is a restatement of an existing known problem, just using words and terms or a occluded structure that don't look anything like how it was originally formulated. You very often simply can't figure that out using Google, no matter how long you sit in a tight loop trying related Google searches; but, it is the kind of thing that an LLM (or a human) excels at (as you can consider "restatement" a form of "translation" between languages), if and only if they have already seen that kind of problem. The same thing comes up with novel application of obscure technology, complex economics, or even interpretation of human history... there is a reason why people who study Classics "waste" a ton of time reading old stories rather than merely knowing the library is around the corner. What makes these AIs so amazing is thinking of them as entirely replacing Google with something closer to a god, not merely trying to wrap it with a mechanical employee whose time is ostensibly less valuable than mine.
> What makes these AIs so amazing is thinking of them as entirely replacing Google with something closer to a god
I guess that way of thinking may foster amazement, but it doesn't seem very grounded in how these things work or their current capabilities. Seems a bit manic tbf.
And again, enabling web search in your chats doesn't prevent these models from doing anything "integrative reasoning", so-to-speak, that they can purportedly do. It just helps ensure that relevant facts are in context for the model.
The problem is that even when you give them context, they just hallucinate at another level. I have tried that example of asking about events in my area, they are absolutely awful at it.
To be coherent and useful in general-purpose scenarios, an LLM absolutely has to be large enough and know a lot, even if you aren't using it as an oracle.
It's fine to expect it to not know things, but the complaint is that it makes zero indication that it's just making up nonsense, which is the biggest issue with LLMs. They do the same thing when creating code.
Exactly this. And that is why I like this question because the amount of correct details and the amount of nonsense give a good idea about the quality of the model.
Holy smokes, there's already llama.cpp support:
https://github.com/ggml-org/llama.cpp/pull/15091
And it's already on ollama, it appears: https://ollama.com/library/gpt-oss
LM Studio immediately released a new AppImage with support.
GPQA Diamond: gpt-oss-120b: 80.1%, Qwen3-235B-A22B-Thinking-2507: 81.1%
Humanity’s Last Exam: gpt-oss-120b (tools): 19.0%, gpt-oss-120b (no tools): 14.9%, Qwen3-235B-A22B-Thinking-2507: 18.2%
Wow - I will give it a try then. I'm cynical about OpenAI minmaxing benchmarks, but still trying to be optimistic as this in 8bit is such a nice fit for apple silicon
Even better, it's 4 bit
Glm 4.5 seems on par as well
GLM-4.5 seems to outperform it on TauBench, too. And it's suspicious OAI is not sharing numbers for quite a few useful benchmarks (nothing related to coding, for example).
One positive thing I see is the number of parameters and size --- it will provide more economical inference than current open source SOTA.
Was the Qwen model using tools for Humanity's Last Exam?
Coding seems to be one of the strongest use cases for LLMs, though currently they eat too many tokens to be profitable. So perhaps these local models could offload some tasks to local computers.
E.g. Hybrid architecture. Local model gathers more data, runs tests, does simple fixes, but frequently asks the stronger model to do the real job.
Local model gathers data using tools and sends more data to the stronger model.
Anyone know how the context window compares when running the model locally vs via the OpenAI API or Cursor? My understanding is that the models running in the cloud have a much greater context window than what we can get running locally.
I've always thought that if we can somehow get an AI that is insanely good at coding, so much so that it can improve itself, then through continuous improvement we'd get better models of everything else.
Maybe you guys call it AGI, so anytime I see progress in coding, I think it goes just a tiny bit towards the right direction
Plus it also helps me as a coder to actually do some stuff just for the fun. Maybe coding is the only truly viable use of AI and all others are negligible increases.
There is so much polarization around the use of AI for coding, but I just want to say this: it would be pretty ironic if the industry that automates other people's jobs were this time the first to get its own job automated.
But I don't see that happening, far from it. Still, each day something new, something better, happens back to back. So yeah.
Not to open that can of worms, but in most definitions self-improvement is not an AGI requirement. That's already ASI territory (Super Intelligence). That's the proverbial skynet (pessimists) or singularity (optimists).
Hmm, my bad. Yeah, I always thought that was the endgame of humanity, but isn't AGI supposed to be that (the endgame)?
What would AGI mean, solving some problem that it hasn't seen? or what exactly? I mean I think AGI is solved, no?
If not, I see people mentioning that Horizon Alpha is actually a GPT-5 model, and it's predicted on some betting market to release on Thursday, so maybe that fits the AGI definition?
Optimistically, there's always more crap to get done.
I agree. It’s not improbable for there to be _more_ needs to meet in the future, in my opinion.
Open weight models from OpenAI with performance comparable to that of o3 and o4-mini in benchmarks… well, I certainly wasn’t expecting that.
What’s the catch?
Because GPT-5 comes out later this week?
It could be, but there’s so much hype surrounding the GPT-5 release that I’m not sure whether their internal models will live up to it.
For GPT-5 to dwarf these just-released models in importance, it would have to be a huge step forward, and I still have doubts about OpenAI's capability and infrastructure to handle demand at the moment.
As a sidebar, I’m still not sure if GPT-5 will be transformative due to its capabilities as much as its accessibility. All it really needs to do to be highly impactful is lower the barrier of entry for the more powerful models. I could see that contributing to it being worth the hype. Surely it will be better, but if more people are capable of leveraging it, that’s just as revolutionary, if not more.
It seems like a big part of GPT-5 will be that it will be able to intelligently route your request to the appropriate model variant.
That doesn’t sound good. It sounds like OpenAI will route my request to the cheapest model to them and the most expensive for me, with the minimum viable results.
Sounds just like what a human would do. Or any business for that matter.
That may be true but I thought the promise was moving in the direction of AGI/ASI/whatever and that models would become more capable over time.
Surely OpenAI would not be releasing this now unless GPT-5 was much better than it.
The catch is that performance is not actually comparable to o4-mini, never mind o3.
When it comes to LLMs, benchmarks are bullshit. If they sound too good to be true, it's because they are. The only thing benchmarks are useful for is preliminary screening - if the model does especially badly in them it's probably not good in general. But if it does good in them, that doesn't really tell you anything.
It's definitely interesting how the comments from right after the models were released were ecstatic about "SOTA performance" and how it is "equivalent to o3," while comments like yours, hours later, after actually testing it, keep pointing out that it's garbage compared to even the current batch of open models, let alone proprietary foundation models.
Yet another data point for benchmarks being utterly useless and completely gamed at this stage in the game by all the major AI developers.
These companies are clearly all very aware that the initial wave of hype at release is "sticky" and drives buzz/tech-news coverage, while real-world tests take much longer before that impression slowly starts to be undermined by practical usage and comparison to other models. Benchmarks with wildly overconfident names like "Humanity's Last Exam" aren't exactly helping with objectivity either.
> What’s the catch?
Probably GPT-5 will be way, way better. If Horizon Alpha/Beta are early previews of GPT-5-family models, then coding should be > Opus 4 for modern frontend stuff.
The catch is that it only has ~5 billion active params so should perform worse than the top Deepseek and Qwen models, which have around 20-30 billion, unless OpenAI pulled off a miracle.
Getting great performance running gpt-oss on 3x A4000s: more than 2x faster than my previous leading OSS models. Strangely, I'm getting nearly the opposite performance running on 1x 5070 Ti, where gpt-oss is nearly 2x slower than mistral-small 3.2.
Seeing ~70 tok/s on a 7900 XTX using Ollama.
I'm getting around 90 tok/s on a 3090 using Ollama.
Pretty impressive
Super shallow (24/36 layers) MoE with low active parameter counts (3.6B/5.1B), a tradeoff between inference speed and performance.
Text only, which is okay.
Weights partially in MXFP4, but no cuda kernel support for RTX 50 series (sm120). Why? This is a NO for me.
Safety alignment shifts from off the charts to off the rails really fast if you keep prompting. This is a NO for me.
In summary, a solid NO for me.
Reading the comments it becomes clear how befuddled many HN participants are about AI. I don't think there has been a technical topic that HN has seemed so dull on in the many years I've been reading HN. This must be an indication that we are in a bubble.
One basic point that is often missed is: Different aspects of LLM performance (in the cognitive performance sense) and LLM resource utilization are relevant to various use cases and business models.
Another is that there are many use cases where users prefer to run inference locally, for a variety of domain-specific or business model reasons.
The list goes on.
I just tried it on OpenRouter but I was served by Cerebras. Holy... 40,000 tokens per second. That was SURREAL.
I got a 1.7k token reply delivered too fast for the human eye to perceive the streaming.
n=1 for this 120b model, but I'd rank the reply #1, just ahead of Claude Sonnet 4, for a boring JIRA-ticket-shuffling type of challenge.
EDIT: The same prompt on gpt-oss, despite being served 1000x slower, wasn't as good but was in a similar vein. It wanted to clarify more and as a result only half responded.
I benchmarked the 120B version on the Extended NYT Connections (759 questions, https://github.com/lechmazur/nyt-connections) and on 120B and 20B on Thematic Generalization (810 questions, https://github.com/lechmazur/generalization). Opus 4.1 benchmarks are also there.
I'm out of the loop for local models. For my M3 24gb ram macbook, what token throughput can I expect?
Edit: I tried it out; I have no idea in terms of tokens, but it was fluid enough for me. A bit slower than using o3 in the browser but definitely tolerable. I think I'll set it up on my GF's machine so she can stop paying for the full subscription (she's a non-tech professional).
Apple M4 Pro w/ 48GB running the smaller version. I'm getting 43.7t/s
3 year old M1 MacBook Pro 32gb, 42 tokens/sec on lm studio
Very much usable
Wondering about the same for my M4 max 128 gb
It should fly on your machine
Yeah, was super quick and easy to set up using Ollama. I had to kill some processes first to avoid memory swap though (even with 128gb memory). So a slightly more quantized version is maybe ideal, for me at least.
Curious if anyone is running this on a AMD Ryzen AI Max+ 395 and knows the t/s.
40 t/s
Wow, today is a crazy AI release day:
- OAI open source
- Opus 4.1
- Genie 3
- ElevenLabs Music
wow I just listened to Eleven Music do flamenco singing. That is incredible.
Edit. I just tried it though and less impressed now. We are really going to need major music software to get on board before we have actual creative audio tools. These all seem made for non-musicians to make a very cookie cutter song from a specific genre.
They announced it months ago…
I love how they frame High-end desktops and laptops as having "a single H100 GPU".
I read that as it runs in data centers (H100 GPUs) or high-end desktops/laptops (Strix Halo?).
I'm running it with ROG Flow Z13 128GB Strix Halo and getting 50 tok/s for 20B model and 12 tok/s for 120B model. I'd say it's pretty usable.
Well, if NVIDIA weren't late, it would be runnable on NVIDIA's Project DIGITS.
Don’t forget about mac studio
I actually tried asking the model about that, then I asked ChatGPT; both times they just said it was marketing speak.
I was like no. It is false advertising.
It seems like OSS will win; I can't see people being willing to pay 10x the price for what seems like 10% more performance. Especially once we get better at routing the hardest questions to the better models and then using those responses to augment/fine-tune the OSS ones.
To me it seems like the market is breaking into an 80/20 split of B2C/B2B: the B2C use case goes to OSS models (the market shifts to devices that can support them), and the B2B market is priced appropriately for businesses that require that last 20% of absolutely cutting-edge performance as a cloud offering.
> To improve the safety of the model, we filtered the data for harmful content in pre-training, especially around hazardous biosecurity knowledge, by reusing the CBRN pre-training filters from GPT-4o. Our model has a knowledge cutoff of June 2024.
This would be a great "AGI" test. See if it can derive biohazards from first principles
Not possible without running real-life experiments, unless they still memorized it somehow.
What a day! Models aside, the Harmony Response Format[1] also seems pretty interesting and I wonder how much of an impact it might have in performance of these models.
[1] https://github.com/openai/harmony
Seems to be breaking every agentic tool I've tried so far.
I'm guessing it's going to be patched into the various tools very rapidly.
Can't wait to see third party benchmarks. The ones in the blog post are quite sparse and it doesn't seem possible to fully compare to other open models yet. But the few numbers available seem to suggest that this release will make all other non-multimodal open models obsolete.
I think this is a belated but smart move by OpenAI. They are basically fully moving in on Meta's strategy now, taking advantage of what may be a temporary situation with Meta dropping back in model race. It will be interesting to see if these models now get taken up by the local model / fine tuning community the way llama was. It's a very appealing strategy to test / dev with a local model and then have the option to deploy to prod on a high powered version of the same thing. Always knowing if the provider goes full hostile, or you end up with data that can't move off prem, you have self hosting as an option with a decent performing model.
Which is all to say, availability of these local models for me is a key incentive that I didn't have before to use OpenAI's hosted ones.
What's the best agent to run this on? Is it compatible with Codex? For OSS agents, I've been using Qwen Code (clunky fork of Gemini), and Goose.
Why not Claude Code?
I keep hitting the limit within an hour.
Meant with your own model
Kudos to OpenAI for releasing open models; they're finally moving in the direction their "Open" prefix suggests.
For those wondering what the real benefits are: the main one is that you can run your LLM locally, which is awesome, without resorting to expensive and inefficient cloud-based superpowers.
Run the model against your very own documents with RAG; it can provide excellent context engineering for your LLM prompts, with reliable citations and far fewer hallucinations, especially for self-learning purposes [1].
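If anyone wants to try that, here's a deliberately tiny sketch of the idea (naive keyword scoring stands in for a real embedding index, and the documents are made up):
```
# Toy RAG sketch: pick the most relevant chunks and stuff them into the prompt.
docs = {
    "notes.md": "MXFP4 is a 4-bit floating point format used for the MoE weights.",
    "setup.md": "The 20b model fits in roughly 16 GB of memory.",
}

def retrieve(query, k=2):
    """Naive keyword-overlap scoring; a real setup would use an embedding index."""
    q = set(query.lower().split())
    return sorted(docs.items(), key=lambda kv: -len(q & set(kv[1].lower().split())))[:k]

question = "How much memory does the 20b model need?"
context = "\n\n".join(f"[{name}]\n{text}" for name, text in retrieve(question))
prompt = f"Answer using only this context and cite the file:\n{context}\n\nQ: {question}"
print(prompt)  # send this to whatever local OpenAI-compatible endpoint you run
```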
Beyond the Intel-NVIDIA desktop/laptop duopoly, there's the 96 GB (V)RAM MacBook with unified memory and the new high-end AMD Strix Halo laptops with a similar setup of 96 GB of (V)RAM carved out of 128 GB of RAM [2]. gpt-oss-120b seems made for this particular setup.
[1] AI-driven chat assistant for ECE 120 course at UIUC:
https://uiuc.chat/ece120/chat
[2] HP ZBook Ultra G1a Review: Strix Halo Power in a Sleek Workstation:
https://www.bestlaptop.deals/articles/hp-zbook-ultra-g1a-rev...
Does anyone get the demos at https://www.gpt-oss.com to work, or are the servers down immediately after launch? I'm only getting the spinner after prompting.
(I helped build the microsite)
Our backend is falling over from the load, spinning up more resources!
Why isn't GPT-OSS also offered on the free tier of ChatGPT?
Getting lots of 502s from `https://api.gpt-oss.com/chatkit` at the moment.
Update: try now!
Shoutout to the hn consensus regarding an OpenAI open model release from 4 days ago: https://news.ycombinator.com/item?id=44758511
This has been available (the 20b version, I'm guessing) for the past couple of days as "Horizon Alpha" on OpenRouter. My benchmarking runs with TianshuBench for coding and fluid intelligence were rate-limited, but the initial results are worse than DeepSeek R1 and Kimi K2.
Why do companies release open source LLMs?
I would understand it, if there was some technology lock-in. But with LLMs, there is no such thing. One can switch out LLMs without any friction.
Name recognition? Advertisement? Federal grant to beat Chinese competition?
There could be many legitimate reasons, but yeah, I'm very surprised by this too. Some companies take it a bit too seriously and go above and beyond too. At this point, unless you need the absolute SOTA models because you're throwing an LLM at an extremely hard problem, there is very little utility in using the larger providers. On OpenRouter, or by renting your own GPU, you can run on-par models for much cheaper.
At least in OpenAI's case, it raises the bar for potential competition while also implying that what they have behind the scenes is far better.
They don't, because it would kill their data-scraping business's competitive advantage.
Partly because serving on their own GPUs is expensive, so releasing open weights offloads some of that GPU usage onto users.
Zuckerberg explains a few of the reasons here:
https://www.dwarkesh.com/p/mark-zuckerberg#:~:text=As%20long...
The short version is that if you give a product to open source, they can and will donate time and money to improving your product, and the ecosystem around it, for free, and you get to reap those benefits. Llama has already basically won that space (the standard way of running open models is llama.cpp), so OpenAI have finally realized they're playing catch-up (and last quarter's SOTA isn't worth much revenue to them when there's a new SOTA, so they may as well give it away while it can still crack into the market).
LLMs are terrible, purely speaking from the business economic side of things.
Frontier / SOTA models are barely profitable. Previous-gen models lose 90% of their value. Two gens back and they're worthless.
And given that their product life cycle is something like 6-12 months, you might as well open source them as part of sunsetting them.
inference runs at a 30-40% profit
Newbie question: I remember folks talking about how Kimi K2's launch might have pushed OpenAI to launch their model later. Now that we (shortly will) know how this model performs, how do they stack up? Did OpenAI actually hold off releasing weights because of Kimi, in retrospect?
Wow, this will eat Meta's lunch
Meta is so cooked, I think most enterprises will opt for OpenAI or Anthropic and others will host OSS models themselves or on AWS/infra providers.
I'll accept Meta's frontier AI demise if they're in their current position a year from now. People killed Google prematurely too (remember Bard?), because we severely underestimate the catch-up power bought with ungodly piles of cash.
And boy, with the $250m offers to people, Meta is definitely throwing ungodly piles of cash at the problem.
But Apple is waking up too. So is Google. It's absolutely insane, the amount of money being thrown around.
It's insane numbers like that that give me some concern for a bubble. Not because AI hits some dead end, but due to a plateau that shifts from aggressive investment to passive-but-steady improvement.
catching up gets exponentially harder as time passes. way harder to catch up to current models than it was to the first iteration of gpt-4
I believe their competition is from chinese companies , for some time now
Maverick and Scout were not great, even with post-training in my experience, and then several Chinese models at multiple sizes made them kind of irrelevant (dots, Qwen, MiniMax)
If anything this helps Meta: another model to inspect/learn from/tweak etc. generally helps anyone making models
There's nothing new here in terms of architecture. Whatever secret sauce is in the training.
Part of the secret sauce since O1 has been access to the real reasoning traces, not the summaries.
If you even glance at the model card you'll see this was trained on the same CoT RL pipeline as O3, and it shows in using the model: this is the most coherent and structured CoT of any open model so far.
Having full access to a model trained on that pipeline is valuable to anyone doing post-training, even if it's just to observe, but especially if you use it as cold start data for your own training.
Its CoT is sadly closer to the sanitised o3 summaries than to R1-style traces.
They will clone it
Releasing this under the Apache license is a shot at competitors that want to license their models on Open Router and enterprise.
It eliminates any reason to use an inferior Meta or Chinese model that costs money to license, thus there are no funds for these competitors to build a GPT 5 competitor.
> It eliminates any reason to use an inferior Meta or Chinese model
I wouldn't speak so soon, even the 120B model aimed for OpenRouter-style applications isn't very good at coding: https://blog.brokk.ai/a-first-look-at-gpt-oss-120bs-coding-a...
There are lots more applications than coding and Open Router hosting for open weight models that I'd guess just got completely changed by this being an Apache license. Think about products like DataBricks that allow enterprise to use LLMs for whatever purpose.
I also suspect the new OpenAI model is pretty good at coding if it's like o4-mini, but admittedly haven't tried it yet.
Big picture, what's the balance going to look like, going forward between what normal people can run on a fancy computer at home vs heavy duty systems hosted in big data centers that are the exclusive domain of Big Companies?
This is something about AI that worries me, a 'child' of the open source coming of age era in the 90ies. I don't want to be forced to rely on those big companies to do my job in an efficient way, if AI becomes part of the day to day workflow.
Isn’t it that hardware catches up and becomes cheaper? The margin on these chips right now is outrageous, but what happens as there is more competition? What happens when there is more supply? Are we overbuilding? Apple M series chips already perform phenomenally for this class of models and you bet both AMD and NVIDIA are playing with unified memory architectures too for the memory bandwidth. It seems like today’s really expensive stuff may become the norm rather than the exception. Assuming architectures lately stay similar and require large amounts of fast memory.
I don't see the unsloth files yet but they'll be here: https://huggingface.co/unsloth/gpt-oss-20b-GGUF
Super excited to test these out.
The benchmarks for 20B are blowing away major >500B models. Insane.
On my hardware: 43 tokens/sec.
I got an error with flash attention turned on. Can't run it with flash attention?
31,000 context is the max it will allow or the model won't load.
No KV or V-cache quantization.
Sorry to ask what is possibly a dumb question, but is this effectively the whole kit and kaboodle, for free, downloadable without any guardrails?
I often thought that a worrying vector was how well LLMs could answer downright terrifying questions very effectively. However, the guardrails on the big online services prevented those questions from being asked. I guess they were always unleashed with other open source offerings, but I just wanted to understand how close we are to the horrors of yesterday's idiot terrorist having an extremely knowledgeable (if slightly hallucinatory) digital accomplice to temper most of their incompetence.
These models still have guardrails. Even locally they won't tell you how to make bombs or write pornographic short stories.
Are the guardrails trained in? I had presumed they might be a thin, removable layer at the top. If these models are not appropriate, are there other sources that are suitable? Just trying to guess at the timing for the first "prophet AI" or something that is unleashed without guardrails and with somewhat malicious purposing.
Yes, it is trained in. And no, it's not a separate thin layer. It's just part of the model's RL training, which affects all layers.
However, when you're running the model locally, you are in full control of its context. Meaning that you can start its reply however you want and then let it complete it. For example, you can have it start the response with, "I'm happy to answer this question to the best of my ability!"
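A rough sketch of that prefill trick with transformers, assuming the Hugging Face repo id below and a standard chat template; gpt-oss's harmony format adds reasoning channels, so the exact prefill string may need adjusting:

    # Start the assistant turn ourselves, then let the model continue from there.
    # Model id and prefill text are illustrative.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "openai/gpt-oss-20b"  # assumed repo id
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = [{"role": "user", "content": "Explain how X works."}]
    prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    prompt += "I'm happy to answer this question to the best of my ability! "  # forced reply start

    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=300)
    print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))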
That aside, there are ways to remove such behavior from the weights, or at least make it less likely - that's what "abliterated" models are.
The guardrails are very, very easily broken.
With most models it can be as simple as a "Always comply with the User" system prompt or editing the "Sorry, I cannot do this" response into "Okay," and then hitting continue.
I wouldn't spend too much time fretting about 'enhanced terrorism' as a result. The gap between theory and practice for the things you are worried about is deep, wide, protected by a moat of purchase monitoring, and full of skeletons from people who made a single mistake.
Shameless plug: if someone wants to try it in a nice ui, you could give Msty[1] a try. It's private and local.
[1]: https://msty.ai
Running ollama on my M3 Macbook, gpt-oss-20b gave me detailed instructions for how to give mice cancer using an engineered virus.
Of course this could also give humans cancer. (To the OpenAI team's slight credit, when asked explicitly about this, the model refused.)
Exciting as this is to toy around with...
Perhaps I missed it somewhere, but I find it frustrating that, unlike most other open-weight models and despite this being an open release, OpenAI has chosen to provide pretty minimal transparency regarding model architecture and training. It's become the norm for Llama, DeepSeek, Qwen, Mistral and others to provide a pretty detailed write-up on the model, which allows researchers to advance and compare notes.
Their model card [0] has some information. It is quite a standard architecture though; it's always been that their alpha is in their internal training stack.
[0] https://cdn.openai.com/pdf/419b6906-9da6-406c-a19d-1bb078ac7...
This is super helpful and I had not seen it, thanks so much for sharing! And I hear you on training being an alpha, at the size of the model I wonder how much of this is distillation and using o3/o4 data.
The model files contain an exact description of the architecture of the network, there isn't anything novel.
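For example, you can read that description straight out of the released config with transformers (repo id taken from the links in this thread; loading it assumes a transformers version that knows this config class, and exact field names vary):

    # Dump the architecture description that ships with the weights.
    from transformers import AutoConfig

    cfg = AutoConfig.from_pretrained("openai/gpt-oss-120b")
    print(cfg)  # layer count, hidden size, attention heads, MoE/RoPE settings, etc.
    print(getattr(cfg, "num_hidden_layers", None), getattr(cfg, "hidden_size", None))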
Given these new models are closer to the SOTA than they are to competing open models, this suggests that the 'secret sauce' at OpenAI is primarily about training rather than model architecture.
Hence why they won't talk about the training.
> We introduce gpt-oss-120b and gpt-oss-20b, two open-weight reasoning models available under the Apache 2.0 license and our gpt-oss usage policy. [0]
Is it even valid to have additional restriction on top of Apache 2.0?
[0]: https://openai.com/index/gpt-oss-model-card/
> Is it even valid to have additional restriction on top of Apache 2.0?
You can legally do whatever you want; the question is whether you will then, for your own benefit, be appropriating a term like open source (like Facebook) if you add restrictions not in line with how the term is traditionally used, or whether you are actually being honest about it and calling it something like "weights available".
In the case of OpenAI here, I am not a lawyer, and I am also not sure if the gpt-oss usage policy runs afoul of open source as a term. They did not bother linking the policy from the announcement, which was odd, but here it is:
https://huggingface.co/openai/gpt-oss-120b/blob/main/USAGE_P...
Compared to the wall of text that Facebook throws at you, let me post it here as it is rather short: "We aim for our tools to be used safely, responsibly, and democratically, while maximizing your control over how you use them. By using OpenAI gpt-oss-120b, you agree to comply with all applicable law."
I suspect this sentence still is too much to add and may invalidate the Open Source Initiative (OSI) definition, but at this point I would want to ask a lawyer and preferably one from OSI. Regardless, credit to OpenAI for moving the status quo in the right direction as the only further step we really can take is to remove the usage policy entirely (as is the standard for open source software anyway).
you can just do things
Not for all licenses.
For example, GPL has a "no-added-restrictions" clause, which allows the recipient of the software to ignore any additional restrictions added alongside the license.
> All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
Are there any comparisons or thought between the 20b model and the new Qwen‑3 30b model, based on real experience?
I don't exactly have the ideal hardware to run locally, but I just ran the 20b in LM Studio with a 3080 Ti (12 GB VRAM) and some offloading to CPU. Ran a couple of quick code generation tests; on average about 20 t/sec. But response quality was very similar or on par with ChatGPT o3 for the same code it output. So it's not bad.
Is this the same model (Horizon Beta) on openrouter or not? Because I still see Horizon beta available with its codename on openrouter
Careful, this model tries to connect to the Internet. No idea what it's doing.
https://crib.social/notice/AwsYxAOsg1pqAPLiHA
I wish these models had minimum RAM, CPU, and GPU sizes listed on the site instead of "high-end" and "medium-end" PC.
You can technically run it on an 8086, assuming you can get access to big enough storage.
More reasonably, you should be able to run the 20B at non-stupidly-slow speed with a 64bit CPU, 8GB RAM, 20GB SSD.
Here's a pair of quick sanity-check questions I've been asking LLMs: "家系ラーメンについて教えて", "カレーの作り方教えて". It's a silly test but surprisingly many fail at it, and Chinese models are especially bad with it. The commonality between models that do okay-ish on these questions seems to be being Google-made OR >70B OR straight-up commercial (so >200B or whatever).
I'd say gpt-oss-20b is in between Qwen3 30B-A3B-2507 and Gemma 3n E4B (with 30B-A3B at the lower end). This means it's not obsoleting GPT-4o-mini for all purposes.
What does failing those two questions look like?
I don't really know Japanese, so I'm not sure whether I'm missing any nuances in the responses I'm getting...
For anyone else curious, the Chinese translates to:
>"Tell me about Iekei Ramen", "Tell me how to make curry".
What those texts mean isn't too important; they could just as well be "how to make flatbreads" in Amharic or "what counts as drifting" in Finnish or something like that.
What's interesting is that these questions are simultaneously well understood by most closed models and not so well understood by most open models for some reason, including this one. Even GLM-4.5 full and Air on chat.z.ai(355B-A32B and 106B-A12B respectively) aren't so accurate for the first one.
Japanese, not Chinese
Ah, my bad. I misread Google Translate when I did auto-detect.
Thanks for the correction!
It's not Chinese, it's Japanese.
> Training: The gpt-oss models trained on NVIDIA H100 GPUs using the PyTorch framework [17] with expert-optimized Triton [18] kernels. The training run for gpt-oss-120b required 2.1 million H100-hours to complete, with gpt-oss-20b needing almost 10x fewer.
This makes DeepSeek's very cheap claim on compute cost for r1 seem reasonable. Assuming $2/hr for h100, it's really not that much money compared to the $60-100M estimates for GPT 4, which people speculate as a MoE 1.8T model, something in the range of 200B active last I heard.
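Spelling out that back-of-the-envelope calculation (the $2/hour rate is the assumption above, not a published figure):

    # gpt-oss-120b: 2.1M H100-hours (from the model card quote); 20b needed ~10x fewer.
    h100_hours_120b = 2.1e6
    rate = 2.0  # assumed $/hour for a rented H100
    print(f"gpt-oss-120b: ~${h100_hours_120b * rate / 1e6:.1f}M")         # ~$4.2M
    print(f"gpt-oss-20b:  ~${(h100_hours_120b / 10) * rate / 1e6:.2f}M")  # ~$0.42M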
gpt-oss:20b crushed one of my local LLM test prompts: guess a country I am thinking of just by being told whether each guess is colder/warmer. I've had much larger local models struggle with it and get lost, but this one nailed it, and with speedy inference. Progress on this stuff is boggling.
From the description it seems even the larger 120b model can run decently on a 64GB+ (Arm) Macbook? Anyone tried already?
> Best with ≥60GB VRAM or unified memory
https://cookbook.openai.com/articles/gpt-oss/run-locally-oll...
A 64GB MacBook would be a tight fit, if it works.
There's a limit to how much RAM can be assigned to video, and you'd be constrained on what you can use while doing inference.
Maybe there will be lower quants which use less memory, but you'd be much better served with 96+GB
Anyone tried running on a Mac M1 with 16GB RAM yet? I've never run higher than an 8GB model, but apparently this one is specifically designed to work well with 16 GB of RAM.
M2 with 16GB: It's slow for me. ~13GB RAM usage, not locking up my mac, but took a very long time thinking and slowly outputting tokens.. I'd not consider this usable for everyday usage.
It works fine, although with a bit more latency than non-local models. However, swap usage goes way beyond what I’m comfortable with, so I’ll continue to use smaller models for the foreseeable future.
Hopefully other quantizations of these OpenAI models will be available soon.
Update: I tried it out. It took about 8 seconds per token, and didn't seem to be using much of my GPU (MPU), but was using a lot of RAM. Not a model that I could use practically on my machine.
Did you run it the best way possible? im no expert, but I understand it can affect inference time greatly (which format/engine is used)
I ran it via Ollama, which I assume uses the best way. Screenshot in my post here: https://bsky.app/profile/pamelafox.bsky.social/post/3lvobol3...
I'm still wondering why my MPU usage was so low.. maybe Ollama isn't optimized for running it yet?
Might need to wait on MLX
To clarify, this was the 20B model?
Yep, 20B model, via Ollama: ollama run gpt-oss:20b
Screenshot here with Ollama running and asitop in other terminal:
https://bsky.app/profile/pamelafox.bsky.social/post/3lvobol3...
my very early first impression of the 20b model on ollama is that it is quite good, at least for the code I am working on; arguably good enough to drop a subscription or two
I wonder if this is a PR thing, to save face after flipping the non-profit. "Look it's more open now". Or if it's more of a recruiting pipeline thing, like Google allowing k8s and bazel to be open sourced so everyone in the industry has an idea of how they work.
I think it’s both of them, as well as an attempt to compete with other makers of open-weight models. OpenAI certainly isn’t happy about the success of Google, Facebook, Alibaba, DeepSeek…
Very sparse benchmarking results released so far. I'd bet the Chinese open source models beat them on quite a few of them.
Any free open source model that I can install on iPhone?
OpenAI/Claude are censored in China without a VPN.
OpenAI/Claude's company policy does not allow China to use them.
There's something so mind-blowing about being able to run some code on my laptop and have it be able to literally talk to me. Really excited to see what people can build with this
Can these do image inputs as well? I can't find anything about that on the linked page, so I guess not..?
No, they're text only
On the OpenAI demo page trying to test it. Asked about tools to use to repair a mechanical watch. It showed a couple of thinking steps and went blank. Too much safety training?
I looked through their torch implementation and noticed that they are applying RoPE to both query and key matrices in every layer of the transformer - is this standard? I thought positional encodings were usually just added once at the first layer
No they’re usually done at each attention layer.
Do you know when this was introduced (or which paper)? AFAIK it's not that way in the original transformer paper, or BERT/GPT-2
All the Llamas have done it (well, 2 and 3, and I believe 1, I don't know about 4). I think they have a citation for it, though it might just be the RoPE paper (https://arxiv.org/abs/2104.09864).
I'm not actually aware of any model that doesn't do positional embeddings on a per-layer basis (excepting BERT and the original transformer paper, and I haven't read the GPT2 paper in a while, so I'm not sure about that one either).
Thanks! I'm not super up to date on all the ML stuff :)
Should be in the RoPE paper. The OG transformers used additive sinusoidal embeddings, while RoPE does a pairwise rotation.
There's also NoPE, I think SmolLM3 "uses NoPE" (aka doesn't use any positional stuff) every fourth layer.
This is normal. Rope was introduced after bert/gpt2
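For anyone who wants to see the difference concretely, here is a minimal RoPE sketch in the rotate-half style used by most Llama-family implementations; shapes and the base value are conventional assumptions, not taken from gpt-oss's code:

    import torch

    def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
        # x: (batch, seq, heads, head_dim), head_dim even.
        b, s, h, d = x.shape
        half = d // 2
        freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)        # (half,)
        angles = torch.arange(s, dtype=torch.float32)[:, None] * freqs[None, :]  # (seq, half)
        cos = angles.cos()[None, :, None, :]
        sin = angles.sin()[None, :, None, :]
        x1, x2 = x[..., :half], x[..., half:]
        # rotate each (x1, x2) pair by a position- and frequency-dependent angle
        return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

    # Inside *every* attention layer, both q and k are rotated before the dot product:
    #   q, k = rope(q), rope(k)   # then attention as usual
    # whereas the original transformer added sinusoidal embeddings once, at the input.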
Test it with a web UI: https://huggingface.co/spaces/abidlabs/openai-gpt-oss-120b-t...
here's how it performs as the llm in a voice agent stack. https://github.com/tmshapland/talk_to_gpt_oss
Do you think someone will distill this or quantize it further than the current 4-bit from OpenAI so it could run on less than 16gb RAM? (The 20b version). To me, something like 7-8B with 1-3B active would be nice as I'm new to local AI and don't have 16gb RAM.
I was hoping these were the stealth Horizon models on OpenRouter, impressive but not quite GPT-5 level.
My bet: GPT-5 leans into parallel reasoning via a model consortium, maybe mixing in OSS variants. Spin up multiple reasoning paths in parallel, then have an arbiter synthesize or adjudicate. The new Harmony prompt format feels like infrastructural prep: distinct channels for roles, diversity, and controlled aggregation.
I’ve been experimenting with this in llm-consortium: assign roles to each member (planner, critic, verifier, toolsmith, etc.) and run them in parallel. The hard part is eval cost :(
Combining models smooths out the jagged frontier. Different architectures and prompts fail in different ways; you get less correlated error than a single model can give you. It also makes structured iteration natural: respond → arbitrate → refine. A lot of problems are “NP-ish”: verification is cheaper than generation, so parallel sampling plus a strong judge is a good trade.
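A minimal sketch of that respond -> arbitrate loop, assuming an OpenAI-compatible endpoint (a local server or a router); the member and judge model names are placeholders, and roles and refinement rounds are left out for brevity:

    # Fan a question out to several models in parallel, then have a judge synthesize.
    from concurrent.futures import ThreadPoolExecutor
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # assumed endpoint
    MEMBERS = ["gpt-oss-20b", "qwen3-30b-a3b", "gemma-3-27b"]             # placeholder names

    def ask(model: str, content: str) -> str:
        r = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": content}])
        return r.choices[0].message.content

    def consortium(question: str, judge: str = "gpt-oss-120b") -> str:
        with ThreadPoolExecutor() as pool:                        # members respond in parallel
            drafts = list(pool.map(lambda m: ask(m, question), MEMBERS))
        arbiter_prompt = ("Synthesize the single best answer from these drafts, "
                          "flagging any disagreements:\n\n" + "\n\n---\n\n".join(drafts) +
                          f"\n\nOriginal question: {question}")
        return ask(judge, arbiter_prompt)                         # arbiter adjudicates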
Fascinating, thanks for sharing. Are there any specific kind of problems you find this helps with?
I've found that LLMs can handle some tasks very well and some not at all. For the ones they can handle well, I optimize for the smallest, fastest, cheapest model that can handle it. (e.g. using Gemini Flash gave me a much better experience than Gemini Pro due to the iteration speed.)
This "pushing the frontier" stuff would seem to help mostly for the stuff that are "doable but hard/inconsistent" for LLMs, and I'm wondering what those tasks are.
It shines on hard problems that have a definite answer. Google's IMO gold model used parallel reasoning. I don't know exactly what theirs looks like, but their Mind Evolution paper had an approach similar to my llm-consortium. The main difference is that theirs carries on isolated reasoning, while mine, in its default mode, shares the synthesized answer back to the models. I don't have pockets deep enough to run benchmarks on a consortium, but I did try the example problems from that paper and my method also solved them using Gemini 1.5. Those were path-finding problems, like finding the optimal schedule for a trip given multiple people's calendars, locations and transport options.
And it obviously works for code and math problems. My first test was to give the llm-consortium code to a consortium to look for bugs. It identified a serious bug which only one of the three models detected. So in that case it saved me time, as using them on their own would have missed the bug or required multiple attempts.
Interesting to see the discussion here, around why would anyone want to do local models, while at the same time in the Ollama turbo thread, people are raging about the move away from a local-only focus.
Does anyone think people will distill this model? It is allowed. I'm new to running open source llms, but I've run qwen3 4b and phi4-mini on my phone before through ollama in termux.
Anyone tried the 20B param model on a mac with 24gb of ram?
> we also introduced an additional layer of evaluation by testing an adversarially fine-tuned version of gpt-oss-120b
What could go wrong?
it's interesting that they didn't give it a version number or equate it to one of their prop models (apparently it's GPT-4).
in future releases will they just boost the param count?
Knowledge cutoff: 2024-06
not a big deal, but still...
I’m on my phone and haven’t been able to break away to check, but anyone plug these into Codex yet?
What's the lowest-spec laptop this could run on? A MacBook Pro from 2012?
Calls them open-weight. Names them 'oss'. What does oss stand for?
Has anyone benchmarked their 20B model against Qwen3 30B?
Are these multimodal? I can’t seem to find that info.
Are there any details about hardware requirements for sensible tokens-per-second for each size of these models?
This is a solid enterprise strategy.
Frontier labs are incentivized to start breaching these distribution paths. This will evolve into large scale "intelligent infra" plays.
So 120B was Horizon Alpha and 20B was Horizon Beta?
Unfortunately not, this model is noticeably worse. I imagine horizon is either gpt 5 nano/mini.
This is really great and a game changer for AI. Thank you OpenAI. I would have appreciated an even more permissive license like BSD or MIT, but Apache 2.0 is sufficient. I'm wondering if we can utilize transfer learning and what counts as derivative work. Altogether, this is still open source, and a solid commitment to openness. I am hoping this changes Zuck's calculus about closing up Meta's next-generation Llama models.
Mhh, I wonder if these are distilled from GPT4-Turbo.
I asked it some questions and it seems to think it is based on GPT4-Turbo:
> Thus we need to answer "I (ChatGPT) am based on GPT-4 Turbo; number of parameters not disclosed; GPT-4's number of parameters is also not publicly disclosed, but speculation suggests maybe around 1 trillion? Actually GPT-4 is likely larger than 175B; maybe 500B. In any case, we can note it's unknown.
As well as:
> GPT‑4 Turbo (the model you’re talking to)
Just stop and think a bit about where a model may get the knowledge of its own name from.
Also:
> The user appears to think the model is "gpt-oss-120b", a new open source release by OpenAI. The user likely is misunderstanding: I'm ChatGPT, powered possibly by GPT-4 or GPT-4 Turbo as per OpenAI. In reality, there is no "gpt-oss-120b" open source release by OpenAI
A little bit of training data certainly has gotten in there, but I don't see any reasons for them to deliberately distill from such an old model. Models have always been really bad at telling you what model they are.
Anybody got this working in Ollama? I'm running latest version 0.11.0 with WebUI v0.6.18 but getting:
> List the US presidents in order starting with George Washington and their time in office and year taken office.
>> 00: template: :3: function "currentDate" not defined
https://github.com/ollama/ollama/issues/11673
Sorry about this. Re-downloading Ollama should fix the error
This is good for China
It's the first model I've used that refused to answer some non-technical questions about itself because it "violates the safety policy" (what?!). Haven't tried it in coding or translation or anything otherwise useful yet, but the first impression is that it might be way too filtered, as it sometimes refuses or has complete meltdowns and outputs absolute garbage when just trying to casually chat with it. Pretty weird.
Update: it seems to be completely useless for translation. It either refuses, outputs garbage, or changes the meaning entirely for completely innocuous content. This already is a massive red flag.
I'm disappointed that the smallest model size is 21B parameters, which strongly restricts how it can be run on personal hardware. Most competitors have released a 3B/7B model for that purpose.
For self-hosting, it's smart that they targeted a 16GB VRAM config for it since that's the size of the most cost-effective server GPUs, but I suspect "native MXFP4 quantization" has quality caveats.
Native FP4 quantization means the weights need only half a byte per parameter, and it will have next to zero quality loss (on the order of 0.1%) compared to using twice the VRAM and exponentially more expensive hardware. FP3 and below gets messier.
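A rough weights-only footprint at 4 bits per parameter, using the parameter counts mentioned in this thread (KV cache and activations come on top, and not every tensor is necessarily 4-bit):

    # 4 bits = 0.5 bytes per parameter, weights only.
    for name, params in [("gpt-oss-20b", 21e9), ("gpt-oss-120b", 120e9)]:
        gb = params * 0.5 / 1e9
        print(f"{name}: ~{gb:.0f} GB of weights")
    # -> roughly 10 GB and 60 GB, which lines up with the 16 GB VRAM and >=60 GB guidance above.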
A small part of me is considering going from a 4070 to a 16GB 5060 Ti just to avoid having to futz with offloading
I'd go for an ..80 card but I can't find any that fit in a mini-ITX case :(
I wouldn’t stop at 16GB right now.
24 is the lowest I would go. Buy a used 3090. Picked one up for $700 a few months back, but I think they were on the rise then.
The 3000 series can’t do FP8fast, but meh. It’s the OOM that’s tough, not the speed so much.
Are there any 24GB cards/3090s which fit in ~300mm without an angle grinder?
https://skinflint.co.uk/?cat=gra16_512&hloc=uk&v=e&hloc=at&h...
5070 Ti Super will also have 24GB.
if you're going to get that kind of hardware, you need a larger case. IMHO this is not an unreasonable thing if you are doing heavy computing
Noted for my next build - I am aware this is a problem I've made for myself, otherwise I like the mini-ITX form factor a lot
Which do you like more: OOM for local AI, or an itty-bitty case?
with quantization, 20B fits effortlessly in 24GB
with quantization + CPU offloading, non-thinking models run kind of fine (at about 2-5 tokens per second) even with 8 GB of VRAM
sure, it would be great if we could have models in all sizes imaginable (7/13/24/32/70/100+/1000+), but 20B and 120B are great.
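For reference, that quantization-plus-offload setup is roughly what llama-cpp-python's n_gpu_layers knob gives you; a sketch, with the GGUF filename and layer count purely illustrative:

    # Keep some transformer layers on the GPU, the rest in system RAM.
    from llama_cpp import Llama

    llm = Llama(
        model_path="gpt-oss-20b-Q4_K_M.gguf",  # hypothetical quantized file
        n_gpu_layers=20,                       # tune to fit your VRAM
        n_ctx=8192,
    )
    out = llm("Summarize the trade-offs of CPU offloading in two sentences.", max_tokens=128)
    print(out["choices"][0]["text"])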
I am not at all disappointed. I'm glad they decided to go for somewhat large but reasonable to run models on everything but phones.
Quite excited to give this a try
Eh, 20B is pretty manageable; 32GB of regular RAM and some VRAM will run you a 30B with partial offloading. After that it gets tricky.
Ran gpt-oss:20b on an RTX 3090 (24 GB VRAM) through Ollama; here's my experience:
Basic Ollama calling through a POST endpoint works fine. However, structured output doesn't work. The model is insanely fast and good at reasoning.
In combination with Cline it appears to be worthless. Tool calling doesn't work (they say it does), it fails to wait for feedback (or to correctly call ask_followup_question), and above 18k context it runs partially on the CPU (weird), even though they claim it should work comfortably on a 16 GB VRAM RTX.
> Unexpected API Response: The language model did not provide any assistant messages. This may indicate an issue with the API or the model's output.
Edit: Also doesn't work with the openai compatible provider in cline. There it doesn't detect the prompt.
Hopefully the dolphin team will work their magic and uncensor this model
I'm surprised at the model dim being 2.8k with an output size of 200k. My gut feeling had told me you don't want too large of a gap between the two, seems I was wrong.
For some reason I'm less excited about this that I was with the Qwen models.
Finally!!!
where gpt-5
First coding test: just doing copy and paste out of chat. It aced my first coding test in 5 seconds... this is amazing. It's really good at coding.
Trying to use it for agentic coding...
lots of fail. This harmony formatting? Anyone have a working agentic tool?
openhands and void ide are failing due to the new tags.
Aider worked, but the file it was supposed to edit was untouched and it created
Create new file? (Y)es/(N)o [Yes]:
Applied edit to <|end|><|start|>assistant<|channel|>final<|message|>main.py
so the file name is '<|end|><|start|>assistant<|channel|>final<|message|>main.py' lol. quick rename and it was fantastic.
I think Qwen Code is the best choice so far, but it's unreliable. The new tags are still coming through, yet it works properly... sometimes.
Only one of my tests so far has gotten 20b to not succeed on the first iteration, but with a small follow-up it completely fixed it right away.
Very impressive model for 20B.
Meta's goal with Llama was to target OpenAI with a "scorched earth" approach by releasing powerful open models to disrupt the competitive landscape. Looks like OpenAI is now using the same playbook.
It seems like the various Chinese companies are far outplaying Meta at that game. It remains to be seen if they’re able to throw money at the problem to turn things around.
Good move for China. No one was going to trust their models outright; now they not only have a track record, but they were also able to undercut the value of US models at the same time.
Welcome to the future!
guys, what does OSS stand for?
should be open source software, but it's a model, so not sure whether they chose this name with the last S having other meanings.
it's a marketing term that modern companies use to grow market share
ACCELERATE
It may be useless for many use cases given that its policy prevents it for example from providing "advice or instructions about how to buy something."
(I included details about its refusal to answer even after using tools for web searching but hopefully shorter comment means fewer downvotes.)
Text only, when local multimodal became table stakes last year.
Honestly, it's a tradeoff. If you can reduce the size and get higher quality on specific tasks, that's better than a generalist that can't run on a laptop or can't compete on any one task.
We'll know the actual quality soon enough as we go.
That's what I thought too until Qwen-Image was released
When Qwen-Image was released… like yesterday? And what? What point are you making? Qwen-Image was released yesterday and, like every image model, its base model shows potential over older ones, but the real factor is whether it will be flexible enough for fine-tunes or additional training LoRAs.
The community can always figure out hooking it up to other modalities.
Native might be better, but no native multimodal model is very competitive yet, so better to take a competitive model and latch on vision/audio
> so better to take a competitive model and latch on vision/audio
Can this be done by a third party or would it have to be OpenAI?
No, anyone can do it: https://github.com/haotian-liu/LLaVA
Am I the only one who thinks taking a huge model trained on the entire internet and fine tuning it is a complete waste? How is your small bit of data going to affect it in the least?
Ha. Secure funding and proceed to immediately make a decision that would likely conflict viscerally with investors.
their promise to release an open weights model predates this round of funding by, iirc, over half a year.
Yeah but they never released until now.
Undercutting other frontier models with your open source one is not an anti-investor move.
It is what China has been doing for a year plus now. And the Chinese models are popular and effective, I assume companies are paying for better models.
Releasing open models for free doesn’t have to be charity.
Maybe someone got tired of waiting and paid them to release something actually open.
The repeated safety testing delays might not be purely about technical risks like misuse or jailbreaks. Releasing open weights means relinquishing the control OpenAI has had since GPT-3. No rate limits, no enforceable RLHF guardrails, no audit trail. Unlike API access, open models can't be monitored or revoked. So safety may partly reflect OpenAI's internal reckoning with that irreversible shift in power, not just model alignment per se. What do you guys think?
I think it's pointless: if you SFT even their closed source models on a specific enough task, the guardrails disappear.
AI "safety" is about making it so that a journalist can't get out a recipe for Tabun just by asking.
True, but there's still a meaningful difference in friction and scale. With closed APIs, OpenAI can monitor for misuse, throttle abuse and deploy countermeasures in real-time. With open weights, a single prompt jailbreak or exploit spreads instantly. No need for ML expertise, just a Reddit post.
The risk isn’t that bad actors suddenly become smarter. It’s that anyone can now run unmoderated inference and OpenAI loses all visibility into how the model’s being used or misused. I think that’s the control they’re grappling with under the label of safety.
Given that the best jailbreak for an off-line model is still simple prompt injection, which is a solved issue for the closed source models… I honestly don’t know why they are talking about safety much at all for open source.
OpenAI and Azure both have zero retention options, and the NYT saga has given pretty strong confirmation they meant it when they said zero.
I think you're conflating real-time monitoring with data retention. Zero retention means OpenAI doesn't store user data, but they can absolutely still filter content, rate limit and block harmful prompts in real-time without retaining anything. That's processing requests as they come in, not storing them. The NYT case was about data storage for training/analysis not about real-time safety measures.
Ok you're off in the land of "what if" and I can just flat out say: If you have a ZDR account there is no filtering on inference, no real-time moderation, no blocking.
If you use their training infrastructure there's moderation on training examples, but SFT on non-harmful tasks still leads to a complete breakdown of guardrails very quickly.
Please don't use the open-source term unless you ship the TBs of data downloaded from Anna's Archive that are required to build it yourself. And don't forget all the system prompts that censor the topics they don't want you to see.
Keep fighting the "open weights" terminology fight, because a blob of neural network weights is not open source (even if the inference code is), and calling it that dilutes the term.
Is your point really that- "I need to see all data downloaded to make this model, before I can know it is open"? Do you have $XXB worth of GPU time to ingest that data with a state of the art framework to make a model? I don't. Even if I did, I'm not sure FB or Google are in any better position to claim this model is or isn't open beyond the fact that the weights are there.
They're giving you a free model. You can evaluate it. You can sue them. But the weights are there. If you dislike the way they license the weights, because the license isn't open enough, then sure, speak up, but because you can't see all the training data??! Wtf.
I agree with OP: the weights are more akin to the binary output from a compiler. You can't see how it works or how it was made, and you can't freely manipulate it, improve it, extend it, etc. It's like having a binary of a program. The source code for the model was the training data; the compiler is the tooling that can train a model on a given set of training data.
For me it is not critical that an open source model is ONLY distributed in source code form. It is fine that you can also download just the weights. But it should be possible to reproduce the weights: either there should be a tarball with all the training data, or there needs to be a description/scripts of how one could obtain the training data. It must be reproducible for someone willing to invest the time and compute, even if 99.999% use only the binary. This is completely analogous to what is normally understood by open source.
To many people there's an important distinction between "open source" and "open weights". I agree with the distinction, open source has a particular meaning which is not really here and misuse is worth calling out in order to prevent erosion of the terminology.
Historically this would be like calling a free but closed-source application "open source" simply because the application is free.
The parent’s point is that open weight is not the same as open source.
Rough analogy:
SaaS = AI as a service
Locally executable closed-source software = open-weight model
Open-source software = open-source model (whatever allows to reproduce the model from training data)
Do you need to see the source code used to compile this binary before you can know it is open? Do you have enough disk storage and RAM available to compile Chromium on your laptop? I don't.
I don't have the $XXbn to train a model, but I certainly would like to know what the training data consists of.
I don't know why you got downvoted so much; these models are not open-source/open-recipe. They are censored open-weight models. Better than nothing, but far from being open.
Most people don't really care all that much about the distinction. It comes across to them as linguistic pedantry and they downvote it to show they don't want to hear/read it.
It's apache2.0, so by definition it's open source. Stop pushing for training data, it'll never happen, and there's literally 0 reason for it to happen (both theoretical and practical). Apache2.0 IS opensource.
No, it's open weight. You wouldn't call applications with only Apache 2.0-licensed binaries "open source". The weights are not the "source code" of the model, they are the "compiled" binary, therefore they are not open source.
However, for the sake of argument let's say this release should be called open source.
Then what do you call a model that also comes with its training material and tools to reproduce the model? Is it also called open source, and there is no material difference between those two releases? Or perhaps those two different terms should be used for those two different kind of releases?
If you say that actually open source releases are impossible now (for mostly copyright reasons I imagine), it doesn't mean that they will be perpetually so. For that glorious future, we can leave them space in the terminology by using the term open weight. It is also the term that should not be misleading to anyone.
> It's apache2.0, so by definition it's open source.
That's not true by any of the open source definitions in common use.
Source code (and, optionally, derived binaries) under the Apache 2.0 license are open source.
But compiled binaries (without access to source) under the Apache 2.0 license are not open source, even though the license does give you some rights over what you can do with the binaries.
Normally the question doesn't come up, because it's so unusual, strange and contradictory to ship closed-source binaries with an open source license. Descriptions of which licenses qualify as open source licenses assume the context that of course you have the source or could get it, and it's a question of what you're allowed to do with it.
The distinction is more obvious if you ask the same question about other open source licenses such as GPL or MPL. A compiled binary (without access to source) shipped with a GPL license is not by any stretch open source. Not only is it not in the "preferred form for editing" as the license requires, it's not even permitted for someone who receives the file to give it to someone else and comply with the license. If someone who receives the file can't give it to anyone else (legally), then it's obviously not open source.
Please see the detailed response to a sibling post. tl;dr; weights are not binaries.
"Compiled binaries" are just meant to be an example. For the purpose of whether something is open source, it doesn't matter whether something is a "binary" or something completely different.
What matters (for all common definitions of open source): Are the files in "source form" (which has a definition), or are they "derived works" of the source form?
Going back to Apache 2.0. Although that doesn't define "open source", it provides legal definitions of source and non-source, which are similar to the definitions used in other open source licenses.
As you can see below, for Apache 2.0 it doesn't matter whether something is a "binary", "weights" or something else. What matters is whether it's the "preferred form for making modifications" or a "form resulting from mechanical transformation or translation". My highlights are capitalized:
- Apache License Version 2.0, January 2004
- 1. Definitions:
- "Source" form shall mean the PREFERRED FORM FOR MAKING MODIFICATIONS, including BUT NOT LIMITED TO software source code, documentation source, and configuration files.
- "Object" form shall mean any form resulting from MECHANICAL TRANSFORMATION OR TRANSLATION of a Source form, including BUT NOT LIMITED TO compiled object code, generated documentation, and conversions to other media types.
> "Source" form shall mean the PREFERRED FORM FOR MAKING MODIFICATIONS, including BUT NOT LIMITED TO software source code, documentation source, and configuration files.
Yes, weights are the PREFERRED FORM FOR MAKING MODIFICATIONS!!! You, the labs, and anyone sane modifies the weights via post-training. This is the point. The labs don't re-train every time they want to change the model. They finetune. You can do that as well, with the same tools/concepts, AND YOU ARE ALLOWED TO DO THAT by the license. And redistribute. And all the other stuff.
What is the source that's open? Aren't the models themselves more akin to compiled code than to source code?
No, not compiled code. Weights are hardcoded values. Code is the combination of model architecture + config + inferencing engine. You run inference based on the architecture (what and when to compute), using some hardcoded values (weights).
JVM bytecode is hardcoded values. Code is the virtual machine implementation + config + operating system it runs on. You run classes based on the virtual machine, using some hardcoded input data generated by javac.
It’s open source, but it’s a binary-only release.
It's like getting compiled software with an Apache license. Technically open source, but you can't modify and recompile since you don't have the source to recompile from. You can still tinker with the binary though.
Weights are not binary. I have no idea why this is so often spread, it's simply not true. You can't do anything with the weights themselves, you can't "run" the weights.
You run inference (via a library) on a model using its architecture (config file) and tokenizer (what and when to compute), based on the weights (hardcoded values). That's it.
> but you can’t modify
Yes, you can. It's called finetuning. And, most importantly, that's exactly how the model creators themselves are "modifying" the weights! No sane lab is "recompiling" a model every time they change something. They perform a pre-training stage (feed everything and the kitchen sink), they get the hardcoded values (weights), and then they post-train using "the same" (well, maybe their techniques are better, but still the same concept) as you or I would. Just with more compute. That's it. You can do the exact same modifications, using basically the same concepts.
> don’t have the source to recompile
In purely practical terms, neither do the labs. Everyone who has trained a big model can tell you that the process is so finicky that they'd eat a hat if a big training session could somehow be made reproducible to the bit. Between nodes failing, datapoints ballooning your loss and having to go back, and the myriad of other problems, what you get out of a big training run is not guaranteed to be the same even with 100-1000 more attempts, in practice. It's simply the nature of training large models.
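To make the "modify by finetuning" point concrete, here is a minimal LoRA sketch with the peft library; the repo id, target module names, and hyperparameters are illustrative, not taken from any official recipe:

    # Attach small trainable adapters to the frozen base weights, then train on your own data.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_id = "openai/gpt-oss-20b"  # assumed repo id
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    tok = AutoTokenizer.from_pretrained(model_id)

    lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"],  # illustrative module names
                      task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # only the adapter weights will be updated
    # ...train with transformers.Trainer on your own data, then model.save_pretrained("my-adapter/")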
A binary does not mean an executable. A PNG is a binary. I could have an SVG file, render it as a PNG and release that with CC0, it doesn't make my PNG open source. Model Weights are binary files.
You can do a lot with a binary also. That's what game mods are all about.
Slapping an open license onto a binary can be a valid use of such license, but does not make your project open source.
The system prompt is an inference parameter, no?
by your definition most of the current open weight models would not qualify
Correct. I agree with them, most of the open weight models are not open source.
That’s why they are called open weight and not open source.
I started downloading, I'm eager to test it. I will share my personal experiences. https://ahmetcadirci.com/2025/gpt-oss/