I go back and forth on this. A year ago, I was optimistic and I have had 1 case where RL fine tuning a model made sense. But while there are pockets of that, there is a clash with existing industry skills. I work with a lot of machine learning engineers and data scientists and here’s what I observe.
- many, if not most, MLEs who got started after LLMs generally do not know anything about machine learning. For lack of clearer industry titles, they are really AI developers or AI DevOps engineers
- machine learning as a trade is moving toward the same fate as data engineering and analytics. Big companies only want people using platform tools. Some AI products, even in cloud platforms like Azure, don't even give you the evaluation metrics that would be required to properly build ML solutions. Few people seem to have an issue with it.
- fine tuning, especially RL, is packed with nuance and details… lots to monitor, a lot of training signals that need interpretation and data refinement. It’s a much bigger gap than training simpler ML models, which people are also not doing/learning very often.
- The limited number of good use cases means people are not learning those skills from more senior engineers.
- companies have gotten stingy with SME time and labeling
What confidence do companies have in supporting these solutions in the future? How long will you be around and who will take up the mantle after you leave?
AutoML never really panned out, so I'm less confident that platforming RL will go any better. The unfortunate reality is that companies are almost always willing to pay more for inferior products because it scales. Industry "skills" are mostly experience with proprietary platform products. Sure, they might list "PyTorch" as a required skill, but 99% of the time there's hardly anyone at the company who has spent any meaningful time with it. Worse, you can't use it, because it would be too hard to support.
I'm also seeing teams who expected big gains from fine tuning get only incremental or moderate gains. Then they put it in production and regret it as SOTA marches on.
I have avoided fine tuning because the models are currently improving at a rate that exceeds big corporate product development velocity.
Absolutely the first thing you should try is a prompt optimizer. The GEPA optimizer (implemented in DSPy) often outperforms GRPO training[1]. But I think people are usually building with frameworks that aren't machine learning frameworks.
[1] https://arxiv.org/abs/2507.19457
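For anyone who hasn't tried it, here is roughly what that looks like. This is a minimal sketch assuming a recent DSPy version; the model id, the GEPA keyword arguments, and the metric signature are assumptions and may differ in your installed release.

```python
# Hedged sketch: prompt optimization with DSPy's GEPA instead of weight updates.
# Model id, GEPA kwargs, and metric signature are assumptions; check your DSPy docs.
import dspy

lm = dspy.LM("openai/gpt-4o-mini")      # any supported model id
dspy.configure(lm=lm)

# A minimal single-step program whose prompt GEPA will evolve.
classify = dspy.Predict("ticket_text -> category")

def metric(gold, pred, trace=None, pred_name=None, pred_trace=None):
    # Simple exact-match score; GEPA can also consume richer textual feedback.
    return float(pred.category.strip().lower() == gold.category.strip().lower())

trainset = [
    dspy.Example(ticket_text="Cannot log in after password reset",
                 category="auth").with_inputs("ticket_text"),
    # ... more labeled examples ...
]

optimizer = dspy.GEPA(metric=metric, auto="light", reflection_lm=lm)  # assumed kwargs
optimized = optimizer.compile(classify, trainset=trainset, valset=trainset)  # separate valset is better in practice
print(optimized(ticket_text="Refund not received for order 1234").category)
```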
Labels are so essential - even if you're not training anything, being able to quickly and objectively test your system is hugely beneficial - but it's a constant struggle to get them. In the unlikely event you can get budget and priority for an SME to do the work, communicating your requirements to them (the need to apply very consistent rules and make few errors) is difficult and the resulting labels tend to be messy.
More than once I've just done labeling "on my own time" - I don't know the subject as well but I have some idea what makes the neurons happy, and it saves a lot of waiting around.
I've found tuning large models to be consistently difficult to justify. The last few years it seems like you're better off waiting six months for a better foundation model. However, we have a lot of cases where big models are just too expensive and there it can definitely be worthwhile to purpose-train something small.
Eventually someone will make a killing on doing actual outcome measurements instead of just trusting the LLMs, Michael Lewis will write a popular book about it, and the cycle will begin anew...
I ran a survey on Twitter over the past few days asking for successful case studies that produced economically valuable results from fine-tuning LLMs.
I ask a version of this every six months or so, and usually the results are quite disappointing.
This time I had more credible replies than I have had in the past.
Here's my thread with highlights: https://twitter.com/simonw/status/1979254349235925084
And in a thread viewer for people who aren't signed into Twitter: https://twitter-thread.com/t/1979254349235925084
Some of the most impressive:
Datadog got <500ms latency for their natural language querying feature, https://twitter.com/_brimtown/status/1979669362232463704 and https://docs.datadoghq.com/logs/explorer/search/
Vercel run custom fine-tuned models on v0 for Next.js generation: https://vercel.com/blog/v0-composite-model-family
Shopify have a fine-tuned vision LLM for analyzing product photos: https://shopify.engineering/leveraging-multimodal-llms
I imagine it's a pretty bad risk-to-reward ratio for most companies. Especially when just tossing some stuff into your system prompt is an option.
Finetuning is pretty much necessary for regression tasks. Also useful for classification since you can get the direct probabilities in case you want to do some thresholding.
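To make that concrete, here is a minimal sketch of what getting direct regression outputs (or thresholdable probabilities) from a fine-tuned head looks like with Hugging Face transformers; the checkpoint and labels are placeholders, not a recommendation.

```python
# Hedged sketch: a scalar regression head on a small encoder, so the model outputs a
# number directly instead of text. For classification, num_labels=K gives logits you
# can softmax and threshold. Model name and data are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=1, problem_type="regression"   # single scalar output, MSE loss
)

batch = tokenizer(["great product", "arrived broken"], padding=True, return_tensors="pt")
labels = torch.tensor([4.5, 1.0])                    # e.g. star ratings to regress against

out = model(**batch, labels=labels)                  # out.loss is MSE, out.logits the predictions
out.loss.backward()                                  # plug into your usual optimizer/Trainer loop
```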
If you have ideas for use cases where fine-tuning can make a big difference, but don't have the time/resources to try them out yet want to see if they'd work, feel free to share them. I'm currently creating a bunch of examples of this and could use some inspiration; I only have three real/confirmed use cases as of right now.
Something that's in my personal backlog is fine-tuning TrOCR for purse seine observer workbooks. The default TrOCR expects English words, so the FAO species codes used in the workbook result in terrible accuracy. LLMs do poorly in this space because you'll commonly see repeats (e.g. 100 out of 120 samples all have the same species code), which then leads to hallucination.
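In case it helps anyone attempting something similar, here is a minimal sketch of that TrOCR fine-tune with Hugging Face transformers; the checkpoint, image path, and label are hypothetical, and the real version would loop over a dataset of cropped workbook cells.

```python
# Hedged sketch: fine-tuning TrOCR on cropped cells containing FAO species codes.
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

# Standard setup from the usual TrOCR fine-tuning recipe.
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
model.config.pad_token_id = processor.tokenizer.pad_token_id

image = Image.open("cells/row_012_species.png").convert("RGB")        # hypothetical crop
pixel_values = processor(images=image, return_tensors="pt").pixel_values
labels = processor.tokenizer("SKJ", return_tensors="pt").input_ids    # hypothetical code

out = model(pixel_values=pixel_values, labels=labels)   # cross-entropy over the code tokens
out.loss.backward()                                      # wire into an optimizer / Seq2SeqTrainer
```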
Many people think that fine-tuning an LLM on domain knowledge means feeding it chunked text of, say, psychology books. That is, of course, the wrong application if your goal is for the model to become an expert psychologist. You want the behavior of applying psychology, but you are training the behavior of writing about it. TL;DR, many fine-tuning failures are due to wrong dataset curation. On the other hand, if you get the dataset right, you can get a 7B model to outperform a 180B one.
Transfer learning is a thing. But the issue with the gap is that the datasets for "applying X" aren't easy to come by.
There is an awful lot of "looking for my keys under the street light" going around these days. I've seen a bunch of projects proposed that are either based on existing data (but have no useful application of that data) or have a specific application (but lack the data and evaluation required to perform that task). It doesn't matter how good your data is if no one has any use for things like it, and it doesn't matter how neat your application would be if the data doesn't match.
I'm including things like RL metrics as data here, for lack of a better umbrella term, though the number of proposed projects that I've seen that decided that ongoing evaluation of actual effectiveness was a distraction from the more important task of having expensive engineers make expensive servers into expensive heatsinks is maddening.
A couple of examples I have seen recently which make me agree with OP:
- PaddleOCR, a 0.9B model that reaches SOTA accuracy across text, tables, formulas, charts & handwriting. [0]
- A 3B and an 8B model which perform HTML-to-JSON extraction at GPT-5-level accuracy, at 40-80x lower cost and with faster inference. [1]
I think it makes sense to fine tune when you're optimizing for a specific task.
[0] https://huggingface.co/papers/2510.14528
[1] https://www.reddit.com/r/LocalLLaMA/comments/1o8m0ti/we_buil...
Have you used PaddleOCR? I'm surprised they're claiming SOTA without comparing against Amazon Textract or Azure doc intelligence (LayoutLM v3 under the hood, as far as I know).
I've played around with doc recognition quite a bit, and as far as I can tell those two are best-in-class.
Amazon Textract is not great at multi-column layouts in my experience. DocuPanda or some Azure models beat it. Just my 2 cents.
This comes back to the SLM vs LLM debate (sizes in relative terms), where an SLM can be optimised for a specific task, and out-perform an LLM. But it's not worth it (time, effort) for most tasks unless 1. they are very sensitive to precision or 2. it is ultra-high volume.
Just coming out of founding one of the first LLM fine tuning startups - Lamini - I disagree
Our thesis was that fine tuning would be easier than deep learning for users to adopt because it was starting from a very capable base LLM rather than starting from scratch
However, our main finding with over 20 deployments was that LLM fine tuning is no easier to use than deep learning
The current market situation is that ML engineers who are good enough at deep learning to master fine tuning can found their own AI startup or join Anthropic/OpenAI. They are underpaid building LLM solutions. Expert teams building Claude, GPT, and Qwen will outcompete most users who try fine tuning on their own.
RAG, prompt engineering, inference time compute, agents, memory, and SLMs are much easier to use and go very far for most new solutions
Will Anthropic/OpenAI really hire anyone who can fine-tune an LLM?
They will hire anyone who can produce a model better than GPT-5, which is the bar for fine tuning
Otherwise, you should just use GPT-5
Preparing a few thousand training examples and pressing fine tune can improve the base LLM in a few situations, but it can also make the LLM worse at other tasks in hard-to-understand ways that only show up in production because you didn't build evals that are good enough to catch them. It also has all of the failure modes of deep learning. There is a reason why deep learning training never took off like LLMs did, despite many attempts at building startups around it.
Andrej karpathy has a rant about it that captures some of the failure modes of fine tuning - https://karpathy.github.io/2019/04/25/recipe/
> but it also can make the LLM worse at other tasks
The problem is easily avoided by not using it for other tasks.
Users often found it hard to know exactly where the boundaries are.
This is a reason why general purpose models shine. You don’t have to carefully characterize a task and put guard rails around it.
There is also a reason why you don’t have general purpose applications. Most users understand that Excel is for data tables and Paint is for images even though some people have fun playing with the boundary and creating Excel paintings.
This is exactly the intuition that leads to excitement about fine tuning.
However, I personally think that this intuition applies to products and interfaces, not to AI.
Intelligence and learning are general. Intelligence without generalization is memorization, which seems to be less useful in practice.
What people use are products and interfaces, not "AI".
> They will hire anyone who can produce a model better than GPT-5, which is the bar for fine tuning
Depends on what you want to achieve, of course, but I see fine-tuning at the current point in time primarily as a cost-saving measure: Transfer GPT5-levels of skill onto a smaller model, where inference is then faster/cheaper to run. This of course slows down your innovation cycle, which is why generally this is imo not advisable.
I agree this is the main case where it makes sense.
But a recent trend that cuts into the cost savings is that foundation model companies have started releasing small models. So you can build a use case with Qwen 235B, then shrink down to 30B, or even all the way down to 0.6B if you really want to.
The smaller models lose some accuracy, but some use cases are solvable even by these smaller and much more efficient models.
It’s quite easy to produce a model that’s better than GPT-5 at arbitrarily small tasks. As of right now, GPT-5 can’t classify a dog by breed based on good photos for all but the most common breeds, which is like an AI-101 project.
Try doing a head-to-head comparison using all the LLM tricks available, including prompt engineering, RAG, reasoning, inference-time compute, multiple agents, tools, etc.
Then try the same thing using fine tuning. See which one wins. In ML class we have labeled datasets with dog breeds hand-labeled by experts like Andrej; in real life, users don't have specific, clearly defined, high-quality labeled data like that.
I’d be interested to be proven wrong
I think it is easy for strong ML teams to fall into this trap because they themselves can get fine tuning to work well. Trying to scale it to a broader market is where it fell apart for us.
This is not to say that no one can do it. There were users who produced good models. The problem we had was where to consistently find these users who were willing to pay for infrastructure.
I’m glad we tried it, but I personally think it is beating a dead horse/llama to try it today
There are tons of problems this simply doesn’t apply to. In the limited API world this may be true but agents are far from reliable
I mean, at the point where you’re writing tools to assist it, we are no longer comparing the performance of 2 LLMs. You’re taking a solution that requires a small amount of expertise, and replacing it with another solution that requires more expertise, and costs more. The question is not “can fine tuning alone do better than every other trick in the book plus a SOTA LLM plus infinite time and money?” The question is: “is fine tuning useful?”
Fair, but that didn't seem to matter to users who just wanted to build solutions within a reasonable time and budget
If your customers can't fine tune, do it for them instead.
How can you hire enough people to scale that while making the economics work?
Why would they join you rather than founding their own company?
> How can you hire enough people to scale that while making the economics work?
Once you (as in you, the person) have the expertise, what do you need all the people for, exactly? To fine-tune, you need to figure out the architecture, how to train, how to infer, put together the dataset, and then run the training (optionally set up a pipeline so the customer can run the "add more data -> train" process themselves). What in this process do you need to hire so many people for?
> Why would they join you rather than founding their own company?
Same as always, in any industry, not everyone wants to lead and not everyone wants to follow.
llm.finetune(data) is a leaky abstraction
Read Andrej’s blog that I linked earlier in the thread if you want to understand why.
If it works it works? :shrug:
The problem is that it doesn’t always work and when it does fail it fails silently.
Debugging requires knowing some small detail about your data distribution or how you did gradient clipping, which takes time and painstakingly detailed experiments to uncover.
I think you are saying to go after the very high end of the market.
That’s fair, one market segment of this is sometimes called sovereign compute.
Another common model that I have seen is to become the deepmind for one very large and important customer.
I think this works.
> How can you hire enough people to scale that while making the economics work?
Pick the right customers.
> Why would they join you rather than founding their own company?
The network effects of having enough resources in one place. For having other teams deal with the training data, infrastructure, deployment, etc.
What models did you try to fine tune? Were the models at the time even good enough to fine tune? Did they suffer from catastrophic forgetting?
We have a lot of more capable open source models now. And my guess is that if you designed models specifically for being fine tuned, they could escape many of the last generation pitfalls.
Companies would love to own their own models instead of renting from a company that seeks to replace them.
We used the best models available and went from the Pythia/gpt2 to Deepseek generations.
One annoying part was switching to new and better models that came out literally every week.
I don't think it substantially changes anything. If anything, I think the release of more advanced models like Qwen-Next makes things like FP4, MoE, and reasoning tokens an even higher barrier to entry.
Fine-tuning is a good technique to have in the toolbox, but in reality it is feasible only in some use cases. On one hand, many NLP tasks are already easy enough for LLMs to have near-perfect accuracy, so fine tuning is not needed. On the other hand, really complex tasks are really difficult to fine-tune for, and collecting relevant data might be pretty expensive. Fine-tuning can help with the use cases somewhere in the middle: not too simple, not too complex, feasible for data collection, etc.
>Fine-tuning is a good technique to have in a toolbox, but in reality, it is feasible only in some use cases.
Yes, hundreds of thousands of them
Care to elaborate what are some of those use cases?
What would you say is an example of one of those “middle” tasks it can help with?
An example I just found worked very well with fine-tuning: I wanted to extract any frame that contained a full-screen presentation slide from various videos I've archived, only when it's full-screen, and also not capture videos, and some other constraints.
Naturally I reached for CLIP+ViT which got me a ~60% success rate out of the box. Then based on that, I created a tiny training script that read `dataset/{slide,no_slide}` and trained a new head based on that. After adding ~100 samples of each, the success rate landed at 95% which was good enough to call it done, and circle back to iterate once I have more data.
I ended up with a 2.2 KB "head_weights.safetensors" that increased the accuracy by ~35%, which felt really nice.
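For anyone curious, here is a rough reconstruction of that recipe (frozen CLIP image embeddings plus a tiny trainable head); the checkpoint, file layout, and training loop are my assumptions, not the original code.

```python
# Hedged sketch: freeze CLIP's image encoder, embed each frame, and train a tiny
# linear head on dataset/{slide,no_slide}. Checkpoint and paths are assumptions.
import glob
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
from safetensors.torch import save_file

device = "cuda" if torch.cuda.is_available() else "cpu"
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(paths):
    # Embeds all images in one batch; chunk this for large datasets.
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = proc(images=images, return_tensors="pt").to(device)
    with torch.no_grad():
        return clip.get_image_features(**inputs)   # frozen (N, 512) embeddings

pos, neg = glob.glob("dataset/slide/*.jpg"), glob.glob("dataset/no_slide/*.jpg")
X = torch.cat([embed(pos), embed(neg)])
y = torch.cat([torch.ones(len(pos)), torch.zeros(len(neg))]).to(device)

head = torch.nn.Linear(X.shape[1], 1).to(device)       # the only trainable weights (a few KB)
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)
for _ in range(200):                                   # full-batch training is cheap at this size
    loss = torch.nn.functional.binary_cross_entropy_with_logits(head(X).squeeze(-1), y)
    opt.zero_grad(); loss.backward(); opt.step()

save_file(head.state_dict(), "head_weights.safetensors")  # only the head is saved
```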
Could you use LoRA adapters to free up your context with all the stuff that normally has to go into it? Coding standards and fuzzy preferences like "prefer short names" or "prefer functional style", reference materials, MCP definitions, etc.?
For training data, I was thinking you could just put all the stuff into context, then give it some prompts, and see how the responses differ over the baseline context. You could feed that into the fine tuner either as raw prompt and the output from the full-context model, or as like input="refactor {output from base model}", output="{output from full-context model}".
My understanding is that LoRA are composable, so in theory MCPs could be deployed as LoRA adapters. Then toggling on and off would not require any context changes. You just enable or disable the LoRA adapter in the model itself. Seems like this would help with context poisoning too.
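Mechanically, the toggling part already works today with PEFT; whether an MCP/tool definition can really be baked into a LoRA is the open question. A sketch with hypothetical adapter names and paths, and a placeholder base model:

```python
# Hedged sketch of adapter toggling with Hugging Face PEFT. Adapter names/paths are
# hypothetical; this only shows the switching mechanics, not that an MCP definition
# can actually be distilled into a LoRA.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"           # placeholder base model
base = AutoModelForCausalLM.from_pretrained(base_id)
tok = AutoTokenizer.from_pretrained(base_id)

# Load two separately trained LoRAs, e.g. one per "tool pack" or style guide.
model = PeftModel.from_pretrained(base, "adapters/coding-standards", adapter_name="style")
model.load_adapter("adapters/github-mcp", adapter_name="github_tools")

model.set_adapter("github_tools")           # toggle a capability on: no extra context tokens
out = model.generate(**tok("List my open PRs", return_tensors="pt"), max_new_tokens=64)

with model.disable_adapter():               # toggle everything off, back to the vanilla model
    out = model.generate(**tok("Hello", return_tensors="pt"), max_new_tokens=16)
```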
2026 will be the year of specialized SLMs... enterprises care about IP ownership/control, lower costs, and higher quality, none of which the slow and expensive generic models that were not optimized for their use cases deliver.
Here's a blog post I wrote last week on the same topic: https://blog.oumi.ai/p/small-fine-tuned-models-are-all-you
I discuss a large-scale empirical study of fine-tuning 7B models to outperform GPT-4 called "LoRA Land", and give some arguments in the discussion section making the case for the return of fine-tuning, i.e. what has changed in the past 6 months
insightful, thanks
This website loads at impressive speeds (from Europe)! Rarely seen anything more snappy. Dynamic loading of content as you scroll, small compressed images without looking like it (webp). Well crafted!
Magic of a CDN? Plus avoiding JS probably. Haven't checked source though.
> Finally, companies may have reached the ceiling of what can be achieved with prompting alone. Some want models that know their vocabulary, their tone, their taxonomy, and their compliance rules.
Together with speed and cost, this is, from my point of view, the only "case" for the return of fine-tuning here. And this can be handled by context management.
With growing context sizes, first RAG replaced fine-tuning, and later even RAG was replaced by just good-enough prompt preparation for more and more usage patterns.
Sure, speed and costs are important drivers. But as with FPGAs vs. CPUs or GPUs, the development costs and delivery time for high-performance solutions eliminate the benefit most of the time.
Creator of inference.net / schematron here.
There is growing emphasis on efficiency as more companies adopt and scale with LLMs in their products.
Developers might be fine paying GPT-5-Super-AGI-Thinking-Max prices to use the very best models in Cursor, but (despite what some may think about Silicon Valley) businesses do care about efficiency.
And if you can fine-tune an 8b-parameter Llama model on GPT-5 data in < 48 hours and save $100k/mo, you're going to take that opportunity.
The OpenAI fine-tuning API is pretty good - you need to label an evaluation benchmark anyway to systematically iterate on prompts and context, and it often creates good results if you give it 50-100 examples, either beating frontier models or allowing a far cheaper and faster model to catch up.
It requires no local GPUs, just creating a JSON file and posting it to OpenAI
https://platform.openai.com/docs/guides/model-optimization
They don't offer it for the GPT-5 series; as a result, much of the time fine-tuning Gemini 2.5 Flash is a better deal.
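For reference, the "create a JSON file and post it to OpenAI" workflow mentioned above is roughly this with the official Python SDK; the base model name is illustrative, so pick one the fine-tuning API currently supports.

```python
# Hedged sketch: upload a small JSONL of chat examples, then start a fine-tuning job.
from openai import OpenAI

client = OpenAI()

# train.jsonl: one {"messages": [...]} conversation per line, e.g.
# {"messages": [{"role": "user", "content": "Classify: 'card declined'"},
#               {"role": "assistant", "content": "payments"}]}
upload = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

job = client.fine_tuning.jobs.create(
    training_file=upload.id,
    model="gpt-4o-mini-2024-07-18",   # assumption: a currently fine-tunable base model
)
print(client.fine_tuning.jobs.retrieve(job.id).status)  # poll until "succeeded"
```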
Lots of caveats here in the following statement: if your application is not fully leaning into frontier model capabilities, you are probably building a previous-generation product.
I wrote about this recently as well: https://madiator.substack.com/p/finetuning-is-so-back
Fine tuning was never really hard to do locally if you had the hardware. What I'd like to read in an article like this is more detail on why it's making a comeback.
Curious to hear others’ thoughts on this
Which minimum hardware spec would qualify as making this not really hard to do locally?
And here I am thinking we'd be discussing the teleological argument.
Return? Did it run away?
I don't think anyone thought fine tuning was dead.
There were many comments claiming that from around the end of 2023 up until shortly before GPT-5 launched.
The main claim was that new models were much better than anything you could get your hands on to fine tune.
IMO, intuitively that never made sense. But I never tested it either.
Fine tuning by pretraining over an RL-tuned model is dumb AF. RL task tuning works quite well.
You may have no choice in how the model you are fine tuning was trained, and may have no interest in verticals it was RL tuned for.
In any case, platforms like tinker.ai support both SFT and RL.
Why would you choose a model whose trained-in priors don't match your use case? Also, keep in mind that RL'd-in behavior includes things like reasoning and how to answer questions correctly, so you're literally taking smart models and making them dumber by doing SFT. To top it off, SFT only produces really good results when you have traces that closely model the actual behavior you're trying to get the model to display. If you're just trying to fine-tune in a knowledge base, a well-tuned RAG setup + better prompts wins every time.
Because you need a solution for your problem, the available tools are what they are and nothing else, and you don't have enough resources to train your own model.
For some of us fine-tuning is a constant activity...