This is not only average. This is actual magic.
So let's be real: the SQL is average. The joins are average. The chart is average. And it took us less than 5 minutes, and that was amazing; that is the entire point.
You did not need a data engineer to model your HubSpot data, or a meeting to agree on whether it should be last-click or first-click or linear or time-decay or whatever.
You needed a query, written fast, on data you already own. Your LLM wrote it. You confirmed it made sense. Your manager got a link.
Honestly, average is clearly magic; prove me wrong.
I'll give it a go. This is generated slop, and the poor, factory-made quality of the writing undercuts every aspect of the argument.
Author here; I suppose the... side-eye awkward monkey meme was a bit lost on you; it was written that way on purpose. Funnily enough, everything is slop if you want it to be slop. This, however, was written by my own little hands. Now, I might be a bad writer - that is indeed another subject.
Tbh I don't really agree with your statements. Especially when working with data, intention is key. By using an LLM, by definition, you are losing intention. And that puts you in a position where you have to 1) think of exactly what you are looking for, and 2) be able to understand what the LLM generated.
You might say it's "still less work", and that's true, perhaps, only for the first few times. After a while you _learn_ how to do it, and understand how to _think_ in the language of your data. With LLMs, you never get this benefit, and you also lose your ability to judge the LLM's output properly.
But again, that might be enough in your case - or you simply don't _know_.
> You did not write a single line of SQL. You did not set up an attribution model. You asked a question, in English, and got a table.
But nobody bothered to check if it was correct. It might seem correct, but I've been burned by queries exactly like these many, many times. What can often happen is that you end up with multiplied rows, and the answer isn't "let's just add a DISTINCT somewhere".
The answer is to look at the base table and the joins. You're joining customers to two (implied) one-to-many tables, charges and email_events. If there are multiple charges rows per customer, or an email can match multiple email_events rows, it can lead to a Cartesian multiplication of the rows since any combination of matches from the base table to the joined tables will be included.
If that's the case, the transactions and revenue values are likely to be inflated, and therefore the pretty pictures you passed along to your boss are wrong.
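To make that fan-out concrete, here is a minimal, runnable sketch with a hypothetical schema and made-up numbers (sqlite3 is used only so it can be executed). It shows both the inflated naive join and the usual fix: aggregate each one-to-many table down to one row per key before joining.

```python
import sqlite3

# Hypothetical schema mirroring the example: one customer, two charges,
# and two email_events rows matching the same customer email.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER, email TEXT);
    CREATE TABLE charges (customer_id INTEGER, amount INTEGER);
    CREATE TABLE email_events (email TEXT, event TEXT);

    INSERT INTO customers VALUES (1, 'a@example.com');
    INSERT INTO charges VALUES (1, 100), (1, 50);  -- true revenue: 150
    INSERT INTO email_events VALUES ('a@example.com', 'open'),
                                    ('a@example.com', 'click');
""")

# Naive join: 2 charges x 2 email events = 4 rows for one customer,
# so SUM(amount) counts each charge once per matching email event.
naive = con.execute("""
    SELECT SUM(ch.amount)
    FROM customers c
    JOIN charges ch ON ch.customer_id = c.id
    JOIN email_events e ON e.email = c.email
""").fetchone()[0]
print(naive)  # 300 -- inflated; the 4 rows are all distinct, so DISTINCT won't help

# Fix: collapse each one-to-many table to one row per key, then join.
fixed = con.execute("""
    SELECT SUM(ch.revenue)
    FROM customers c
    JOIN (SELECT customer_id, SUM(amount) AS revenue
          FROM charges GROUP BY customer_id) ch
      ON ch.customer_id = c.id
    JOIN (SELECT email, COUNT(*) AS n_events
          FROM email_events GROUP BY email) e
      ON e.email = c.email
""").fetchone()[0]
print(fixed)  # 150
```

The point of the sketch: each joined table is reduced to the grain of the base table first, so the grand totals stay correct no matter how many event rows exist per email.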
Further reading, and a terrific resource:
https://kb.databasedesignbook.com/posts/sql-joins/#understan...
Ok but… nobody said you didn't have to check either(?)
How do you check if you don't have any other view into the data but SQL and you don't know SQL?
Same way you do today; you trust whoever wrote the query.
I do not sell a wrapper on top of some LLM; you can absolutely write your SQL directly. There is an engine; there are Iceberg tables. You can just live your best life doing your own SQL by hand.
Now, if you couldn't do it before but you have a sensible understanding, you can likely do a bit more with the CLI tooling. And if you know a lot more, you can still do that. The queries are not hidden or abstracted; if you need them, they will be saved - transparently, in SQL.
So I don't know what the answer is to the question "how do people do things they don't know how to do?"
I think the author should be introduced to (or reminded of) the tale of the average from the US Air Force [1]. Social reality is high-dimensional and the "normal" thing is actually to be average in some dimensions, but strongly non-average in many others. So a "perfectly average" family would paradoxically be an outlier themselves.
I think this is important, because if his hypothesis is right, then LLMs behave differently here: they really are average in all dimensions. They are the pilots the Air Force thought they had before Daniels did his study.
So if he is right, we'd be changing from a mostly-non-average to a mostly-average society, which would really be a massive change - and probably not a good one IMO.
[1] https://noblestatman.com/uploads/6/6/7/3/66731677/cockpit.fl...
Wow incredibly interesting read, got me thinking about design principles and the "average user"
If average is all we need, then anyone can do it. What value do I add? How does an employee differentiate themselves?
Why didn’t the boss ask the AI for the charts to begin with?
Everyone’s income is going to be below average, because they got fired.
Not everyone can be average. Half of people will be below average.
I might not agree with the point, but I can see that idea that many things just need to be "good enough" (which we might define as "average") and we save our real expertise for the things that really matter.
> Half of people will be below average.
s/average/median
I don’t believe this is a meaningful distinction when we’re not going to agree on how to judge performance of software engineers. If this were solely about income, it might be an important distinction.
The article assumes a normal distribution, making the distinction moot
But it is useful to question whether that is true in all cases. The cases that aren't normally distributed might be exactly the cases where it pays off to be neither average nor median.
There is a major shortcoming in this assumption: everything we've seen related to the internet and technology in general suggests there is rarely a normal distribution. I think it's way more valuable to frame the question as a long-tail (Pareto) distribution with a "good enough" cut-off point.
It is almost never true. If you filter people you're going to get a Pareto distribution.
Median is a type of average.
Though usually "average" implies arithmetic mean.
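The mean/median quibble in this subthread is easy to make concrete. A small sketch with made-up, heavily skewed numbers, showing that "half of people will be below average" holds for the median but not for the mean:

```python
# Made-up, heavily skewed values (think incomes, or anything Pareto-like):
values = [1, 1, 1, 2, 2, 3, 100]

mean = sum(values) / len(values)           # 110 / 7, roughly 15.7
median = sorted(values)[len(values) // 2]  # middle element of 7: 2

below_mean = sum(v < mean for v in values)      # 6 of 7 fall below the mean
below_median = sum(v < median for v in values)  # only 3 of 7 strictly below

print(mean, median, below_mean, below_median)
```

One outlier drags the arithmetic mean far above the typical value, so almost everyone is "below average" in the mean sense while the half-above/half-below guarantee only ever applied to the median.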
For that matter, how does a business differentiate themselves, if people can write their own software? While we're busy trying to replace our employees with AI, our customers are trying to replace our products with AI.
That isn't a sane starting point; if a corporation's strategy is to only hire above-average employees, they're going to fail. Enron springs to mind. Corporations generally take average people and give them reasonably well-defined scopes of simple work to complete that adds value. The bigger the corporation, the more difficulty they have handling even the standard deviation above average differently from the one below; almost everyone just becomes a human resource to be swapped around based on social factors.
The people who need to be above average and exceptional are senior management and maybe a few bright sparks in middle management. Most of the value-add happens there, building the social machines that then do the work.
> If average is all we need, then anyone can do it.
Pretty much, yes. That is why the range of salaries on offer is pretty compressed compared to the range of returns capitalists get.
> The people who need to be above average and exceptional are senior management and maybe a few bright sparks in middle management. Most of the value-add happens there, building the social machines that then do the work.
That is the dream. Upper management can get software made without talent.
But it seems the greatest ideas of the last 30 years didn't start in boardrooms. They started with a couple of coders creating a new idea.
No boardroom could have invented Google. It was so fundamentally different than what other search engines were doing.
We have this myth that upper management is so important. As the business grows in size, they are excellent for coordination. But ideas come from people closer to the problems.
At any tech company with leveling guidelines that I have seen, promotions above mid level have never been based on “I codez real gud”. It’s always been based on scope, impact and dealing with ambiguity. It’s stated differently in different companies.
No one has ever differentiated themselves based on how good of a ticket-taker they are. Coding, especially on the enterprise dev side where most developers work, has been getting commoditized since at least 2016, and compensation has stagnated since then and hasn't come near keeping up with inflation.
In 2016, a good solid full stack, mobile or web developer working in the enterprise could make $135K working in a second tier city. That’s $185K inflation adjusted today. Those same companies aren’t paying $185K for the same position.
My one anecdote: the same company I worked for back then (where I made $125K and some of my coworkers made $135K) just posted a position on LinkedIn with the same requirements (SQL Server + C#), offering $145K fully remote.
> At any tech company with leveling guidelines that I have seen, promotions above mid level have never been based on “I codez real gud”. It’s always been based on scope, impact and dealing with ambiguity. It’s stated differently in different companies.
I 100% agree here.
AI has been a huge boon for me personally, because I stopped spending most of my time writing code years ago. I was reviewing code, writing procedures, handling incidents, and generally just looking for pain points across the entire company and solving them before they became critical.
Those skills have transferred directly to working with AI.
The power saw makes average cuts, it didn't disemploy carpenters, we just made better homes.
We make more homes, but I would say the construction of the average home is worse after the invention of the power saw than before it.
Good gosh no.
That's like saying 'cars were better made in the 1950s because they used tons of steel'. Like they were 'heavier and more robust' - but that doesn't mean better.
Foundations are way better, more robust, especially weatherized. Windows today are like magic compared to windows 100 years ago.
What we do more poorly now is that we don't use solid wood everywhere (doors, for example), and certain kinds of workmanship are gone - like winding staircases and mouldings - but you can easily have those if you want to pay for them. That's a choice.
AI is power and leverage, it will make better things as long as it's directed by skilled operators.
Yes, houses got better because materials got better. Windows are better. But the construction of the houses is worse.
The precision with which the wood or other materials meet is worse (when cut at the site). There is a huge amount of sloppy work in modern construction.
I'm interested in how one would prove that one way or another.
It seems to me that in the past there probably was lots of shoddy workmanship and just no-one paid attention to it.
But I have no proof of that.
The average of quality isn't always available in all people.
Reducing the amount of time I spend on the average code has meant I'm spending more time adding my above-average contributions to the code base. Amdahl's law, basically. Reducing the amount of time spent on one task means the percentage of time spent on the others increases.
How stable that is on the long term, I don't know any more than the next guy, but it is where I'm contributing now.
This says "Editorial" at the top but has no authorship information. Who wrote this?
Nobody wrote this.
Average is only a tombstone of someone having failed to do better. And settling for average means pulling down.
When it comes to BS dashboards where "average is all you need", maybe the "better than average" move would be asking yourself whether it's even worth doing in the first place?
Why average? I've always taken pride in my work and developed things that went beyond the expectations of management and of the final users. Now I'm using LLMs a lot, and I've been able to do much more than I used to. I find them great coworkers: technically very knowledgeable, patient, and fast. I provide the big picture, keep an eye on architectural soundness and code quality, and design the features. The LLM does the rest. The results are way above average.
Nobody cares
how do you know those queries are actually correct without domain knowledge?
Do you know enough about JOINs and how they work to be able to break those big queries down and figure out whether they are doing exactly what you're asking for in English?
You don't, and if businesses start using vibed reports for regulated reporting then I guess we'll see soon what the courts say about that
Litigation aside for a moment - I'm not sure vibe-coded reporting could be much worse than what I've seen from early-career analysts in past companies.
You don’t. But you can still check ? ¯\_(ツ)_/¯
Average is all you need, if your needs are average.
But, you see, our needs are above average because we target above-average exits, so we only hire from the top 1% of software engineers, blah, blah, yadda, yadda, etc.
The Business simply cannot admit that it’s really doing nothing above average. If they did, investment dries up.
That is correct. And if you need more you can get it as well.
I liken it to the Ikeafication of furniture. To a great majority, such as my college self, it was preferable and desirable. As I've made more money, I've wanted something better.
There's a market for both, but the furniture slop of Ikea is dominant.
This seems like a nice context to mention Sturgeon's law:
> ninety percent of everything is crud
https://en.wikipedia.org/wiki/Sturgeon%27s_law
> But this is a pain, first because, if you do anything that is not selling a product online that people can buy right when they click a button, it is a drag to create those attribution models effectively: is it last click, first click, weighted attribution... who knows. Nobody knows. Everybody gives up and just adds it to a dashboard and pretends it makes sense.
Yes, thinking about your data and how to check it is so annoying. Much better to do something average, see if the result puts you in a good light, and share that insight into your company's working with ~~everyone on the internet~~ your boss.
Rarely have I seen "we help you create meaningless slop more easily" advertised so explicitly. Or is this also average?
I always find it a bit weird to see posts on the front page where all the comments disagree with the central premise of the article. In this case the post is an ad advocating for executing code you didn't write and handing the results to your manager.
It makes me wonder if Hacker News has a silent majority of people who would actually use AI in this way without wanting to admit it, and a vocal minority of people who wouldn't.
I'll admit that there are definitely times where I decide it's fine to roll with it blind. It's not often, not for critical paths, and definitely not where I don't have a good understanding of the blast radius if it fails spectacularly - but you'd be surprised how often it's easier and faster to fix it if it breaks than it would be to make sure it's not broken.
Being average is just a stage LLMs pass through as AI makes its way towards 'expert' and 'superhuman' levels.
LLMs are trained to predict tokens on highly mediocre code, though. How will they exceed their training data?
Probably the same way other models learned to surpass human ability while being bootstrapped from human-level data - using reinforcement learning.
The question is, do we have good enough feedback loops for that, and if not, are we going to find them? I would bet they will be found for a lot of use cases.
Because you ask it to improve things, and so it produces slightly better-than-average results - the average person can find things wrong with something and fix them, too. Then you feed those improved results back in and train a model whose average is better.
/end extreme over optimism.
Humans can decide to write above-average code by putting in more effort, writing comprehensive tests, iteratively refactoring, profile-informed optimization, etc.
I think you can have LLMs do that too, and then generate synthetic training data for "high-effort code".
Well, state-of-the-art LLMs sure can't consistently produce high-quality code outside of small greenfield projects or tiny demos - a domain that was always easy even for humans, since there are very few constraints to consider and the context is very small.
Part of the problem is that better code is almost always less code. Where a skilled programmer will introduce a surgical 1-3 LOC diff, an incompetent programmer will introduce 100 LOC. So you'll almost always have a case where the bad code outnumbers the good.
Current LLMs do tend to explode complexity if left to their own devices but I don't think that's an inherent limitation. Mediocre programmers can write good code if they try hard enough and spend enough time on it.
That's because humans have "understanding" they can use to assess quality. Without understanding, "trying harder" just means spending more "effort" distilling an average result, at best over a larger sample size.
Who are you to question our faith? /s
The majority of devs are average. What a shocker.
The majority of any filtered group are below average. Imposter syndrome isn't a thing, 80% of people really did just barely make the cutoff.
This tracks. Tasks that used to be a day or two of grunt work are now an hour with Claude.
And there is a lot of that type of work to do if you're trying to grow a business. But, something in there should be trying to be exceptional or else you have no moat. Claude will probably not be able to breeze through that part with the same amount of ease...
This is yet another ad, it's tiring.
It's a post claiming average AI is useful... by a for-profit "data platform with a CLI that LLM agents can use directly". What are they going to do? Criticize the whole industry they are selling to?
I did not post it. I did not intend for it to be posted here - really. It just happened that someone saw it and posted it. So I did not advertise anything :-)
Yes. Most people are upset and fear losing their job because they feel their job is sub-par. In reality, for most of them that's impostor syndrome; for some it could be a wake-up call.
adding LLMs to the incompetent doesn’t transform them
if anything it makes the world more dangerous
a reckoning is coming
the top decile will be janitors for the rest
This is all fun and games when you work with toy data samples. But most organizations are more complex, they have to match invoices from SAP with opportunities in Hubspot; or they have to consider that little sales territory exception for the sales guy in Munich to calculate the proper commission projection; or they have custom tables in Salesforce with 0 documentation; or... you get my point.
Not all context is documented, and some context has to even be changed because it doesn't make sense.
I find AI very useful, but I think a lot of these AI SQL products are misleading.
Average is all we need! I mean, working 50% of the time is enough, right?
A car that starts 50% of the time?
A plane that stops on 50% of its flights?
A pacemaker that beats only 50% of the time?
David Goodenough said that average is enough...
A car that starts 50% of the time isn't "average". The average new car starts more or less every time. (And if you said 'modal average', I'd say the modal average new car starts every time).
It is not average today because people in the past tried to do better, not average things
I think that's maybe the point of the article:
"Whereas before, average was expensive in terms of both time and effort, average became cheap."
It is like nails on a chalkboard.
Another writer trying to redefine a common English word to mean whatever they want it to mean at the time.
Pass.