> Then, within twenty minutes, we started ignoring the cows. … Cows, after you've seen them for a while, are boring
Skill issue. I've been looking at cows for 40 years and am still enchanted by them. Maybe it helps that I think of cows as animals instead of storybook illustrations; you'd get lynched if you claimed you got bored of your pet cat after 20 minutes.
Ghibli images are not "cows"; they're /an artist's style/, from a particular shop that has expressly asked that you *not copy their work*, because it cheapens what humans do.
This is Allan Schnaiberg's concept of the treadmill of production, where actors are perpetually driven to accumulate capital and expand the market in an effort to maintain relative economic and social position.
It is interesting that radical abundance may create radical competition to utilize the more abundant materials, again in an effort to maintain relative economic and social position.
Do people really try to one-shot their AI tasks? I have just started using AI to code, and I found the process very similar to regular coding… you give a detailed task, then you iterate by finding specific issues and giving the AI detailed instructions on how to fix the issues.
It works great, but I can’t imagine skipping the refinement process.
> Do people really try to one-shot their AI tasks?
Yes. I almost always end with "Do not generate any code unless it can help in our discussions, as this is the design stage." I would say 95% of my code for https://github.com/gitsense/chat in the last 6 months was AI generated, and 80% of that was one-shots.
It is important to note that I can easily get into 30+ messages of back and forth before any code is generated. For complex tasks, I will literally spend an hour or two (which can span days) chatting and thinking about a problem with the LLM, and I do expect the LLM to one-shot it.
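To make that concrete, here is a minimal sketch of the loop, assuming the OpenAI Python client; the model name, prompts, and helper are illustrative placeholders, not my actual setup:

    # Minimal sketch of a design-first chat loop (placeholders throughout).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    DESIGN_GUARD = ("Do not generate any code unless it can help in our "
                    "discussions, as this is the design stage.")

    messages = []

    def chat(text):
        """One design-stage turn: append the guard, send the history, record the reply."""
        messages.append({"role": "user", "content": text + "\n\n" + DESIGN_GUARD})
        resp = client.chat.completions.create(model="gpt-4o", messages=messages)
        reply = resp.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        return reply

    # The 30+ messages of back and forth happen here, over hours or days...
    print(chat("Let's think through how chat history should be stored."))

    # ...and only once the design is settled do you ask for the one-shot.
    messages.append({"role": "user",
                     "content": "The design is settled. Generate the full implementation."})
    final = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(final.choices[0].message.content)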
Every tool I've tinkered with that hints at one-shotting (or one-shot and then refine) ends up with a messy app that might be 60-70% of what you're looking for, but since the foundation is not solid, you're never going to get the remaining 30-40% of your initial prompt, let alone the multiples of work needed to bolt on future functionality.
Compare that to the approach you're using (which is what I'm also doing), and you're able to have AI stay much closer to what you're looking for, be less prone to damaging hallucinations, and guide it to a foundation that's stable. The downside is that it's a lot more work. You might multiply your productivity by some single digit.
To me, that second approach is much more reasonable than trying to 100x your productivity and actually getting less done, because you end up stuck in a rabbit hole you don't know you're in and will never refine your way out of.
I got stuck in that rabbit hole you mention. I ended up ditching AI and just picked up a no/low-code web app builder, because I don't hold large project contexts in my own head well enough to chunk the design into tasks that AI can handle. But the builder I use can separate the backend from the frontend, which allows a custom frontend template's source code to be consumed by an AI agent if you want. I'm hoping I can manage this context better, but I still have to design and deploy a module that consumes user-submitted photos and processes them with an AI model for instant quote generation.
If we give runners motorcycles, they reach finish lines faster. But motorsport is still competitive and takes effort; everyone else has a bike, too. And since the bike parameters are tightly controlled (basically everyone is on the same bike), the competition is intense.
The cost of losing the race is losing your home and starving. Very intense.
To win big financially you have to be able to use AI better than others. Even if you use it merely as well as the next person, your productivity has increased, reducing costs, which is a good thing. The bad news for some is that they are not enjoying the parts of the work left over from automation.
I don't see how that can be. There is no exponential return on "investing" in using AI real good.
Investing in your understanding and skill, on the other hand, has nearly limitless returns.
I did not speak of "exponential" returns, but it is now feasible for one person to compete with a team, or a small team with a big one, because teams carry co-ordination costs and the difficulty of assembling the right people.
> Generative AI gives us incredible first drafts to work with, but few people want to put in the additional effort it requires to make work that people love
and
> So make your stuff stand out. It doesn't have to be "better." It just has to be different.
equals... craft?
Isn't that what has always mattered a great deal?
Out of curiosity, isn't this very similar to Jevons paradox? Or is JP talking about supply/demand vs this being about competitiveness/skill?
With my current project (a game project), I full-vibed as hard as I could to test out the concept, as well as get some of the data files in place and write a tool for managing the data. This went great, and I have made technology choices for AI-coding and have gained enough skill with AI-coding that I can get prettttty far this way. But it does produce a ball-of-mud pattern and a lot of cruft that will cause it to hit a brick wall.
Then I copied the tool and data to a new directory and fully started over, with a more concrete description of the product I wanted in place and a better view of what components I would want. I began with a plan to implement one small component at a time, each with its own test screen (roughly the shape sketched below), reviewing every change and not allowing any slop through (including any features that look fine from a code standpoint but are not needed for the product).
So far I'm quite happy with this.
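For the curious, each component ends up shaped roughly like this; a minimal, purely illustrative Python sketch (the HealthBar component and the terminal "test screen" are made-up stand-ins for whatever the real product needs):

    # One small, isolated component with its own test screen (illustrative only).

    class HealthBar:
        """A single game component that knows nothing about the rest of the game."""

        def __init__(self, maximum):
            self.maximum = maximum
            self.current = maximum

        def damage(self, amount):
            self.current = max(0, self.current - amount)

        def render(self):
            filled = round(10 * self.current / self.maximum)
            return "[" + "#" * filled + "-" * (10 - filled) + "] %d/%d" % (self.current, self.maximum)

    if __name__ == "__main__":
        # The component's own test screen: exercise it in isolation,
        # review the output, and only then wire it into the product.
        bar = HealthBar(maximum=50)
        for hit in (5, 20, 40):
            bar.damage(hit)
            print(bar.render())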
Where does the product description sit in your project so the AI can reference it? Is it like a summary that describes what the project basically should do or be used for? Asking for a friend.
My prediction is that the next differentiator will be response time.
First we got transparent UIs, now everyone has them. Then we got custom icons, then Font Awesome commoditized them. Then flat UI until everyone copied it. Then those weird hand-painted Lottie illustrations, and now thanks to Gen-AI everyone has them. (Then Apple launched their 2nd gen transparent UI.)
But the one thing that neither caffeinated undergrads nor LLMs can pull off is making software efficient. That's why software that responds quickly to user input will feel magical and stand out in a sea of slow and bloated AI slop.
This seems like an insubstantial article; ironically, it might have been written by AI. Here's the entire summary:
AI makes slop
Therefore, spend more time to make the slop "better" or "different"
[No, they do not define what counts as "better" or "different"]
People dislike the word slop because it sounds harsh.
But what’s unique today becomes slop tomorrow, AI or not.
Art has meaning. Old buildings feel special because they're rare. If there were a thousand Golden Gate Bridges, the first wouldn't stand out as much.
Online, reproduction is trivial. With AI, reproducing items in the physical world will get cheaper.
I've been thinking something similar about any company that has AI do all its software dev.
Where's your moat? If you can create the software with prompts so can your competitors.
Attackers who know which model(s) you use could also run similar prompts and inspect the output code, to speculate about what kinds of exploits your software might have.
A lawyer knowing what model his opposition uses could speculate on their likely strategies.
The set of commercially successful software that could not be reimplemented by a determined team of caffeinated undergrads was already very small before LLM assistance.
Turns out being able to write the software is not the only, or even the most important factor in success.
I’d suggest reading about competitive moats and where they come from. The ability to replicate another’s software does not destroy their moat.
This article says that the stairs have been turned into an escalator. But I think it’s an escalator to slop.
Therefore, it doesn’t affect my work at all. The only thing that affects my prospects is the hype about AI.
Be a purple cow, the guy says. Seems to me that not using AI makes me a purple cow.
> Therefore, it doesn’t affect my work at all.
But that isn't what the author is talking about. The issue is, your good code can be equal to slop that works. What the author says needs to happen is that you need to find a better way to stand out. I suspect for many businesses where software superiority is not a core requirement, slop that works will be treated the same as non-slop code.
> slop that works
Until that slop that works leads to a Therac-26 or PostOfficeScandal2: Electric Boogaloo. Neither of those applications required software superior to their competitors', just working software.
The average quality of software can only trend down so far before real-world problems start manifesting, even outside of businesses with a hard requirement on "software superiority".
Anyone can say that something works. Lots of things look like they work even though they harbor severe and elusive bugs.
It's so bizarre to me seeing these comments as a professional software engineer. Like, you do realize that at least 80% of the code written in large companies like Microsoft, Amazon, etc. was slop long before AI was ever invented, right?
The stuff you get to see in open source, papers, and academia is a very small, curated 1%. The rest is the glue code, written by an overworked engineer at 1am, that holds literally everything together.
You are focusing on code. That is the wrong focus. Creating code was never the job. The job was being trustworthy about what I deliver and how.
AI is not worthy of trust, and the sort of reasonable people I want to deal with won't trust it and don't. They deal with me because I am not a simulation of someone who cares; I am the real thing. I am a purple cow in terms of personal credibility and responsibility.
To the degree that the application of AI is useful to me without putting my credibility at risk, I will use it. It does have its uses.
(BTW, although I write code as part of my work, I stopped being a full-time coder in my teens. I am a tester, testing consultant, expert witness, and trainer now.)
> This is the leverage paradox. New technologies give us greater leverage to do more tasks better. But because this leverage is usually introduced into competitive environments, the result is that we end up having to work just as hard as before (if not harder) to remain competitive and keep up with the joneses.
Off-topic, but in biology circles I've heard this type of situation (where "it takes all the running you can do, to keep in the same place" because your competitors are constantly improving as well) called a "Red Queen's race" and really like the picture that analogy paints.
https://en.wikipedia.org/wiki/Red_Queen%27s_race
This circumstance is more commonly known as the Jevons Paradox
https://en.wikipedia.org/wiki/Jevons_paradox
Also known as induced demand, and why adding a lane on the highway doesn’t help for long
https://en.wikipedia.org/wiki/Induced_demand
I feel that I understand the leverage paradox concept, and the induced demand concept, but I don't understand how they are the same concept. Can you explain the connection a little more?
More leverage = more productivity = more supply of goods and services.
The induced demand for more goods and services then fills the gap, and causes people to work just as hard as before -- similarly to how a highway remains full after adding a lane.
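If it helps, here is a toy model with made-up numbers; it simply encodes the assumption that induced demand absorbs whatever the extra leverage produces, so hours worked stay put:

    # Toy model: leverage raises supply, induced demand fills the gap,
    # and the hours needed to meet demand never actually drop.
    hours = 40                # hours worked per week
    output_per_hour = 1.0     # widgets per hour before the new tool

    for leverage in (1.0, 2.0, 5.0):   # 1x = no tool, then better tools
        supply = hours * output_per_hour * leverage
        demand = supply                # induced demand fills the gap
        hours_needed = demand / (output_per_hour * leverage)
        print("%.0fx leverage: %.0f widgets, still %.0f hours/week"
              % (leverage, supply, hours_needed))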
TL;DR: relative status is zero-sum.