> Neural networks excel at judgment
I don’t think they do. I think they excel at outputting echoes of their training data that best fit (rhyme with, contextually) the prompt they were given. If you try using Claude with an obscure language or use case, you will notice that effect even more - it will keep pulling towards things it knows that aren’t at all what’s asked or “the best judgement” for what’s needed.
> I think they excel at outputting echoes of their training data that best fit (rhyme with, contextually) the prompt they were given.
Just like people who get degrees in economics or engineering and engage in such role-play for decades. They're often pretty bad at anything they are not trained on.
Coincidentally, if you put a single American English speaker on a team of native German language speakers you will notice information transference falls apart.
Very normal physical reality things occurring in two substrates, two mediums. As if there is a shared limitation called the rest of the universe attempting to erode our efforts via entropy.
An LLM is a distribution over human-generated data sets. Since humans have the same incompleteness problems in society, this affords enough statistical wiggle room for LLMs to make shit up; humans do it! Look in their data!
We're massively underestimating reality's indifference to human existence.
There is no doing any better until we effectively break physics; by that I really mean coming upon a game-changing discovery that informs us we had physics all wrong to begin with.
The fact there are a lot of people around who don't think (including me at times!) doesn't mean LLMs doing that are thinking.
Much like LLMs writing text like mindless middle managers: it doesn't mean they're intelligent, more that mindless middle managers aren't.
Hear, hear. Code uniquely has an incredible volume of data. And incredibly good ways to assess & test its weights, to immediately find out if it's headed the right way on the gradient.
> And incredibly good ways to assess & test its weights
What weights are you referring to? How does [Claude?] code do that?
Neural nets have been better at classifying handwriting (MNIST) than the best humans for a long time. This is what the author means by judgement.
They are super-human in their ability to classify.
Classifiers and LLMs get very different training and objectives, it's a mistake to draw inference from MNIST for coding agents or LLMs more generally.
Even within coding, their capability varies widely between contexts, and even between runs with the same context. They are not better at judgement in coding for all cases, definitely not.
A lot of the context is not even explicit, unlike the case for toy problems like MNIST.
Tell that to all the OCR fuckups I see in all the ebooks I read.
Your ebooks are made with handwriting recognition...? What do you read, the digital version of the Dead Sea Scrolls?
Some of them are; most of them are standard typesetting, which you would think would be all the easier to OCR, due to the uniformity.
But because you're curious, there are some fairly famous handwritten books that maintain their handwriting in publication, my favorite being: https://boingboing.net/2020/08/31/getting-started-in-electro...
Old manuscripts are another one, and there are a LOT of those. Is that handwriting? Maybe you'd argue it's "hand-printing" because it's so meticulous.
Lost me at the claim that AI is good at judgement-making; this is the exact opposite of my experience. They reliably make both good and bad decisions.
I think that's also true of people but we are kinder to each other and ourselves when judgement is bad.
How many times have you been in a conversation where you asked the wrong question or stated the wrong thing because you either weren't 100% listening (no one is), or you forgot, or you didn't connect the same dots that others did?
Treating humans differently makes sense because the "badness" of a judgement isn't just the correctness of an outcome, but also the nature of the process that created it, and humans are a different process.
For example, if two otherwise-identical humans yield the same equally-correct answer, we probably will favor the one that reached it through facts and reasoning, as opposed to the one who literally flipped a coin.
> I think that's also true of people
Reductionist positions seem to always pop up in these threads.
I think it makes better decisions than me provided I give it enough high-level direction and context.
Sometimes I give it __too much__ direction and it finds the solution I had in mind but not the best.
I'm not into it enough that I'm formally running different personas against each other in a co-operative system but I kind of informally do that.
The type of decision very much matters; coding is one thing. I met a chap at the bar whose crazy theories ChatGPT had verified, and he now outsourced all of his major life decisions to it, very proud and enthusiastic about it all. First IRL case of AI psychosis I have encountered. He was keen for my thoughts, as though I was the first person he'd met IRL who knew more than a layman about AI. Hope the questions (contradictions) I left him with helped bring him back a bit.
It's going to get a lot worse
In other words, a higher-level JIT compiler, meaning it still dynamically generates code based on runtime observations, but the code is in a higher-level language than assembly, and the observations are of a higher-level context than just runtime data types.
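A minimal sketch of that analogy, assuming a hypothetical agent call (none of these names are a real API): the "runtime observations" are bug reports and logs rather than type profiles, and the "emitted code" is ordinary source rather than machine code.

```python
# Hypothetical sketch of a "higher-level JIT" loop: observe the running
# system, regenerate higher-level code, swap it in. All names are made up.

def collect_runtime_observations() -> list[str]:
    # Stand-in for gathering bug reports, failing traces, telemetry, etc.
    return ["timeout when the upstream portal returns HTTP 429"]

def generate_patch(observations: list[str]) -> str:
    # Stand-in for a coding-agent call that turns observations into a diff.
    return "...unified diff produced by the agent..."

def apply_and_redeploy(patch: str) -> None:
    # Stand-in for review, CI and deploy; the "install recompiled code" step.
    print("deploying:", patch)

def jit_step() -> None:
    observations = collect_runtime_observations()  # profile the live system
    patch = generate_patch(observations)           # "compile" to source code
    apply_and_redeploy(patch)                      # swap in the new version

if __name__ == "__main__":
    jit_step()
```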
I agree this is what the article says, but it's a pretty bad premise. That would only be the case if the primary user interaction with coding agents was "feed in requirements, get a finished product". But we all know it's a more iterative process than that.
Author here
We are building this at docflowlabs, i.e. a self-healing system that can respond to customer feedback automatically. And you're right that not all customers know what they want or even how to express it when they do, which is why the agent loop we have facing them is way more discovery-focused than the internal one.
And we currently still have humans in the loop for everything (for now!) - e.g., the agent does not move on to implementation until the root cause has been approved.
Cool, I tried something similar over a couple weeks but the problem I ran into was that beyond a fairly low level of complexity, the English spec became more confusing than the code itself. Even for a simple multi-step KYC workflow, it got very convoluted and hard to make it precise, whereas in code it's a couple loops and if/else blocks with no possibility of misinterpretation. Have you encountered that at all, or have any techniques you've found useful in these situations?
That's why I feel like iterative workflows have won out so far. Each step gets you x% closer, so you close in on your goal exponentially, whereas the one-shot approach closes in much slower, and each iteration starts from scratch. The advantage is that then you have a spec for the whole system, though you can also just generate that from the code if you write the code first.
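To put rough numbers on "closes in exponentially" (the 30% per step is just an illustrative figure, not a measurement):

```python
# If each iteration closes x = 30% of the remaining gap, the fraction of the
# gap left after n rounds is (1 - x) ** n.
x = 0.30
for n in (1, 3, 5, 10):
    print(n, round((1 - x) ** n, 3))   # 0.7, 0.343, 0.168, 0.028
```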
that's right, and agents turning specs into software can go in all sorts of directions especially when we don't control the input.
what we've done to mitigate is essentially backing every entrypoint (customer comment, internal ticket, etc) with a remote claude code session with persistent memory - that session essentially becomes the expert in the case. And we've developed checkpoints that work from experience (e.g. the root cause one) where a human has the opportunity to take over the wheel so to speak and drive in a different direction with all the context/history up to that point.
basically, we are creating an assembly line where agents do most of the work and humans do less and less as we continue to optimize the different parts of the assembly line
as far as techniques go, it's all boring engineering:
* Temporal workflow for managing the lifecycle of a session (a rough sketch follows this list)
* complete ownership of the data model e2e. we don't use Linear, for example; we built our own ticketing system so we could represent Temporal signals, GitHub webhooks and events from the remote Claude sessions exactly how we wanted
* incremental automation gains over and over again. We do a lot of the work manually first (like old-fashioned hand coding lol) before trying to automate, so we become experts in that piece of the assembly line and it becomes obvious how to incrementally automate... rinse and repeat
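To make the first bullet concrete, here is a rough sketch of what a session-lifecycle workflow with a root-cause approval checkpoint could look like in the Temporal Python SDK; activity names and timeouts are illustrative, not our actual implementation.

```python
# Illustrative only: a session-lifecycle workflow with a human checkpoint,
# written against the Temporal Python SDK (temporalio). Activity names and
# timeouts are invented for the example.
from datetime import timedelta
from temporalio import workflow

@workflow.defn
class CaseSessionWorkflow:
    def __init__(self) -> None:
        self._root_cause_approved = False

    @workflow.run
    async def run(self, ticket_id: str) -> str:
        # Investigation phase: a (hypothetical) activity drives the remote
        # agent session and returns its root-cause analysis.
        root_cause = await workflow.execute_activity(
            "investigate_root_cause",
            ticket_id,
            start_to_close_timeout=timedelta(hours=1),
        )
        # Checkpoint: block here until a human signals approval.
        await workflow.wait_condition(lambda: self._root_cause_approved)
        # Only then move on to implementation.
        return await workflow.execute_activity(
            "implement_fix",
            root_cause,
            start_to_close_timeout=timedelta(hours=4),
        )

    @workflow.signal
    def approve_root_cause(self) -> None:
        self._root_cause_approved = True
```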
Author here
We are building this learned software system at Docflow Labs to solve the integration problem in healthcare at scale, i.e. systems only able to chat with other systems via web portals. RPA is historically awful to build and maintain, so we've needed to build this to stay above water. Happy to answer any questions!
> Code is the policy, deployment is the episode, and the bug report is the reward signal
This is a great quote. I think it makes a ton of sense to view a sufficiently-cheap-and-automated agentic SWE system as a machine learning system rather than traditional coding.
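One way to make the mapping concrete (a toy sketch of the analogy, not a claim about how any real system is trained):

```python
# Toy illustration of the analogy: the policy is the code itself, each
# deployment window is an episode, and bug reports act as (negative) reward.

def deploy_and_observe(code: str) -> tuple[float, list[str]]:
    # "Episode": run the code in production for a while and collect feedback.
    bug_reports = ["crash on empty input"]      # stand-in for real telemetry
    return -float(len(bug_reports)), bug_reports

def update_policy(code: str, reward: float, reports: list[str]) -> str:
    # "Policy improvement": a coding agent revises the code given feedback.
    return code + f"\n# revised after reward={reward}, {len(reports)} report(s)"

code = "def handler(x): ..."                    # the policy
for release in range(3):                        # three episodes
    reward, reports = deploy_and_observe(code)
    code = update_policy(code, reward, reports)
```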
* Perhaps the key to transparent/interpretable ML is to just replace the ML model with AI-coded traditional software and decision trees. This way it's still fully autonomously trained but you can easily look at the code to see what is going on. (A rough sketch follows this list.)
* I also wonder whether you can use fully-automated agentic SWE/data science in adversarial use-cases where you traditionally have to use ML, such as online moderation. You could set a clear goal to cut down on any undesired content while minimizing false-positives, and the agent would be able to create a self-updating implementation that dynamically responds to adversarial changes. I'm most familiar with video game anti-cheat where I think something like this is very likely possible.
* Perhaps you can use a fully-automated SWE loop, constrained in some way, to develop game enemies and AI opponents which currently requires gruesome amounts of manual work to implement. Those are typically too complex to tackle using traditional ML and you can't naively use RL because the enemies are supposed to be immersive rather than being the best at playing the game by gaming the mechanics. Maybe with a player controller SDK and enough instructions (and live player feedback?), you can get an agent to make a programmatic game AI for you and automatically refine it to be better.
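To illustrate that first bullet: the "trained" artifact could be ordinary, auditable code that an agent rewrites against labeled feedback. Everything below (features, thresholds, rules) is invented for the example.

```python
# Hypothetical stand-in for an opaque moderation model: plain rules whose
# reasons are readable, and where "retraining" means an agent edits the rules.

def flag_for_review(msg: dict) -> tuple[bool, str]:
    """Return (flagged?, human-readable reason) for a chat message."""
    if msg["links"] >= 3 and msg["account_age_days"] < 2:
        return True, "many links from a brand-new account"
    if msg["caps_ratio"] > 0.8 and msg["length"] > 40:
        return True, "long all-caps message"
    if msg["repeats_last_message"] and msg["sent_in_last_minute"] >= 5:
        return True, "rapid repeated messages"
    return False, "no rule matched"

example = {"links": 4, "account_age_days": 1, "caps_ratio": 0.2,
           "length": 30, "repeats_last_message": False,
           "sent_in_last_minute": 1}
print(flag_for_review(example))  # (True, 'many links from a brand-new account')
```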
> Perhaps the key to transparent/interpretable ML is to just replace the ML model with AI-coded traditional software and decision trees. This way it's still fully autonomously trained but you can easily look at the code to see what is going on.
Just yesterday I came across something [0] a sci-fi webcomic author wrote as backstory back in ~2017, where all future AI has auditable logic-chains, due to a disaster in 2061 involving an American AI defense system.
While the overall concept of "turns on its creators" is not new, I still found the "root cause" darkly amusing:
> [...] until the millisecond that Gordon Smith put his hand on a Bible and swore to defend the constitution.
> Thus, when the POTUS changed from Vanderbilt to Smith, a switch flipped. TIARA [Threat Intel Analysis and Response Algorithm] was now aware of an individual with 1) a common surname, 2) a lot of money and resources, 3) the allegiance of thousands of armed soldiers, 4) many alternate aliases (like "POTUS"), 5) frequent travel, 6) bases of operation around the world, 7) mentioned frequently in terrorist chatter, etc, etc, etc.
> And yes, of course, when TIARA launches a drone strike, it notifies a human operator, who can immediately countermand it. This is, unfortunately, not useful when the drone strike mission has a travel time of zero seconds.
> Thousands of intelligent weapons, finding themselves right on top of a known terrorist's assets, immediately did their job and detonated. In less than fifteen minutes, over ten thousand people lost their lives, and the damage was estimated in the trillions of dollars.
[0] https://forwardcomic.com/archive.php?num=200
> Perhaps the key to transparent/interpretable ML is to just replace the ML model with AI-coded traditional software and decision trees
I like this train of thought. Research shows that decision trees are equivalent to 1-bit model weights + larger model.
But critically, we only know some classes of problems that are effectively solved by this approach.
So, I guess we are stuck waiting for new science to see what works here. I suspect we will see a lot more work on these topics after we hit some hard LLM scalability limits.
> Perhaps the key to transparent/interpretable ML is to just replace the ML model with AI-coded traditional software and decision trees. This way it's still fully autonomously trained but you can easily look at the code to see what is going on.
For certain problems I think that's completely right. We are still not going to want that, of course, for classic ML domains like vision and now coding, etc. But for those domains where a software substrate is appropriate, software has a huge interpretability and operability advantage over ML.
> We still are not going to want that of course for classic ML domains like vision
It could make sense to decompose one large opaque model into code with decision trees calling out to smaller models having very specific purposes. This is more or less science fiction right now, 'mixture of experts' notwithstanding.
You could potentially get a Turing award by making this work for real ;)
woah that would be crazy