The Zeigarnik effect point is underrated here. I've caught myself lying in bed running prompts in my head after a long session with Claude Code. The fix that's worked for me is treating each session like a pomodoro — hard stop, write down where I left off, walk away. The loop is real.
The write-down-where-you-left-off trick is underrated. I think it works because it gives the brain a kind of "closed" signal: the open loop that Zeigarnik describes gets a placeholder, so the mind stops simulating. A physical notebook works better than a digital note for this, at least for me. Something about closing an actual object.
This really hit home for me. Before vibe coding, I'd only go on coding binges maybe once a month (coding for 8 hours straight). Now with AI, the velocity is addictive. My coding binges have become much more frequent but sadly less satisfying.
I've been experimenting a lot with AI-assisted development lately and noticed a strange pattern.
The feedback loop becomes extremely fast: prompt → result → tweak → repeat.
That speed creates a kind of variable reward system where near-misses keep you iterating longer than planned. I've also caught myself thinking about prompts late at night, or waking up early wanting to try "just one more idea".
This post is an attempt to describe some of the psychological effects behind that experience.
Curious whether other developers have noticed similar patterns when using LLM coding tools.
Are you front-loading your planning before you let the agent start coding? That might help.
Good question.
In my own projects I actually front-load a lot of the planning. The pipeline behind codn.dev is a good example of that — the planning phase was longer than the actual prompting.
I started with a very simple Astro project and then gradually designed the publishing workflow around it: content schema, frontmatter structure, SEO generation, deployment pipeline, etc. Only after that foundation existed did I start using LLMs to help with parts of the implementation.
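For context, the content-schema part of that foundation looks roughly like this in Astro. This is a simplified sketch, not the actual codn.dev config; the field names are my own placeholders:

```typescript
// src/content/config.ts
// Hypothetical sketch of a frontmatter schema for a publishing pipeline.
// Astro validates every post's frontmatter against this at build time,
// which is what makes LLM-generated content safe to drop in later.
import { defineCollection, z } from 'astro:content';

const posts = defineCollection({
  type: 'content', // markdown/MDX files with frontmatter
  schema: z.object({
    title: z.string(),
    description: z.string().max(160), // doubles as the SEO meta description
    pubDate: z.coerce.date(),
    tags: z.array(z.string()).default([]),
    draft: z.boolean().default(false),
  }),
});

export const collections = { posts };
```

The nice side effect is that a bad prompt result fails the build instead of shipping, which takes some pressure off reviewing every generated file by hand.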
The pattern I describe in the article is less about "one prompt shipping", and more about what happens once the feedback loop becomes very fast. Even with a plan in place, the prompt → result → tweak cycle can become surprisingly sticky.