Learn the strengths and weaknesses of the new technology and add it to your resume.
Become the AI advisor who can help an organization adopt the tech where appropriate and avoid the traps associated with top-down hype- and fomo-driven adoption.
Also who knows where the AI cycle will be in 2-3 years. My sense is by then we will see the cost of tech debt caused by LLM generated code, the cost of the ignorance and naïveté of vibe coding and the cost of VC money wanting its ROI on a subsidized tech.
Start looking for a new role that is better aligned with your expectations. You may find it harder than you expect. In which case, you might be glad you didn't burn your bridges in a pique over AI mandates by the CEO & CTO.
It's the same elsewhere. Some places are actually using it as a way to get rid of people 'resistant to change'. It also remains to be seen what technical skills we need 5 years from now. I did memory management and pointers 15 years ago and I can still do them now.
What I'd suggest is adapt to it, find ways to push back. Obviously things like "delete entire unit test file & have claude generate a new one" is a bad idea. I've seen claude "monkey patching" a system so that it returns true to the tests.
This issue is going to pop up in the future. Experiment with it on the company's dime even if you've checked out emotionally. You are still doing your job - improving code quality and making sure things run.
The new approach seems to be doing TDD. One, as an engineer, you'll know when AI is bullshitting you with mocks. Even when mocks are BS, you can still test the thing they're meant to represent. 2) AI spits more code than anyone can review. The red, green, refactor approach is one way to keep them on the rails.
> I've seen claude "monkey patching" a system so that it returns true to the tests.
I’ve watched Github Copilot do the same thing. I’ve also seen it doubling down on ridiculous things and just spewing crash-laden messes. There seems to be a low upper ceiling on how “competent” it is, which makes sense.
I'm a senior engineer with 20+ (oof) years of industry experience. I appreciate that this sucks and you don't want to do it. I wouldn't either. That said, it's a hirer's market out there right now. There will be plenty of people who will be happy to take your position while you're looking for something you prefer.
My opinion is that we're going to have about 5 years of this. Managers and C-suite folks are going to do their absolute darnedest to replace and supplement people with AI tools before they figure out it's not going to work. While I appreciate the differences, I remember seeing this ~6-7 years ago with blockchain at my last role. It'll work itself out. In the mean time, you get to contribute to the situation, instead of simply not being present. It's not going to be fun of course.
I don't think we're ever going back from this. There's an entire generation of new coders, and new managers who are growing up with this stuff. It's part of their experience, and suggesting they not use it is going to be akin to asking if you can use a typewriter instead of a computer with a word processor. Some companies will take longer to adopt, but it's coming...
I feel I'm sort of stuck in the opposite situation of OP. I manage a few massive codebases that I simply cannot trust an AI to go mucking around with. The only type of serious AI coding experience I could get at this point would be to branch one of these and start experimenting on my own dime to see how good or bad the actual experience is. And that doesn't really seem worth it, because I know what I want to do with them (what's on the feature list that I'm being paid to develop)... and it feels like it would take more time to talk to an LLM and get it perfectly dialed in on any given feature, and ensure it was correct, than it would take to write it myself. And I'm not getting paid for it.
I feel like I'd never use Claude seriously unless someone demanded I used it from day one on a greenfield project. And so while I get to keep evolving my coding skills, I'm a little worried that my "AI skills" will lag behind.
I do a lot of non-work AI stuff on my own, from pair programming with AI, asking it to generate whole things, to just asking it to clarify a general approach to a problem.
FWIW, in a work environment (and I have not been given the go-ahead to start this at my work) I would start by supplementing my codebase. Add a new feature via AI coding, or maybe reworking some existing function. Start small.
With all due respect, and I’m particularly anti-LLM, you sound exactly like someone who has never tried the tech.
You can use LLMs without letting them run wild on the entire codebase. You have git, so you can see every minute change it makes. You can limit what files it’s allowed to change and how much context you give it.
You don’t have to give it root on your machine to make it useful. You don’t have to “Jesus, Take the Wheel”. It is possible to try it out at a smaller scale, even on critical code.
Is it worth leaving? Hard to say for your specific situation, there's thousands of variables that no one here will ever know. Unless you are a superstar or independently wealthy, it's typically a bad idea to leave a job before you have something else lined up.
Is it worth looking? Absolutely! It will be much easier to make a decision when you're comparing your current position to a job offer, rather than comparing your current position to an unknown.
I would also add, no matter what you feel about your current job, it's always a good idea to keep feelers out there for new positions. The fastest way up the rank and salary ladders is moving to new positions. It will always outpace internal promotions.
Assume OP is talking about grabbing a new rope before letting go of another one; otherwise we are mixing generalities about career moves not specific to the issue in the topic.
Three years ago I left my job with VERY high salary because I was starting to burn out and took two months off.
From my experience, if you're burnt out or starting to burn out then leave, otherwise I recommend staying until you secure another job.
Regarding the situation, they want to delete the tests? Fine, you have git right? Replace it, and let everything set on fire, quietly enjoy the chaos and at some point revert the changes. Or don't, you're leaving anyway.
I can't stop thinking what happened when CASE tools, WYSIWIG, UML, Model Driven Architecture/Development, etc was pushed into devs. I know, it's a different phenomenon (that was a graphical visual push, this keeps the text).
I'm basically retired now and I'm really glad about the timing - I would not want to be in this field if I were in my 30s, 40s or 50s the way things are going. I think what's happening at your company is happening in lots of companies right now so I don't think you'll be able to jump ship and end up somewhere else where it's not happening. You can hope for a backlash - and it might come. In the meantime, go ahead and vibecode being careful about the areas you do it in - they seem pretty good at coming up with testcases, for example. Maybe don't let your coding agent have full editing permissions. Have it give you suggestions for what it would do in the code and evaluate them closely before letting the edits happen (pushing back when needed).
I know with only 5 years experience this may not be obvious, but this is only the first of many “revolutionary” technologies making everyone around you lose their minds that you’ll have to deal with in your career. Like every other such technology, I recommend that you engage with it, understand it, relate that experience to what your employer does, and be the voice of knowledgeable pragmatism about where to use it. In other words, be an engineer.
If that can’t be done where you
are, or isn’t valued, you’re in the wrong place.
I’ve been through this with (including but not limited to) PCs, OOP, client-server, SOA, XML, NoSQL, blockchain, “big data”, and indeed, multiple definitions of “AI”. Turns out all but one of those were actually somewhat useful in the end, when applied properly, but they didn’t eliminate the industry. Just roll with it.
> I know with only 5 years experience this may not be obvious, but this is only the first of many “revolutionary” technologies making everyone around you lose their minds that you’ll have to deal with in your career.
While this has some truth, the size of the current "revolution" makes all the others look tiny, especially in terms of how it affects a programmer's day job. Nor did most of those "revolutions" affect every field of programming at once, like this one does. The percent of programmers actually impacted by blockchain is probably in the low single-digits. The percent of programmers using some version of AI tooling 3 years into this is probably >50%, and the more impactful tools will be used more very soon is my gues.
I’d stay and actually try the vibe coding, but if it’s not working, only a bit.
For example, try deleting one failing unit test and re-generate it with Claude. Then if it turns out mostly worthless, scrap it and restore the original test. Maybe the entire test is correct (and easy to verify), maybe you can take pieces from it, maybe it’s unsalvageable; if it doesn’t save time, write tests manually from then on until the next major AI improvement.
Worst case, CEO fires you for not vibe-coding enough. Best case, you find a way for them to make your life easier. My prediction (based on some but not much experience) is that you spend only a small amount of time trying the AI tools, occasionally they impress you, usually they fail, but even then it’s interesting and fun to see what they do.
EDIT: as for dealing with the spaghetti when others use AI; wait for that to become a problem before quitting over it. And of course you can look for opportunities now.
> The CEO & CTO are both obsessed with it and promote things like "delete entire unit test file & have claude generate a new one" rather than manually address test failures.
> "delete entire unit test file & have claude generate a new one" rather than manually address test failures.
First thought, "wat", what if the code is broken, not the tests...
Second thought, if the entire unit test file is getting generated by claude without significant oversight like this suggests... I suppose its probably the tests that are broken.
---
As for your own situation. Looking for a new job because you aren't happy with the process at your current job is completely reasonable.
I'm not sure that you're right that this workflow will cause your technical growth to stall though - the freedom to experiment with strange new (probably ineffective) workflows on someone else's dime might well be beneficial in many ways. But if you're not happy doing this, and you have the skills and network to find a new job, why wouldn't you?
Play along, but keep relatively detailed yet abstracted notes on how the AI code is failing. Keep these notes private ... type them into a notes program on your personal device so that you can draw upon them in the future if needed.
My workplace has execs saying similar things unfortunately. it's even in some company goals that we will be using it. pretty commonly known company too.
25 years of experience here. AI is the real deal, and it should be the primary way you’re coding now. Everyone who doesn’t embrace it is about to become a dinosaur overnight.
They’re going to pay you to learn to work with the thing you need to learn to work with anyway? Be smart. Take the deal.
That said, it’s a free country, you can quit any time for any reason.
If you are not at least using these tools, more so than vibecoding but understanding how to improve your craft, you are going to lose. I’m now 100% behind AI gen code as it has given me 10x powers. Ask the right prompts, get the best code. AI, if you reject it, yngmi
I suspect that if you don't change the /sector/ of computing you are working in, you may run into the same thing elsewhere.
AI is being adopted mainly where it works, and where it works is where regurgitated code cobbed-together from what has been seen before is sufficient to get the job done.
Assume that your CEO and CTO are not complete idiots; that they have some rational argument for believing that the approach will work. It's also possible they are gambling on an experiment; if it fails, they will back off on it.
If you want to avoid being told to use AI, you have to work on legacy tech stacks that AI doesn't understand; algorithmically complex code; critical infrastructure code where one bad bit stops multiple machines and applications; safety-critical embedded where AI slop could maim and kill and so is out of the question, etc.
The market is not easy right now. I would not leave unless you have something definite lined up.
> 1. Be pushed into a workflow that will cause my technical growth to stall or degrade
Whether your growth stalls or degrades is up to you, but in my country your employer's ability to tell how you how to produce/deliver the work (not just the outcome desired) is the difference between being an employee and contractor
You should remain open to new things in this industry. Hate it or not, AI is currently the new thing in our line of work.
> 2. Be overseeing a bunch of AI-generated spaghetti 2-3 years from now
How you implement code, including human review and understanding of code, is key. I have never copy and pasted code into development from an LLM/AI helper. I've certainly asked it questions about the code, tested the code output, had it add comments to help me understand the code it wrote and produce alternate methods that better fit my needs, etc.
"No spaghetti" in the codebase will prevent having to take care of it, but that doesn't mean small modular components, troubleshooting, general ideation of different approaches to see what can scale, etc. isn't going to be really helpful.
> I'm a 'senior engineer' with ~5 years of industry experience and am considering moving on from this company
5 years is not what I would consider a big bargaining chip in today's market full of seasoned developers, including those who started when they were in middle school and are applying for the same jobs as you would be.
Can you work with your employer to effectively introduce some AI tools and workflows to help ideas, changes, revisions, new features, or even documentation?
Don't jump until it is safe, and remember the next place is likely just slower or one leadership away from asking their employees the same thing your employer is.
>You should remain open to new things in this industry
I'm open to new things. I've seen demo's, attended presentations, and spent a long time toying around with it myself. I have not been convinced there is any meat there, not in it's current iteration. LLM's are designed to make things that "look" like human output and thus are very good at hiding bugs. It's ok at getting the first 20% of the project done, but that was never the hard part. It's always been the last 20%, and modern LLM's simply cannot do it. Not on large scale projects.
New things have come and gone. So far the only thing I'm convinced of is, it's easier to get funding when you can claim you use AI. That's it.
> I have never copy and pasted code into development from an LLM/AI helper
Well that's simply a different reality from what my employer is encouraging. So not relevant. They not only want us to copy-and-paste, they want us to delete otherwise functional code to make it easier to paste in AI generated stuff.
Asking questions is fine, that's much much closer to an augmented search engine than prompt engineering. You're describing something different from what this post is about.
>5 years is not what I would consider a big bargaining chip
I'm not bragging. I'm giving context. If I was 0 yoe or 20 yoe, those would be relevant too. And for what it's worth, I also started in middle school.
>one leadership away from asking their employees the same thing your employer is
I didn't think you were bragging, and I hope I didn't come across as trying to put you in your place.
I'm responding with market context. The market is upended right now with no end in sight. Also, most employers if not meaningfully all, will or are involving AI. Many, if not most, people applying for decent positions right now have 3x the experience and are very willing to do whatever.
Don't let your principles end you up sleeping in your car.
> LLM's are designed to make things that "look" like human output and thus are very good at hiding bugs.
This can be true, definitely was more often true in the past. But there is a time and a place for human expression, and probably isn't in code. Your human expression is likely helped by tools. I doubt you're writing in Notepad, but your IDE doesn't get thrown out the window because it can't fully replace you or write code for you.
IF you are being blindly told to copy/paste from an LLM, then use that as part of your ideation and work from there, using AI tools as much as you can in ways that work. Become a leader in this new frontier by delving in (just kidding, that's meta about another article trending on AI)
> They not only want us to copy-and-paste, they want us to delete otherwise functional code to make it easier to paste in AI generated stuff.
Your post needs more detail if you want people to reply to your exact situation, but I think you can make clear arguments against doing this, then do this for 3 weeks, followed by the obvious: backtracking.
Leaders are by nature often encouraged to try new things. Standing in their way won't help you, but you can warn them, do it, then help them get back on track. By being a team member in this way, you are not in charge, but you can build trust equity if these leaders stick around and have techy ideas in future. In my experience, I usually outlast bad leadership (and their associated ideas). You have to be correct and not act like you're the boss to survive it, though!
Feel free to make your own decisions about this stuff but know that there are people with lots of experience and success in the industry using llm coding tools successfully (I’m one).
I am in a situation where ai was mandated, I was skeptical, but took it as a chance to try it out. I now can’t imagine going back.
I think it is easy to give that advice if you are somehow shielded from the reality of the market right now.
Which companies can you point to with openings on their careers page that specifically mention "no AI" or don't mention AI as part of the toolchain/expectations?
You are 100% correct. And I can't understand why some company would want someone who never used AI. What's the advantage? Honestly, it sounds like wishful thinking.
>"Hiring people who haven’t used it will be a marketable skill too".
Can you explain your thinking on this?
Obviously there will be jobs where AI isn't required, so omission of experience would be fine, but I can't think of any reason why it would be marketable to advertise "I've never worked with AI".
In fact, I don't think I've ever seen a resume that contains anything along the lines of "I haven't used X". You would just omit the lack of experience. Otherwise it risks signaling other negatives (not comfortable with change, etc.).
It's a moral stance. But AI isn't as bad as say, biological weapons. It's closer to piracy.
So it's like I've never used a gun. Which isn't really a strong point. At the very least, even if you don't plan to use guns, you'd know how guns work and where they don't.
I can understand the stance, I'm just saying that it's not something that I would ever describe as "marketable".
No one is saying "I've never used a gun" or "I've never pirated a movie" on their resume to market their morals. Resumes are to market the skills you have that match the job you're applying for, not for marketing your moral stance.
But I doubt those people put "never used Flash" on their general resume (or literally any other resume except the one tailored to that position, if they even put it on the resume instead of their cover letter). I also doubt they thought of it as a "marketable skill" considering it was applicable to ~1 job.
In any case, this seems like an incredibly niche situation that probably has no business being extrapolated to all AI tools.
Good luck finding a place that isn't pushing devs to heavily use AI. And even if you find somewhere that's not doing it right now, it may just mean their management is far behind the curve and they will have a similar push soon.
Don't leave your job. Unless you've looked for jobs recently you have no idea how bad the current job market is. You also need to adapt to AI. Obviously, clueless people are going to misuse it. But let them learn the hard way. I've found that in organizations where the higher ups are incompetent if you try to signal the problem you become the problem. Then you're viewed as "being hard to work with" rather than trying to prevent a tragedy. Keep your job lad.
If you do get another offer remember that there's always a risk when you change jobs. I.E. how stable is that companies funding? Will they want to do layoffs, too? Are their investors pressuring them to make cuts? Because if you're a new hire you can say good bye to that job. We don't have formal tenure in tech but there's still a human cost to firing people who have been long-time with a company. The decision makers have less attachment to a new hire so its easier to fire them in that respect (and how many decisions with fires are just arbitrary, number-based, bad luck.)
There's always pointless fads and food fights. Just tough it out. (Until a better gig comes along.)
I wish I could advise my young self "this too shall pass". The savvy play is to be a "team player". All those dumb hills I choose to die on... For dumb crap which eventually self-mooted all by themselves.
There was a comment (or a story?) some time back about how to survive as a software developer when projects are managed by Pointy Haired Bosses (PHBs). From memory:
Always be positive, optimistic.
Never say no or hedge or doubt.
Proactively manage upwards with enthusiastic status reports.
Instead of owning up to failures (due to fantasy estimates, ridiculous deadlines, scope creep, misc chaos, etc), list in detail all the great things and awesome progress you and your fantastic team have miraculously accomplished.
Like "reproducible builds which reduced failures by 1000% FTW, saving 30 hours per week" and "implemented boss' recommended pub/sub heptagonal meta architecture event sourced distributed pseudo sharded hybrid cloud something something, attaining sustained P95 latency of sub 3 picoseconds for 2 days"
Sadly, I was never able to keep up the act for more than 12 months. I'm just too contrarian, sarcastic, jaded. (I used to tell myself that I was "results oriented". I now see I was just an asshole. Everyone lies, needs to suspend disbelief, have a reason to crawl out of bed every morning. Who am I to piddle in their Cheerios?)
I'd like to think that if someone had clubbed young(est) me with the clue stick, I could have adapted.
This AI nonsense has infected every company and everybody is an "AI expert". You can't escape it and you'll be at a competitive disadvantage at another employer when starting from scratch. The thing about this AI fad is that nobody has really figured out where the real utility is to be gained and how to reduce costs from that.
Look for a new job but be amendable to your current one. If they want you to "vibe code" then do it and look for another job on the side. Get in touch with your network and see if anyone is hiring with reasonable coding practices. My company bought everyone a Claude subscription but they trust us to use it where reasonable.
Software engineering is not about typing. The LLM would write the unit tests based on the implementation you provide. Regardless of the LLM's output, you'd be expected to read and understand it before committing, or at the very least, review, refactor, tweak, etc. Reading is crucial for both personal and cognitive development. That is not vibe coding! There are top devs in the industry using them in top applications, such as Ghostty or Bun. The biggest names in the scene! The reality is that you would not know if your favourite app was partially or fully built with LLM help, sorry to disappoint!
I’ve been doing this for 20 years. I see it as you having two primary options.
You should stay there, learn the new tech, and see what happens.
If it works better than you expected, then your mind will be changed and you’ll be well positioned for the new economy.
If it turns out how you expect, now you have experience working with this tooling to inform your positions at your next company.
Either way, a few months in that environment will help your career.
I have the same recommendation.
Learn the strengths and weaknesses of the new technology and add it to your resume.
Become the AI advisor who can help an organization adopt the tech where appropriate and avoid the traps associated with top-down hype- and fomo-driven adoption.
Also who knows where the AI cycle will be in 2-3 years. My sense is by then we will see the cost of tech debt caused by LLM generated code, the cost of the ignorance and naïveté of vibe coding and the cost of VC money wanting its ROI on a subsidized tech.
All of the above +
Start looking for a new role that is better aligned with your expectations. You may find it harder than you expect. In which case, you might be glad you didn't burn your bridges in a pique over AI mandates by the CEO & CTO.
It's the same elsewhere. Some places are actually using it as a way to get rid of people 'resistant to change'. It also remains to be seen what technical skills we need 5 years from now. I did memory management and pointers 15 years ago and I can still do them now.
What I'd suggest is adapt to it, find ways to push back. Obviously things like "delete entire unit test file & have claude generate a new one" is a bad idea. I've seen claude "monkey patching" a system so that it returns true to the tests.
This issue is going to pop up in the future. Experiment with it on the company's dime even if you've checked out emotionally. You are still doing your job - improving code quality and making sure things run.
The new approach seems to be doing TDD. One, as an engineer, you'll know when AI is bullshitting you with mocks. Even when mocks are BS, you can still test the thing they're meant to represent. 2) AI spits more code than anyone can review. The red, green, refactor approach is one way to keep them on the rails.
> I've seen claude "monkey patching" a system so that it returns true to the tests.
I’ve watched Github Copilot do the same thing. I’ve also seen it doubling down on ridiculous things and just spewing crash-laden messes. There seems to be a low upper ceiling on how “competent” it is, which makes sense.
I'm a senior engineer with 20+ (oof) years of industry experience. I appreciate that this sucks and you don't want to do it. I wouldn't either. That said, it's a hirer's market out there right now. There will be plenty of people who will be happy to take your position while you're looking for something you prefer.
My opinion is that we're going to have about 5 years of this. Managers and C-suite folks are going to do their absolute darnedest to replace and supplement people with AI tools before they figure out it's not going to work. While I appreciate the differences, I remember seeing this ~6-7 years ago with blockchain at my last role. It'll work itself out. In the mean time, you get to contribute to the situation, instead of simply not being present. It's not going to be fun of course.
I don't think we're ever going back from this. There's an entire generation of new coders, and new managers who are growing up with this stuff. It's part of their experience, and suggesting they not use it is going to be akin to asking if you can use a typewriter instead of a computer with a word processor. Some companies will take longer to adopt, but it's coming...
I feel I'm sort of stuck in the opposite situation of OP. I manage a few massive codebases that I simply cannot trust an AI to go mucking around with. The only type of serious AI coding experience I could get at this point would be to branch one of these and start experimenting on my own dime to see how good or bad the actual experience is. And that doesn't really seem worth it, because I know what I want to do with them (what's on the feature list that I'm being paid to develop)... and it feels like it would take more time to talk to an LLM and get it perfectly dialed in on any given feature, and ensure it was correct, than it would take to write it myself. And I'm not getting paid for it.
I feel like I'd never use Claude seriously unless someone demanded I used it from day one on a greenfield project. And so while I get to keep evolving my coding skills, I'm a little worried that my "AI skills" will lag behind.
I do a lot of non-work AI stuff on my own, from pair programming with AI, asking it to generate whole things, to just asking it to clarify a general approach to a problem.
FWIW, in a work environment (and I have not been given the go-ahead to start this at my work) I would start by supplementing my codebase. Add a new feature via AI coding, or maybe reworking some existing function. Start small.
With all due respect, and I’m particularly anti-LLM, you sound exactly like someone who has never tried the tech.
You can use LLMs without letting them run wild on the entire codebase. You have git, so you can see every minute change it makes. You can limit what files it’s allowed to change and how much context you give it.
You don’t have to give it root on your machine to make it useful. You don’t have to “Jesus, Take the Wheel”. It is possible to try it out at a smaller scale, even on critical code.
Is it worth leaving? Hard to say for your specific situation, there's thousands of variables that no one here will ever know. Unless you are a superstar or independently wealthy, it's typically a bad idea to leave a job before you have something else lined up.
Is it worth looking? Absolutely! It will be much easier to make a decision when you're comparing your current position to a job offer, rather than comparing your current position to an unknown. I would also add, no matter what you feel about your current job, it's always a good idea to keep feelers out there for new positions. The fastest way up the rank and salary ladders is moving to new positions. It will always outpace internal promotions.
Assume OP is talking about grabbing a new rope before letting go of another one; otherwise we are mixing generalities about career moves not specific to the issue in the topic.
>generalities about career moves not specific to the issue in the topic.
They explicitly asked for general opinions, and provided almost no context which would let me be more specific.
"Is it worth leaving position over push to adopt X" is not exclusive to AI, nor is it a new question, so I addressed the general case.
Three years ago I left my job with VERY high salary because I was starting to burn out and took two months off.
From my experience, if you're burnt out or starting to burn out then leave, otherwise I recommend staying until you secure another job.
Regarding the situation, they want to delete the tests? Fine, you have git right? Replace it, and let everything set on fire, quietly enjoy the chaos and at some point revert the changes. Or don't, you're leaving anyway.
I can't stop thinking what happened when CASE tools, WYSIWIG, UML, Model Driven Architecture/Development, etc was pushed into devs. I know, it's a different phenomenon (that was a graphical visual push, this keeps the text).
I'm basically retired now and I'm really glad about the timing - I would not want to be in this field if I were in my 30s, 40s or 50s the way things are going. I think what's happening at your company is happening in lots of companies right now so I don't think you'll be able to jump ship and end up somewhere else where it's not happening. You can hope for a backlash - and it might come. In the meantime, go ahead and vibecode being careful about the areas you do it in - they seem pretty good at coming up with testcases, for example. Maybe don't let your coding agent have full editing permissions. Have it give you suggestions for what it would do in the code and evaluate them closely before letting the edits happen (pushing back when needed).
I know with only 5 years experience this may not be obvious, but this is only the first of many “revolutionary” technologies making everyone around you lose their minds that you’ll have to deal with in your career. Like every other such technology, I recommend that you engage with it, understand it, relate that experience to what your employer does, and be the voice of knowledgeable pragmatism about where to use it. In other words, be an engineer.
If that can’t be done where you are, or isn’t valued, you’re in the wrong place.
I’ve been through this with (including but not limited to) PCs, OOP, client-server, SOA, XML, NoSQL, blockchain, “big data”, and indeed, multiple definitions of “AI”. Turns out all but one of those were actually somewhat useful in the end, when applied properly, but they didn’t eliminate the industry. Just roll with it.
> I know with only 5 years experience this may not be obvious, but this is only the first of many “revolutionary” technologies making everyone around you lose their minds that you’ll have to deal with in your career.
While this has some truth, the size of the current "revolution" makes all the others look tiny, especially in terms of how it affects a programmer's day job. Nor did most of those "revolutions" affect every field of programming at once, like this one does. The percent of programmers actually impacted by blockchain is probably in the low single-digits. The percent of programmers using some version of AI tooling 3 years into this is probably >50%, and the more impactful tools will be used more very soon is my gues.
Reminds me when Rational Rose and UML were briefly famous in the late 90s. What an absolute piece of crap that the suits pushed to use.
I remember at the time that Rational Rose was going to allow non-programmers to make apps…
History doesn’t repeat, but it rhymes
This time is different.
No, really. This time is different.
I’d stay and actually try the vibe coding, but if it’s not working, only a bit.
For example, try deleting one failing unit test and re-generate it with Claude. Then if it turns out mostly worthless, scrap it and restore the original test. Maybe the entire test is correct (and easy to verify), maybe you can take pieces from it, maybe it’s unsalvageable; if it doesn’t save time, write tests manually from then on until the next major AI improvement.
Worst case, CEO fires you for not vibe-coding enough. Best case, you find a way for them to make your life easier. My prediction (based on some but not much experience) is that you spend only a small amount of time trying the AI tools, occasionally they impress you, usually they fail, but even then it’s interesting and fun to see what they do.
EDIT: as for dealing with the spaghetti when others use AI; wait for that to become a problem before quitting over it. And of course you can look for opportunities now.
> The CEO & CTO are both obsessed with it and promote things like "delete entire unit test file & have claude generate a new one" rather than manually address test failures.
So what are the tests actually for then?
Quit.
How can you trust your economic welfare to be in the hands of people that believe in magic?
> "delete entire unit test file & have claude generate a new one" rather than manually address test failures.
First thought, "wat", what if the code is broken, not the tests...
Second thought, if the entire unit test file is getting generated by claude without significant oversight like this suggests... I suppose its probably the tests that are broken.
---
As for your own situation. Looking for a new job because you aren't happy with the process at your current job is completely reasonable.
I'm not sure that you're right that this workflow will cause your technical growth to stall though - the freedom to experiment with strange new (probably ineffective) workflows on someone else's dime might well be beneficial in many ways. But if you're not happy doing this, and you have the skills and network to find a new job, why wouldn't you?
Play along, but keep relatively detailed yet abstracted notes on how the AI code is failing. Keep these notes private ... type them into a notes program on your personal device so that you can draw upon them in the future if needed.
Yep, your company is pretty much doomed.
CEO can afford being somewhat ignorant about the nature of engineering work or how llms work (still a red flag for a tech company).
But CTO being that stupid (if you don’t exaggerate) leaves little room for doubts.
My workplace has execs saying similar things unfortunately. it's even in some company goals that we will be using it. pretty commonly known company too.
25 years of experience here. AI is the real deal, and it should be the primary way you’re coding now. Everyone who doesn’t embrace it is about to become a dinosaur overnight.
They’re going to pay you to learn to work with the thing you need to learn to work with anyway? Be smart. Take the deal.
That said, it’s a free country, you can quit any time for any reason.
If you are not at least using these tools, more so than vibecoding but understanding how to improve your craft, you are going to lose. I’m now 100% behind AI gen code as it has given me 10x powers. Ask the right prompts, get the best code. AI, if you reject it, yngmi
Why the anon account though?
Truth is for most businesses tech is just a means to an end and not an end to itself.
Technical growth in 2025 means understanding how to use LLMs effectively more than your peers.
There's no going back to pre-LLM days. Just like we're not going to stop using machines to weave textiles.
There isn't enough context to hazard a guess at stay or go.
I would like to call out deleting the unit tests as a very funny way to deal with code generators breaking the product.
I suspect that if you don't change the /sector/ of computing you are working in, you may run into the same thing elsewhere.
AI is being adopted mainly where it works, and where it works is where regurgitated code cobbed-together from what has been seen before is sufficient to get the job done.
Assume that your CEO and CTO are not complete idiots; that they have some rational argument for believing that the approach will work. It's also possible they are gambling on an experiment; if it fails, they will back off on it.
If you want to avoid being told to use AI, you have to work on legacy tech stacks that AI doesn't understand; algorithmically complex code; critical infrastructure code where one bad bit stops multiple machines and applications; safety-critical embedded where AI slop could maim and kill and so is out of the question, etc.
Make fun of their outputs
The market is not easy right now. I would not leave unless you have something definite lined up.
> 1. Be pushed into a workflow that will cause my technical growth to stall or degrade
Whether your growth stalls or degrades is up to you, but in my country your employer's ability to tell how you how to produce/deliver the work (not just the outcome desired) is the difference between being an employee and contractor
You should remain open to new things in this industry. Hate it or not, AI is currently the new thing in our line of work.
> 2. Be overseeing a bunch of AI-generated spaghetti 2-3 years from now
How you implement code, including human review and understanding of code, is key. I have never copy and pasted code into development from an LLM/AI helper. I've certainly asked it questions about the code, tested the code output, had it add comments to help me understand the code it wrote and produce alternate methods that better fit my needs, etc.
"No spaghetti" in the codebase will prevent having to take care of it, but that doesn't mean small modular components, troubleshooting, general ideation of different approaches to see what can scale, etc. isn't going to be really helpful.
> I'm a 'senior engineer' with ~5 years of industry experience and am considering moving on from this company
5 years is not what I would consider a big bargaining chip in today's market full of seasoned developers, including those who started when they were in middle school and are applying for the same jobs as you would be.
Can you work with your employer to effectively introduce some AI tools and workflows to help ideas, changes, revisions, new features, or even documentation?
Don't jump until it is safe, and remember the next place is likely just slower or one leadership away from asking their employees the same thing your employer is.
>You should remain open to new things in this industry
I'm open to new things. I've seen demo's, attended presentations, and spent a long time toying around with it myself. I have not been convinced there is any meat there, not in it's current iteration. LLM's are designed to make things that "look" like human output and thus are very good at hiding bugs. It's ok at getting the first 20% of the project done, but that was never the hard part. It's always been the last 20%, and modern LLM's simply cannot do it. Not on large scale projects.
New things have come and gone. So far the only thing I'm convinced of is, it's easier to get funding when you can claim you use AI. That's it.
> I have never copy and pasted code into development from an LLM/AI helper
Well that's simply a different reality from what my employer is encouraging. So not relevant. They not only want us to copy-and-paste, they want us to delete otherwise functional code to make it easier to paste in AI generated stuff.
Asking questions is fine, that's much much closer to an augmented search engine than prompt engineering. You're describing something different from what this post is about.
>5 years is not what I would consider a big bargaining chip
I'm not bragging. I'm giving context. If I was 0 yoe or 20 yoe, those would be relevant too. And for what it's worth, I also started in middle school.
>one leadership away from asking their employees the same thing your employer is
Yeah that's probably true
>I'm not bragging. I'm giving context.
I didn't think you were bragging, and I hope I didn't come across as trying to put you in your place.
I'm responding with market context. The market is upended right now with no end in sight. Also, most employers if not meaningfully all, will or are involving AI. Many, if not most, people applying for decent positions right now have 3x the experience and are very willing to do whatever.
Don't let your principles end you up sleeping in your car.
> LLM's are designed to make things that "look" like human output and thus are very good at hiding bugs.
This can be true, definitely was more often true in the past. But there is a time and a place for human expression, and probably isn't in code. Your human expression is likely helped by tools. I doubt you're writing in Notepad, but your IDE doesn't get thrown out the window because it can't fully replace you or write code for you.
IF you are being blindly told to copy/paste from an LLM, then use that as part of your ideation and work from there, using AI tools as much as you can in ways that work. Become a leader in this new frontier by delving in (just kidding, that's meta about another article trending on AI)
> They not only want us to copy-and-paste, they want us to delete otherwise functional code to make it easier to paste in AI generated stuff.
Your post needs more detail if you want people to reply to your exact situation, but I think you can make clear arguments against doing this, then do this for 3 weeks, followed by the obvious: backtracking.
Leaders are by nature often encouraged to try new things. Standing in their way won't help you, but you can warn them, do it, then help them get back on track. By being a team member in this way, you are not in charge, but you can build trust equity if these leaders stick around and have techy ideas in future. In my experience, I usually outlast bad leadership (and their associated ideas). You have to be correct and not act like you're the boss to survive it, though!
Feel free to make your own decisions about this stuff but know that there are people with lots of experience and success in the industry using llm coding tools successfully (I’m one).
I am in a situation where ai was mandated, I was skeptical, but took it as a chance to try it out. I now can’t imagine going back.
But also don’t give up your principles and use AI stuff if you’re against it.
Hiring people who haven’t used it will be a marketable skill too
I think it is easy to give that advice if you are somehow shielded from the reality of the market right now.
Which companies can you point to with openings on their careers page that specifically mention "no AI" or don't mention AI as part of the toolchain/expectations?
You are 100% correct. And I can't understand why some company would want someone who never used AI. What's the advantage? Honestly, it sounds like wishful thinking.
>"Hiring people who haven’t used it will be a marketable skill too".
Can you explain your thinking on this?
Obviously there will be jobs where AI isn't required, so omission of experience would be fine, but I can't think of any reason why it would be marketable to advertise "I've never worked with AI".
In fact, I don't think I've ever seen a resume that contains anything along the lines of "I haven't used X". You would just omit the lack of experience. Otherwise it risks signaling other negatives (not comfortable with change, etc.).
It's a moral stance. But AI isn't as bad as say, biological weapons. It's closer to piracy.
So it's like I've never used a gun. Which isn't really a strong point. At the very least, even if you don't plan to use guns, you'd know how guns work and where they don't.
I can understand the stance, I'm just saying that it's not something that I would ever describe as "marketable".
No one is saying "I've never used a gun" or "I've never pirated a movie" on their resume to market their morals. Resumes are to market the skills you have that match the job you're applying for, not for marketing your moral stance.
A while ago I had to hire people who'd never used Flash (and therefore agreed to the EULA of Flash Player) for a project.
That's certainly interesting!
But I doubt those people put "never used Flash" on their general resume (or literally any other resume except the one tailored to that position, if they even put it on the resume instead of their cover letter). I also doubt they thought of it as a "marketable skill" considering it was applicable to ~1 job.
In any case, this seems like an incredibly niche situation that probably has no business being extrapolated to all AI tools.
There are plenty of cleanroom-type jobs for developers, they just might not always be the sexy big companies offering them.
Good luck finding a place that isn't pushing devs to heavily use AI. And even if you find somewhere that's not doing it right now, it may just mean their management is far behind the curve and they will have a similar push soon.
Don't leave your job. Unless you've looked for jobs recently you have no idea how bad the current job market is. You also need to adapt to AI. Obviously, clueless people are going to misuse it. But let them learn the hard way. I've found that in organizations where the higher ups are incompetent if you try to signal the problem you become the problem. Then you're viewed as "being hard to work with" rather than trying to prevent a tragedy. Keep your job lad.
If you do get another offer remember that there's always a risk when you change jobs. I.E. how stable is that companies funding? Will they want to do layoffs, too? Are their investors pressuring them to make cuts? Because if you're a new hire you can say good bye to that job. We don't have formal tenure in tech but there's still a human cost to firing people who have been long-time with a company. The decision makers have less attachment to a new hire so its easier to fire them in that respect (and how many decisions with fires are just arbitrary, number-based, bad luck.)
Nah.
There's always pointless fads and food fights. Just tough it out. (Until a better gig comes along.)
I wish I could advise my young self "this too shall pass". The savvy play is to be a "team player". All those dumb hills I choose to die on... For dumb crap which eventually self-mooted all by themselves.
There was a comment (or a story?) some time back about how to survive as a software developer when projects are managed by Pointy Haired Bosses (PHBs). From memory:
Always be positive, optimistic.
Never say no or hedge or doubt.
Proactively manage upwards with enthusiastic status reports.
Instead of owning up to failures (due to fantasy estimates, ridiculous deadlines, scope creep, misc chaos, etc), list in detail all the great things and awesome progress you and your fantastic team have miraculously accomplished.
Like "reproducible builds which reduced failures by 1000% FTW, saving 30 hours per week" and "implemented boss' recommended pub/sub heptagonal meta architecture event sourced distributed pseudo sharded hybrid cloud something something, attaining sustained P95 latency of sub 3 picoseconds for 2 days"
Sadly, I was never able to keep up the act for more than 12 months. I'm just too contrarian, sarcastic, jaded. (I used to tell myself that I was "results oriented". I now see I was just an asshole. Everyone lies, needs to suspend disbelief, have a reason to crawl out of bed every morning. Who am I to piddle in their Cheerios?)
I'd like to think that if someone had clubbed young(est) me with the clue stick, I could have adapted.
YMMV. Happy Hunting.
This AI nonsense has infected every company and everybody is an "AI expert". You can't escape it and you'll be at a competitive disadvantage at another employer when starting from scratch. The thing about this AI fad is that nobody has really figured out where the real utility is to be gained and how to reduce costs from that.
Look for a new job but be amendable to your current one. If they want you to "vibe code" then do it and look for another job on the side. Get in touch with your network and see if anyone is hiring with reasonable coding practices. My company bought everyone a Claude subscription but they trust us to use it where reasonable.
Yeah, you should start looking for a new role immediately.
Rip
[dead]
No.
Good luck.
this comment is presented to you by "ShortsAI - never say nothing, never too much".
“Should I quit? My boss wants me to use”
Software engineering is not about typing. The LLM would write the unit tests based on the implementation you provide. Regardless of the LLM's output, you'd be expected to read and understand it before committing, or at the very least, review, refactor, tweak, etc. Reading is crucial for both personal and cognitive development. That is not vibe coding! There are top devs in the industry using them in top applications, such as Ghostty or Bun. The biggest names in the scene! The reality is that you would not know if your favourite app was partially or fully built with LLM help, sorry to disappoint!