> Engineers don't try because they think they can't.
This article assumes that AI is the centre of the universe, failing to understand that this assumption is exactly what's causing the attitude it's pointing to.
There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money). This isn't a strict dichotomy; often companies with real products will mix in tidbits of hype, such as Microsoft's "pivot to AI," which is discussed in the article. But moving toward one pole moves you away from the other.
I think many engineers want to stay as far from hype-driven tech as they can. LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated. I'd rather spend my time delivering value to customers than performing "big potential" to investors.
So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out. Some models are better at some tasks, some tools are better at finding the right text and connecting it to the right actions, some tools provide a better wrapper around the text-generation process. Certain jobs are very easy for AI to do, others are impossible (but the AI lies about them).
A lot of us tried it and just said, "huh, that's interesting" and then went back to work. We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.
There's just not much value in chasing the endless AI news cycle, constantly believing that I'll fall behind if I don't read the latest details of Gemini 3.1 and ChatGPT 6.Y (Game Of The Year Edition). The engineers I know who use AI don't seem to have any particular insights about it aside from an encyclopedic knowledge of product details, all of which are changing on a monthly basis anyway.
New products that use gen AI are — by default — uninteresting to me because I know that under the hood, they're just sending text and getting text back, and the thing they're sending to is the same thing that everyone is sending to. Sure, the wrapper is nice, but I'm not paying an overhead fee for that.
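To be concrete about what "under the hood" means: for many of these products, the core is roughly one HTTP call to a hosted model. A minimal sketch against OpenAI's public chat-completions endpoint (the API key and prompt are placeholders):

    import requests

    # Roughly the entire "AI engine" of many wrapper products: send text to a
    # hosted model, get text back. Key and prompt are placeholders.
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "model": "gpt-4o",
            "messages": [{"role": "user", "content": "Summarize this ticket: ..."}],
        },
        timeout=60,
    )
    print(resp.json()["choices"][0]["message"]["content"])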
> Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out.
"Engineers don't try" doesn’t refer to trying out AI in the article. It refers to trying to do something constructive and useful outside the usual corporate churn, but having given up on that because management is single-mindedly focused on AI.
One way to summarize the article is: The AI engineers are doing hype-driven AI stuff, and the other engineers have lost all ambition for anything else, because AI is the only thing that gets attention and helps the career; and they hate it.
“Lost all ambition for anything else” is a funny way for the article to frame “hate being forced to run on the AI hamster wheel, because an exec with the power to fire everyone is foaming at the mouth about AI and seemingly needs everyone to use it”
> We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.
There's a lot of disconnected-from-reality hustling (a.k.a. lying) going on. For instance, that's practically Elon Musk's entire job, when he's actually doing it. A lot of people see those examples, think it's normal, and emulate it. There are a lot of unearned superlatives getting thrown around automatically to describe tech.
Maybe not what the OP or article is talking about, but it's super frustrating recently dealing with non-/less-technical mgrs, PMs, etc. who now think they have this Uno card to bypass technical discussion just because they vibe coded some UI demo. Like no shit, that wasn't the hard part. But since they don't see the real/less visible parts like data/auth/security, etc. they act like engineers "aren't trying", are less innovative, anti-AI or whatever when you bring up objections to their "whole app" they made with their AI snoopy snow cone machine.
I’ve been an engineer for 20 years, for myself, for small companies, and in big tech, and I now run my own SaaS company.
There are many valid critiques of AI, but “there’s not much there” isn’t one of them.
To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems. Maybe AI isn’t the right tool for the job, but that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction. Either way, the market is going to punish them accordingly.
> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.
I would argue that the "actual job" is simply to solve problems. The client/customer ultimately does not care what technology you use. Hell, they don't really care if there's technology at all.
And a lot of software engineers have found that using an LLM doesn't actually help solve problems, or the problems it does solve are offset by the new problems it creates.
> There are many valid critiques of AI, but “there’s not much there” isn’t one of them.
"There's not much there" is a totally valid critique of a lot of the current AI ecosystem. How many startups are simple prompt wrappers on top of ChatGPT? How many AI features in products are just "click here to ask Rovo/Dingo/Kingo/CutesyAnthropomorphizedNameOfAI" text boxes that end up spitting out wrong information?
There's certainly potential but a lot of the market is hot air right now.
> Either way, the market is going to punish them accordingly.
I doubt this, simply because the market has never really punished people for being less efficient at their jobs, especially software development. If it did, people proficient in vim would have been getting paid more than anyone else for the past 40 years.
> simply because the market has never really punished people for being less efficient at their jobs
In fact, it tends to be the opposite. You being more efficient just means you get "rewarded" with more work, typically without an appropriate increase in pay to match the additional work either.
Especially true in large, non-tech companies/bureaucratic enterprises where you are much better off not making waves, and being deliberately mediocre (assuming you're not a ladder climber and aren't trying to get promoted out of an IC role).
In a big team/org, your personal efficiency is irrelevant. The work can only move as fast as the slowest part of the system.
Or maybe it indicates that the person looking at the LLM and deciding there’s not much there knows more than you do about what they are and how they work, and you’re the one who’s wrong about their utility.
I use AI all the time, but the only gain it gives me is better spelling and grammar than mine. Spelling and grammar have long been my weak point. I can write the same code it writes just as fast without it; typing has never been the bottleneck in writing code. The bottleneck is thinking, and I still need to understand the code AI writes since it is incorrect rather often, so it isn't saving any effort other than the time to look up the middle word of some long variable name.
> that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction
Or, and stay with me on this, it’s a reaction to the actual experience they had.
I’ve experimented with AI a bunch. When I’m doing something utterly formulaic, it delivers (straightforward CRUD-type stuff, or making a web page to display some data). But when I try to use it on the core parts of my job that actually require my specialist knowledge, it falls apart. I spend more time correcting it than if I just wrote the thing myself.
Maybe you haven’t had that experience with work you do. But I have, and others have. So please don’t dismiss our reaction as “fear based” or whatever.
> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job,
I don't understand why people seem so impatient about AI adoption.
AI is the future, but many AI products aren't fully mature yet. That lack of maturity is probably what is dampening the adoption curve. To unseat incumbent tools and practices you either need to do so seamlessly OR be 5-10x better, which is only true for a subset of tasks. In areas where either of these cases applies, you'll see some really impressive AI adoption. In areas where AI's value requires more effort, you'll see far less adoption. This seems perfectly natural to me and isn't some conspiracy: AI needs to be a better product, and good products take time.
I mean, this is the other extreme to the post being replied to (either you think it's useless and walk away, or you're failing at your job for not using it)
I personally use it, I find it helpful at times, but I also find that it gets in my way, so much so it can be a hindrance (think losing a day or so because it's taken a wrong turn and you have to undo everything)
FTR, the market is currently punishing people who DO use it (CVs are routinely being dumped at the merest hint of AI being used in their construction/presentation, interviewers dumping anyone they think is using AI for "help", code reviewers dumping any take-home assignments that have even COMMENTS massaged by AI).
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right
I AM very impressed, and I DO use it and enjoy the results.
The problem is the inconsistency. When it works it works great, but its behaviour makes it very noticeable that it is just a machine.
Again, I am VERY impressed by what was achieved. I even enjoy Google AI summaries to some of the questions I now enter instead of search terms. This is definitely a huge step up in tier compared to pre-AI.
But I'm already done getting used to what is possible now. Changes since then have been incremental nice-to-haves, and I take them. I've found a place for the tool, but if it wanted to match the hype, another equally large step in actual intelligence would be necessary for the tool to truly be able to replace humans.
So I think the reason you don't see more glowing reviews and praise is that the technical people have found out what it can and can't do, and are already using it where appropriate. It's just a tool, though. One that has to be watched over when you use it, requiring attention. And it does not learn: I can teach a newbie and they will learn and improve; I can only tweak the AI with prompts, with varying success.
I think that by now I have developed a pretty good feel for what is possible. Changing my entire workflow to using it is simply not useful.
I am actually one of those not enjoying coding as such, but wanting "solutions", probably also because I now work for an IT-using normal company, not for one making an IT product, and my focus most days is on actually accomplishing business tasks.
I do enjoy being able to do some higher level descriptions and getting code for stuff without having to take care of all the gritty details. But this functionality is rudimentary. It IS a huge step, but still not nearly good enough to really be able to reliably delegate to the AI to the degree I want.
The big problem is that AI is amazing at doing the rote boilerplate stuff that generally wasn't a problem to begin with, but if you were to point a codebot at your trouble ticket system and tell it to go fix the issues, it would be hopeless. Once your system gets complex enough, AI effectiveness drops off rapidly, and you as the engineer have to spend more and more time babysitting every step to make sure it doesn't go off the rails.
In the end you can save like 90% of the development effort on a small one-off project, and like 5% of the development effort on a large complex one.
I think too many managers have been absolutely blown away by canned AI demos and toy projects and have not been properly disappointed when attempting to use the tools on something that is not trivial.
I think the 90/90 rule comes into play. We all know the Tom Cargill quote (even if we’ve never seen it attributed):
The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.
It feels like a gigantic win when it carves through that first 90%… like, “wow, I’m almost done and I just started!”. And it is a genuine win! But for me it’s dramatically less useful after that. The things that trip up experienced developers really trip up LLMs and sometimes trying to break the task down into teeny weeny pieces and cajole it into doing the thing is worse than not having it.
So great with the backhoe tasks but mediocre-to-counterproductive with the shovel tasks. I have a feeling a lot of the impressiveness depends on which kind of tasks take up most of your dev time.
The more I use AI for coding, the more I realize that it's a toy for vibe coding/fun projects. It's not for serious work.
When you work with a large codebase that has a very high complexity level, the bugs AI puts in there aren't worth the cost of the easily added features.
So I have a kanji-learning app (an app to learn to read and write Japanese characters) written in Vue. It is completely AI free, using open dictionary data instead of AI to actually teach users the characters. I don't use AI to code either, just good old emacs and the new flashy LSP-IDE that comes with it now.
A couple of months ago I was talking about the app on a language learning discord server, and a learner of Mandarin Chinese saw it and was wondering if it could be forked for Hanzi characters instead of Kanji. While I can't do that because I don't know anything about Chinese, somebody should. This language learner tried, forked my app, and vibe-coded the changes.
Now, I must say, I am impressed. And I think this learner spent like 2 days doing this, so, good on her. But I definitely can see the limitations of what an LLM can do. The fork is full of weird bugs, the code base is changed seemingly arbitrarily (AI hallucinations, I guess), and the app only barely works.
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.
Much of this boils down to people simply not understanding what’s really happening. Most people, including most software developers, don’t have the ability to understand these tools, their implications, or how they relate to their own intelligence.
> If you haven’t had a mind blown moment with AI yet...
Results are stochastic. Some users, the first time they use it, will get the best possible results by chance. They will attribute their good outcome to their skill in using the thing. Others will try it and get the worst possible response, and they will attribute their outcome to the machine being terrible. Either way, whether it's amazing or terrible is kind of an illusion. It's both.
I wonder if this issue isn't caused by people who aren't programmers, who can now churn out AI generated stuff that they couldn't before. So to them, this is a magical new ability. Whereas people who are already adept at their craft just see the slop. Same thing in other areas. In the before-times, you had to painstakingly handcraft your cat memes. Now a bot comes along and allows someone to make cat memes they didn't bother with before. But the real artisan cat memeists just roll their eyes.
This shit right here is why people hate AI hype proponents. It's like it never crosses their mind that someone who disagrees with them might just be an intelligent person who tried it and found it was lacking. No, it's always "you're either doing it wrong or weren't really trying". Do you not see how condescending and annoying that is to people?
> But moving toward one pole moves you away from the other.
My assumption detector twigged at that line. I think this is just replacing the dichotomy with a continuum between two states. But the hype proponents always hope - and in some cases they are right - that those two poles overlap. People make and lose fortunes placing those bets, and you don't necessarily have to be right or wrong in an absolute sense, just right for long enough that someone else will take over your load, hopefully at a higher valuation.
Engineers are not usually the ones placing the bets, which is why they're trying to stay away from hype driven tech (to them it is neutral with respect to the outcome but in case of a failure they lose their job, so better to work on things that are not hyped, it is simply safer). But as soon as engineers are placing bets they are just as irrational as every other class of investor.
In European consulting agencies the trend now is to make AI part of each RFP reply; you won't get past the sales team if AI isn't crammed in there as part of the solution being delivered, and we get evaluated on it.
This takes all the joy away; even traditional maintenance projects for big corps seem attractive nowadays.
I remember when everything had to have the word ‘digital’ in it. And I’m old enough to remember when ‘multimedia’ was a buzzword that was crammed into anywhere it would fit.
PC, Web and Smartphone hype was based on "we can now do [thing] never done before".
This time out it feels more like "we can do existing [thing], but reduce the cost of doing it by not employing people"
It all feels much more like a wealth grab for the corporations than a promise of improving a standard of living for end customers. Much closer to a Cloud or Server (replacing Mainframes) cycle.
Yes and no. PC, Web, etc. advancements were also about lowering cost. It’s not that no one could do some thing, it’s that it was too expensive for most people, e.g. having a mobile phone in the ’80s.
Or hiring a mathematician to calculate what is now done in a spreadsheet.
"You should be using AI in your day to day job or you won't get promoted" is the 2025 equivalent of being forced to train the team that your job is being outsourced to.
> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products
I think there is a broader dichotomy between the people-persuasion plane and the real-world-facts plane. In the people-persuasion plane, it is all about convincing someone of something, and hype plays here, and marketing, religion and political persuasion too. In the real-world plane, it is all about tangible outcomes, and working code or results play here, and gravity and electromagnetism too. Sometimes there is a feedback loop between the two.
I chose the engineering career because what I produce is tangible, but I realize that a lot of my work is in the people plane.
>a broader dichotomy between the people-persuasion plane and the real-world-facts plane
This right here is the real thing which AI is deployed to upset.
The Enlightenment values which brought us the Industrial Revolution imply that the disparity between the people-persuasion-plane and the real-world-facts-plane should naturally decrease.
The implicit expectation here is that as civilization as a whole learns more about how the universe works, people would naturally become more rational, and thus more persuadable by reality-compliant arguments and less persuadable by reality-denying ones.
That's... not really what I've been seeing. That's not really what most of us have been seeing. Like, devastatingly not so.
My guess is that something became... saturated? I'd place it sometime around the 1970s, same time Bretton Woods ended, and the productivity/wages gap began to grow. Something pertaining to the shared-culture-plane. Maybe there's only so much "informed" people can become before some sort of phase shift occurs and the driving force behind decisions becomes some vague, ethically unaccountable ingroup intuition ("vibes", yo), rather than the kind of explicit, systematic reasoning which actually is available to any human, except for the weird fact how nobody seems to trust it very much any more.
I wonder if objectively Seattle got hit harder than SF in the last bust cycle. I don’t have a frame of comparison. But if the generational trauma was bigger then so too would the backlash against new bubbles.
> So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
I understood “they think they can’t” to refer to the engineers thinking that management won’t allow them to, not to a lack of confidence in their own abilities.
I do assume that; I legitimately think it's the most important thing happening in tech in the next decade. There's going to be an incredible amount of traditional software written to make this possible (new databases, frameworks, etc.), and I think people should be able to see the opportunity, but the awful cultures in places like Microsoft are hindering this.
>So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
The list of people who write code, use high quality LLM agents (not chatbots) like Claude, and report not just having success with the tools but watching the tools change how they think about programming, continues to grow. The sudden appearance of LLMs has had a really destabilizing effect on everything, and a vast portion of what LLMs can do and/or are being used for runs from intellectually stifling (using LLMs to write your term papers) to revolting (all kinds of non-artists/writers/musicians using LLMs to suddenly think they are "creators", displacing real artists, writers, and musicians) to utterly degenerate (political/sexual deepfakes of real people, generation of antivax propaganda, etc.). On top of that, the way corporate America is absolutely doing the very familiar "blockchain" dance on this, insisting everyone has to do AI all the time everywhere, is a huge problem that hopefully will shake out some in the coming years.
But despite all that, for writing, refactoring, and debugging computer code, LLM agents are still completely game changing. All of these things are true at the same time. There's no way someone who works with real code all day could spend an honest few weeks with a tool like Claude and come away calling it "hype". Someone might still not prefer it, or it might not be for them, but to claim it's "hype"? That's not possible.
The last few years have revealed the extent to which HN is packed with middle-aged, conservative engineers who are letting their fear and anxiety drive their engineering decisions. It’s sad; I always thought of my fellow engineers as more open-minded.
> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money).
AI is not both of these things? There are no real AI products that have real customers and make money by giving people what they need?
> LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated.
What do you view as the potential that’s been stated?
A way to state this point that you may find less uncharitable is that a lot of current LLM applications are just very thin shells around ChatGPT and the like.
In those cases the actual "new" technology (i.e., not necessarily the underlying AI) is not as substantive and novel (to me at least) as a product whose internals are not just an existing LLM.
(And I do want to clarify that, to me personally, this tendency towards 'thin-shell' products is kind of an inherent flaw with the current state of AI. Having a very flexible LLM with broad applications means that you can just put ChatGPT in a lot of stuff and have it more or less work. With the caveat that what you get is rarely a better UX than what you'd get if you'd just prompted an LLM yourself.
When someone isn't using LLMs, in my experience you get more bespoke engineering. The results might not be better than an LLM, but obviously that bespoke code is much more interesting to me as a fellow programmer.)
Please don't do this; don't just make up your own definitions.
Pretty much anything and everything that uses neural nets is AI. Just because you don't like how the term has been defined since the beginning doesn't mean you get to reframe it.
In addition, since humans are not infallible oracles of wisdom, they wouldn't count as an intelligence under your definition either.
I also don't understand the LLM ⊄ AI people. Nobody was whining about pathfinding in video games being called AI lol. And I have to say LLMs are a lot smarter than A*.
This seems to be almost purely bandwagon value, like preferring Coca-Cola to some other drink. There are other blockchains that are better technically along a bunch of dimensions, but they don't have the mindshare.
Bitcoin is probably unkillable. Even if were to crash, it won't be hard to round up enough true believers to boost it up again. But it's technically stagnant.
True, but then so is a lot of "tech". There were certainly at least equivalent social applications before and throughout Facebook's dominance, but like Bitcoin, the network effect becomes primary after a minimum feature set.
For Bitcoin, it doesn't exactly seem to be a network effect? It's not like choosing a chat app because that's what your friends use.
Many other cryptocurrencies are popular enough to be easily tradable and have features to make them work better for trade. Also, you can speculate on different cryptocurrencies than your friends do.
Beanie Babies were trading pretty well, too, although it wasn't quite "solving sudokus for drugs", so I guess that's why they didn't have as much staying power.
There can be a bunch of crazy people trading each other various lumps of dog feces for increasing sums of cash, that doesn't mean dogshit is particularly valuable or substantive either.
Uh…
So the argument here is that anticipated future value == meaningful value today?
The whole cryptocurrency world requires evangelical buy-in. But there is no directly created functional value other than a historic record of transactions and hypothetical decentralization. It doesn’t directly create value.
It’s a store of value, again assuming enough people continue to buy into the narrative so that it doesn’t dramatically deflate when you need to recover your assets. States and other investors are helping make stability happen to maintain it as a value store, but you need the story to keep propagating to achieve those ends.
You’re wasting your breath. Bitcoin will be at a million in 2050 and you’ll still get downvoted here for suggesting it’s anything other than a stupid bubble that’s about to burst any day now.
Ex-Google here; there are many people both current and past-Google that feel the same way as the composite coworker in the linked post.
I haven't escaped this mindset myself. I'm convinced there are a small number of places where LLMs make truly effective tools (see: generation of "must be plausible, need not be accurate" data, e.g. concept art or crowd animations in movies), a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, anything where correctness is critical), and a large number of places where LLMs are simply ineffective from the get-go but will increasingly be rammed down consumers' throats.
Accordingly I tend to be overly skeptical of AI proponents and anything touching AI. It would be nice if I was more rational, but I'm not; I want everyone working on AI and making money from AI to crash and burn hard. (See also: cryptocurrency)
My friends at Google are some of the most negative about the potential of AI to improve software development. I was always surprised by this and assumed that Google, internally, would be one of the first places to adopt these tools.
Google has good engineers. Generally I've noticed that the better someone is at coding, the more critical they are of AI generated code. Which makes sense, honestly. It's easier to spot flaws the more expert you are. This doesn't mean they don't use AI gen code, just that they are more careful about when and where.
Yes, because they're more likely to understand that the computer isn't this magical black box, and that just because we've made ELIZA marginally better, doesn't mean it's actually good. Anecdata, but the people I've seen be dazzled by AI the most are people with little to no programming experience. They're also the ones most likely to look on computer experts with disdain.
I think also AI is a product of all the source code it's seen. If you're inexperienced at something, AI will give you a good-enough result that is better than you can do. If you're an expert at something, AI might do it quicker but never as well as doing it yourself.
Engineers at Google are much less likely to be doing green-field generation of large amounts of code. It's much more incremental, carefully measured changes to mature, complex software stacks, done within the Google ecosystem, which is heavily divergent from the OSS-focused world of startups, where most training data comes from.
AI is optimized to solve a problem no matter what it takes. It will try to solve one problem by creating 10 more.
I think long-horizon agentic AI is just snake oil at this point. AI works best if you can segment your task into 5-10 minute chunks, including the AI generating time, correcting time, and engineer review time. To put it another way, a 10 minute sync with a human is necessary, otherwise it will go astray.
Then it just turns software engineering into a babysitting supervisor job. Yes, I typed less, but I didn't feel the thrill of doing it myself.
It's the latest tech holy war. Tabs vs Spaces but more existential. I'm usually anti hype and I've been convinced of AI's use over and over when it comes to coding. And whenever I talk about it, I see that I come across as an evangelist. Some people appreciate that, online I get a lot of push back despite having tangible examples of how it has been useful.
I don't see it that way. Tabs, spaces, curly brace placement, Vim, Emacs, VSCode, etc are largely aesthetic choices with some marginal unproven cognitive implications.
I find people mostly prefer what they are used to, and if your preference was so superior then how could so many people build fantastic software using the method you don't like?
AI isn't like that. AI is a bunch of people telling me this product can do wonderful things that will change society and replace workers, yet almost every time I use it, it falls far short of that promise. AI is certainly not reliable enough for me to jeopardize the quality of my work by using it heavily.
I would say it is like that. No one HAS to use AI. But the shared goal is to get a change to the codebase to achieve a desired outcome. Some will outsource a significant part of that to AI, some won't.
And it's tricky, because I'm trying not to appeal to emotion despite being fascinated with how this tool has enabled me to do things in a short amount of time that would have taken me weeks of grinding to get to, and how it improves my communication with stakeholders. That feels world changing. Specifically my world and the day-to-day role I play when it comes to getting things done.
I think it is fine that it fell short of your expectations. It often does for me as well, but when it gets me 80% of the way there in less than a day's work, my mind is blown. It's an imperfect tool and, I'm sorry for saying this, but so are we. Treat its imperfections the way you would with a junior developer: feedback, reframing, restrictions, and iteration.
My partner (IT analyst) works for a company owned by a multinational big corporation, and she got told during a meeting with her manager that use of AI is going to become mandatory next year. That's going to be a thing across the board.
And have you called a large company for any reason lately? Could be your telco provider, your bank, public transport company, whatever. You call them because online contact means haggling with an AI chatbot until you finally give up and it shunts you over to an actual person who can help, and contact forms and e-mail have been killed off. Calling is not quite as bad, but step one nowadays is 'please describe what you're calling for', where some LLM will try to parse that, fail miserably, and then shunt you to an actual person.
> My partner (IT analyst) works for a company owned by a multinational big corporation, and she got told during a meeting with her manager that use of AI is going to become mandatory next year. That's going to be a thing across the board.
My multinational big corporation employer has reporting about how much each employee uses AI, with a naughty list of employees who aren't meeting their quota of AI usage.
Nothing says "this product is useful" quite like forcing people to use it and punishing people who don't. If it was that good, there'd be organic demand to use it. People would be begging to use it, going around their boss's back to use it.
The fact that companies have to force you to use it with quotas and threats is damning.
It isn't a universal thing. I have no doubt there is a job out there where that isn't a requirement. I think the issue is the C-level folks are seeing how much more productive someone might be and making it a demand. That to me is the wrong approach. If you demonstrate and build interest, the adoption will happen.
> But the shared goal is to get a change to the codebase to achieve a desired outcome.
I'd argue that's not true. It's more of a stated goal. The actual goal is to achieve the desired outcome in a way that has manageable, understood side effects, and that can be maintained and built upon over time by all capable team members.
The difference between what business folks see as the "output" of software developers (code) and what (good) software developers actually deliver over time is significant. AI can definitely do the former. The latter is less clear. This is one of the fundamental disconnects in discussions about AI in software development.
In my personal use case, I work at a company that has SO MUCH process and documentation for coding standards. I made an AI agent that knows all that and used it to update legacy code to the new standard in a day. Something that would have taken weeks if not more. If your desire is manageable code, make that a requirement.
I'm going to say this next thing as someone with a lot of negative bias about corporations: I was laid off from Twitter when Elon bought the company, and again from a second company that was hemorrhaging users.
Our job isn't to write code, it's to make the machine do the thing. All the effort spent on clean, manageable code, etc. is purely in the interest of the programmer, but at the end of the day, launching the feature that pulls in money is the point.
How did you verify that your AI agent performed the update correctly? I've experienced a number of cases where an AI agent made a change that seemed right at first glance, maybe even passed code review, but fell apart completely when it came time to build on top of it.
Unit tests, manual testing the final product, PR with two approvals needed (and one was from the most anal retentive reviewer at the company who is heavily invested in the changes I made), and QA.
You can vibe-code a throwaway UI for investigating some complex data in less than 30 minutes. The code quality doesn't matter, and it will make your life much easier.
Rinse and repeat for many "one-off" tasks.
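For instance, this sort of thing is what I mean; a minimal sketch, assuming the streamlit package and a local CSV dump (both stand-ins for whatever you're actually investigating):

    # explore.py -- run with: streamlit run explore.py
    # Disposable data-poking UI. "events.csv" and its columns are stand-ins.
    import pandas as pd
    import streamlit as st

    df = pd.read_csv("events.csv")
    col = st.selectbox("Filter column", df.columns)
    needle = st.text_input("Contains")
    if needle:
        df = df[df[col].astype(str).str.contains(needle, case=False, na=False)]
    st.write(f"{len(df)} rows match")
    st.dataframe(df)

Nobody reviews it, nobody maintains it; when the investigation is done, you delete it.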
It's not going away; you need to learn how to use it. *shrugs shoulders*
They're good questions! The problem is that I've tried to talk to the people who are getting real value from it, and often the answer ends up being that the value is not as real as they think. One guy gave an excited presentation about how AI let him write 7k LOC per day, expounded for an entire session about how the rest of us should follow in his shoes, and then clarified only in Q&A that reviewers couldn't keep up so he exempted himself from code review.
Most people don't have a problem with using gen AI for stuff like throwaway UIs. That's not even remotely relevant to the criticisms. People reject having it forced down their throats by companies that are desperate to make us totally reliant on it to justify their insane investments. And people reject the evangelists who claim that it's going to replace developers because it can spit out mostly working boilerplate.
>AI is certainly not reliable enough for me to jeopardize the quality of my work by using it heavily.
I mean this in sincerity, and not at all snarky, but - have you considered that you haven't used the tools correctly or effectively? I find that I can get what I need from chatbots (and refuse to call them AI until we have general AI just to be contrary) if I spend a couple of minutes considering constraints and being careful with my prompt language.
When I've come across people in my real life who say they get no value from chatbots, it's because they're asking poorly formed questions, or haven't thought through the problem entirely. Working with chatbots is like working with a very bright lab puppy. They're willing to do whatever you want, but they'll definitely piss on the floor unless you tell them not to.
It would be helpful if you would relate your own bad experiences and how you overcame them. Leading off with "do it better" isn't very instructive. Unfortunately there's no useful training for much of anything in our industry, much less AI.
I prefer to use an LLM as a sock puppet to filter out implausible options in my problem space and to help me recall how to do boilerplate things. Like you, I think, I also tend to write multi-paragraph prompts, repeating myself and calling back to every aspect to continuously home in on the true subject I am interested in.
I don't trust LLMs enough to operate on my behalf agentically yet. And LLMs are uncreative and hallucinatory as heck whenever they stray into novel territory, which makes them a dangerous tool.
> have you considered that you haven't used the tools correctly or effectively?
The problem is that this comes off just as tone-deaf as "you're holding it wrong." In my experience, when people promote AI, it's sold as just having a regular conversation, and then the AI does the thing. And when that doesn't work, the promoter goes into system prompts, MCP, agent files, etc. and entire workflows that are required to get it to do the correct thing. It ends up feeling like you're being lied to, even if there's some benefit out there.
There's also the fact that all programming workflows are not the same. I've found some areas where AI works well, but for a lot of my work it does not. It's usually spotty on things that wouldn't have shown up in a simple Google search back before search was enshittified.
I'm probably one of the people that would say AI (at least LLMs) isn't all its cracked up to be and even I have examples where it has been useful to me.
I think the feeling stems from the exaggeration of the value it provides combined with a large number of internal corporate LLMs being absolute trash.
The overvaluation's effects are seen everywhere, from the stock market to the price of RAM to the cost of energy, as well as IP theft issues, etc. AI has taken over, and yet it still feels like just a really good fuzzy search. Like, yeah, I can search something 10x faster than before but might get a bad answer every now and then.
Yeah its been useful (so have many other things). No it's not worth building trillion dollar data centers for. I would be happier if the spend went towards manufacturing or semiconductor fabs.
Right, this is what I can’t quite understand. A lot of HN folks appear to have been burned by, e.g., horrible corporate or business ideas from non-technical people that don’t understand AI; that is completely understandable. What I never understand is the population of coders who don’t see any value in coding agents or are aggressively against them, or people who deride LLMs as failing to be able to do X (or hallucinating, etc.) and therefore useless, and everything they produce as AI Slop, without recognizing that what we can do today is almost unrecognizable compared to the world of 3 years ago. The progress has moved astoundingly fast, and the sheer amount of capital and competition and pressure means the train is not slowing down. Predictions of “2025 is the year of coding agents” from a chorus of otherwise unpalatable CEOs were in fact absolutely true…
> Predictions of “2025 is the year of coding agents” from a chorus of otherwise unpalatable CEOs was in fact absolutely true…
... but maybe not in the way that these CEOs had hoped.[0]
Part of the AI fatigue is that busy, competent devs are getting swarmed with massive amounts of slop from not-very-good developers. Or product managers getting 5 paragraphs of GenAI bug reports instead of a clear and concise explanation.
I have high hopes for AI and think generative tooling is extremely useful in the right hands. But it is extremely concerning that AI is allowing some of the worst, least competent people to generate an order of magnitude more "content" with little awareness of how bad it is.
AI is in a hype bubble that will crash just like every other bubble. The underlying uses are there but just like Dot Com, Tulips, subprime mortgages, and even Sir Isaac Newton's failings with the South Sea Company the financial side will fall.
This will cause bankruptcies and huge job losses. The argument for and against AI doesn't really matter in the end, because the finances don't make a lick of sense.
Ok, sure, the bubble/non-bubble stuff, fine, but in terms of “things I’d like to be a part of” it’s hard to imagine a more transformative technology (not to again turn off the anti-hype crowd). But ok, say it’s 1997 and you don’t like the valuations you see. As a tech person, you’re not excited by browsers, the internet, the possibilities? You don’t want to be a part of that even if it means a bubble pops? I also hear a lot of people argue “finances don’t make a lick of sense”, but I don’t think things are that cut and dried, and I don’t see this as obvious. I don’t think many people really know how things will evolve or what size a market correction or bubble would be.
What precisely about AI is transformative, compared to the internet? E-mail replaced so much of faxing, phoning and physical mail. Online shopping replaced going to stores and hoping they have what you want, and hoping it is in stock, and hoping it is a good price. It replaced travel agents to a significant degree and reoriented many industries. It was the vehicle that killed CDs and physical media in general.
With AI I can... generate slop. Sometimes that is helpful, but it isn't yet at the point where it's replacing anything for me aside from making google searches take a bit less time on things that I don't need a definitive answer for.
It's popping up in my music streams now and then, and I generally hate it. Mushy-mouthed fake vocals over fake instruments. It pops up online and aside from the occasional meme I hate it there too. It pops up all over blogs and emails and I profoundly hate it there, given that it encourages the actual author to silence themselves and replaces their thoughts with bland drivel.
Every single software product I use begs me to use their AI integration, and instead of "no" I'm given the option of "not now", despite me not needing it, and so I'm constantly being pestered about it by something.
There is zero guarantee that these tools will continue to be there. Those of us who are skeptical of their value may find them somewhat useful, but we are quite wary of ripping up the workflows we've built for ourselves over a decade or more in favor of something that might be 10-20% more useful but could be taken away, charge greater fees, or literally collapse in functionality at any moment, leaving us suddenly crippled. I'll keep the thing I know works and know will always be there (because it's open source, etc.), even if it means I'm slightly less productive in the meantime.
What plausible scenario do you imagine in which your tools would be taken away or would “collapse in functionality”? I would say Claude right now has probably made worse code and wasted more time than if I had coded things myself, but that's because this is like the first few hundred days of this. Open weight models are also worse, but they will never go away and are improving steadily as well. I am all for people doing whatever works for them; I just don’t get the negativity or the skepticism when you look at the progress over what has been almost zero time. It’s crappy now in many respects, but it’s like saying “my car is slow” in the one millisecond after I floor the gas pedal.
My understanding is that all the big AI companies are currently offering services at a loss, doing the classic Silicon Valley playbook of burning investor cash to get big and hoping to make a profit later. So any service you depend on could crash out of the race, and if one emerges as a victorious monopoly and you rely on it, it can charge you almost whatever it likes.
To my mind, the 'only just started' argument is wearing off. It's software, it moves fast anyway, and all the giants of the tech world have been feverishly throwing money at AI for the last couple of years. I don't buy that we're still just at the beginning of some huge exponential improvement.
My understanding is that they make a loss overall due to the spending on training new models, and that the API costs are profit-making considered in isolation. That said, this is based on guesstimates from the hosting costs of open-weight models, owing to a lack of financial transparency everywhere for the secret-weights models.
> What would you imagine a plausible scenario would possibly be that your tools would be taken away or “collapse in functionality”?
Simple. The company providing the tool suddenly needs actual earnings. Therefore, they need to raise prices. They also need users to spend more tokens, so they will make the tool respond in a way that requires more refinement. After all, the latter is exactly what happened with Google search.
At this point, that is the pretty normal software cycle: attract a crowd by being free or cheap, then lock features behind a paywall, then simultaneously raise prices more and more while making the product worse.
This literally NEEDS to happen, because these companies do not have any other path to profitability. So, it will happen at some point.
> What I never understand is the population of coders that don’t see any value in coding agents or are aggressively against them, or people that deride LLMs as failing to be able to do X (or hallucinate etc) and are therefore useless and every thing is AI Slop, without recognizing that what we can do today is almost unrecognizeable from the world of 3 years ago.
I don't recognize that because it isn't true. I try the LLMs every now and then, and they still make the same stupid hallucinations that ChatGPT did on day 1. AI hype proponents love to make claims that the tech has improved a ton, but based on my experience trying to use it those claims are completely baseless.
> I try the LLMs every now and then, and they still make the same stupid hallucinations that ChatGPT did on day 1.
One of the tests I sometimes do of LLMs is a geometry puzzle:
You're on the equator facing south. You move forward 10,000 km along the surface of the Earth. You rotate 90° clockwise. You move another 10,000 km forward along the surface of the Earth. Rotate another 90° clockwise, then move another 10,000 km forward along the surface of the Earth.
Where are you now, and what direction are you facing?
They all used to get this wrong all the time. Now the best ones sometimes don't. (That said, the only one to succeed just as I write this comment was DeepSeek; the first I saw succeed was one of ChatGPT's models, but that's now back to the usual error they all used to make.)
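If you want to check the puzzle yourself without trusting me or a model, here's a minimal numeric sketch. It assumes a perfectly spherical Earth with a 40,000 km circumference and takes "clockwise" consistently as seen from above the walker's head:

    import numpy as np

    R = 40_000 / (2 * np.pi)  # assumed spherical Earth, 40,000 km around

    def walk(p, h, km):
        # Move km along the great circle through position p with heading h,
        # both unit vectors from the Earth's centre.
        t = km / R
        return p * np.cos(t) + h * np.sin(t), h * np.cos(t) - p * np.sin(t)

    def clockwise_90(p, h):
        # Turn 90 degrees clockwise, viewed from above the walker.
        return np.cross(h, p)

    p = np.array([1.0, 0.0, 0.0])   # on the equator, longitude 0
    h = np.array([0.0, 0.0, -1.0])  # facing south
    p, h = walk(p, h, 10_000)       # leg 1: ends at the South Pole
    h = clockwise_90(p, h)
    p, h = walk(p, h, 10_000)       # leg 2: back up to the equator
    h = clockwise_90(p, h)
    p, h = walk(p, h, 10_000)       # leg 3: a quarter turn along the equator

    lat = np.degrees(np.arcsin(p[2]))
    lon = np.degrees(np.arctan2(p[1], p[0]))
    print(f"lat={lat:.0f} lon={lon:.0f} heading={np.round(h)}")
    # Back at the start (lat 0, lon 0), now facing east, up to float noise.

Under that convention you end up exactly where you started, now facing east; take the turn at the pole the other way and you instead end up 180° of longitude away.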
Anecdotes are of course a bad way to study this kind of thing.
Unfortunately, so are the benchmarks, because the models have quickly saturated most of them, including traditional IQ tests (on the plus side, this has demonstrated that IQ tests are definitely a learnable skill, as LLMs lose 40-50 IQ points when going from public to private IQ tests) and stuff like the maths olympiad.
Right now, AFAICT the only open benchmarks are the METR time horizon metric, the ARC-AGI family of tests, and the "make me an SVG of ${…}" stuff inspired by Simon Willison's pelican on a bike.
I also would like to see AI end up dying off except for a few niches, but I find myself using it more and more. It is not a productivity boost in the way I end up using it, interestingly. Actually I think it is actively harming my continued development, though that could just be me getting older, or perhaps massive anxiety from joblessness. Still, I can't help but ask it if everything I do is a good idea. Even in the SO era I would try to find a reference for every little choice I made to determine if it was a good or bad practice.
The thing that changed my view on LLMs was solo traveling for 6 months after leaving Microsoft. There were a lot of points on the trip where I was in a lot of trouble (severe food sickness, stolen items, missed flights) where I don't know how I would have solved those problems without chatGPT helping.
This is one of the most depressing things I have ever read on Hacker News. You claim to have become so de-actualized as a human being that you cannot fulfill the most basic items of Maslow’s Hierarchy of Needs (food, health, personal security, shelter, transportation) without the aid of an LLM.
IDK I got really sick in a foreign country, I wasn't sure how to get to the hospital and I was alone in a hotel room. I don't really know how using chatgpt to help me isn't actualizing.
Severe food sickness? I know WebMD rightly gets a lot of hate, but this is one thing it would be good for.
Stolen items? Depending on the items and the place, possibly police.
Missed flights? Customer service agent at the airport for your airline or call the airline help line.
As a Seattle SWE, I'd say most of my coworkers do hate all the time-wasting AI stuff being shoved down our throats. There are a few evangelical AI boosters I do work with, but I keep catching mistakes in their code that they didn't used to make. Large suites of elegant looking unit tests, but the unit tests include large amounts of code duplicating functionality of the test framework for no reason, and I've even seen unit tests that mock the actual function under test. New features that actually already exist with more sane APIs. Code that is a tangled web of spaghetti. These people largely think AI is improving their speed but then their code isn't making it past code review. I worry about teams with less stringent code review cultures, modifying or improving these systems is going to be a major pain.
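To illustrate the mocked-function one, since it sounds unbelievable: the pattern looks roughly like this (module and function names are made up, not from any real codebase):

    from unittest import mock

    import billing  # hypothetical module under test

    # The "unit test" patches the very function it claims to test, so the
    # assertion only ever exercises the mock. Always green, proves nothing.
    def test_compute_invoice_total():
        with mock.patch.object(billing, "compute_invoice_total", return_value=100):
            total = billing.compute_invoice_total([{"price": 40}, {"price": 60}])
        assert total == 100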
In fairness, I’ve seen humans make that mistake. We had a complete outage in the testing of a product once, and a couple of tests were still green. Turns out they tested nothing and never had.
I've interfaced with some AI generated code, and after several instances of finding subtle yet very wrong bugs, I now digest code that I suspect comes from AI (or an AI loving coworker) with much, much more scrutiny than I used to. I've frankly lost trust in any kind of care for quality or due diligence from some coworkers.
I see how the use of AI is useful, but I feel that the practitioners of AI-as-coding-agent are running away from the real work. How can you tell me about the system you say you have created if you don't have the patience to make it or think about it deeply in the first place?
Would you rather consume a bowl of soup with a fly in it, or a 50 gallon drum with 1,000 flies in it? In which scenario are you more likely to fish out all the flies before you eat one?
Alas, the flies are not floating on the surface. They are deeply mixed in, almost as if the machine that generated the soup wanted desperately to appear to be doing an excellent job making fly-free soup.
I see it like the hype of js/node and whatever module tech is glued to it when it was new from the perspective of someone who didn't code js. Sum of F's given is still zero.
My hot take is that the evangelical people don't really like AI either; they're just scared. I think you have to be outside of big tech to appreciate AI.
While everybody else is ranting about AI, I'll rant about something else: trip planning apps. There have been literally thousands of attempts at this and AFAICT precisely zero have ever gotten any traction. There are two intractable problems in this space.
1) A third party app simply cannot compete with Google Maps on coverage, accuracy and being up to date. Yes, there are APIs you can use to access this, but they're expensive and limited, which leads us to the second problem:
2) You can't make money off them. Nobody will pay to use your app (because there's so much free competition), and the monetization opportunities are very limited. It's too late in the flow to sell flights, you can't compete with Booking etc for hotel search, and big ticket attractions don't pay commissions for referrals. That leaves you with referrals for tours, but people who pay for tours are not the ones trying to DIY their trip planning in the first place.
It's just another business/service niche that is solved until the current Big Provider becomes Evil or goes under.
Similar to "made for everyone" social networks and video upload platforms.
But there are niches that are trip planning + no one solving the pain! For example, Geocaching. I always dreamed about an easy way to plan Geocaching routes for travel and find interesting caches on the way. Currently you have to filter them out and then eyeball the map for what seems to be nearby, even though there may not be any real roads there, or the cache has probably been lost, or it has to be accessed at a specific time of day.
So... No one wants apps that are already solved + boring.
I am not in Seattle. I do work in AI but have shifted more towards infrastructure.
I feel fatigued by AI. To be more precise, this fatigue includes several factors. The first is that a lot of people around me get excited by events in the AI world that I find distracting. These might be new FOSS library releases, news announcements from the big players, new models, new papers. As one person, I can only work on 2-3 things in any given interval of time. Ideally I would like to focus and go deep on those things. Often, I need to learn something new, and that takes time, energy and focus. This constant Brownian motion of ideas gives a sense of progress and "keeping up" but, for me at least, acts as a constantly tapped brake.
Secondly, there is a sentiment that every problem has an AI solution. Why sit and think, run experiments, or try to build a theoretical framework when one can just present the problem to a model? I use LLMs too, but it is more satisfying, productive, and insightful when one actually thinks hard about and understands a topic before using them.
Thirdly, I keep hearing that the "space moves fast" and "one must keep up". The fundamentals actually haven't changed that much in the last 3 years and new developments are easy to pick up. Even if they did, trying to keep up results in very shallow and broad knowledge that one can't actually use. There are a million things going on and I am completely at peace with not knowing most of them.
Lastly, there is pressure to be strategic. To guess where the tech world is going, to predict and plan, to somehow get ahead. I have no interest in that. I am confident many of us will adapt and if I can't, I'll find something else to do.
I am actually impressed with and heavily use models. The tiresome part now are some of the humans around the technology who participate in the behaviors listed above.
1. You were a therapy session for her. Her negativity was about the layoffs.
2. FAANG companies dramatically overhired for years and are using AI as an excuse for layoffs.
3. The AI scene in Seattle is pretty good, but as with everywhere else it was/is a victim of the AI hype. I see estimates of the hype being dead in a year. AI won't be dead, but throwing money at whatever Uber-for-pets-but-with-AI idea pops up won't happen.
4. I don't think people hate AI, they hate the hype.
Anyways, your app actually does sound interesting so I signed up for it.
I think these companies would benefit from honesty. If they're right and their new AI capabilities are really powerful, then poisoning their workforce against AI is the worst thing they could do right now. A generous severance approach and compassionate layoffs would go a long way.
It's not just that AI is being pushed onto employees by the tech giants - this is true - but that the hype of AI as a life-changing tech is not holding up, and people within the industry can easily see this. The only life-changing thing it's doing is due to a self-fulfilling prophecy of eliminating jobs, in the tech industry and outside it, by CEOs who have bet too much on AI. Everyone currently agrees that there is no return yet on all the money spent on AI. Some players may survive and do well in the future, but for the majority there is only the prospect of pain, and this is what all the negativity is about.
More than this, man. AI is making me re-appreciate part of the Marxist criticism of capitalism. The concept of worker alienation could easily be extended in new forms to the labor situation in an AI-based economy.
FWIW, humans derive a lot of their self-evaluation as people from labor.
All big corporate employees hate AI because it is incessantly pushed on them by clueless leadership and mostly makes their job harder. Seattle just happens to have a much larger percent of big tech employees than most other cities (>50% work for Microsoft or Amazon alone). In places like SF this gloom is balanced by the wide eyed optimism of employees of OpenAI, Anthropic, Nvidia, Google etc. and the thousands of startups piggybacking off of them hoping to make it big.
Since this is a place with a high density of people who have the agency to influence the outcome, I think it's important for people here to acknowledge that much of what the negative people think is probably 100% true.
There will absolutely be some cases where AI is used well. But probably the larger fraction will be where AI does not give a better service, experience, or tool. It will be used to give a cheaper but shittier one. This will be a big win for the company or service implementing it, but it will suck for literally everybody else involved.
I really believe there's huge value in implementing AI pervasively. However it's going to be really hard work and probably take 5 years to do it well. We need to take an engineering and human centred approach and do it steadily and incrementally over time. The current semi-religious fervour about implementing it rapidly and recklessly is going to be very harmful in the longer term.
Anecdotally, lots of people in SF tech hate AI too. _Most_ people out of tech do.
But, enough of the people in tech have their future tied to AI that there are lot of vocal boosters.
It is not at all my experience working in local government (that is, in close contact with everybody else paying attention to local government) that non-tech people hate AI. It seems rather the opposite.
Managers everywhere love the idea of AI because it means they can replace expensive and inefficient human workers with cheap automation.
Among actual people (i.e. not managers) there seems to be a bit of a generation gap - my younger friends (Gen Z) are almost disturbingly enthusiastic about entrusting their every thought and action to ChatGPT; my older friends (young millennials and up) find it odious.
The median age of people working in local politics is probably 55, and I've met more people (non-family, that is) over 70 doing this than in anything else, and all of them are (a) using AI for stuff and (b) psyched to see any new application of AI being put to use (for instance, a year or so ago, I used 4o to classify every minute spent in our village meetings according to broad subjects; a sketch of roughly how is at the end of this comment).
Or, drive through Worth and Bridgeview in IL, where all the Middle Eastern people in Chicago live, and notice all the AI billboards. Not billboards for AI, just billboards obviously made with GenAI.
I think it's just not true that non-tech people are especially opposed to AI.
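For the curious, here is roughly what that 4o classification looked like: a minimal sketch assuming the OpenAI Python client. The topic labels and prompt are illustrative placeholders, not our village's actual list.

```python
# Sketch: classify meeting-transcript segments by broad subject with gpt-4o.
# The topic list and prompt wording are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TOPICS = ["zoning", "budget", "public safety", "roads", "other"]  # assumed labels

def classify_minute(segment: str) -> str:
    """Ask the model to pick exactly one broad subject for one segment."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Classify the meeting excerpt into exactly one of: "
                        f"{', '.join(TOPICS)}. Reply with the label only."},
            {"role": "user", "content": segment},
        ],
    )
    return response.choices[0].message.content.strip()
```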
Managers should realize that the thing AI might be best at is replacing them. Most of my managers don't understand the people they are managing and don't understand what the people they are managing are actually building. Their job is to get a question from management that their reports can answer, format that answer for their boss, and send the email. Their job is to be the leader in a meeting to make sure it stays on track, not to understand the content. AI can do all that shit without a problem.
I don't doubt that many love it. I'm just going based on SF non-tech people I know, who largely see it as the thing vaguely mentioned on every billboard and bus stop, the chatbot every tech company seems to be trying to wedge into every app, and the thing that makes misleading content on social media and enables cheating on school projects. But, sometimes it is good at summarizing videos and such. I probably have a biased sample of people who don't really try to make productive use of AI.
I can imagine reasons why non-tech people in SF would hate all tech. I work in tech and living in the middle of that was a big part of why I was in such a hurry to get out of there.
Frankly, tech deserves its bad reputation in SF (and worldwide, really).
One look at the dystopian billboards bragging about trying to replace humans with AI should make any sane human angry at what tech has done. Or the rising rents due to an influx of people working on mostly useless AI startups, 90% of which won't be around in 5 years. Or even how poorly many in tech behave in public and how poorly they treat service workers. That's just the tip of the iceberg, and just in SF alone.
I say all this as someone living in SF and working in tech. As a whole, we've brought the hate upon ourselves, and we deserve it.
There's a long list of things that have "replaced" humans all the way back to the ox-drawn plow. It's not sane to be angry at any of those steps along the way. GenAI will likely not be any different.
It is absolutely sane to be angry at people's livelihoods being destroyed and most aspects of life being worsened just so a handful of multi-billionaires that already control society can become even richer.
Anyone involved in government procurement loves AI, irrespective of what it even is, for the simple fact that they get to pointedly ask every single tech vendor for evidence that they have "leveraged efficiency gains from AI" in the form of a lower bid.
At least, that's my wife's experience working on a contract with a state government at a big tech vendor.
Non-technical people that I know have rapidly embraced it as a "better Google where I don't have to do as much work to answer questions." This is in a non-work context, so I don't know how much those people are using it to do their day job writing emails or whatever. A lot of these people are tech-using boomers - they already adjusted to Google/the internet, they don't know how it works, they just are like "oh, the internet got even better."
There's maybe a slow trend towards "that's not true, you should know better than to trust AI for that sort of question" in discussions when someone says something like "I asked AI how [xyz was done]" but it's definitely not enough yet to keep anyone from going to it as their first option for answering a question.
EDIT: Removed part of my post that pissed people off for some reason. shrug
It makes a lot of sense that someone casually coming in to use chatgpt for 30 minutes a week doesn't have any reason to think more deeply about what using that tool 'means' or where it came from. Honestly, they shouldn't have to think about it.
It’s one of those “people hate noticing AI-generated stuff, but everyone and their mom is using ChatGPT to make their work easier” situations. There are a lot of vocal boosters and vocal anti-boosters, but the general population is using it in a Google fashion and moving on. Not everyone is thinking about the AI apocalypse every day.
Personally, I’m in between the two opinions. I hate when I’m consuming AI-generated stuff, but I can see the use for myself for work or for asking a bunch of not-so-important questions to get a general idea of stuff.
> enough of the people in tech have their future tied to AI that there are lot of vocal boosters
That's the presumption. There's no data on whether this is actually true or not. Most rational examinations show that it most likely isn't. The progress of the technology is simply too slow and no exponential growth is on the horizon.
Most of my FB contacts are not in tech. It is overwhelmingly viewed as a negative by them. To be clearer: I'm counting anyone who posts AI-generated pictures on FB as implicitly being pro-AI; if we neglect this portion, the only non-negative posts about AI would be highly qualified "in some special cases it is useful" statements.
That’s fair. The bad behavior in the name of AI definitely isn’t limited to Seattle. I think the difference in SF is that there are people doing legitimately useful stuff with AI.
I think this comment (and TFA) is really just painting with too broad of strokes. Of course there are going to be people in tech hubs that are very pro-AI, either because they are working with it directly and have had legitimately positive experiences or because they work with it and they begrudgingly see the writing on that wall for what it means for software professionals.
I can assure you, living in Seattle I still encounter a lot of AI boosters, just as much as I encounter AI haters/skeptics.
What’s so striking to me is these “vocal boosters” almost preach like televangelists the moment the subject comes up. It’s very crypto-esque (not a hot take at all I know). I’m just tired of watching these people shout down folks asking legitimate questions pertaining to matters like health and safety.
Health and safety seems irrelevant to me. I complain about cars; I point out "obscure" facts, like that they are a major cause of lung-related health problems for innocent bystanders; I don't actually ride in cars on any regular basis; I use them less, in fact, than I use AI. There were people at the car's introduction who made all the points I would make today.
The world is not at all about fairness of benefits and impacts to all people; it is about a populist mass and what amuses them and makes their lives convenient, hopefully without attending the relevant funerals themselves.
Honestly I don’t really know what to say to that, other than it seems rather relevant to me. I don’t really know what to elaborate on given we disagree on such a fundamental level.
Do you think the industry will stop because of your concern? If for example, AI does what it says on the box but causes goiters for prompt jockeys do you think the industry will stop then or offshore the role of AI jockey?
It's lovely that you care about health, but I have no idea why you think you are relevant to a society that is very much willing to risk extinction to avoid the slightest upset or delay to consumer-convenience-measured progress.
Strangely, I've found the only people who are super excited about AI are executive-level boomers. My mom loves AI and uses it to do her job, which of course has poor results. All the younger people I know hate AI. Perhaps it's also a generational difference.
> I wanted her take on Wanderfugl , the AI-powered map I've been building full-time.
I can at least give you one piece of advice. Before you decide on a company or product name, take the time to speak it out loud so you can get a sense of how it sounds.
I grew up in Norway, and there's this idea in Europe of someone who breaks from corporate culture and hikes and camps a lot (called Wandervogel in German). I also liked how, when pronounced in Norwegian or Swedish, it sounds like "wander full". I like the idea of someone who is full of wander.
Also, do it assuming different linguistic backgrounds. It could sound dramatically different to people who speak English as a second language, who are going to be a whole lot of your users, even if the application is in English.
Instead of admitting you built the wrong thing you denigrate a friend and someone whom you admire. Instead of reconsidering the value of AI you immediately double down.
This is a product of hurt feelings and not solid logic.
The thing about dismissing AI in 2025 is that it's on par with dismissing the wearable computing group at MIT in the 1980s.
But admittedly, if one had tried to productize their stuff in the 1980s it would have been hilarious. So the rewards here are going to go to the people who read the right tea leaves and follow the right path to what's inevitable.
In the short term, a lot of not-so-smart people are going to lose a lot of money believing some of the ludicrous short-term claims. But when has that not been the case?
This is not the right time of year to pitch in Seattle. The days are short and the people are cranky. But if they want to keep hating on AI as a technology because of Microsoft and Amazon, let them, and build your AI technology somewhere else. San Francisco thinks the AGI is coming any day now so it all balances out, no?
We have these weekly rah-rah AI meetings where we swap tips on what we've achieved with Copilot and Devin. Mostly crickets on real results, but everyone talks with lots of enthusiasm. It's starting to get silly now, though: most people can't even get the tools to do anything useful beyond the trivial things we used to see on Stack Overflow.
> none of it had anything to do with what I built. She talked about Copilot 365. And Microsoft AI. And every miserable AI tool she's forced to use at work. My product barely featured. Her reaction wasn't about me at all. It was about her entire environment.
She was given two context clues. AI. And maps. Maps work, which means all the information in an "AI-powered map" descriptor rests on the adjective.
The product website isn't convincing either. It's only in private beta, and the first example shows 'A scenic walking tour of Venice' as the desired trip. I'll readily believe LLMs will gladly give you some sort of itinerary for walking in Venice, including all highlights people write and post about a lot on social media to show how great their life is. But if you asked anyone knowledgeable about travel in that region, the counter questions would be 'Why Venice specifically? I thought you hated crowds — have you considered less crowded alternatives where you will be appreciated more as a tourist? Have you actually been to Italy at all?'.
LLMs are always going to give you the most plausible thing for your query, and will likely just rehash the same destinations from hundreds of listicles and status signalling social media posts.
She probably understood this from the minimal description given.
> I'll readily believe LLMs will gladly give you some sort of itinerary for walking in Venice
I tried this in Crotone in September. The suggested walking tour was shit. The facts weren't remarkable. The stops were stupid and stupidly laid out. The whole experience was dumb, and only redeeming because I was vacationing with a friend who founded one of the AI companies.
Very new ex-MSFT here.
I couldn’t relate more with your friend. That’s exactly what happened. I left Microsoft about 5 weeks ago and it’s been really hard to detox from that culture.
AI pushed down everywhere. Sometimes shitty AI that needed to be proven out at all costs because it had to live up to the hype.
I was in one such AI org, and even there several teams felt the pressure from SLT and a culture drift toward a dysfunctional environment.
Such pressure to use AI at all costs, as other fellows from Google mentioned, has been a secret ingredient to a bitter burnout. I’m going to therapy and under medication now to recover from it.
FWIW: I realized this year that there are whole cohorts of management people who have absolutely zero relationship with the words that they speak. Literal tabula rasas who convert their thoughts to new words with no attachment to past statements/goals.
Put another way: Liars exist and operate all around you in the top tier of the FAANGS rn.
People hate what the corporations want AI to be and people hate when AI is used the way corporations seem to think it should be used, because the executives at these companies have no taste and no vision for the future of being human. And that is what people think of when they hear “AI”.
I still think there’s a third path, one that makes people’s lives better with thoughtful, respectful, and human-first use of AI. But for some reason there aren’t many people working on that.
Thanks for the post - it's work to write and synthesize, and I always appreciate it!
My first reaction was "replace 'AI' with the word 'Cloud'" ca 2012 at MS; what's novel here?
With that in mind, I'm not sure there is anything novel about how your friend is feeling or the organizational dynamics, or in fact how large corporations go after business opportunities; on those terms, I think your friend's feelings are a little boring, or at least don't give us any new market data.
In MS in that era, there was a massive gold rush inside the org to Cloud-ify everything and move to Azure - people who did well at that prospered, people who did not, ... often did not. This sort of internal marketplace is endemic, and probably a good thing at large tech companies - from the senior leadership side, seeing how employees vote with their feet is valuable - as is, often, the directional leadership you get from a Satya who has MUCH more information than someone on the ground in any mid-level role.
While I'm sure there were many naysayers about the Cloud in 2012, they were wrong, full stop. Azure is immensely valuable. It was right to dig in on it and compete with AWS.
I personally think Satya's got a really interesting hyper scaling strategy right now -- build out national-security-friendly datacenters all over the world -- and I think that's going to pay -- but I could be wrong, and his strategy might be much more sophisticated and diverse than that; either way, I'm pretty sure Seattleites who hate how AI has disrupted their orgs and changed power politics and winners and losers in-house will have to roll with the program over the next five years and figure out where they stand and what they want to work on.
It does feel like without a compelling AI product Microsoft isn't super differentiated. Maybe Satya is right that scale is a differentiation, but I don't think people are as trapped in an AI ecosystem as they were in Azure.
Lol. You don't think that Microsoft has _a_ compelling AI product? The new version of 365 Copilot is objectively compelling, even if it is a work in progress. And Github Copilot is also objectively compelling.
Moving to the Cloud proved to be a pretty nice moneymaker far faster and more concretely than AI has been for these companies. It's a fair comparison regarding corporate pushes but not anything more than that.
There has always been a lot of Microsoft hate, but now it's a whole new level. Windows really sucks now; my new laptop is all Linux for the first time ever. I don't see why this company is still so valuable. Most people only use a browser and some iOS apps now, so there is no need for Windows or Microsoft (and of course Azure is never anyone's first choice). Steam makes the gamers happy to leave too.
The problem with AI is that the media and the tech hype machine want everyone to believe that it is more than a glorified randomized text generator. Yes, for many problems this is just what you need, but not to create reliable software. Somehow, they want everyone to go into a state of disbelief and agree that it is a superior intelligence, or at least the clear sign of something of the sort, and that we should stop everything we're doing right now to give more money and attention to this endeavor.
I live in Seattle, and got laid off from Microsoft as a PM in Jan of this year.
Tried in early 2024 to demonstrate how we could leverage smaller models (such as Mixtral) to improve documentation and tailor code samples for our auth libraries.
The usual “fiefdom” politics took over and the project never gained steam. I do feel like I was put in a certain “non-AI” category and my career stalled, even though I took the time to build AI-integrated prototypes and present them to leadership.
It’s hard to put on a smile and go through interviews right now. It feels like the hard-earned skills we bring to the table are being so hastily devalued, and for what exactly?
I like AI to the extent that it can quickly solve well-worn, what I've taken to calling "embarrassingly solved problems", in your environment, like "make an animation subsystem for my program". A Qt timeline is not hard, but it is tedious, so the AI can do it.
And it turns out that there are some embarrassingly solved problems, like rudimentary multiplayer games, that look more impressive than they really are when you get down to it.
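To make "embarrassingly solved" concrete, here is a minimal sketch of the sort of tedious-but-routine plumbing I mean, assuming PySide6; the details are illustrative, not my actual animation subsystem.

```python
# Minimal QTimeLine sketch (PySide6): the tedious-but-solved plumbing an
# LLM handles fine. A real subsystem would wrap many of these.
from PySide6.QtCore import QCoreApplication, QTimeLine

app = QCoreApplication([])

timeline = QTimeLine(2000)        # a 2-second animation
timeline.setFrameRange(0, 100)    # emit frames 0..100 over that span

# In a real program this slot would update a widget or scene item.
timeline.frameChanged.connect(lambda frame: print(f"progress: {frame}%"))
timeline.finished.connect(app.quit)

timeline.start()
app.exec()
```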
More challenging prompts like "change the surface generation algorithm my program uses from Marching Cubes to Flying Edges", for which there are only a handful of toy examples, VTK's implementation, and the paper, result in an avalanche of shit. Wasted hours, quickly becoming wasted days.
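(For reference, VTK itself ships Flying Edges as a drop-in filter - the class names in this sketch are real, though the pipeline is a toy for illustration. The avalanche of shit starts when you ask the model to reimplement the algorithm outside VTK.)

```python
# Sketch: VTK's own Flying Edges used as a drop-in replacement for
# vtkMarchingCubes. Real class names; illustrative toy pipeline.
import vtk

# Synthetic volume: an implicit sphere sampled onto a 64^3 grid.
sphere = vtk.vtkSphere()
sample = vtk.vtkSampleFunction()
sample.SetImplicitFunction(sphere)
sample.SetSampleDimensions(64, 64, 64)

# Same basic interface as vtkMarchingCubes, different algorithm inside.
surface = vtk.vtkFlyingEdges3D()
surface.SetInputConnection(sample.GetOutputPort())
surface.SetValue(0, 0.0)  # extract the zero isosurface
surface.Update()

print(surface.GetOutput().GetNumberOfPolys(), "triangles")
```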
'If you could classify your project as "AI," you were safe and prestigious. If you couldn't, you were nobody. Overnight, most engineers got rebranded as "not AI talent."'
It hits weirdly close to home. Our leadership did not technically mandate use, but "strongly encourages" it. I haven't even had my review yet, but I know that once we get to the goals part, use of AI tools will be an actual metric (which, wherever I fall between skeptic and evangelist, is just dumb).
But the "AI talent" part fits. For mundane stuff like a data model, I need full committee approval from people who don't get it anyway (and whose entire contribution is "what other companies are doing").
The full quote from that section is worth repeating here.
---------
"If you could classify your project as "AI," you were safe and prestigious. If you couldn't, you were nobody. Overnight, most engineers got rebranded as "not AI talent." And then came the final insult: everyone was forced to use Microsoft's AI tools whether they worked or not.
Copilot for Word. Copilot for PowerPoint. Copilot for email. Copilot for code. Worse than the tools they replaced. Worse than competitors' tools. Sometimes worse than doing the work manually.
But you weren't allowed to fix them—that was the AI org's turf. You were supposed to use them, fail to see productivity gains, and keep quiet.
Meanwhile, AI teams became a protected class. Everyone else saw comp stagnate, stock refreshers evaporate, and performance reviews tank. And if your team failed to meet expectations? Clearly you weren't "embracing AI." "
------------
On the one hand, if you were going to bet big on AI, there are aspects of this approach that make sense. e.g. Force everyone to use the company's no-good AI tools so that they become good. However, not permitting employees outside of the "AI org" to fix things neatly nixes the gains you might see while incurring the full cost.
It sounds like MS's management, the same as many other tech corp's, has become caught up in a conceptual bubble of "AI as panacea". If that bubble doesn't pop soon, MS's products could wind up in a very bad place. There are some very real threats to some of MS's core incumbencies right now (e.g. from Valve).
I know of at least one bigco that will no longer hire anyone, period, who doesn't have at least 6 months of experience using genai to code and isn't enthusiastic about genai. No exceptions. I assume this is probably true of other companies too.
I think it makes some amount of sense if you've decided you want to be "an AI company", but it also makes me wary. Apocryphally, Google for a long period of time struggled to hire some people because they weren't an "ideal culture fit" - i.e. you're trying to hire someone to fix Linux kernel bugs you hit in production, but they don't know enough about Java or Python to pass the interview gauntlet...
Like any tool, the longer you use it the better you learn where you can extract value from it and where you can't, where you can leverage it and where you shouldn't. Because your behaviour is linked to what you get out of the LLM, this can be quite individual in nature, and you have to learn to work with it through trial and error. But in the end engineers do appear to become more productive 'pairing' with an LLM, so it's no surprise companies are favouring LLM-savvy engineers.
> But in the end engineers do appear to become more productive 'pairing' with an LLM
Quite the opposite: LLMs reduce productivity, they don't increase it. They merely give the illusion of productivity because you can generate code real fast, but that isn't actually useful when you spend time fixing all the mistakes it made. It is absolutely insane that companies are stupid enough to require people use something which cripples them.
So far, for me, it's just an annoying tool that gets worse outcomes potentially faster than just doing it by hand.
It doesn't matter how much I use it. It's still just an annoying tool that makes mistakes which you try to correct by arguing with it but then eventually just fix it yourself. At best it can get you 80% there.
The only clear applications for AI in software engineering are for throwaway code, which interestingly enough isn't used in software engineering at all, or for when you're researching how to do something, for which it's not as reliable as reading the docs.
They should focus more on data engineering/science and other similar fields, which involve a lot more of that kind of code, but since there are often no tests there, that's a bit too risky.
It's not. I know some ex-Bay Area devs who are of the same mind, and I'm not too far off.
I think it's definitely stronger at MS, as my friend on the inside tells me, than at most places.
There are a lot of elements to it: profits at all costs, the greater economy, FOMO, and a resentment of engineers and technical people who have been practicing, for a long time, what execs, I can only guess, see as alchemy. They've decided that they are now done with that and that everyone must use the new sauce, because reasons. Sadly, until things like logon buttons disappear and customers get pissed, it won't self-correct.
I just wish we could present the best version of ourselves and, as long as deadlines are met, it would all work out, but some have decided on scorched earth. I suppose it's a natural reaction to always want to be on the cutting edge, even before the cake has left the oven.
I’m all for neurodivergent acceptance but it has caused monumentally obnoxious people like this to assume everyone else is the problem. A little self awareness would solve a lot of problems.
I think reading the room is required here. You and your friend can both be right at the same time. You want to build an AI-enabled app, and indeed there's plenty of opportunity for it, I'm sure. And your friend can hate what it's done to their job stability and the industry. Also, totally unrelated, but what is the meaning or etymology behind the app name Wanderfugl? I initially read it as Wanderfungl.
> But then I realized this was bigger than one conversation. Every time I shared Wanderfugl with a Seattle engineer, I got the same reflexive, critical, negative response. This wasn't true in Bali, Tokyo, Paris, or San Francisco—people were curious, engaged, wanted to understand what I was building. But in Seattle? Instant hostility the moment they heard "AI."
So what's different between Seattle and San Francisco? Does Seattle have more employee-workers and San Francisco has more people hustling for their startup?
I assume Bali (being a vacation destination) is full of people who are wealthy enough to feel they're insulated from whatever will happen.
I live in Seattle now, and have lived in San Francisco as well.
Seattle has more “normal” people and the overall rhetoric about how life “should be” is in many ways resistant to tech. There’s a lot to like about the city, but it absolutely does not have a hustle culture. I’ve honestly found it depressing coming from the East Coast.
Tangent aside, my point is that Seattle has far more of a comparison ground of “you all are building shit that doesn’t make the world better, it just devalues the human”. I think LLMs have (some) strong use cases, but it is impossible to argue that some of the societal downsides we see are ripe for hatred - and Seattle will latch on to that in a heartbeat.
Seattle has always been a second-mover when it comes to hype and reality distortion. There is a lot more echo chamber fervor (and, more importantly, lots of available FOMO money to burn) in SF around whatever the latest hotness is.
My SF friends think they have a shot at working at a company whose AI products are good (cursor, anthropic, etc.), so that removes a lot of the hopelessness.
Working for a month out of Bali was wonderful, it's mostly Australians and Dutch people working remotely. Especially those who ran their own businesses were super encouraging (though maybe that's just because entrepreneurs are more supportive of other entrepreneurs).
A sizable fraction of current AI results are wrong. The key to using AI successfully is imposing the costs of those errors on someone who can't fight back. Retail customers. Low-level employees. Non-paying users.
A key part of today's AI project plan is clearly identifying the dump site where the toxic waste ends up. Otherwise, it might be on top of you.
My previous software job was for a Seattle-based team within Amazon's customer support org.
I consider it divine intervention that I departed shortly before LLMs got big. I can't imagine the unholy machinations my former team has been tasked with working on since I left.
There's a great non-AI point in this article - Seattle has great engineers. In pursuing startups, Seattle engineers are relatively unambitious compared to the Bay Area. By that I mean there's less "shooting for unicorns" and a comparatively more reserved startup culture and environment.
I'm not sure why. I don't think it's access to capital, but I'd love to hear thoughts.
My pet theory is that most of the investor class in Seattle is ex-Microsoft and ex-Amazon. Neither Microsoft nor Amazon is really a big splashy unicorn. Amazon's greatest innovation (AWS) isn't even their original line of business and is now "boring". No doubt they've innovated all over their business in both little and big ways, but not splashy ways; hell, every time Amazon tries to splash they seem to fall on their ass more often than not (look at their various cancelled hardware lines, their game studios, etc. Alexa still chugs on, but she's not getting appreciably better for the end user, even over the last 10 years).
Microsoft is the same, a generally very practical company just trying to do practical company stuff.
All the guys who made their bones, vested, and rested, and now want to turn some of that windfall into investments, likely don't have the kind of risk tolerance it takes to fund a potential unicorn. All smart people, I'm sure - smart enough to negotiate big windfalls from MS/AMZN - but far less risk-tolerant than a guy in SF who made his investment nest egg building some risky unicorn.
Whenever I see "everyone", and broad statements that try to paint an entire geography based on one company (Microsoft), I'm suspicious of the motives of the author at worst, or just dismissive of the premise at best.
I see what the author is saying here, but they're painting with an overly broad brush. The whole "San Francisco still thinks it can change the world" also is annoying.
I am from the Seattle area, so I do take it a bit personally, but this isn't exactly my experience here.
I think they exist as a "market segment" (i.e., there are people out there who will use AI), but in terms of how people talk about it, sentiment is overwhelmingly negative in most circles. Especially folks in the arts and humanities.
The only non-technical people I know who are excited about AI, as a group, are administrator/manager/consultant types.
It's probably good if some portion of the engineering culture is irrationally against AI and refuses to adopt it, sort of Amish-style. There's probably a ton of good work that can only be done when every aspect of a product/thing gets focused human attention, some of which might out-compete AI-aided work.
I think you hit the nail on the head there. There's absolutely nothing we can do with AI that we can't do without it. And the level of understanding of a large codebase that a solid group of engineers has is paramount to moving fast once the product is live.
I think treating AI as the best possible field for everyone smart and capable is itself very narrow-minded and short-sighted. Some people just aren't interested in that field; why is that so hard to accept? The world still needs experts in other fields, even within computing.
He describes his startup as an AI-oriented map... to me that sounds amazing and totally up my alley. But then it's actually about trip planning... to me that is too constrained and specific. What I would love is a map-type experience that gives me an AI-type interface for interesting things in any given area that might be near me and worth checking out.
And not just for travel, by the way... I love just exploring maps and seeing a place. I'd love to learn more about a place, kind of like a mesh between Wikipedia and a map, and AI could help.
I don't think the root cause here is AI. It's the repeated pattern of resistance to massive technological change, driven by system-level incentives. This story has happened again and again throughout recent history.
I expect it to settle out in a few years where:
1. The fiduciary duties owed to company shareholders will bring companies to the point of no longer chasing AI hype, and instead deriving an understanding of whether it's driving real top-line value for their business or not.
2. Mid to senior career engineers will have no choice but to level up their AI skills to stay relevant in the modern workforce.
My 2¢... LLMs are kind of amazing for structured text output like code. I have a completely different experience using LLMs for assistance writing code (as a relative novice) than I do in literally every other avenue of life.
Electrical engineering? Garbage.
Construction projects? Useless.
But code is code everywhere, and the immense amount of training data available in the form of working code, tutorials, and design and style guides means that the output for software development doesn't really resemble what anybody working in any other field sees. Even adjacent technical fields.
It's satisfying to hear that Microsoft engineers hate Microsoft's AI offerings as much as I do.
Visual Studio is great. IntelliSense is great. Nothing open-source works on our giant legacy C++ codebase. IntelliSense does.
Claude is great. Claude can't deal with millions of lines of C++.
You know what would be great? If Microsoft gave Claude the ability to semantic search the same way that I can with Ctrl-, in Visual Studio. You know what would be even better? If it could also set breakpoints and inspect stuff in the Debugger.
You know what Microsoft has done? Added a setting to Visual Studio where I can replace the IntelliSense auto-complete UI, that provides real information determined from semantic analysis of the codebase and allows me to cycle through a menu of possibilities, with an auto-complete UI that gives me a single suggestion of complete bullshit.
Can't you put the AI people and the Visual Studio people in a fucking room together? Figure out how LLMs can augment your already-really-good-before-AI product? How to leverage your existing products to let Claude do stuff that Claude Code can't do?
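Nothing like this exists today, but for a sense of what I'm asking for, here is a hypothetical sketch using the Anthropic Python SDK's tool-use API. The semantic_search tool and the stub behind it are entirely made up; the real work would be bridging into the IDE's semantic index.

```python
# Hypothetical: expose the IDE's semantic code search to Claude as a tool.
# The SDK calls are real; the tool and its backing stub are invented.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SEMANTIC_SEARCH_TOOL = {
    "name": "semantic_search",
    "description": "Search the codebase by symbol, the way the IDE's "
                   "Ctrl-, search does, rather than by plain text.",
    "input_schema": {
        "type": "object",
        "properties": {"symbol": {"type": "string"}},
        "required": ["symbol"],
    },
}

def ide_semantic_search(symbol: str) -> str:
    """Stub standing in for a real bridge into the IDE's code index."""
    return f"(definitions and references for {symbol} would come from the IDE)"

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[SEMANTIC_SEARCH_TOOL],
    messages=[{"role": "user",
               "content": "Where is RenderFrame implemented and who calls it?"}],
)

# If the model chose to call the tool, run the stub and print its result.
for block in response.content:
    if block.type == "tool_use" and block.name == "semantic_search":
        print(ide_semantic_search(block.input["symbol"]))
```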
There are a few clashing forces. One is the power of startups - what people love is what will prevail. It's what made Macs and iPhones grab market share back from "corporate" options like Windows and the Palm Pilot. It's what keeps TikTok running.
An opposing force is corporate momentum. It's unfortunately true that people are beholden to what companies create. If there are only a few phones available, you will have to pick one. If there are only so many shows streaming, you'll probably end up watching the least disgusting of the options.
They are clashing. The people's sentiment is "AI bad". But if tech keeps making it and pushing it long enough, people will get older, corporate initiatives will get sticky, and it will become ingrained. And once it's ingrained, it's going to be here forever.
As I've said before: AI mandates, like RTO mandates, are just another way to "quiet fire" people, or at least "quiet renegotiate" their employment.
That said, AI resistance is real too. We see it on this forum. It's understandable because the hype is all about replacing people, which will naturally make them defensive, whereas the narrative should be about amplifying them.
A well-intentioned AI mandate would either come with a) training and/or b) dedicated time to experiment and figuring out what works well for you. Instead what we're seeing across the industry is "You MUST use AI to do MORE with LESS while we layoff even more people and move jobs overseas."
My cynical take is, this is an intentional strategy to continue culling headcount, except overindexing on people seen as unaligned with the AI future of the company.
"I said, Imagine how cool would this be if we had like, a 10-foot wall. It’s interactive and it’s historical. And you could talk to Martin Luther King, and you could say, ‘Well, Dr, Martin Luther King, I’ve always wanted to meet you. What was your day like today? What did you have for breakfast?’ And he comes back and he talks to you right now."
Seattle sounds kinda nice now. AI fatigue is real. I just had to swap eye doctors because they changed their medical records to some AI-powered bullshit and wanted me to re-enter all my info into the new system in order to check in for my appointment. A website whose EULA page redirected to an empty page, with no clear mention of HIPAA anywhere on the site's other pages. The eye doctor seemed confused about why I wanted to stop using them after ten years as a patient, even after I pointed out the flaws. It's madness.
Was this written by AI? It sounds like the writing style of an elementary school student. Almost entirely made of really simple sentence structures, and for whatever reason I find it really annoying to read.
> like building an AI product made me part of the problem.
It's not about their careers. It's about the injustice of the whole situation. Can you possibly perceive the injustice? That the thing they're pissed about is the injustice? You're part of the problem because you can't.
That's why it's not about whether the tools are good or bad. Most of them suck, also, but occasionally they don't--but that's not the point. The point is the injustice of having them shoved in your face; of having everything that could be doing good work pivot to AI instead; of everyone shamelessly bandwagoning it and ignoring everything else; etc.
That's the thing, though, it is about their careers.
It's not just that people are annoyed that someone spends years to decades learning their craft, and then someone else puts a prompt into a chatbot that spits out an app that mostly works, without understanding any of the code that they 'wrote'.
It's that the executives are positively giddy at the prospect that they can get rid of some number of their employees and the rest will use AI bots to pick up the slack. Humans need things like a desk and dental insurance, and they fall unconscious for several hours every night. AI agents don't have to take lunch breaks or attend funerals or anything.
Most employees that have figured this out resent AI getting shoved into every facet of their jobs because they know exactly what the end goal is: that lots of jobs are going to be going away and nothing is going to replace them. And then what?
I feel like this is a textbook example of how people talk past each other. There are people in this world who operate under personal utility maximization, and they think everyone else does also. Then there are people who are maximizing for justice: trying to do the most meaningful work themselves while being upset about injustices. Call it scrupulosity, maybe. Executives doing stupid pointless things to curry favor is massively unjust, so it's infuriating.
If you are a utilitarian person and you try to parse a scrupulous person according to your utilitarianism of course their actions and opinions will make no sense to you. They are not maximizing for utility, whatsoever, in any sense. They are maximizing for justice. And when injustices are perpetrated by people who are unaccountable, it creates anger and complaining. It's the most you can do. The goal is to get other people to also be mad and perhaps organize enough to do something about it. When you ignore them, when you fail to parse anything they say as about justice, then yes, you are part of the problem.
I've lived in Seattle my whole life, and have worked in tech for 12+ years now as a SWE.
I think the SEA and SF tech scenes are hard to differentiate perfectly in an HN comment. However, I think any "Seattle hates AI" sentiment has more to do with the incessant pushing of AI into all the tech spaces.
It's being claimed as the next major evolution of computing, while also being cited as reasons for layoffs. Sounds like a positive for some (rich people) and a negative for many other people.
It's being forced into new features of existing products, while adoption of said features is low. This feels like cult-like behavior where you must be in favor of AI in your products, or else you're considered a luddite.
I think the confusing thing to me is that things which are successful don't typically need to be touted so aggressively. I'm on the younger side and generally positive to developments in tech, but the spending and the CEO group-think around "AI all the things" doesn't sit well as being aligned with a naturally successful development. Also, maybe I'm just burned out on ads in podcasts for "is your workforce using Agentic AI to optimize ..."
Some massive bait in this article. Like come on author - do you seriously have these thoughts and beliefs?
> It felt like the culture wanted change.
>
> That world is gone.
Ummm source?
> This belief system—that AI is useless and that you're not good enough to work on it anyway
I actually don't know anyone with this belief system. I'm pretty slow on picking up a lot of AI tooling, but I was slow to pick up JS frameworks as well.
It's just smart to not immediately jump on a bandwagon when things are changing so fast because there is a good chance you're backing the wrong horse.
And by the way, you sound ridiculous when you call me a dinosaur just because I haven't started using a tool that didn't even exist 6 months ago. FOMO sales tactics don't work on everyone, sorry to break it to you.
When the singularity hits who knows how many years from now, do you really think it's one of these LLM wrapper products that's going to be the difference maker? Again, sorry to break it to you, but that's a party you and I are not going to get invited to. There's a 0% chance governments would actually allow true superintelligence as a direct-to-consumer product.
In my opinion, the issue in AI is similar to the issue in self-driving cars. I think the last “five percent” of functionality for agents etc. will be much, much more difficult to nail down for production use, just like snowy weather and strange roads proved to be much more difficult for the self-driving car technology rollout. They got to 95% and assumed they were nearing completion, but it turned out there was even more work to be done to get to 100%. That’s kind of my take on all the AI hype. It’s going to take a lot more work to get the final five percent done.
Textbook way to NOT roll out AI for your org. AI has genuine benefits for white-collar workers, but they are not trained on the use cases that would actually benefit them, nor are they trained in what the tech is actually good at. They are being punished for using the tools poorly (with no guidance on how to use them well), and when they use the tools well, they fear being laid off once an SOP for their AI workflows is written.
This isn’t just a Seattle thing, but I do think the outsized presence of specific employers there contributes to an outsized negativity around AI.
Look, good engineers just want to do good work. We want to use good tools to do good work, and I was an early proponent of using these tools in ways to help the business function better at PriorCo. But because I was on the wrong team (On-Prem), and because I didn’t use their chatbots constantly (I was already pitching agents before they were a defined thing, I just suck at vocabulary), I was ripe for being thrown out. That built a serious resentment towards the tooling for the actions of shitty humans.
I’m not alone in these feelings of resentment. There’s a lot of us, because instead of trusting engineers to do good work with good tools, a handful of rich fucks decided they knew technology better than the engineers building the fucking things.
It reads like it's AI-edited, which is deliciously ironic.
(Protip: if you're going to use em—dashes—everywhere, either learn to use them appropriately, or be prepared to be blasted for AI—ification of your writing.)
For me it is that they are wrongly used in this piece. Em dashes as appositives have the feel of interruption—like this—and are to be used very sparingly. They're a big bump in the narrative's flow, and are to be used only when you want a big bump. Otherwise appositives should be set off with commas, when the appositive is critical to the narrative, or parentheses (for when it isn't). Clause changes are similar—the em dash is the biggest interruption. Colons have a sense of finality: you were building up to this: and now it is here. Semicolons are for when you really can't break two clauses into two sentences with a full stop; a full stop is better most of the time. Like this. And so full stops should be your default clause splice when you're revising.
Having em-dashes everywhere—but each one or pair is used correctly—smacks of AI writing—AI has figured out how to use them, what they're for, and when they fit—but has not figured out how to revise text so that the overall flow of the text and overall density of them is correct—that is, low, because they're heavy emphasis—real interruptions.
(Also the quirky three-point bullet list with a three-point recitation at the end with bolded leadoffs to each bullet point and a final punchy closer sentence is totally an AI thing too.)
But, hey, I guess I fit the stereotype!—I'm in Seattle and I hate AI, too.
> Semicolons are for when you really can't break two clauses into two sentences with a full stop; a full stop is better most of the time.
IIRC (it's been a while) there are 2 cases where a semi-colon is acceptable. One is when connecting two closely-related independent clauses (i.e. they could be two complete sentences on their own, or joined by a conjunction). The other is when separating items in a list, when the items themselves contain commas.
Also, for sheer delightful perversity, I ran the above comment through Copilot/ChatGPT and asked it to revise, and this is what I got. Note the text structuring and how it has changed! (And how my punctuation games are gone, but we expected that.)
>>>
For me, the issue is that they’re misused in this piece. Em dashes used as appositives carry the feel of interruption—like this—and should be employed sparingly. They create a jarring bump in the narrative’s flow, and that bump should only appear when you want it. Otherwise, appositives belong with commas (when they’re integral to the sentence) or parentheses (when they’re not). Clause breaks follow the same logic: the em dash is the strongest interruption. Colons convey a sense of arrival—you’ve been building up to this: and now it’s here. Semicolons are for those rare cases when two clauses can’t quite stand alone as separate sentences; most of the time, a full stop is cleaner. Like this. Which is why full stops should be your default splice when revising.
Sprinkling em dashes everywhere—even if each one is technically correct—feels like AI writing. The system has learned what they are, how they work, and when they fit, but it hasn’t learned how to revise for overall flow or density. The result is too many dashes, when the right number should be low, because they’re heavy emphasis—true interruptions.
(And yes, the quirky three-point bullet list with bolded openers and a punchy closer at the end is another hallmark of AI prose.)
But hey, I guess I fit the stereotype—I’m in Seattle, and I hate AI too.
I think it's because it is difficult to actually add an em dash when writing with a keyboard (except, I've heard, on Macs). So either they 1) memorized the em dash alt code, 2) have a keyboard shortcut for it, or 3) are using the character map to insert it every time, all of which are a stretch for a random online post.
You just type hyphen twice in many programs... Or on mobile you hold hyphen for a moment and choose em dash. I don't use it, but it's very easy to use.
Related article posted here https://news.ycombinator.com/item?id=46133941 explains it: "Within the A.I.’s training data, the em dash is more likely to appear in texts that have been marked as well-formed, high-quality prose. A.I. works by statistics. If this punctuation mark appears with increased frequency in high-quality writing, then one way to produce your own high-quality writing is to absolutely drench it with the punctuation mark in question. So now, no matter where it’s coming from or why, millions of people recognize the em dash as a sign of zero-effort, low-quality algorithmic slop."
So the funny thing is em dashes have always been a great trick to help your writing flow better. I guess GPT-4o figured this out in RLHF and now it's everywhere.
I'm not surprised you're getting bad reactions from people who aren't already bought in. You're starting from a firm "I'm right! They're wrong!" with no attempt to understand the other side. I'm sure that comes across not just in your writing.
> After a pause I tried to share how much better I've been feeling—how AI tools helped me learn faster, how much they accelerated my work on Wanderfugl. I didn't fully grok how tone deaf I was being though. She's drowning in resentment.
Here's the deal. Everyone I know who is infatuated with AI shares things AI told them with me, unsolicited, and it's always so amazingly garbage, but they don't see it or they apologize it away [1]. And this garbage is being shoved in my face from every angle --- my browser added it, my search engine added it, my desktop OS added it, my mobile OS added it, some of my banks are pushing it, AI comment slop is ruining discussion forums everywhere (even more than they already were, which is impressive!). In the meantime, AI is sucking up all the GPUs, all the RAM, and all the kWh.
If AI is actually working for you, great, but you're going to have to show it. Otherwise, I'm just going to go into my cave and come out in 5 years and hope things got better.
[1] Just a couple days ago, my spouse was complaining to her friend about a change that Facebook made, and her friend pasted an AI suggestion for how to fix it with like 7 steps that were all fabricated. That isn't helpful at all. It's even less helpful than if the friend had just suggested contacting support and/or deleting the Facebook account.
I've recently found that it can be a useful substitute for stackoverflow. It does occasionally make shit up, but stackoverflow and forums searching also has a decently high miss rate as well, so that doesn't piss me off too much. And it's usually immediately obvious when a method doesn't exist, so it doesn't waste a lot of time for each incident.
Specifically, I was using Gemini to answer questions about Godot for C# (not GDScript or using the IDE, where documentation and forum support are stronger), and it was mostly quite good for that.
I got a thread on SomethingAwful gassed [1] because it was about an AI radio station app I was working on. People on that forum do not like AI.
I think some of the reasons that they gave were bullshit, but in fairness I have grown pretty tired of how much low-effort AI slop has been ruining YouTube. I use ChatGPT all the time, but I am growing more than a little frustrated how much shit on the internet is clearly just generated text with no actual human contribution. I don’t inherently have an issue with “vibe coding”, but it is getting increasingly irritating having to dig through several-thousand-line pull requests of obviously-AI-generated code.
I’m conflicted. I think AI is very cool, but it is so perfectly designed to exploit natural human laziness. It’s a tool that can do tremendous good, but like most things, it requires people use it with effort, which does seem to be the outlier case.
I don't know if anyone has been reading cover letters recently, but it seems that people are prompting the LLMs with the same shit, dusting off their hands, and thinking "done" - and what the reader then sees is the same repetitive, uncreative, and instantly recognizable boilerplate.
The people prompting don't seem to realize what's coming out the other end is boilerplate dreck, and you've got to think - if you're replaceable with boilerplate dreck maybe your skills weren't all that, anyway?
Howdy! I personally don't really understand the "point" the article is trying to make. I mostly agree with your sentiment that AI can be useful. I too have seen a massive increase in productivity in my hobbies, thanks to LLMs.
As to the point of the article, is it just to say "People shouldn't hate LLMs"? My takeaway was more "This person's future isn't threatened directly so they just aren't understanding why people feel this way." but I also personally believe that, if the CEOs have their way, AI will threaten every job eventually.
So yeah I guess I'm just curious what the conclusion presented here is meant to be?
I don't follow why it's hard to build in Seattle. Do you mean before this "AI summer" they struggled, or that with AI they have become too slow because they won't adopt it?
I get the feeling that this is supposed to be about the economics of a fairly expensive city/state and that "six-figure salary", but you don't really call it out.
If it were about the technology, then it would be no different from being a Java/C++ developer and being told that someone who does HTML and JavaScript is your equal, so pay them the same. It's not.
People get anxious when something may cause them to have to change - especially in terms of economics and the pressures that puts on people beyond just "adulting". But I don't really think you explained the why of their anxiety.
Pointing the finger at AI is like telling the Germans that all their problems are because of Jews without calling out why the Germans are feeling pressure from their problems in the first place.
It kinda seems like you're conflating Microsoft with Seattle in general. From the outside, what you say about Microsoft specifically seems to be 100% true: their leadership has gone fucking nuts and their irrational AI obsession is putting stifling pressure on leaf level employees. They seem convinced that their human workforce is now a temporary inconvenience. But is this representative of Seattle tech as a whole? I'm not sure. True, morale at Amazon is likely also suffering due to recent layoffs that were at least partly blamed on AI.
Anecdotally, I work at a different FAANMG+whatever company in Seattle that I feel has actually done a pretty good job with AI internally: providing tools that we aren't forced to use (i.e. they add selectable functionality without disrupting existing workflows), not tying ratings/comp to AI usage (seriously how fucking stupid are they over in Redmond?), and generally letting adoption proceed organically. The result is that people have room to experiment with it and actually use it where it adds real value, which is a nonzero but frankly much narrower slice than a lot of """technologists""" and """thought leaders""" are telling us.
Maybe since Microsoft and Amazon are the lion's share (are they?) of big tech employment in Seattle, your point stands. But I think you could present it with a bit of a broader view, though of course that would require more research on your part.
Also, I'd be shocked if there wasn't a serious groundswell of anti-AI sentiment in SF and everywhere else with a significant tech industry presence. I suspect you are suffering from a bit of bias due to running in differently-aligned circles in SF vs. Seattle.
I think probably the safest place to be right now emotionally is a smaller company. Something about the hype right now is making Microsoft/Amazon act worse. Be curious to hear what specifically your company is doing to give people agency.
I was under the distinct impression that Seattle was somewhat divided over 'big tech', with many long-term residents resenting Microsoft and Amazon's impact on the city (and longing for the 'artsy and free-spirited' place it used to be). Do you think those non-techies are sympathetic to the Microsofties and Amazonians? This is a genuine question, as I've never lived in Seattle, but I visit often, and live in the PNW.
> Do you think those non-techies are sympathetic to the Microsofties and Amazonians?
As somebody who has lived in Seattle for over 20 years and spent about 1/3 of it working in big tech (but not either of those companies), no, I don't really think so. There is a lot of resentment, for the same reasons as everywhere else: a substantial big tech presence puts anyone who can't get on the train at a significant economic disadvantage.
If you are a writer or a painter or a developer in a city as expensive as Seattle, you may feel a little threatened. Then comes the trickle-down effect: if I lose my job, I may not be able to pay for my dog walker, or my child care, or my hairdresser, or...
Are they sympathetic? It depends on how much they depend on those who are impacted. Everyone wants to get paid, but AI doesn't have kids to feed or diapers to buy.
They kind of are, though I think so many locals now work in big tech in some way that it's shifted a bit. I wish we could return to being a bit more artsy and free-spirited.
I've lived in the Seattle area most of my life and lived in San Francisco for a year.
SF embraces tech and in general (politics, etc) has a culture of being willing to try new things. Overall tech hostility is low, but the city becoming a testbed for projects like Waymo is possibly changing that. There is a continuous argument that their free-spirited culture has been cannibalized by tech.
Seattle feels like the complete opposite. Resistant to change, resistant to trying things, and if you say you work in tech you're now a "techbro" and met with eyerolls. This is in part because in Seattle if you are a "techbro" you work for one of the megacorps whereas in SF a "techbro" could be working for any number of cool startups.
As you mentioned, Seattle has also been taken over by said megacorps, which has colored everyone's impressions. When you have entire city blocks taken over by Microsoft/Amazon and the roads congested by them, it definitely has some negative domino effects.
As an aside, on TV we in the Seattle area get ads about how much Amazon has been doing for the community. Definitely some PR campaign to keep local hostility low.
I'm sure the 5% employee tax in Seattle and the bill being introduced in Olympia will do more to smooth things over than some quirky blipvert will.
I think most people in Seattle know how economics works; the logic follows:

    while techbro.out_of_work:
        if techbro.debt > techbro.income:
            if techbro.assets > 0:
                sell_gig_hustle()
            else:
                sell_house_before_foreclosure()
                no_more_seattle_for_you(techbro)
        else:
            # gigbot isn't summoned and people don't get paid
            techbro.health -= 1  # COBRA is expensive
            # [etc...]
'How much they do for the community', like trying to buy elections so we won't tax them, the same thing Boeing and Microsoft did. Any time our local government gets a little uppity, suddenly these big corps are looking to move, like Boeing largely did. Remember Amazon HQ2? At least part of the reasoning behind that disaster was Seattleites asking, 'what the hell is Amazon doing for us besides driving up rents and snarling traffic?'
(... and exactly how is Boeing doing since it was forced away from its 'engineering culture' by moving out of the city where its workforce was trained and where it was training the next generation? Oh yeah: planes are falling out of the sky and their software is pushing planes into the ground.)
I'm just really isolated right now, I've been building solo for a long time. I don't have anyone to share my thoughts with, which is something I used to really value at Microsoft.
Regarding "And then came the final insult: everyone was forced to use Microsoft's AI tools whether they worked or not."
As a customer, I actually had an MS account manager yell at me once for refusing to touch <latest newfangled vaporware from MS> with a ten-foot pole. Sorry, burn me a dozen times; I don't have any appendages left to care. I seriously don't get Microsoft. I am still flabbergasted any time anyone takes Microsoft seriously.
One fun one: the leadership of Windows Update became obsessed with shipping AI models via Windows Update, but they can't safely ship files larger than 200 MB inside an update.
I like that you shared the insight. It feels like you shared a secret with the world that is not so secret if you work at Microsoft (I guess this is less about the city).
I feel bad for people who work at dystopian places where you can't just do the job, try to get ahead etc. It is set up to make people fail and play politics.
I wonder if the company is dying slowly, with AI hype and good old foundations keeping its stock price going.
Well, I think it's interesting how much what goes on inside the major employers affects Seattle. Crappy behavior inside Microsoft is felt outside of it.
> This belief system—that AI is useless and that you're not good enough to work on it anyway—hurts three groups
I don't know anyone who thinks AI is useless. In fact, I've seen quite a few places where it can be quite useful. Instead, I think it's massively overhyped to its own detriment. This article presents the author as the person who has the One True Vision, and all us skeptics are just tragically undereducated.
I'm a crusty old engineer. In my career, I've seen RAD tooling, CASE tools, no/low-code tools, SGML/XML, and Web3 not live up to the lofty claims of the devotees and therefore become radioactive despite there being some useful bits in there. I suspect AI is headed down the same path and see (and hear of) more and more projects that start out looking really impressive and then crumble after a few promising milestones.
This person wrote a blog post admitting to tone-deafness in cheerleading AI and talking about all the ways AI hype has negatively impacted people's work environments. But then they wrap up by concluding that it's the anti-AI people who are the problem. That's a really weird conclusion to come to at the end of that blog post. My expectation was that the takeaway would be "We should be measured and mindful with our advocacy, read the room, and avoid aggressively pushing AI in ways that negatively impact people's lives."
AI is in the Radium phase of its world-changing discovery life cycle. It's fun and novel, so every corporate grifter in the world is cramming it into every product that they can, regardless of it making sense. The companies being the most reckless will soon develop a cough, if they haven't already.
Wanderfugl is a strange name for an "AI"-powered map. The Wandervogel movement was against industrialization and pro-nature. I'm sure they would have looked down on iPhones and centralized "AI" that gives them instructions on where to go.
Again a somewhat positive term (if you focus on "back to nature" and ignore the nationalist parts) is taken, assimilated, and turned on its head.
The author has an unquestioned assumption that the only innovation possible is the kind with AI. That is genuinely weird. Even if one believes in AI, innovation in the non-AI space should be possible, no?
Second, engineering and innovation are two different categories. Most of engineering is about... making things work. Fixing bugs, refactoring fragile code, building new features people need or want. Maybe AI products would be hated less if they were a little less about pretending to be an innovation and a little more about making things work.
I love AI but I find Microsoft AI to be mostly useless. You'd think that anything called Copilot can do things for you, but most of the time it just gives you text answers. Even when it is in the context of the application it can't give you better answers than ChatGPT, Claude or Perplexity. What is the point of that?
Satya has completely wasted their early lead in AI. Google is now the leader.
I honestly expected this to be about sanctimonious lefties complaining about a single ChatGPT query using an Olympic swimming pool's worth of water, but it was actually about Seattle big-tech workers hating it due to layoffs and botched internal implementations, which is a much more valid reason to hate it.
My buddies still (or until recently still) at Amazon have definitely been feeling this same push. Internal culture there has been broken since the post-COVID layoffs, and layering "AI" over the layoffs leaves a bad taste.
I'm stuck between feeling bad because this is my field–I spend most days worrying about not being able to pay my bills or get another job–and wanting to shake every last tech worker by the shoulders and yell "WAKE UP!" at them. If you are unhappy with what your employer is doing, because they have more power over you, you don't have to just sit there and take it. You can organize.
Of course, you could also go online and sulk, I suppose. There are more options between "ZIRP boomtimes lol jobs for everyone!" and "I got fired and replaced with ELIZA". But are tech workers willing to explore them? That's the question.
It just feels like it's in bad taste that we have the most money and privilege and employment left (despite all of the doom and gloom), and we're sitting around feeling sorry for ourselves. If not now, when? And if not us, who?
To the extent that Microsoft pushes their employees to use all their other shitty products, Copilot seems like just another one (it can't be more miserable/broken than SharePoint).
> My former coworker—the composite of three people for anonymity—now believes she's both unqualified for AI work and *that AI isn't worth doing anyway*. *She's wrong on both counts*, but the culture made sure she'd land there.
I'm not sure they're as wrong as these statements imply?
Do we think there's more or less crap out there now, with the advent and pervasiveness of AI? Not just from random CEOs pushing things top-down, but even from ICs doing their own gig?
Oh but we're all supposed to swoon over the author's ability to make ANOTHER AI powered mapping solution! Probably vibecoded and bloated too. Just what we need, obviously all the haters are wrong! /s
I live in Seattle (well, a 20-minute ferry from Seattle) and I too hate AI. In fact I have a Kanji-learning app which I am trying to push onto people, and I brand it as AI-free. No AI was used to develop it, no AI was used to write content, and no AI is there to “help you learn”.
When I see apps like Wanderfugl, I get the same sense of disgust as OP's ex-coworker. I don't want to try this app, I don't want to see it, just get it away from me.
This isn’t really a common-folk-vs-tech-bros story. It’s about one specific part of Seattle’s tech culture reacting to AI hype. People outside that circle often have very different incentives.
Unlike Seattle, Los Angeles has few software engineers, but I would not utter "AI" at all here.
It's an infinitely moving goalpost of hate: if it's an actor, a "creative", or a writer, AI is a monolithic doom; next it's theoretical public policy or the lack thereof; and if nothing about it affects them directly, then it's about the energy use and the environment.
Nobody is going to hear about what your AI does, so don't mention anything about AI unless you're trying to earn or raise money. It's a double life.
I was just about to post that this entire story could have been completely transposed to almost every conversation I've had in Los Angeles over the past year and a half. Looks like you beat me to it!
The only difference is that I don't have the conversation, ha. I don't tell people about anything I do that's remotely close to that; I rarely even mention anything in tech. I listen to enough other conversations to catch on to how it goes; it's very easy to get roped into an AI-doomer conversation that's hard to get out of.
Lots of creators (e.g., writers, illustrators, voice actors) hate "AI" too.
Not only because it's destroying creator jobs while also ripping off creators, but it's also producing shit that's offensively bad to professionals.
One thing that people in tech circles might not be aware of is that people outside of tech circles aren't thinking that tech workers are smart. They haven't thought that for a long time. They are generally thinking that tech workers are dimwit exploiter techbros, screwing over everyone. This started before "AI", but now "AI" (and tech billionaires backing certain political elements) has poured gasoline on the fire. Good luck getting dates with people from outside our field of employment. (You could try making your dating profile all about enjoying hiking and dabbling with your acoustic guitar, but they'll quickly know you're the enemy, as soon as you drive up in a Tesla, or as soon as you say "actually..." before launching into a libertarian economics spiel over coffee.)
Literally everyone I know is sick of AI. Sick of it being crowbar'd into tools we already use and find value in. Sick of it being hyped at us as though it's a tech moment it simply isn't. Sick of companies playing at being forward thinking and new despite selling the same old shit but they've bolted a chatbot to it, so now it's "AI." Sick of integrations and products that just plain do not fucking work.
I wouldn't shit talk you to your face if you're making an AI thing. However I also understand the frustration and the exhaustion with it, and to be blunt, if a product advertises AI in it, I immediately do treat it more skeptically. If the features are opt-in, fine. If however it seems like the sort of thing that's going to start spamming me with Clippy-style "let our AI do your work for you!" popups whilst I'm trying to learn your fucking software, I will get aggravated extremely fast.
Oh, I will happily get in your face and tell you your AI garbage sucks. I'm not afraid of these people, and you shouldn't be, either. Bring back social pressure. We successfully shamed Google Glassholes into obscurity, we can do it again. This shit has infested entire operating systems now, all so someone can get another billion dollars, while the rest of us struggle to make rent. It's made my career miserable, for so many reasons. It's made my daily life miserable. I'm so sick and tired of it.
> This shit has infested entire operating systems now
Well, it's not the fault of a random person doing some project that may even be cool.
I'll certainly adjust my priors and start treating the person as probably an idiot. But given evidence they are not, I'm interested in what they are doing.
The thing that stops me being outwardly hostile is that there is a minority, and it is a minor, minor minority, of applications for AI that are actually pretty interesting and useful. It's just catastrophically oversaturated with samey garbage that does nothing.
I'm all for shaming people who just link to ChatGPT and call their whatever thing AI powered. If you're actually doing some work though and doing something interesting, I'll hear you out.
I wonder if I'm the guy in the bubble or if all these people are in the bubble. Everyone I know is really enjoying using these tools. I wrote a comment yesterday about how much my life has improved https://news.ycombinator.com/item?id=46131280
But also, it's not just my own. My wife's a graphic designer. She uses AI all the time.
Honestly, this has been revolutionary for me for getting things done.
Oh yeah, call out a tech city and all the butt-hurt-ness comes out. A perfect example of "rage bait".
People here aren't hurt because of AI; people here are hurt because they learned they were just line items in a budget.
When the interest rates went up in 2022/2023 and the cheap money went away, businesses had to pivot on their taxes while appeasing the shareholders.
Remember that time when Satya went to a company-sponsored rich-people thing with Aerosmith or whoever playing, while announcing thousands of FTEs being laid off? Yeah, that...
If your job can be done by a very small shell script, why wasn't it done before?
> Engineers don't try because they think they can't.
This article assumes that AI is the centre of the universe, failing to understand that that assumption is exactly what's causing the attitude they're pointing to.
There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money). This isn't a strict dichotomy; often companies with real products will mix in tidbits of hype, such as Microsoft's "pivot to AI" which is discussed in the article. But moving toward one pole moves you away from the other.
I think many engineers want to stay as far from hype-driven tech as they can. LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated. I'd rather spend my time delivering value to customers than performing "big potential" to investors.
So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out. Some models are better at some tasks, some tools are better at finding the right text and connecting it to the right actions, some tools provide a better wrapper around the text-generation process. Certain jobs are very easy for AI to do, others are impossible (but the AI lies about them).
A lot of us tried it and just said, "huh, that's interesting" and then went back to work. We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.
There's just not much value in chasing the endless AI news cycle, constantly believing that I'll fall behind if I don't read the latest details of Gemini 3.1 and ChatGPT 6.Y (Game Of The Year Edition). The engineers I know who use AI don't seem to have any particular insights about it aside from an encyclopedic knowledge of product details, all of which are changing on a monthly basis anyway.
New products that use gen AI are — by default — uninteresting to me because I know that under the hood, they're just sending text and getting text back, and the thing they're sending to is the same thing that everyone is sending to. Sure, the wrapper is nice, but I'm not paying an overhead fee for that.
> Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out.
"Engineers don't try" doesn’t refer to trying out AI in the article. It refers to trying to do something constructive and useful outside the usual corporate churn, but having given up on that because management is single-mindedly focused on AI.
One way to summarize the article is: The AI engineers are doing hype-driven AI stuff, and the other engineers have lost all ambition for anything else, because AI is the only thing that gets attention and helps the career; and they hate it.
> the other engineers have lost all ambition for anything else
Worse, they've lost all funding for anything else.
“Lost all ambition for anything else” is a funny way for the article to frame “hate being forced to run on the hamster wheel of AI, because an exec with the power to fire everyone is foaming at the mouth about AI and seemingly needs everyone to use it”.
> We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.
There's a lot of disconnected-from-reality hustling (a.k.a. lying) going on. For instance, that's practically Elon Musk's entire job, when he's actually doing it. A lot of people see those examples, think that's normal, and emulate it. There are a lot of unearned superlatives getting thrown around automatically to describe tech.
Maybe not what the OP or the article is talking about, but it's super frustrating recently dealing with non- or less-technical managers, PMs, etc. who now think they have this Uno card to bypass technical discussion just because they vibe-coded some UI demo. Like, no shit, that wasn't the hard part. But since they don't see the real, less visible parts like data/auth/security, etc., they act like engineers "aren't trying", are less innovative, are anti-AI, or whatever, when you bring up objections to the "whole app" they made with their AI Snoopy Sno-Cone Machine.
My experience too. They are so convinced that AI is magical that pushing back makes you look bad.
Then things don't turn out as they expected and you have to deal with a dude thinking his engineers are messing with him.
It's just boring.
I've been an engineer for 20 years: for myself, for small companies, and in big tech, and I'm now running my own SaaS company.
There are many valid critiques of AI, but “there’s not much there” isn’t one of them.
To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems. Maybe AI isn’t the right tool for the job, but that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction. Either way, the market is going to punish them accordingly.
> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.
I would argue that the "actual job" is simply to solve problems. The client / customer ultimately do not care what technology you use. Hell, they don't really care if there's technology at all.
And a lot of software engineers have found that using an LLM doesn't actually help solve problems, or the problems it does solve are offset by the new problems it creates.
Again, AI isn’t the right tool for every job, but that’s not the same thing as a shallow dismissal.
> There are many valid critiques of AI, but “there’s not much there” isn’t one of them.
"There's not much there" is a totally valid critique of a lot of the current AI ecosystem. How many startups are simple prompt wrappers on top of ChatGPT? How many AI features in products are just "click here to ask Rovo/Dingo/Kingo/CutesyAnthropomorphizedNameOfAI" text boxes that end up spitting out wrong information?
There's certainly potential but a lot of the market is hot air right now.
> Either way, the market is going to punish them accordingly.
I doubt this, simply because the market has never really punished people for being less efficient at their jobs, especially in software development. If it did, people proficient in vim would have been getting paid more than anyone else for the past 40 years.
> simply because the market has never really punished people for being less efficient at their jobs
In fact, it tends to be the opposite. You being more efficient just means you get "rewarded" with more work, typically without an appropriate increase in pay to match the additional work either.
Especially true in large, non-tech companies/bureaucratic enterprises where you are much better off not making waves, and being deliberately mediocre (assuming you're not a ladder climber and aren't trying to get promoted out of an IC role).
In a big team/org, your personal efficiency is irrelevant. The work can only move as fast as the slowest part of the system.
Or maybe it indicates that the person looking at the LLM and deciding there’s not much there knows more than you do about what they are and how they work, and you’re the one who’s wrong about their utility.
I use AI all the time, but the only gain it has is better spelling and grammar than mine; spelling and grammar have long been my weak point. I can write the same code it writes just as fast without it, since typing has never been the bottleneck in writing code. The bottleneck is thinking, and I still need to understand the code AI writes since it is incorrect rather often, so it isn't saving any effort, other than the time to look up the middle word of some long variable name.
> that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction
Or, and stay with me on this, it’s a reaction to the actual experience they had.
I’ve experimented with AI a bunch. When I’m doing something utterly formulaic it delivers (straightforward CRUD type stuff, or making a web page to display some data). But when I try to use it with the core parts of my job that actually require my specialist knowledge they fall apart. I spend more time correcting them than if I just write it myself.
Maybe you haven’t had that experience with work you do. But I have, and others have. So please don’t dismiss our reaction as “fear based” or whatever.
> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job,
I don't understand why people seem so impatient about AI adoption.
AI is the future, but many AI products aren't fully mature yet. That lack of maturity is probably what is dampening the adoption curve. To unseat incumbent tools and practices you either need to do so seamlessly OR be 5-10x better, and the latter is only true for a subset of tasks. In areas where either of these cases applies, you'll see some really impressive AI adoption. In areas where AI's value requires more effort, you'll see far less adoption. This seems perfectly natural to me and isn't some conspiracy: AI needs to be a better product, and good products take time.
I mean, this is the other extreme to the post being replied to (either you think it's useless and walk away, or you're failing at your job for not using it)
I personally use it, I find it helpful at times, but I also find that it gets in my way, so much so it can be a hindrance (think losing a day or so because it's taken a wrong turn and you have to undo everything)
FTR, the market is currently punishing people who DO use it (CVs are routinely being dumped at the merest hint of AI being used in their construction/presentation, interviewers are dumping anyone they think is using AI for "help", and code reviewers are dumping any take-home assignments that have even had COMMENTS massaged by AI).
Well said!
This isn’t “unfair”, but you are intentionally underselling it.
If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.
I’m not making any case for anything, but it’s just not that hard to get excited for something that sure does seem like magic sometimes.
Edit: lol this forum :)
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right
I AM very impressed, and I DO use it and enjoy the results.
The problem is the inconsistency. When it works it works great, but it is very noticeable that it is just a machine from how it behaves.
Again, I am VERY impressed by what was achieved. I even enjoy Google AI summaries to some of the questions I now enter instead of search terms. This is definitely a huge step up in tier compared to pre-AI.
But I'm already done getting used to what is possible now. Changes since then have been incremental, nice-to-haves, and I take them. I have found a place for the tool, but to match the hype, another equally large step in actual intelligence would be necessary, for the tool to truly be able to replace humans.
So, I think the reason you don't see more glowing reviews and praise is that the technical people have found out what it can do and can't, and are already using it where appropriate. It's just a tool though. One that has to be watched over when you use it, requiring attention. And it does not learn - I can teach a newbie and they will learn and improve, I can only tweak the AI with prompts, with varying success.
I think that by now I have developed a pretty good feel for what is possible. Changing my entire workflow to using it is simply not useful.
I am actually one of those not enjoying coding as such, but wanting "solutions", probably also because I now work for an IT-using normal company, not for one making an IT product, and my focus most days is on actually accomplishing business tasks.
I do enjoy being able to do some higher level descriptions and getting code for stuff without having to take care of all the gritty details. But this functionality is rudimentary. It IS a huge step, but still not nearly good enough to really be able to reliably delegate to the AI to the degree I want.
The big problem is AI is amazing at doing the rote boilerplate stuff that generally wasn't a problem to begin with, but if you were to point a codebot at your trouble ticket system and tell it to go fix the issues it will be hopeless. Once your system gets complex enough the AI effectiveness drops off rapidly and you as the engineer have to spend more and more time babysitting every step to make sure it doesn't go off the rails.
In the end you can save like 90% of the development effort on a small one-off project, and like 5% of the development effort on a large complex one.
I think too many managers have been absolutely blown away by canned AI demos and toy projects and have not been properly disappointed when attempting to use the tools on something that is not trivial.
I think the 90/90 rule comes into play. We all know the Tom Cargill quote (even if we've never seen it attributed):
> The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.
It feels like a gigantic win when it carves through that first 90%… like, “wow, I’m almost done and I just started!”. And it is a genuine win! But for me it’s dramatically less useful after that. The things that trip up experienced developers really trip up LLMs and sometimes trying to break the task down into teeny weeny pieces and cajole it into doing the thing is worse than not having it.
So great with the backhoe tasks but mediocre-to-counterproductive with the shovel tasks. I have a feeling a lot of the impressiveness depends on which kind of tasks take up most of your dev time.
The more I use AI for coding, the more I realize that it's a toy for vibe coding and fun projects. It's not for serious work.
When you work with a large codebase that has a very high complexity level, the bugs AI puts in there are not worth the easily added features.
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.
Or your job isn't what AI is good at?
AI seems really good at greenfield projects in well known languages or adding features.
It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.
> It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.
This is precisely my experience.
Having the AI work on a large mono repo with a front-end that uses a fairly obscure templating system? Not great.
Spinning up a greenfield React/Vite/ShadCN proof-of-concept for a sales demo? Magic.
I have an anecdote.
So I have a kanji-learning app (an app for learning to read and write Japanese characters) written in Vue. It is completely AI-free, using open dictionary data instead of AI to actually teach users the characters. I don't use AI to code either, just good old Emacs and the flashy new LSP-IDE setup that comes with it now.
A couple of months ago I was talking about the app on a language-learning Discord server, and a learner of Mandarin Chinese saw it and wondered if it could be forked to teach Hanzi characters instead of Kanji. While I can't do that myself because I don't know anything about Chinese, somebody should. This language learner tried: she forked my app and vibe-coded the changes.
The results are here: https://pianothshaveck.github.io/shodoku-hanzi/kanji/%E5%A3%...
Compare this to the original here: https://shodoku.app/kanji/%E5%A3%B0
Now, I must say, I am impressed. And I think this learner spent like two days doing this, so, good on her. But I can definitely see the limitations of what an LLM can do. The fork is full of weird bugs, the code base is changed seemingly arbitrarily (AI hallucinations, I guess), and the app only barely works.
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.
Much of this boils down to people simply not understanding what’s really happening. Most people, including most software developers, don’t have the ability to understand these tools, their implications, or how they relate to their own intelligence.
> Edit: lol this forum :)
Indeed.
post portfolio I wanna see your bags
> If you haven’t had a mind blown moment with AI yet...
Results are stochastic. Some users will, the first time they use it, get the best possible results by chance, and they will attribute their good outcome to their skill in using the thing. Others will try it, get the worst possible response, and attribute their outcome to the machine being terrible. Either way, whether it's amazing or terrible is kind of an illusion. It's both.
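To make "stochastic" concrete: an LLM samples each next token from a probability distribution, and a temperature setting controls how spread out that distribution is. Here's a toy sketch in plain Python; the tokens and scores are invented for illustration and have nothing to do with any real model:

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        """Sample one token index from a softmax over logits.

        temperature < 1 sharpens the distribution (more deterministic);
        temperature > 1 flattens it (more varied output).
        """
        scaled = [l / temperature for l in logits]
        peak = max(scaled)  # subtract the max for numeric stability
        exps = [math.exp(s - peak) for s in scaled]
        total = sum(exps)
        r = random.random()
        acc = 0.0
        for i, e in enumerate(exps):
            acc += e / total
            if r <= acc:
                return i
        return len(exps) - 1

    tokens = ["vim", "emacs", "vscode", "ed"]   # hypothetical vocabulary
    logits = [2.0, 1.8, 1.7, 0.2]               # hypothetical model scores
    for _ in range(3):
        print(tokens[sample_next_token(logits)])
    # Three runs of the same "prompt" can print three different answers,
    # which is part of why first impressions of these tools vary so much.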
I wonder if this issue isn't caused by people who aren't programmers, who can now churn out AI-generated stuff they couldn't before. So to them, this is a magical new ability, whereas people who are already adept at their craft just see the slop. Same thing in other areas. In the before-times, you had to painstakingly handcraft your cat memes. Now a bot comes along and lets someone make cat memes they wouldn't have bothered with before. But the real artisan cat memeists just roll their eyes.
This shit right here is why people hate AI hype proponents. It's like it never crosses their mind that someone who disagrees with them might just be an intelligent person who tried it and found it was lacking. No, it's always "you're either doing it wrong or weren't really trying". Do you not see how condescending and annoying that is to people?
> But moving toward one pole moves you away from the other.
My assumption detector twigged at that line. I think this is just replacing the dichotomy with a continuum between two states. But the hype proponents always hope (and in some cases they are right) that those two poles overlap. People make and lose fortunes placing those bets, and you don't necessarily have to be right or wrong in an absolute sense, just right for long enough that someone else will take over your position, hopefully at a higher valuation.
Engineers are not usually the ones placing the bets, which is why they try to stay away from hype-driven tech (to them the upside is neutral, but in case of a failure they lose their job, so it is simply safer to work on things that are not hyped). But as soon as engineers are placing bets, they are just as irrational as every other class of investor.
In European consulting agencies the trend now is to make AI part of each RFP reply: you won't get past the sales team if AI isn't crammed in as part of the solution being delivered, and we get evaluated on it.
This takes all the joy away; even traditional maintenance projects at big corps seem attractive nowadays.
I remember when everything had to have the word 'digital' in it. And I'm old enough to remember when 'multimedia' was a buzzword that was crammed in anywhere it would fit.
You know what, this clarifies something for me.
PC, Web and Smartphone hype was based on "we can now do [thing] never done before".
This time out it feels more like "we can do existing [thing], but reduce the cost of doing it by not employing people"
It all feels much more like a wealth grab for the corporations than a promise of improving a standard of living for end customers. Much closer to a Cloud or Server (replacing Mainframes) cycle.
Yes and no. PC, Web, etc. advancements were also about lowering cost. It's not that no one could do a given thing, it's that it was too expensive for most people, e.g. having a mobile phone in the '80s.
Or hiring a mathematician to calculate what is now done in a spreadsheet.
100%.
"You should be using AI in your day to day job or you won't get promoted" is the 2025 equivalent of being forced to train the team that your job is being outsourced to.
Same, doesn't make this hype phase more bearable though.
or 'interactive' or 'cloud' (early 2010s).
> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype product
I think there is a broader dichotomy between the people-persuasion plane and the real-world-facts plane. In the people-persuasion plane, it is all about convincing someone of something; hype plays here, and marketing, religion, and political persuasion too. In the real-world plane, it is all about tangible outcomes; working code and results play here, and gravity and electromagnetism too. Sometimes there is a feedback loop between the two. I chose the engineering career because what I produce is tangible, but I realize that a lot of my work is in the people plane.
> a broader dichotomy between the people-persuasion plane and the real-world-facts plane
This right here is the real thing which AI is deployed to upset.
The Enlightenment values which brought us the Industrial Revolution imply that the disparity between the people-persuasion-plane and the real-world-facts-plane should naturally decrease.
The implicit expectation here is that as civilization as a whole learns more about how the universe works, people would naturally become more rational, and thus more persuadable by reality-compliant arguments and less persuadable by reality-denying ones.
That's... not really what I've been seeing. That's not really what most of us have been seeing. Like, devastatingly not so.
My guess is that something became... saturated? I'd place it sometime around the 1970s, around the same time Bretton Woods ended and the productivity/wages gap began to grow. Something pertaining to the shared-culture plane. Maybe there's only so much "informed" people can become before some sort of phase shift occurs and the driving force behind decisions becomes some vague, ethically unaccountable ingroup intuition ("vibes", yo), rather than the kind of explicit, systematic reasoning which actually is available to any human, except for the weird fact that nobody seems to trust it very much any more.
This sounds a lot like the Marxist concept of alienation: https://en.wikipedia.org/wiki/Marx%27s_theory_of_alienation
I wonder if objectively Seattle got hit harder than SF in the last bust cycle. I don’t have a frame of comparison. But if the generational trauma was bigger then so too would the backlash against new bubbles.
> So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
I understood “they think they can’t” to refer to the engineers thinking that management won’t allow them to, not to a lack of confidence in their own abilities.
I do assume that; I legitimately think it's the most important thing happening in tech in the next decade. There's going to be an incredible amount of traditional software written to make this possible (new databases, frameworks, etc.), and I think people should be able to see the opportunity, but the awful cultures in places like Microsoft are hindering this.
> often companies with real products will mix in tidbits of hype
The wealthiest person in the world relies entirely on his ability to convince people to accept hype that surpasses all reason.
>So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
Spot. Fucking. On.
Thank you.
The list of people who write code, use high-quality LLM agents (not chatbots) like Claude, and report not just having success with the tools but watching the tools change how they think about programming continues to grow. The sudden appearance of LLMs has had a really destabilizing effect on everything, and a vast portion of what LLMs can do and/or are being used for runs from intellectually stifling (using LLMs to write your term papers) to revolting (all kinds of non-artists/writers/musicians using LLMs to suddenly think they are "creators", displacing real artists, writers, and musicians) to utterly degenerate (political/sexual deepfakes of real people, generation of antivax propaganda, etc.). Put on top of that the way corporate America is doing the very familiar "blockchain" dance on this, insisting everyone has to do AI all the time everywhere, and you have a huge problem that hopefully will shake out some in the coming years.
But despite all that, for writing, refactoring, and debugging computer code, LLM agents are still completely game-changing. All of these things are true at the same time. There's no way someone who works with real code all day could spend an honest few weeks with a tool like Claude and come away calling it "hype". Someone might still not prefer it, or it might not be for them, but to claim it's "hype"? That's not possible.
The last few years have revealed the extent to which HN is packed with middle-aged, conservative engineers who are letting their fear and anxiety drive their engineering decisions. It's sad; I always thought of my fellow engineers as more open-minded.
> The last few years have revealed the extent to which HN is packed with middle-aged, conservative engineers who are letting their fear and anxiety drive their engineering decisions.
so, people with experience?
Obviously. Turns out experience can be self-limiting in the face of paradigm-shifting innovation.
In hindsight it makes sense, I’m sure every major shift has played out the same way.
I've been programming for more than 40 years.
> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money).
AI is not both of these things? There are no real AI products that have real customers and make money by giving people what they need?
> LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated.
What do you view as the potential that’s been stated?
A way to state this point that you may find less uncharitable is that a lot of current LLM applications are just very thin shells around ChatGPT and the like.
In those cases the actual "new" technology (i.e., not necessarily the underlying AI) is not as substantive and novel (to me at least) as a product whose internals are not just an existing LLM.
(And I do want to clarify that, to me personally, this tendency towards 'thin-shell' products is kind of an inherent flaw in the current state of AI. Having a very flexible LLM with broad applications means that you can just put ChatGPT in a lot of stuff and have it more or less work, with the caveat that what you get is rarely a better UX than what you'd get if you'd just prompted an LLM yourself.
When someone isn't using LLMs, in my experience you get more bespoke engineering. The results might not be better than an LLM's, but obviously that bespoke code is much more interesting to me as a fellow programmer.)
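To make the "thin shell" point concrete, here is roughly what many of these products reduce to: a prompt template wrapped around one API call. A minimal sketch using the OpenAI Python SDK; the product name, template wording, and model choice are all invented for illustration:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The entire "product": branding baked into a prompt template.
    TEMPLATE = (
        "You are TravelBuddyPro, an expert trip planner.\n"
        "Plan a {days}-day itinerary for {city}. Be concise."
    )

    def plan_trip(city: str, days: int) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user",
                       "content": TEMPLATE.format(city=city, days=days)}],
        )
        return response.choices[0].message.content

    print(plan_trip("Seattle", 3))

Everything user-visible (the name, the UI, the template wording) sits on top of that single call, which is why the underlying capability is identical across so many of these products.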
Shells around ChatGPT are fine if they provide value.
Way better than AI jammed into every crevice for no reason.
Yes ok then I definitely agree
Not OP but for starters LLMs != AI
LLMs are not an intelligence, and people who treat them as if they were infallible oracles of wisdom are responsible for a lot of this fatigue with AI.
>Not OP but for starters LLMs != AI
Please don't do this; don't just make up your own definitions.
Pretty much anything and everything that uses neural nets is AI. Just because you don't like how the definition has stood since the beginning doesn't mean you get to reframe it.
In addition, since humans are not infallible oracles of wisdom, they wouldn't count as an intelligence by your definition either.
Why, then, is there an AI-powered dishwasher, but no AI car?
https://www.tesla.com/fsd ?
I also don't understand the LLM ⊄ AI people. Nobody was whining about pathfinding in video games being called AI lol. And I have to say LLMs are a lot smarter than A*.
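For context on that comparison: the pathfinding "AI" in classic games is plain deterministic search. A minimal A* over a grid, sketched from the textbook algorithm, for contrast with the stochastic text generators that now share the label:

    import heapq

    def astar(grid, start, goal):
        """A* over a 2D grid of 0 (open) / 1 (wall); returns a path or None."""
        def h(a, b):  # Manhattan-distance heuristic
            return abs(a[0] - b[0]) + abs(a[1] - b[1])

        frontier = [(h(start, goal), 0, start, [start])]
        seen = set()
        while frontier:
            _, cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            r, c = node
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                    heapq.heappush(frontier, (cost + 1 + h((nr, nc), goal),
                                              cost + 1, (nr, nc), path + [(nr, nc)]))
        return None

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))  # goes right, down, and back around the wall

Same input, same output, every time; and nobody objected to calling that AI.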
Bitcoin is at 93k so I don’t think it’s entirely accurate to say blockchain is insubstantive or without value
This seems to be almost purely bandwagon value, like preferring Coca-Cola to some other drink. There are other blockchains that are better technically along a bunch of dimensions, but they don't have the mindshare.
Bitcoin is probably unkillable. Even if were to crash, it won't be hard to round up enough true believers to boost it up again. But it's technically stagnant.
True, but then so is a lot of "tech". There were certainly at least equivalent social applications before and all throughout Facebook's dominance, but as with Bitcoin, the network effect becomes primary after a minimum feature set.
For Bitcoin, it doesn't exactly seem to be a network effect? It's not like choosing a chat app because that's what your friends use.
Many other cryptocurrencies are popular enough to be easily tradable and have features to make them work better for trade. Also, you can speculate on different cryptocurrencies than your friends do.
Beanie Babies were trading pretty well, too, although it wasn't quite "solving sudokus for drugs", so I guess that's why they didn't have as much staying power.
There can be a bunch of crazy people trading each other various lumps of dog feces for increasing sums of cash, that doesn't mean dogshit is particularly valuable or substantive either.
very little of the trading actually happens on the blockchain, it's only used to move assets between trading venues.
The values of bitcoin are:
- easy access to trading for everyone, without institutional or national barriers
- high leverage to effectively easily borrow a lot of money to trade with
- new derivative products that streamline the process and make speculation easier than ever
The blockchain plays very little part in this. If anything it makes borrowing harder.
I agree with "easy access to trading for everyone, without institutional or national barriers"
how on earth does bitcoin have anything to do with borrowing or derivatives?
in a way that wouldn't also work for beanie babies
If you can't point to real use cases at scale, it's hard to argue it has intrinsic value, even though it may have speculative value.
With almost zero fundamentals. That’s the part you are glossing over.
Uh… So the argument here is that anticipated future value == meaningful value today?
The whole cryptocurrency world requires evangelical buy-in. But there is no directly created functional value other than a historic record of transactions and hypothetical decentralization. It doesn’t directly create value. It’s a store of it - again, assuming enough people continue to buy into the narrative so that it doesn’t dramatically deflate when you need to recover your assets. States and other investors are helping make stability happen to maintain it as a value store, but you require the story propagating to achieve those ends.
You’re wasting your breath. Bitcoin will be at a million in 2050 and you’ll still get downvoted here for suggesting it’s anything other than a stupid bubble that’s about to burst any day now.
Ex-Google here; there are many people both current and past-Google that feel the same way as the composite coworker in the linked post.
I haven't escaped this mindset myself. I'm convinced there are a small number of places where LLMs make truly effective tools (see: generation of "must be plausible, need not be accurate" data, e.g. concept art or crowd animations in movies), a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, anything where correctness is critical), and a large number of places where LLMs are simply ineffective from the get-go but will increasingly be rammed down consumers' throats.
Accordingly I tend to be overly skeptical of AI proponents and anything touching AI. It would be nice if I was more rational, but I'm not; I want everyone working on AI and making money from AI to crash and burn hard. (See also: cryptocurrency)
My friends at Google are some of the most negative about the potential of AI to improve software development. I was always surprised by this, since I assumed that Google internally would be one of the first places to adopt these tools.
Google has good engineers. Generally I've noticed that the better someone is at coding, the more critical they are of AI-generated code. Which makes sense, honestly. The more expert you are, the easier it is to spot flaws. This doesn't mean they don't use AI-generated code, just that they are more careful about when and where.
Yes, because they're more likely to understand that the computer isn't this magical black box, and that just because we've made ELIZA marginally better, doesn't mean it's actually good. Anecdata, but the people I've seen be dazzled by AI the most are people with little to no programming experience. They're also the ones most likely to look on computer experts with disdain.
I also think AI is a product of all the source code it's seen. If you're inexperienced at something, AI will give you a good-enough result that is better than you can do. If you're an expert at something, AI might do it quicker but never as well as doing it yourself.
Engineers at Google are much less likely to be doing green-field generation of large amounts of code. It's much more incremental, carefully measured changes to mature, complex software stacks, and done within the Google ecosystem, which is heavily divergent from the OSS-focused world of startups, where most training data comes from.
That is the problem.
AI is optimized to solve a problem no matter what it takes. It will try to solve one problem by creating 10 more.
I think long-term agentic AI is just snake oil at this point. AI works best if you can segment your task into 5-10 minute chunks, including the AI's generating time, correcting time, and engineer review time. To put it another way, a 10-minute sync with a human is necessary, otherwise it will go astray.
But then it just turns software engineering into a tiresome supervision job. Yes, I typed less, but I didn't feel any thrill in doing so.
So I would love to be a fly in their office and hear all their convos.
It's the latest tech holy war. Tabs vs Spaces but more existential. I'm usually anti hype and I've been convinced of AI's use over and over when it comes to coding. And whenever I talk about it, I see that I come across as an evangelist. Some people appreciate that, online I get a lot of push back despite having tangible examples of how it has been useful.
I don't see it that way. Tabs, spaces, curly brace placement, Vim, Emacs, VSCode, etc are largely aesthetic choices with some marginal unproven cognitive implications.
I find people mostly prefer what they are used to, and if your preference was so superior then how could so many people build fantastic software using the method you don't like?
AI isn't like that. AI is a bunch of people telling me this product can do wonderful things that will change society and replace workers, yet almost every time I use it, it falls far short of that promise. AI is certainly not reliable enough for me to jeopardize the quality of my work by using it heavily.
I would say it is like that. No one HAS to use AI. But the shared goal is to get a change to the codebase to achieve a desired outcome. Some will outsource a significant part of that to AI, some won't.
And it's tricky, because I'm trying not to appeal to emotion despite being fascinated with how this tool has enabled me to do things in a short amount of time that would have taken me weeks of grinding to get to, and how it improves my communication with stakeholders. That feels world-changing. Specifically, my world, and the day-to-day role I play when it comes to getting things done.
I think it is fine that it fell short of your expectations. It often does for me as well, but when it gets me 80% of the way there in less than a day's work, my mind is blown. It's an imperfect tool, and, sorry for saying this, but so are we. Treat its imperfections the same way you would a junior developer's: feedback, reframing, restrictions, and iteration.
> No one HAS to use AI.
Well… That's no longer true, is it?
My partner (IT analyst) works for a company owned by a multinational big corporation, and she got told during a meeting with her manager that use of AI is going to become mandatory next year. That's going to be a thing across the board.
And have you called a large company for any reason lately? Could be your telco provider, your bank, the public transport company, whatever. You call them because online contact means haggling with an AI chatbot until you finally give up and it shunts you over to an actual person who can help, and contact forms and e-mail have been killed off. Calling is not quite as bad, but step one nowadays is 'please describe what you're calling about', where some LLM will try to parse that, fail miserably, and then shunt you to an actual person.
AI is already unavoidable.
> My partner (IT analyst) works for a company owned by a multinational big corporation, and she got told during a meeting with her manager that use of AI is going to become mandatory next year. That's going to be a thing across the board.
My multinational big corporation employer has reporting about how much each employee uses AI, with a naughty list of employees who aren't meeting their quota of AI usage.
Nothing says "this product is useful" quite like forcing people to use it and punishing people who don't. If it was that good, there'd be organic demand to use it. People would be begging to use it, going around their boss's back to use it.
The fact that companies have to force you to use it with quotas and threats is damning.
> where some LLM will try to parse that, fail miserably, and then shunt you to an actual person.
If you're lucky. I've had LLMs that just repeatedly hang up on me when they obviously hit a dead end.
It isn't a universal thing. I have no doubt there is a job out there where that isn't a requirement. I think the issue is that C-level folks are seeing how much more productive someone might be and making it a demand. That, to me, is the wrong approach. If you demonstrate and build interest, the adoption will happen.
> But the shared goal is to get a change to the codebase to achieve a desired outcome.
I'd argue that's not true. It's more of a stated goal. The actual goal is to achieve the desired outcome in a way that has manageable, understood side effects, and that can be maintained and built upon over time by all capable team members.
The difference between what business folks see as the "output" of software developers (code) and what (good) software developers actually deliver over time is significant. AI can definitely do the former. The latter is less clear. This is one of the fundamental disconnects in discussions about AI in software development.
In my personal use case, I work at a company that has SO MUCH process and documentation for coding standards. I made an AI agent that knows all that and used it to update legacy code to the new standard in a day. Something that would have taken weeks if not more. If your desire is manageable code, make that a requirement.
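(For the curious, a minimal sketch of the shape of that kind of standards-update loop, using the OpenAI Python client. The file names, model, and prompt here are my own illustrative guesses, not the actual agent, and any real version of this lives or dies by the human review that follows.)

    # Hypothetical sketch, not the commenter's actual agent: feed a
    # coding-standards doc plus each legacy file to an LLM and write the
    # rewrite to a parallel tree for human diff and review. Assumes the
    # OpenAI Python client with OPENAI_API_KEY in the environment;
    # STANDARDS.md and legacy/ are made-up paths.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()
    standards = Path("STANDARDS.md").read_text()

    for src in Path("legacy").rglob("*.py"):
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "Rewrite the user's code to conform to these "
                            "standards. Preserve behavior exactly.\n\n" + standards},
                {"role": "user", "content": src.read_text()},
            ],
        )
        # Write to a parallel tree so humans can diff before merging.
        out = Path("updated") / src.relative_to("legacy")
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_text(resp.choices[0].message.content)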
I'm going to say this next thing as someone with a lot of negative bias about corporations: I was laid off from Twitter when Elon bought the company, and again from a second company that was hemorrhaging users.
Our job isn't to write code, it's to make the machine do the thing. All the effort spent on clean, manageable code, etc. is purely in the interest of the programmer; at the end of the day, launching the feature that pulls in money is the point.
How did you verify that your AI agent performed the update correctly? I've experienced a number of cases where an AI agent made a change that seemed right at first glance, maybe even passed code review, but fell apart completely when it came time to build on top of it.
Unit tests, manual testing the final product, PR with two approvals needed (and one was from the most anal retentive reviewer at the company who is heavily invested in the changes I made), and QA.
You can vibe-code a throwaway UI for investigating some complex data in less than 30 minutes. The code quality doesn't matter, and it will make your life much easier.
Rinse and repeat for many "one-off" tasks.
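(To make "throwaway" concrete, here's roughly all the code such a tool needs; a hypothetical Streamlit sketch, since any quick UI kit will do. Run it with "streamlit run explore.py".)

    # explore.py - disposable data viewer; code quality intentionally irrelevant.
    # Assumes streamlit and pandas are installed.
    import pandas as pd
    import streamlit as st

    st.title("Quick look at whatever this CSV is")
    uploaded = st.file_uploader("Drop a CSV here", type="csv")
    if uploaded is not None:
        df = pd.read_csv(uploaded)
        st.write(f"{len(df)} rows, {len(df.columns)} columns")
        col = st.selectbox("Column to inspect", df.columns)
        st.bar_chart(df[col].value_counts().head(25))  # top values at a glance
        st.dataframe(df)  # sortable, scrollable raw view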
It's not going away, you need to learn how to use it. shrugs shoulders
It’s kind of fun watching this comment go up and down :)
There’s so much evidence out there of people getting real value from the tools.
Some questions you can ask yourself are “why doesn’t it work for me?” and “what can I do differently?”.
Be curious, not dogmatic. Ignore the hype, find people doing real work.
They're good questions! The problem is that I've tried to talk to the people who are getting real value from it, and often the answer ends up being that the value is not as real as they think. One guy gave an excited presentation about how AI let him write 7k LOC per day, expounded for an entire session about how the rest of us should follow in his shoes, and then clarified only in Q&A that reviewers couldn't keep up so he exempted himself from code review.
Most people don't have a problem with using genai for stuff like throwaway UIs. That's not even remotely relevant to the criticisms. People reject having it forced down their throats by companies who are desperate to make us totally reliant on it to justify their insane investments. And people reject the evangelicals who claim that it's going to replace developers because it can spit out mostly working boilerplate.
>AI is certainly not reliable enough for me to jeopardize the quality of my work by using it heavily.
I mean this in sincerity, and not at all snarky, but - have you considered that you haven't used the tools correctly or effectively? I find that I can get what I need from chatbots (and refuse to call them AI until we have general AI just to be contrary) if I spend a couple of minutes considering constraints and being careful with my prompt language.
When I've come across people in my real life who say they get no value from chatbots, it's because they're asking poorly formed questions, or haven't thought through the problem entirely. Working with chatbots is like working with a very bright lab puppy. They're willing to do whatever you want, but they'll definitely piss on the floor unless you tell them not to.
Or am I entirely off base with your experience?
It would be helpful if you would relate your own bad experiences and how you overcame them. Leading off with "do it better" isn't very instructive. Unfortunately there's no useful training for much of anything in our industry, much less AI.
I prefer to use an LLM as a sock puppet to filter out implausible options in my problem space and to help me recall how to do boilerplate things. Like you, I think, I also tend to write multi-paragraph prompts, repeating myself and calling back to every aspect to continuously hone in on the true subject I am interested in.
I don't trust LLMs enough to operate on my behalf agentically yet. And LLMs are uncreative and hallucinatory as heck whenever they stray into novel territory, which makes them a dangerous tool.
> have you considered that you haven't used the tools correctly or effectively?
The problem is that this comes off just as tone-deaf as "you're holding it wrong." In my experience, when people promote AI, it's sold as just having a regular conversation, and then the AI does the thing. And when that doesn't work, the promoter goes into system prompts, MCP, agent files, etc., and the entire workflows that are required to get it to do the correct thing. It ends up feeling like you're being lied to, even if there's some benefit out there.
There's also the fact that all programming workflows are not the same. I've found some areas where AI works well, but for a lot of my work it does not. It's usually pretty spotty on anything that wouldn't have shown up in a simple Google search back before Google was enshittified.
I'm probably one of the people that would say AI (at least LLMs) isn't all it's cracked up to be, and even I have examples where it has been useful to me.
I think the feeling stems from the exaggeration of the value it provides, combined with a large number of internal corporate LLMs being absolute trash.
The overvaluation's effects are visible everywhere: the stock market, the price of RAM, the cost of energy, the IP theft issues, etc. AI has taken over and yet it still feels like just a really good fuzzy search. Like, yeah, I can search something 10x faster than before but might get a bad answer every now and then.
Yeah its been useful (so have many other things). No it's not worth building trillion dollar data centers for. I would be happier if the spend went towards manufacturing or semiconductor fabs.
Lol, you made me realize my power bill has gone up, but I didn't get a pay rise for my increased productivity.
Most of the people against “AI” are not against it because they think it doesn’t work.
It’s because they know it works better every day and the people controlling it are gleefully fucking over the rest of the world because they can.
The plainly stated goal is TO ELIMINATE ALL HUMAN EMPLOYEES, with no plan for how those people will feed, clothe, or house themselves.
The reactions the author was getting were the reactions of a horse talking to someone happily working for the glue factory.
I don't think you're qualified to speak for most of the people against AI.
Right, this is what I can’t quite understand. A lot of HN folks appear to have been burned by, e.g., horrible corporate or business ideas from non-technical people who don’t understand AI; that is completely understandable. What I never understand is the population of coders that don't see any value in coding agents or are aggressively against them, or people that deride LLMs as failing to be able to do X (or hallucinating, etc.), therefore useless, and everything AI Slop, without recognizing that what we can do today is almost unrecognizable from the world of 3 years ago. The progress has moved astoundingly fast, and the sheer amount of capital and competition and pressure means the train is not slowing down. Predictions of “2025 is the year of coding agents” from a chorus of otherwise unpalatable CEOs were in fact absolutely true…
> Predictions of “2025 is the year of coding agents” from a chorus of otherwise unpalatable CEOs were in fact absolutely true…
... but maybe not in the way that these CEOs had hoped.[0]
Part of the AI fatigue is that busy, competent devs are getting swarmed with massive amounts of slop from not-very-good developers. Or product managers getting 5 paragraphs of GenAI bug reports instead of a clear and concise explanation.
I have high hopes for AI and think generative tooling is extremely useful in the right hands. But it is extremely concerning that AI is allowing some of the worst, least competent people to generate an order of magnitude more "content" with little awareness of how bad it is.
[0] https://github.com/ocaml/ocaml/pull/14369
AI is in a hype bubble that will crash just like every other bubble. The underlying uses are there but just like Dot Com, Tulips, subprime mortgages, and even Sir Isaac Newton's failings with the South Sea Company the financial side will fall.
This will cause bankruptcies and huge job losses. The argument for and against AI doesn't really matter in the end, because the finances don't make a lick of sense.
Ok, sure, the bubble/non-bubble stuff, fine. But in terms of “things I’d like to be a part of”, it’s hard to imagine a more transformative technology (not to again turn off the anti-hype crowd). Say it’s 1997 and you don’t like the valuations you see. But as a tech person, you’re not excited by browsers, the internet, the possibilities? You don’t want to be a part of that even if it means a bubble pops? I also hear a lot of people argue that “the finances don’t make a lick of sense”, but I don’t think things are that cut and dried, and I don’t see this as obvious. I don’t think many people really know how things will evolve or what size a market correction or bubble would have.
What precisely about AI is transformative, compared to the internet? E-mail replaced so much of faxing, phoning and physical mail. Online shopping replaced going to stores and hoping they have what you want, and hoping it is in stock, and hoping it is a good price. It replaced travel agents to a significant degree and reoriented many industries. It was the vehicle that killed CDs and physical media in general.
With AI I can... generate slop. Sometimes that is helpful, but it isn't yet at the point where it's replacing anything for me aside from making google searches take a bit less time on things that I don't need a definitive answer for.
It's popping up in my music streams now and then, and I generally hate it. Mushy-mouthed fake vocals over fake instruments. It pops up online and aside from the occasional meme I hate it there too. It pops up all over blogs and emails and I profoundly hate it there, given that it encourages the actual author to silence themselves and replaces their thoughts with bland drivel.
Every single software product I use begs me to use their AI integration, and instead of "no" I'm given the option of "not now", despite me not needing it, and so I'm constantly being pestered about it by something.
It has, thus far, made nearly everything worse.
There is zero guarantee that these tools will continue to be there. Those of us who are skeptical of their value may find them somewhat useful, but we are quite wary of ripping up the workflows we've built for ourselves over a decade or more in favor of something that might be 10-20% more useful but could be taken away, have its fees raised, or literally collapse in functionality at any moment, leaving us suddenly crippled. I'll keep the thing I know works and know will always be there (because it's open source, etc.), even if it means I'm slightly less productive over the next X amount of time.
What plausible scenario do you imagine where your tools would be taken away or “collapse in functionality”? I would say Claude right now has probably produced worse code and wasted more time than if I had coded things myself, but that’s because this is like the first few hundred days of this. Open-weight models are also worse, but they will never go away and are improving steadily as well. I am all for people doing whatever works for them; I just don’t get the negativity or the skepticism when you look at the progress over what has been almost zero time. It’s crappy now in many respects, but it’s like saying “my car is slow” one millisecond after I floor the gas pedal.
My understanding is that all the big AI companies are currently offering services at a loss, doing the classic Silicon Valley playbook of burning investor cash to get big and hoping to make a profit later. So any service you depend on could crash out of the race, and if one emerges as a victorious monopoly and you rely on it, it can charge you almost whatever it likes.
To my mind, the 'we've only just started' argument is wearing off. It's software, it moves fast anyway, and all the giants of the tech world have been feverishly throwing money at AI for the last couple of years. I don't buy that we're still just at the beginning of some huge exponential improvement.
My understanding is that they make a loss overall due to the spending on training new models, and that the API business is profit-making if considered in isolation. That said, this is based on guesstimates from the hosting costs of open-weight models, owing to a lack of financial transparency everywhere for the secret-weights models.
> that the API costs are profit making if considered in isolation.
no, they are currently losing money on inference too
> What plausible scenario do you imagine where your tools would be taken away or “collapse in functionality”?
Simple. The company providing the tool suddenly needs actual earnings. Therefore, they raise the prices. They also need users to spend more tokens, so they make the tool respond in a way that requires more refinement. After all, the latter is exactly what happened with Google search.
At this point, that is a pretty normal software cycle: attract a crowd by being free or cheap, then lock features behind a paywall, then simultaneously raise prices more and more while making the product worse.
This literally NEEDS to happen, because these companies do not have any other path to profitability. So, it will happen at some point.
Maybe those people do different work than you do? Coding agents don’t work well in every scenario.
> What I never understand is the population of coders that don't see any value in coding agents or are aggressively against them, or people that deride LLMs as failing to be able to do X (or hallucinating, etc.), therefore useless, and everything AI Slop, without recognizing that what we can do today is almost unrecognizable from the world of 3 years ago.
I don't recognize that because it isn't true. I try the LLMs every now and then, and they still make the same stupid hallucinations that ChatGPT did on day 1. AI hype proponents love to make claims that the tech has improved a ton, but based on my experience trying to use it those claims are completely baseless.
> I try the LLMs every now and then, and they still make the same stupid hallucinations that ChatGPT did on day 1.
One of the tests I sometimes do of LLMs is a geometry puzzle.
They all used to get this wrong all the time. Now the best ones sometimes don't. (That said, the only one to succeed just as I write this comment was DeepSeek; the first I saw succeed was one of ChatGPT's models, but that's now back to the usual error they all used to make.) Anecdotes are of course a bad way to study this kind of thing.
Unfortunately, so are the benchmarks, because the models have quickly saturated most of them, including traditional IQ tests (on the plus side, this has demonstrated that IQ tests are definitely a learnable skill, as LLMs lose 40-50 IQ points when going from public to private IQ tests) and stuff like the maths olympiad.
Right now, AFAICT the only open benchmarks are the METR time horizon metric, the ARC-AGI family of tests, and the "make me an SVG of ${…}" stuff inspired by Simon Willison's pelican on a bike.
What have you tried? How much time have you spent? Using AI is its own skill set, separate from programming.
This fascinates me. Just observing, but because it hasn't worked for you, everyone else must be lying? (I'm assuming that's what you mean by baseless.)
How does that bridge get built? I can provide tangible real life examples but I've found push back from that in other online conversations.
I also would like to see AI end up dying off except for a few niches, but I find myself using it more and more. It is not a productivity boost in the way I end up using it, interestingly. Actually I think it is actively harming my continued development, though that could just be me getting older, or perhaps massive anxiety from joblessness. Still, I can't help but ask it if everything I do is a good idea. Even in the SO era I would try to find a reference for every little choice I made to determine if it was a good or bad practice.
The thing that changed my view on LLMs was solo traveling for 6 months after leaving Microsoft. There were a lot of points on the trip where I was in a lot of trouble (severe food sickness, stolen items, missed flights) where I don't know how I would have solved those problems without chatGPT helping.
This is one of the most depressing things I have ever read on Hacker News. You claim to have become so de-actualized as a human being that you cannot fulfill the most basic items of Maslow’s Hierarchy of Needs (food, health, personal security, shelter, transportation) without the aid of an LLM.
IDK I got really sick in a foreign country, I wasn't sure how to get to the hospital and I was alone in a hotel room. I don't really know how using chatgpt to help me isn't actualizing.
did you try asking at the reception desk?
Severe food sickness? I know WebMD rightly gets a lot of hate, but this is one thing where it would be good for. Stolen items? Depending on the items and the place, possibly police. Missed flights? Customer service agent at the airport for your airline or call the airline help line.
Well I got so weak I needed to go to the hospital, and that was tough.
As a Seattle SWE, I'd say most of my coworkers do hate all the time-wasting AI stuff being shoved down our throats. There are a few evangelical AI boosters I do work with, but I keep catching mistakes in their code that they didn't used to make. Large suites of elegant looking unit tests, but the unit tests include large amounts of code duplicating functionality of the test framework for no reason, and I've even seen unit tests that mock the actual function under test. New features that actually already exist with more sane APIs. Code that is a tangled web of spaghetti. These people largely think AI is improving their speed but then their code isn't making it past code review. I worry about teams with less stringent code review cultures, modifying or improving these systems is going to be a major pain.
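(For anyone who hasn't seen the mock-the-function-under-test antipattern, it looks roughly like this; "billing" is a hypothetical module, not something from an actual codebase. The test stays green forever because the real implementation never runs.)

    # Antipattern sketch: the "unit under test" is patched out, so the
    # test only ever exercises the mock.
    from unittest.mock import patch

    import billing  # hypothetical module exposing compute_invoice()

    def test_compute_invoice():
        with patch("billing.compute_invoice", return_value=100):
            # Passes even if compute_invoice() is broken or deleted:
            # the call below hits the mock, not the real code.
            assert billing.compute_invoice(hours=10, rate=10) == 100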
> and I've even seen unit tests that mock the actual function under test.
Yup. AI is so fickle it’ll do anything to accomplish the task. But AI is just a tool; it’s all about what you allow it to do. Can’t blame AI, really.
In fairness, I’ve seen humans make that mistake. We had a complete outage in the testing of a product once, and a couple of tests were still green. Turns out they tested nothing and never had.
If a tool makes it easy to shoot yourself in the foot, then it's not a good tool. See C++.
I've interfaced with some AI-generated code, and after several examples of finding subtle yet very wrong bugs, I now digest code that I suspect comes from AI (or an AI-loving coworker) with much, much more scrutiny than I used to. I've frankly lost trust in any kind of care for quality or due diligence from some coworkers.
I see how the use of AI is useful, but I feel that the practitioners of AI-as-coding-agent are running away from the real work. How can you tell me about the system you say you have created if you don't have the patience to make it or think about it deeply in the first place?
Your coworkers were probably writing subtle bugs before AI too.
Would you rather consume a bowl of soup with a fly in it, or a 50 gallon drum with 1,000 flies in it? In which scenario are you more likely to fish out all the flies before you eat one?
Easier to skim 1000 flies from a single drum than 100 flies from 100 bowls of soup.
No, I think it would be far easier to pick 100 flies, each from a single bowl of soup, than to pick all 1,000 flies out of a 50-gallon drum.
You don’t get to fix bugs in code by simply pouring it through a filter.
Alas, the flies are not floating on the surface. They are deeply mixed in, almost as if the machine that generated the soup wanted desperately to appear to be doing an excellent job making fly-free soup.
I see it like the hype around js/node and whatever module tech was glued to it when it was new, from the perspective of someone who didn't code JS. Sum of F's given is still zero.
-206dev
My hot take is that the evangelical people don't really like AI either; they're just scared. I think you have to be outside of big tech to appreciate AI.
If AI replaces software engineers, people outside tech don't have much chance of surviving it either.
While everybody else is ranting about AI, I'll rant about something else: trip planning apps. There have been literally thousands of attempts at this and AFAICT precisely zero have ever gotten any traction. There are two intractable problems in this space.
1) A third party app simply cannot compete with Google Maps on coverage, accuracy and being up to date. Yes, there are APIs you can use to access this, but they're expensive and limited, which leads us to the second problem:
2) You can't make money off them. Nobody will pay to use your app (because there's so much free competition), and the monetization opportunities are very limited. It's too late in the flow to sell flights, you can't compete with Booking etc for hotel search, and big ticket attractions don't pay commissions for referrals. That leaves you with referrals for tours, but people who pay for tours are not the ones trying to DIY their trip planning in the first place.
It's just another business/service niche that is solved until the current Big Provider becomes Evil or goes under.
Similar to "made for everyone" social networks and video upload platforms.
But there are niches that are trip planning + where no one is solving the pain! For example, Geocaching. I always dreamed about an easy way to plan Geocaching routes for travel and find interesting caches on the way. Currently you've got to filter them out and then eyeball the map for what seems to be nearby, despite there maybe not being any real roads there, or the cache probably actually being lost, or it having to be accessed at a specific time of day.
So... no one wants apps for problems that are already solved + boring.
I am not in Seattle. I do work in AI but have shifted more towards infrastructure.
I feel fatigued by AI. To be more precise, this fatigue includes several factors. The first is that a lot of people around me get excited by events in the AI world that I find distracting. These might be new FOSS library releases, news announcements from the big players, new models, new papers. As one person, I can only work on 2-3 things in a given interval of time. Ideally I would like to focus and go deep on those things. Often, I need to learn something new, and that takes time, energy, and focus. This constant Brownian motion of ideas gives a sense of progress and "keeping up" but, for me at least, acts as a constantly tapped brake.
Secondly, there is a sentiment that every problem has an AI solution. Why sit and think, run experiments, try to build a theoretical framework, when one can just present the problem to a model? I use LLMs too, but it is more satisfying, productive, and insightful when one actually thinks hard about and understands a topic before using them.
Thirdly, I keep hearing that the "space moves fast" and "one must keep up". The fundamentals actually haven't changed that much in the last 3 years and new developments are easy to pick up. Even if they did, trying to keep up results in very shallow and broad knowledge that one can't actually use. There are a million things going on and I am completely at peace with not knowing most of them.
Lastly, there is pressure to be strategic. To guess where the tech world is going, to predict and plan, to somehow get ahead. I have no interest in that. I am confident many of us will adapt and if I can't, I'll find something else to do.
I am actually impressed with and heavily use models. The tiresome part now are some of the humans around the technology who participate in the behaviors listed above.
Ok so a few thoughts as a former Seattleite:
1. You were a therapy session for her. Her negativity was about the layoffs.
2. FAANG companies dramatically overhired for years and are using AI as an excuse for layoffs.
3. The AI scene in Seattle is pretty good, but as with everywhere else it was/is a victim of the AI hype. I see estimates of the hype being dead in a year. AI won't be dead, but throwing money at whatever Uber-for-pets-AI-ly idea pops up won't happen.
4. I don't think people hate AI, they hate the hype.
Anyways, your app actually does sound interesting so I signed up for it.
Some people really do hate AI, it's not entirely about the layoffs. This is a well insulated bubble but you can find tons of anti-AI forums online.
I think these companies would benefit from honesty. If they're right and their new AI capabilities really are powerful, then poisoning their workforce against AI is the worst thing they could do right now. A generous severance approach and compassionate layoffs would go a long way.
It's not just that AI is being pushed onto employees by the tech giants - this is true - but that the hype of AI as a life-changing tech is not holding up, and people within the industry can easily see this. The only life-changing thing it's doing is eliminating jobs, in the tech industry and outside it, via a self-fulfilling prophecy driven by CEOs who have bet too much on AI. Everyone currently agrees that there is no return on all the money spent on AI. Some players may survive and do well in the future, but for the majority there is only the prospect of pain, and this is what all the negativity is about.
As a layoff justification and a hurry-up tool, it is pretty loathsome. People depend on their jobs for their housing, food, etc.
More than this, man. AI is making me re-appreciate part of the Marxist criticism of capitalism. The concept of worker alienation could easily be extended in new forms to the labor situation in an AI-based economy. FWIW, humans derive a lot of their self-evaluation as people from labor.
Marx was correct in his identification of the problem (the communist manifesto still holds up today). Marx went off the rails with his solution.
All big corporate employees hate AI because it is incessantly pushed on them by clueless leadership and mostly makes their job harder. Seattle just happens to have a much larger percent of big tech employees than most other cities (>50% work for Microsoft or Amazon alone). In places like SF this gloom is balanced by the wide eyed optimism of employees of OpenAI, Anthropic, Nvidia, Google etc. and the thousands of startups piggybacking off of them hoping to make it big.
That's probably the difference
This being a place with a high density of people with the agency to influence the outcome, I think it's important for people here to acknowledge that much of what the negative people think is probably 100% true.
There will absolutely be some cases where AI is used well. But probably the larger fraction will be cases where AI does not give a better service, experience, or tool. It will be used to give a cheaper but shittier one. This will be a big win for the company or service implementing it, but it will suck for literally everybody else involved.
I really believe there's huge value in implementing AI pervasively. However it's going to be really hard work and probably take 5 years to do it well. We need to take an engineering and human centred approach and do it steadily and incrementally over time. The current semi-religious fervour about implementing it rapidly and recklessly is going to be very harmful in the longer term.
Anecdotally, lots of people in SF tech hate AI too. _Most_ people out of tech do. But enough of the people in tech have their future tied to AI that there are a lot of vocal boosters.
It is not at all my experience working in local government (that is, in close contact with everybody else paying attention to local government) that non-tech people hate AI. It seems rather the opposite.
Managers everywhere love the idea of AI because it means they can replace expensive and inefficient human workers with cheap automation.
Among actual people (i.e. not managers) there seems to be a bit of a generation gap - my younger friends (Gen Z) are almost disturbingly enthusiastic about entrusting their every thought and action to ChatGPT; my older friends (young millennials and up) find it odious.
The median age of people working local politics is probably 55, and I've met more people (non-family, that is) over 70 doing this than in anything else, and all of them are (a) using AI for stuff and (b) psyched to see any new application of AI being put to use (for instance, a year or so ago, I used 4o to classify every minute spent in our village meetings according to broad subjects).
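(That classification job is a genuinely good fit for an LLM, for what it's worth. A sketch of the shape of it, with an invented category list and transcript file; the API model name for 4o is "gpt-4o".)

    # Hypothetical sketch of classifying meeting-transcript chunks by
    # broad subject; the categories and file name are illustrative only.
    from openai import OpenAI

    client = OpenAI()
    SUBJECTS = ["zoning", "budget", "public safety", "roads", "other"]

    def classify(chunk: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "Classify the meeting excerpt. Answer with "
                            "exactly one of: " + ", ".join(SUBJECTS)},
                {"role": "user", "content": chunk},
            ],
        )
        label = resp.choices[0].message.content.strip().lower()
        return label if label in SUBJECTS else "other"

    # One paragraph per chunk is a crude but serviceable split.
    for i, chunk in enumerate(open("village_meeting.txt").read().split("\n\n")):
        print(i, classify(chunk))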
Or, drive through Worth and Bridgeview in IL, where all the middle eastern people in Chicago live, and notice all the AI billboards. Not billboards for AI, just, billboards obviously made with GenAI.
I think it's just not true that non-tech people are especially opposed to AI.
Managers should realize that the thing AI might be best at is replacing them. Most of my managers don't understand the people they are managing and don't understand what the people they are managing are actually building. Their job is to take a question from management that their reports can answer, format that answer for their boss, and send the email. Their job is to be the leader in a meeting and make sure it stays on track, not to understand the content. AI can do all that shit without a problem.
I don't doubt that many love it. I'm just going based on SF non-tech people I know, who largely see it as the thing vaguely mentioned on every billboard and bus stop, the chatbot every tech company seems to be trying to wedge into every app, and the thing that makes misleading content on social media and enables cheating on school projects. But, sometimes it is good at summarizing videos and such. I probably have a biased sample of people who don't really try to make productive use of AI.
I can imagine reasons why non-tech people in SF would hate all tech. I work in tech and living in the middle of that was a big part of why I was in such a hurry to get out of there.
Frankly, tech deserves its bad reputation in SF (and worldwide, really).
One look at the dystopian billboards bragging about trying to replace humans with AI should make any sane human angry at what tech has done. Or the rising rents due to an influx of people working on mostly useless AI startups, 90% of which won't be around in 5 years. Or even how poorly many in tech behave in public and how poorly they treat service workers. That's just the tip of the iceberg, and just in SF alone.
I say all this as someone living in SF and working in tech. As a whole, we've brought the hate upon ourselves, and we deserve it.
There's a long list of things that have "replaced" humans all the way back to the ox drawn plow. It's not sane to be angry at any of those steps along the way. GenAI will likely not be any different.
it's plenty sane to be angry when the benefits of those technical innovations are not distributed equally.
It is absolutely sane to be angry at people's livelihoods being destroyed and most aspects of life being worsened just so a handful of multi-billionaires that already control society can become even richer.
The plough also made the rich richer, but in the long run the productivity gains it enabled drove improvements to common living standards.
I don't agree with any of this. I just think it's aggravating to live in a company town.
Anyone involved in government procurement loves AI, irrespective of what it even is, for the simple fact that they get to pointedly ask every single tech vendor for evidence that they have "leveraged efficiency gains from AI" in the form of a lower bid.
At least, that's my wife's experience working on a contract with a state government at a big tech vendor.
Non-technical people that I know have rapidly embraced it as "better google where i don't have to do as much work to answer questions." This is in a non-work context, so I don't know how much those people are using it to do their day job, writing emails or whatever. A lot of these people are tech-using boomers: they already adjusted to Google/the internet, they don't know how it works, they're just like "oh, the internet got even better."
There's maybe a slow trend towards "that's not true, you should know better than to trust AI for that sort of question" in discussions when someone says something like "I asked AI how [xyz was done]" but it's definitely not enough yet to keep anyone from going to it as their first option for answering a question.
EDIT: Removed part of my post that pissed people off for some reason. shrug
It makes a lot of sense that someone casually coming in to use chatgpt for 30 minutes a week doesn't have any reason to think more deeply about what using that tool 'means' or where it came from. Honestly, they shouldn't have to think about it.
The claim I was responding to implied that non-techies distinctively hate AI. You're a techie.
It’s one of those “people hate noticing AI-generated stuff, but everyone and their mom is using ChatGPT to make their work easier” situations. There are a lot of vocal boosters and vocal anti-boosters, but the general population uses it in a Google fashion and moves on. Not everyone is thinking about the AI apocalypse every day.
Personally, I’m in between the opinions. I hate consuming AI-generated stuff, but I can see the use for myself for work, or for asking a bunch of not-so-important questions to get the general idea of something.
> enough of the people in tech have their future tied to AI that there are a lot of vocal boosters
That's the presumption. There's no data on whether this is actually true or not. Most rational examinations show that it most likely isn't. The progress of the technology is simply too slow and no exponential growth is on the horizon.
Most of my FB contacts are not in tech. AI is overwhelmingly viewed as a negative by them. To be clear: I'm counting anyone who posts AI-generated pictures on FB as implicitly pro-AI; if we neglect this portion, the only non-negative posts about AI would be highly qualified "in some special cases it is useful" statements.
That’s fair. The bad behavior in the name of AI definitely isn’t limited to Seattle. I think the difference in SF is that there are people doing legitimately useful stuff with AI
I think this comment (and TFA) is really just painting with too broad of strokes. Of course there are going to be people in tech hubs that are very pro-AI, either because they are working with it directly and have had legitimately positive experiences, or because they work with it and begrudgingly see the writing on the wall for what it means for software professionals.
I can assure you, living in Seattle I still encounter a lot of AI boosters, just as much as I encounter AI haters/skeptics.
What’s so striking to me is these “vocal boosters” almost preach like televangelists the moment the subject comes up. It’s very crypto-esque (not a hot take at all I know). I’m just tired of watching these people shout down folks asking legitimate questions pertaining to matters like health and safety.
Health and safety seems irrelevant to me. I complain about cars; I point out "obscure" facts like that they are a major cause of lung-related health problems for innocent bystanders; I don't actually ride in cars on any regular basis; I use them less, in fact, than I use AI. There were people at the car's introduction who made all the points I would make today.
The world is not at all about fairness of benefits and impacts to all people. It is about a populist mass and what amuses them and makes their life convenient, hopefully without attending the relevant funerals themselves.
> health and safety seems irrelevant to me
Honestly I don’t really know what to say to that, other than it seems rather relevant to me. I don’t really know what to elaborate on given we disagree on such a fundamental level.
Do you think the industry will stop because of your concern? If for example, AI does what it says on the box but causes goiters for prompt jockeys do you think the industry will stop then or offshore the role of AI jockey?
It's lovely that you care about health, but I have no idea why you think you are relevant to a society that is very much willing to risk extinction to avoid the slightest upset or delay to consumer-convenience-measured progress.
Strangely, I've found the only people who are super excited about AI are executive-level boomers. My mom loves AI and uses it to do her job, which of course has poor results. All the younger people I know hate AI. Perhaps it's also a generational difference.
> Instead, she reacted to it with a level of negativity I'd never seen her direct at me before.
A few months ago, a friend of mine showed me a poem she wrote for her newborn. Or, more specifically, that she asked ChatGPT to write for her newborn.
I almost acted like this ex-Microsoft senior. Tbh, if I didn't know it was for her own child, I would have acted this way.
I (thought that I) managed to ignore my opinions about whether writing poems is a good use of AI and steer the topic to baby formula instead.
From the article:
> I wanted her take on Wanderfugl, the AI-powered map I've been building full-time.
I can at least give you one piece of advice. Before you decide on a company or product name, take the time to speak it out loud so you can get a sense of how it sounds.
I grew up in Norway, and there's this idea in Europe of someone who breaks from corporate culture and hikes and camps a lot (called Wandervogel in German). I also liked how, when pronounced in Norwegian or Swedish, it sounds like "wander full". I like the idea of someone who is full of wander.
Same in Danish FWIW.
In English, I’d pronounce it very similar to “wonderful”.
Also, do it assuming different linguistic backgrounds. It could sound dramatically different by people that speak English but as second language, which are going to be a whole lot of your users, even if the application is in English.
If there is a g in there I will pronounce a g there. I have some standards and that is one. Pronouncing every single letter.
> Pronouncing every single letter.
Now I want to know how you pronounce words like: through, bivouac, and queue.
That's a gnarly standard you have there.
It's pronounced wanderfull in Norwegian
And how many of your users are going to have Nordic backgrounds?
I personally thought it was wander _fughel_ or something.
Let alone how difficult it is to remember how to spell it and look it up on Google.
Maybe that's why they didn't go with the English cognate i.e. Wanderfowl, since being foul isn't great branding
What? You don't want travel tips from an itinerant swinger? Or for itinerant swingers?
Instead of admitting you built the wrong thing you denigrate a friend and someone whom you admire. Instead of reconsidering the value of AI you immediately double down.
This is a product of hurt feelings and not solid logic.
The thing about dismissing AI in 2025 is that it's on par with dismissing the wearable computing group at MIT in the 1980s.
But admittedly, if one had tried to productize their stuff in the 1980s it would have been hilarious. So the rewards here are going to go to the people who read the right tea leaves and follow the right path to what's inevitable.
In the short term, a lot of not-so-smart people are going to lose a lot of money believing some of the ludicrous short-term claims. But when has that not been the case?
This is not the right time of year to pitch in Seattle. The days are short and the people are cranky. But if they want to keep hating on AI as a technology because of Microsoft and Amazon, let them, and build your AI technology somewhere else. San Francisco thinks the AGI is coming any day now so it all balances out, no?
We have these weekly rah-rah AI meetings where we swap tips on what we've achieved with Copilot and Devin. Mostly crickets, but everyone is talking with lots of enthusiasm. It's starting to get silly now, though; most people can't even get the tools to do anything more useful than the trivial things we used to see on Stack Overflow.
> AI-powered map
> none of it had anything to do with what I built. She talked about Copilot 365. And Microsoft AI. And every miserable AI tool she's forced to use at work. My product barely featured. Her reaction wasn't about me at all. It was about her entire environment.
She was given two context clues. AI. And maps. Maps work, which means all the information in an "AI-powered map" descriptor rests on the adjective.
The product website isn't convincing either. It's only in private beta, and the first example shows 'A scenic walking tour of Venice' as the desired trip. I'll readily believe LLMs will gladly give you some sort of itinerary for walking in Venice, including all the highlights people post about on social media to show how great their life is. But if you asked anyone knowledgeable about travel in that region, the counter-questions would be 'Why Venice specifically? I thought you hated crowds — have you considered less crowded alternatives where you will be appreciated more as a tourist? Have you actually been to Italy at all?'.
LLMs are always going to give you the most plausible thing for your query, and will likely just rehash the same destinations from hundreds of listicles and status signalling social media posts.
She probably understood this from the minimal description given.
> I'll readily believe LLMs will gladly give you some sort of itinerary for walking in Venice
I tried this in Crotone in September. The suggested walking tour was shit. The facts weren't remarkable. The stops were stupid and stupidly laid out. The whole experience was dumb, and only redeemed by the fact that I was vacationing with a friend who founded one of the AI companies.
Very new ex-MSFT here. I couldn’t relate more with your friend. That’s exactly what happened. I left Microsoft about 5 weeks ago and it’s been really hard to detox from that culture.
AI pushed down everywhere. Sometimes shitty AI that needed to be proven out at all costs because it had to live up to the hype.
I was in one such AI org, and even there several teams felt the pressure from SLT and a culture drift toward a dysfunctional environment.
Such pressure to use AI at all costs has, as other fellows from Google mentioned, been a secret ingredient for a bitter burnout. I’m going to therapy and am under medication now to recover from it.
Hey man- hang in there.
FWIW: I realized this year that there are whole cohorts of management people who have absolutely zero relationship with the words that they speak. Literal tabula rasas who convert their thoughts to new words with no attachment to past statements/goals.
Put another way: Liars exist and operate all around you in the top tier of the FAANGS rn.
People hate what the corporations want AI to be and people hate when AI is used the way corporations seem to think it should be used, because the executives at these companies have no taste and no vision for the future of being human. And that is what people think of when they hear “AI”.
I still think there’s a third path, one that makes people’s lives better with thoughtful, respectful, and human-first use of AI. But for some reason there aren’t many people working on that.
Thanks for the post - it's work to write and synthesize, and I always appreciate it!
My first reaction was "replace 'AI' with the word 'Cloud'" ca 2012 at MS; what's novel here?
With that in mind, I'm not sure there is anything novel about how your friend is feeling, or about the organizational dynamics, or in fact about how large corporations go after business opportunities; on those terms, I think your friend's feelings are a little boring, or at least don't give us any new market data.
In MS in that era, there was a massive gold rush inside the org to Cloud-ify everything and move to Azure - people who did well at that prospered, people who did not, ... often did not. This sort of internal marketplace is endemic, and probably a good thing at large tech companies - from the senior leadership side, seeing how employees vote with their feet is valuable - as is, often, the directional leadership you get from a Satya who has MUCH more information than someone on the ground in any mid-level role.
While I'm sure there were many naysayers about the Cloud in 2012, they were wrong, full stop. Azure is immensely valuable. It was right to dig in on it and compete with AWS.
I personally think Satya's got a really interesting hyper scaling strategy right now -- build out national-security-friendly datacenters all over the world -- and I think that's going to pay -- but I could be wrong, and his strategy might be much more sophisticated and diverse than that; either way, I'm pretty sure Seattleites who hate how AI has disrupted their orgs and changed power politics and winners and losers in-house will have to roll with the program over the next five years and figure out where they stand and what they want to work on.
It does feel like without a compelling AI product Microsoft isn't super differentiated. Maybe Satya is right that scale is a differentiation, but I don't think people are as trapped in an AI ecosystem as they were in Azure.
Lol. You don't think that Microsoft has _a_ compelling AI product? The new version of 365 Copilot is objectively compelling, even if it is a work in progress. And Github Copilot is also objectively compelling.
Moving to the Cloud proved to be a pretty nice moneymaker far faster and more concretely than AI has been for these companies. It's a fair comparison regarding corporate pushes but not anything more than that.
There has always been a lot of Microsoft hate, but now it's on a whole new level. Windows now really sucks; my new laptop is all Linux for the first time ever. I don't see why this company is still so valuable. Most people only use a browser now and some iOS apps, so there is no need for Windows or Microsoft (and of course Azure is never anyone's first choice). Steam makes the gamers happy to leave too.
Gaming.
The problem with AI is that the media and the tech hype machine wants everyone to believe that it is more than a glorified randomized text generator. Yes, for many problems this is just what you need, but not to create reliable software. Somehow, they want everyone to go into a state of disbelief and agree that it is a superior intelligence or at least the clear sign of something of this sorts, and that we should stop everything we're doing right now to give more money and attention to this endeavor.
wow — this hit me hard.
I live in Seattle, and got laid off from Microsoft as a PM in Jan of this year.
Tried in early 2024 to demonstrate how we could leverage smaller models (such as Mixtral) to improve documentation and tailor code samples for our auth libraries.
The usual “fiefdom” politics took over and the project never gained steam. I do feel like I was put in a certain “non-AI” category and my career stalled, even though I took the time to build AI-integrated prototypes and present them to leadership.
It’s hard to put on a smile and go through interviews right now. It feels like the hard-earned skills we bring to the table are being so hastily devalued, and for what exactly?
I like AI to the extent that it can quickly solve what I've taken to calling "embarrassingly solved problems" in your environment, like "make an animation subsystem for my program". A Qt timeline is not hard, but it is tedious, so the AI can do it.
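(For reference, this is the tedious-but-solved shape of that task; a minimal sketch assuming PySide6, not anyone's actual generated code.)

    # Minimal QTimeLine sketch: fade a label in over one second.
    # Assumes PySide6; PyQt would look nearly identical.
    import sys
    from PySide6.QtCore import QTimeLine
    from PySide6.QtWidgets import QApplication, QLabel

    app = QApplication(sys.argv)
    label = QLabel("Hello")
    label.setWindowOpacity(0.0)  # start invisible so the fade-in is visible
    label.show()

    timeline = QTimeLine(1000)      # total duration: 1000 ms
    timeline.setFrameRange(0, 100)  # emit frameChanged for frames 0..100
    timeline.frameChanged.connect(
        lambda f: label.setWindowOpacity(f / 100.0))  # map frame -> opacity
    timeline.start()

    sys.exit(app.exec())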
And it turns out that there are some embarrassingly solved problems, like rudimentary multiplayer games, that look more impressive than they really are when you get down to it.
More challenging prompts like "change the surface generation algorithm my program uses from Marching Cubes to Flying Edges", for which there are only a handful of toy examples, VTK's implementation, and the paper, result in an avalanche of shit. Wasted hours, quickly becoming wasted days.
'If you could classify your project as "AI," you were safe and prestigious. If you couldn't, you were nobody. Overnight, most engineers got rebranded as "not AI talent."'
It hits weirdly close to home. Our leadership did not technically mandate use, but 'strongly encourages' it. I haven't even had my review yet, but I know that once we get to the goals part, use of AI tools will be an actual metric (which, to me, somewhere between skeptic and evangelist, is... dumb).
But the 'AI talent' part fits. For mundane stuff like a data model, I need full committee approval from people who don't get it anyway (and whose entire contribution is 'what other companies are doing').
The full quote from that section is worth repeating here.
---------
"If you could classify your project as "AI," you were safe and prestigious. If you couldn't, you were nobody. Overnight, most engineers got rebranded as "not AI talent." And then came the final insult: everyone was forced to use Microsoft's AI tools whether they worked or not.
Copilot for Word. Copilot for PowerPoint. Copilot for email. Copilot for code. Worse than the tools they replaced. Worse than competitors' tools. Sometimes worse than doing the work manually.
But you weren't allowed to fix them—that was the AI org's turf. You were supposed to use them, fail to see productivity gains, and keep quiet.
Meanwhile, AI teams became a protected class. Everyone else saw comp stagnate, stock refreshers evaporate, and performance reviews tank. And if your team failed to meet expectations? Clearly you weren't "embracing AI." "
------------
On the one hand, if you were going to bet big on AI, there are aspects of this approach that make sense. e.g. Force everyone to use the company's no-good AI tools so that they become good. However, not permitting employees outside of the "AI org" to fix things neatly nixes the gains you might see while incurring the full cost.
It sounds like MS's management, the same as many other tech corp's, has become caught up in a conceptual bubble of "AI as panacea". If that bubble doesn't pop soon, MS's products could wind up in a very bad place. There are some very real threats to some of MS's core incumbencies right now (e.g. from Valve).
I know of at least one bigco that will no longer hire anyone, period, who doesn't have at least 6 months of experience using genai to code and isn't enthusiastic about genai. No exceptions. I assume this is probably true of other companies too.
I think it makes some amount of sense if you've decided you want to be "an AI company", but it also makes me wary. Apocryphally Google for a long period of time struggled to hire some people because they weren't an 'ideal culture fit'. i.e. you're trying to hire someone to fix Linux kernel bugs you hit in production, but they don't know enough about Java or Python to pass the interview gauntlet...
Like any tool, the longer you use it the better you learn where you can extract value from it and where you can't, where you can leverage it and where you shouldn't. Because your behaviour is linked to what you get out of the LLM, this can be quite individual in nature, and you have to learn to work with it through trial and error. But in the end engineers do appear to become more productive 'pairing' with an LLM, so it's no surprise companies are favouring LLM-savvy engineers.
> But in the end engineers do appear to become more productive 'pairing' with an LLM
Quite the opposite: LLMs reduce productivity, they don't increase it. They merely give the illusion of productivity because you can generate code real fast, but that isn't actually useful when you spend time fixing all the mistakes it made. It is absolutely insane that companies are stupid enough to require people use something which cripples them.
So far, for me, it's just an annoying tool that gets worse outcomes potentially faster than just doing it by hand.
It doesn't matter how much I use it. It's still just an annoying tool that makes mistakes which you try to correct by arguing with it but then eventually just fix it yourself. At best it can get you 80% there.
The only clear applications for AI in software engineering are for throwaway code, which interestingly enough isn't used in software engineering at all, or for when you're researching how to do something, for which it's not as reliable as reading the docs.
They should focus more on data engineering/science and other similar fields, which are much more about that kind of throwaway code; but since there are often no tests there, that's a bit too risky.
I don't think the phenomenon is limited to Seattle.
It's not. I know some ex-Bay Area devs who are of the same mind, and I'm not too far off.
I think it's definitely stronger at MS, as my friend on the inside tells me, than most places.
There are a lot of elements to it: profits at all costs, the greater economy, FOMO, and a resentment of engineers and technical people who have long been practicing what execs, I can only guess, see as alchemy. The execs have decided that they are now done with that and that everyone must use the new sauce, because reasons. Sadly, until things like logon buttons disappear and customers get pissed, it won't self-correct.
I just wish we could present the best version of ourselves and, as long as deadlines are met, it would all work out; but some have decided on scorched earth. I suppose it's a natural reaction to always want to be on the cutting edge, even before the cake has left the oven.
I’m all for neurodivergent acceptance but it has caused monumentally obnoxious people like this to assume everyone else is the problem. A little self awareness would solve a lot of problems.
HN guidelines ask commenters to avoid name-calling. You can critique the article without slurs.
I think reading the room is required here. You and your friend can both be right at the same time. You want to build an AI-enabled app, and indeed there's plenty of opportunity for it, I'm sure. And your friend can hate what it's done to their job stability and the industry. Also, totally unrelated, but what is the meaning or etymology behind the app name Wanderfugl? I initially read it as Wanderfungl.
I "spoke" it to myself while reading, and instantly heard "Wonderfuckle".
It's wandering bird in Norwegian
> But then I realized this was bigger than one conversation. Every time I shared Wanderfugl with a Seattle engineer, I got the same reflexive, critical, negative response. This wasn't true in Bali, Tokyo, Paris, or San Francisco—people were curious, engaged, wanted to understand what I was building. But in Seattle? Instant hostility the moment they heard "AI."
So what's different between Seattle and San Francisco? Does Seattle have more employee-workers and San Francisco has more people hustling for their startup?
I assume Bali (being a vacation destination) is full of people who are wealthy enough to feel they're insulated from whatever will happen.
I live in Seattle now, and have lived in San Francisco as well.
Seattle has more “normal” people and the overall rhetoric about how life “should be” is in many ways resistant to tech. There’s a lot to like about the city, but it absolutely does not have a hustle culture. I’ve honestly found it depressing coming from the East Coast.
Tangent aside, my point is that Seattle has far more of a comparison ground of “you all are building shit that doesn’t make the world better, it just devalues the human”. I think LLMs have (some) strong use cases, but it is impossible to argue that the societal downsides we see aren't ripe for hatred - and Seattle will latch on to that in a heartbeat.
Seattle has always been a second-mover when it comes to hype and reality distortion. There is a lot more echo chamber fervor (and, more importantly, lots of available FOMO money to burn) in SF around whatever the latest hotness is.
My SF friends think they have a shot at working at a company whose AI products are good (cursor, anthropic, etc.), so that removes a lot of the hopelessness.
Working for a month out of Bali was wonderful; it's mostly Australians and Dutch people working remotely. Those who ran their own businesses were especially encouraging (though maybe that's just because entrepreneurs are more supportive of other entrepreneurs).
A sizable fraction of current AI results are wrong. The key to using AI successfully is imposing the costs of those errors on someone who can't fight back. Retail customers. Low-level employees. Non-paying users.
A key part of today's AI project plan is clearly identifying the dump site where the toxic waste ends up. Otherwise, it might be on top of you.
My previous software job was for a Seattle-based team within Amazon's customer support org.
I consider it divine intervention that I departed shortly before LLMs got big. I can't imagine the unholy machinations my former team has been tasked with working on since I left.
There's a great non-AI point in this article - Seattle has great engineers. In pursuing startups, Seattle engineers are relatively unambitious compared to the Bay Area. By that I mean there's less "shooting for unicorns" and a comparatively more reserved startup culture and environment.
I'm not sure why. I don't think it's access to capital, but I'd love to hear thoughts.
My pet theory is that most of the investor class in Seattle is ex-Microsoft and ex-Amazon. Neither Microsoft nor Amazon is really a big splashy unicorn. Amazon's greatest innovation (AWS) isn't even their original line of business and is now 'boring'. No doubt they've innovated all over their business in both little and big ways, but not splashy ways; hell, every time Amazon tries to splash they seem to fall on their ass more often than not (look at their various cancelled hardware lines, their game studios, etc. Alexa still chugs on, but she's not getting appreciably better to the end user over even the last 10 years).
Microsoft is the same: a generally very practical company just trying to do practical company stuff.
All the guys that made their bones, vested and rested, and now want to turn some of that windfall into investments likely don't have the kind of risk tolerance it takes to fund a potential unicorn. All smart people I'm sure, smart enough to negotiate big windfalls from MS/AZ, but far less risk tolerant than a guy in SF who made their investment nest egg building some risky unicorn.
Whenever I see "everyone", and broad statements that try to paint an entire geography based on one company (Microsoft), I'm suspicious of the motives of the author at worst, or just dismissive of the premise at best.
I see what the author is saying here, but they're painting with an overly broad brush. The whole "San Francisco still thinks it can change the world" also is annoying.
I am from the Seattle area, so I do take it a bit personally, but this isn't exactly my experience here.
Interesting that this talks about people in tech who hate AI; it's true, tech seems actually fairly divided with respect to AI sentiment.
You know who's NOT divided? Everyone outside the tech/management world. Antipathy towards AI is extremely widespread.
And yet there are multiple posts ITT (obviously from tech-oriented people) proclaiming that large swaths of the non-tech world love AI.
An opinion I've personally never encountered in the wild.
I think they exist as a "market segment" (i.e, there are people out there who will use AI), but in terms of how people talk about it, sentiment is overwhelmingly negative in most circles. Especially folks in the arts and humanities.
The only non-technical people I know who are excited about AI, as a group, are administrator/manager/consultant types.
The name "Wanderfugl" is wanderfully fugly.
Oddly, the screenshots in the article show the name as "Wanderfull".
Tech professionals who depend on their work to survive and have not been thinking about capital vs. labor are in delulu land.
It's probably good if some portion of the engineering culture is irrationally against AI and refuses to adopt it, sort of Amish-style. There's probably a ton of good work still that can only be done if every aspect of a product/thing is given focused human attention, some of which might out-compete AI-aided ones.
I think you hit the nail on the head there. There's absolutely nothing we can do with AI that we can't do without it. And the level of understanding of a large codebase that a solid group of engineers has is paramount to moving fast once the product is live.
> level of understanding of a large codebase that a solid group of engineers has is paramount to moving fast once the product is live.
Try hiring and retaining that solid group of engineers if you are a small/mid-sized company without FAANG-level resources to offer.
I think treating AI as the best possible field for everyone smart and capable is itself very narrow-minded and short-sighted. Some people just aren't interested in that field; why is that so hard to accept? The world still needs experts in other fields, even within computing.
He describes his startup as an AI-oriented map... to me that sounds amazing and totally up my alley. But then it's actually about trip planning... to me that's too constrained and specific. What I would love is a map-type experience that gives me an AI-type interface for interesting things in any given area that might be near me and worth checking out.
And not just for travel, by the way... I love just exploring maps and seeing a place. I'd love to learn more about a place through something like a mesh between Wikipedia and a map, and AI could help.
I don't think the root cause here is AI. It's the repeated pattern of resistance to massive technological change, driven by system-level incentives. This story has happened again and again throughout recent history.
I expect it to settle out in a few years where:
1. Fiduciary duty to company shareholders will push companies to stop chasing AI hype and instead work out whether it's driving real top-line value for their business or not.
2. Mid- to senior-career engineers will have no choice but to level up their AI skills to stay relevant in the modern workforce.
My 2¢... LLMs are kind of amazing for structured text output like code. I have a completely different experience using LLMs for assistance writing code (as a relative novice) than I do in literally every other avenue of life.
Electrical engineering? Garbage.
Construction projects? Useless.
But code is code everywhere, and the immense amount of training data available in the form of working code, tutorials, and design and style guides means that the output for software development doesn't really resemble what anybody working in any other field sees. Even adjacent technical fields.
I'm experimenting with Gemini 3 and will try Opus 4.5 soon, but I've seen huge jumps doing EE for construction over the last batch of models.
I'm working on a harness, but I think it can do some basic Revit layouts with coaxing (which, with a good harness, should be really useful!).
Let me know what you've experienced. Not many construction EEs on HN.
It's satisfying to hear that Microsoft engineers hate Microsoft's AI offerings as much as I do.
Visual Studio is great. IntelliSense is great. Nothing open-source works on our giant legacy C++ codebase. IntelliSense does.
Claude is great. Claude can't deal with millions of lines of C++.
You know what would be great? If Microsoft gave Claude the ability to semantic search the same way that I can with Ctrl-, in Visual Studio. You know what would be even better? If it could also set breakpoints and inspect stuff in the Debugger.
You know what Microsoft has done? Added a setting to Visual Studio where I can replace the IntelliSense auto-complete UI (which provides real information determined from semantic analysis of the codebase and lets me cycle through a menu of possibilities) with an auto-complete UI that gives me a single suggestion of complete bullshit.
Can't you put the AI people and the Visual Studio people in a fucking room together? Figure out how LLMs can augment your already-really-good-before-AI product? How to leverage your existing products to let Claude do stuff that Claude Code can't do?
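To make the ask concrete, here is a minimal sketch of what an IDE-to-LLM tool bridge could look like. Everything here is hypothetical: neither Visual Studio nor Claude exposes these names; they're made up purely to illustrate the shape of the idea.

    # Hypothetical sketch of IDE capabilities exposed to an LLM as tools.
    # None of these tool names are real Visual Studio or Claude APIs.
    from dataclasses import dataclass, field

    @dataclass
    class Tool:
        name: str
        description: str
        parameters: dict = field(default_factory=dict)  # JSON-schema-ish

    IDE_TOOLS = [
        Tool("semantic_search",
             "Find symbols by meaning, like Ctrl-, in Visual Studio.",
             {"query": {"type": "string"}}),
        Tool("set_breakpoint",
             "Set a breakpoint at file:line and run to it.",
             {"file": {"type": "string"}, "line": {"type": "integer"}}),
        Tool("inspect_locals",
             "Dump local variables at the current breakpoint."),
    ]

The point being that the semantic index and the debugger already exist; the missing piece is only the plumbing that lets the model call them.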
There are a few clashing forces. One is the power of startups: what people love is what will prevail. It made Macs and iPhones grab market share back from "corporate" options like Windows and Palm Pilot. It's what keeps TikTok running.
An opposing force is corporate momentum. It's unfortunately true that people are beholden to what companies create. If there are only a few phones available, you will have to pick one. If there are only so many shows streaming, you'll probably end up watching the less disgusting of the options.
They are clashing. The people's sentiment is "AI bad". But if tech keeps making it and pushing it long enough, people will get older, corporate initiatives will get sticky, and it will become ingrained. And once it's ingrained, it's going to be here forever.
This is making me gain significant respect for Seattle.
As I've said before: AI mandates, like RTO mandates, are just another way to "quiet fire" people, or at least "quiet renegotiate" their employment.
That said, AI resistance is real too. We see it on this forum. It's understandable because the hype is all about replacing people, which will naturally make them defensive, whereas the narrative should be about amplifying them.
A well-intentioned AI mandate would either come with a) training and/or b) dedicated time to experiment and figuring out what works well for you. Instead what we're seeing across the industry is "You MUST use AI to do MORE with LESS while we layoff even more people and move jobs overseas."
My cynical take is, this is an intentional strategy to continue culling headcount, except overindexing on people seen as unaligned with the AI future of the company.
Our (on-the-way-out) mayor likes it!
"I said, Imagine how cool would this be if we had like, a 10-foot wall. It’s interactive and it’s historical. And you could talk to Martin Luther King, and you could say, ‘Well, Dr, Martin Luther King, I’ve always wanted to meet you. What was your day like today? What did you have for breakfast?’ And he comes back and he talks to you right now."
Seattle sounds kinda nice now. AI fatigue is real. I just had to swap eye doctors because they changed their medical records to some AI-powered bullshit and wanted me to re-enter all my info into the new system in order to check in for my appointment. A website whose EULA page redirected to an empty page, with no clear mention of HIPAA anywhere on the site's other pages. The eye doctor seemed confused why I wanted to stop using them after ten years as a patient, even after I pointed out the flaws. It's madness.
Was this written by AI? It sounds like the writing style of an elementary school student. Almost entirely made of really simple sentence structures, and for whatever reason I find it really annoying to read.
It has all the signs. Em-dashes and that distinct way of using short sentences to drive a point. Like this.
> like building an AI product made me part of the problem.
It's not about their careers. It's about the injustice of the whole situation. Can you possibly perceive the injustice? That the thing they're pissed about is the injustice? You're part of the problem because you can't.
That's why it's not about whether the tools are good or bad. Most of them suck, also, but occasionally they don't--but that's not the point. The point is the injustice of having them shoved in your face; of having everything that could be doing good work pivot to AI instead; of everyone shamelessly bandwagoning it and ignoring everything else; etc.
> It's not about their careers.
That's the thing, though, it is about their careers.
It's not just that people are annoyed that someone spends years to decades learning their craft, and then someone else puts a prompt into a chatbot that spits out an app that mostly works, without understanding any of the code that they 'wrote'.
It's that the executives are positively giddy at the prospect that they can get rid of some number of their employees and the rest will use AI bots to pick up the slack. Humans need things like a desk and dental insurance, and they fall unconscious for several hours every night. AI agents don't have to take lunch breaks or attend funerals or anything.
Most employees that have figured this out resent AI getting shoved into every facet of their jobs because they know exactly what the end goal is: that lots of jobs are going to be going away and nothing is going to replace them. And then what?
I feel like this is a textbook example of how people talk past each other. There are people in this world who operate under personal utility maximization, and they think everyone else does also. Then there are people who are maximizing for justice: trying to do the most meaningful work themselves while being upset about injustices. Call it scrupulosity, maybe. Executives doing stupid pointless things to curry favor is massively unjust, so it's infuriating.
If you are a utilitarian person and you try to parse a scrupulous person according to your utilitarianism of course their actions and opinions will make no sense to you. They are not maximizing for utility, whatsoever, in any sense. They are maximizing for justice. And when injustices are perpetrated by people who are unaccountable, it creates anger and complaining. It's the most you can do. The goal is to get other people to also be mad and perhaps organize enough to do something about it. When you ignore them, when you fail to parse anything they say as about justice, then yes, you are part of the problem.
> like [being involved in creation of the problem] made me a part of the problem.
Yeah, that's weird. Why would anyone think that? /s
I've lived in Seattle my whole life, and have worked in tech for 12+ years now as a SWE.
I think the SEA and SF tech scenes are hard to differentiate perfectly in an HN comment. However, I think any "Seattle hates AI" sentiment has more to do with the incessant pushing of AI into all the tech spaces.
It's being claimed as the next major evolution of computing, while also being cited as reasons for layoffs. Sounds like a positive for some (rich people) and a negative for many other people.
It's being forced into new features of existing products, while adoption of said features is low. This feels like cult-like behavior where you must be in favor of AI in your products, or else you're considered a luddite.
I think the confusing thing to me is that things which are successful don't typically need to be touted so aggressively. I'm on the younger side and generally positive to developments in tech, but the spending and the CEO group-think around "AI all the things" doesn't sit well as being aligned with a naturally successful development. Also, maybe I'm just burned out on ads in podcasts for "is your workforce using Agentic AI to optimize ..."
Some massive bait in this article. Like come on author - do you seriously have these thoughts and beliefs?
> It felt like the culture wanted change.
>
> That world is gone.
Ummm source?
> This belief system—that AI is useless and that you're not good enough to work on it anyway
I actually don't know anyone with this belief system. I'm pretty slow on picking up a lot of AI tooling, but I was slow to pick up JS frameworks as well.
It's just smart to not immediately jump on a bandwagon when things are changing so fast because there is a good chance you're backing the wrong horse.
And by the way, you sound ridiculous when you call me a dinosaur just because I haven't started using a tool that didn't even exist 6 months ago. FOMO sales tactics don't work on everyone, sorry to break it to you.
When the singularity hits in who knows how many years from now, do you really think it's one of these llm wrapper products that's going to be the difference maker? Again, sorry to break it to you but that's a party you and I are not going to get invited to. 0% chance governments would actually allow true super intelligence as a direct to consumer product.
In my opinion, the issue with AI is similar to the issue with self-driving cars. I think the last “five percent” of functionality for agents etc. will be much, much more difficult to nail down for production use, just like snowy weather and strange roads proved to be much more difficult for the self-driving car rollout. They got to 95% and assumed they were nearing completion, but it turned out there was even more work to be done to get to 100%. That’s kind of my take on all the AI hype. It’s going to take a lot more work to get the final five percent done.
"everyone"? Clickbait.
"But in San Francisco, people still believe they can change the world-so sometimes they actually do."
For the better, or for the worse?
Textbook way to NOT roll out AI for your org. AI has genuine benefits for white-collar workers, but they are not trained for the use cases that would actually benefit them, nor are they trained in what the tech is actually good at. They are being punished for using the tools poorly (with no guidance on how to use them well), and when they use the tools well, they fear being laid off once an SOP for their AI workflows is written.
This isn’t just a Seattle thing, but I do think the outsized presence of specific employers there contributes to an outsized negativity around AI.
Look, good engineers just want to do good work. We want to use good tools to do good work, and I was an early proponent of using these tools in ways to help the business function better at PriorCo. But because I was on the wrong team (On-Prem), and because I didn’t use their chatbots constantly (I was already pitching agents before they were a defined thing, I just suck at vocabulary), I was ripe for being thrown out. That built a serious resentment towards the tooling for the actions of shitty humans.
I’m not alone in these feelings of resentment. There’s a lot of us, because instead of trusting engineers to do good work with good tools, a handful of rich fucks decided they knew technology better than the engineers building the fucking things.
this reads like an ad for your project
It reads like it's AI-edited, which is deliciously ironic.
(Protip: if you're going to use em—dashes—everywhere, either learn to use them appropriately, or be prepared to be blasted for AI—ification of your writing.)
My creative writing teacher in college drilled the em dash into me. I can’t really write without them now.
I think the presence of em dashes is a very poor metric for determining if something is AI generated. I'm not sure why it's so popular.
For me it is that they are wrongly used in this piece. Em dashes as appositives have the feel of interruption—like this—and are to be used very sparingly. They're a big bump in the narrative's flow, and are to be used only when you want a big bump. Otherwise appositives should be set off with commas, when the appositive is critical to the narrative, or parentheses (for when it isn't). Clause changes are similar—the em dash is the biggest interruption. Colons have a sense of finality: you were building up to this: and now it is here. Semicolons are for when you really can't break two clauses into two sentences with a full stop; a full stop is better most of the time. Like this. And so full stops should be your default clause splice when you're revising.
Having em-dashes everywhere—but each one or pair is used correctly—smacks of AI writing—AI has figured out how to use them, what they're for, and when they fit—but has not figured out how to revise text so that the overall flow of the text and overall density of them is correct—that is, low, because they're heavy emphasis—real interruptions.
(Also the quirky three-point bullet list with a three-point recitation at the end with bolded leadoffs to each bullet point and a final punchy closer sentence is totally an AI thing too.)
But, hey, I guess I fit the stereotype!—I'm in Seattle and I hate AI, too.
> Semicolons are for when you really can't break two clauses into two sentences with a full stop; a full stop is better most of the time.
IIRC (it's been a while) there are 2 cases where a semi-colon is acceptable. One is when connecting two closely-related independent clauses (i.e. they could be two complete sentences on their own, or joined by a conjunction). The other is when separating items in a list, when the items themselves contain commas.
OMG, beautifully described! (not sarcastic!)
Ironically, years ago I fell into the habit of using too many non-interrupting em dashes because people thought semicolons were pretentious.
But introductory rhetorical questions? As sentence fragments? There I draw the line.
Also, for sheer delightful perversity, I ran the above comment through Copilot/ChatGPT and asked it to revise, and this is what I got. Note the text structuring and how it has changed! (And how my punctuation games are gone, but we expected that.)
>>>
For me, the issue is that they’re misused in this piece. Em dashes used as appositives carry the feel of interruption—like this—and should be employed sparingly. They create a jarring bump in the narrative’s flow, and that bump should only appear when you want it. Otherwise, appositives belong with commas (when they’re integral to the sentence) or parentheses (when they’re not). Clause breaks follow the same logic: the em dash is the strongest interruption. Colons convey a sense of arrival—you’ve been building up to this: and now it’s here. Semicolons are for those rare cases when two clauses can’t quite stand alone as separate sentences; most of the time, a full stop is cleaner. Like this. Which is why full stops should be your default splice when revising.
Sprinkling em dashes everywhere—even if each one is technically correct—feels like AI writing. The system has learned what they are, how they work, and when they fit, but it hasn’t learned how to revise for overall flow or density. The result is too many dashes, when the right number should be low, because they’re heavy emphasis—true interruptions.
(And yes, the quirky three-point bullet list with bolded openers and a punchy closer at the end is another hallmark of AI prose.)
But hey, I guess I fit the stereotype—I’m in Seattle, and I hate AI too.
I think it's because it is difficult to actually add an em dash when writing with a keyboard (except, I hear, on Macs). So either they 1) memorized the em dash alt code, 2) had a keyboard shortcut for the key, or 3) are using the character map to insert it every time, all of which are a stretch for a random online post.
You just type hyphen twice in many programs... Or on mobile you hold hyphen for a moment and choose em dash. I don't use it, but it's very easy to use.
Related article posted here https://news.ycombinator.com/item?id=46133941 explains it: "Within the A.I.’s training data, the em dash is more likely to appear in texts that have been marked as well-formed, high-quality prose. A.I. works by statistics. If this punctuation mark appears with increased frequency in high-quality writing, then one way to produce your own high-quality writing is to absolutely drench it with the punctuation mark in question. So now, no matter where it’s coming from or why, millions of people recognize the em dash as a sign of zero-effort, low-quality algorithmic slop."
So the funny thing is em dashes have always been a great trick to help your writing flow better. I guess GPT-4o figured this out in RLHF, and now it's everywhere.
Ironic? The author is working on an AI project.
The irony is that AI writing style is pretty off-putting, and the story itself was about people being put off by the author's AI project.
You mean Wanderfugl???
An iconic name
I'm not surprised you're getting bad reactions from people who aren't already bought in. You're starting from a firm "I'm right! They're wrong!" with no attempt to understand the other side. I'm sure that comes across not just in your writing.
> After a pause I tried to share how much better I've been feeling—how AI tools helped me learn faster, how much they accelerated my work on Wanderfugl. I didn't fully grok how tone deaf I was being though. She's drowning in resentment.
Here's the deal. Everyone I know who is infatuated with AI shares things AI told them with me, unsolicited, and it's always so amazingly garbage, but they don't see it or they apologize it away [1]. And this garbage is being shoved in my face from every angle --- my browser added it, my search engine added it, my desktop OS added it, my mobile OS added it, some of my banks are pushing it, AI comment slop is ruining discussion forums everywhere (even more than they already were, which is impressive!). In the meantime, AI is sucking up all the GPUs, all the RAM, and all the kWh.
If AI is actually working for you, great, but you're going to have to show it. Otherwise, I'm just going to go into my cave and come out in 5 years and hope things got better.
[1] Just a couple days ago, my spouse was complaining to her friend about a change that Facebook made, and her friend pasted an AI suggestion for how to fix it with like 7 steps that were all fabricated. That isn't helpful at all. It's even less helpful than if the friend just suggested to contact support and/or delete the facebook account.
I've recently found that it can be a useful substitute for Stack Overflow. It does occasionally make shit up, but Stack Overflow and forum searching also have a decently high miss rate, so that doesn't piss me off too much. And it's usually immediately obvious when a method doesn't exist, so it doesn't waste a lot of time for each incident.
Specifically, I was using Gemini to answer questions about Godot for C# (not GDScript or using the IDE, where documentation and forum support are stronger), and it was mostly quite good for that.
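As an aside, the "method doesn't exist" failure mode is cheap to catch mechanically. A minimal sketch in Python (illustrative only; in C#/Godot the compiler does this for you, and the hallucinated name below is made up):

    # Quick sanity check for an LLM-suggested API before trusting it.
    import importlib

    def api_exists(module_name: str, attr_path: str) -> bool:
        """Return True if module_name exposes the dotted attr_path."""
        try:
            obj = importlib.import_module(module_name)
        except ImportError:
            return False
        for part in attr_path.split("."):
            if not hasattr(obj, part):
                return False
            obj = getattr(obj, part)
        return True

    print(api_exists("json", "dumps"))      # True: real API
    print(api_exists("json", "to_string"))  # False: hallucinated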
It's like porn: use it privately if you have to, but don't make it my problem.
Everyone who has been told AI is a panacea by executive leadership who barely understand it feels this way.
I got a thread on SomethingAwful gassed [1] because it was about an AI radio station app I was working on. People on that forum do not like AI.
I think some of the reasons that they gave were bullshit, but in fairness I have grown pretty tired of how much low-effort AI slop has been ruining YouTube. I use ChatGPT all the time, but I am growing more than a little frustrated how much shit on the internet is clearly just generated text with no actual human contribution. I don’t inherently have an issue with “vibe coding”, but it is getting increasingly irritating having to dig through several-thousand-line pull requests of obviously-AI-generated code.
I’m conflicted. I think AI is very cool, but it is so perfectly designed to exploit natural human laziness. It’s a tool that can do tremendous good, but like most things, it requires people use it with effort, which does seem to be the outlier case.
[1] basically the hall of shame for bad threads.
I don't know if anyone has been reading cover letters recently, but it seems that people are prompting the LLMs with the same shit, dusting off their hands, and thinking "done", and what the reader then sees is the same repetitive, uncreative, and instantly recognizable boilerplate.
The people prompting don't seem to realize what's coming out the other end is boilerplate dreck, and you've got to think - if you're replaceable with boilerplate dreck maybe your skills weren't all that, anyway?
The hate is justified. The hype is not.
“I didn't fully grok how tone deaf I was being though.
[…]
Seattle has talent as good as anywhere. But in San Francisco, people still believe they can change the world—so sometimes they actually do.”
Nope, still completely fucking tone deaf.
It’s like you saw all the evidence and drew the conclusion you were most comfortable with, despite what the evidence suggests.
Always amazed to see people who don't hate AI.
Author here if anyone has thoughts
Howdy! I personally don't really understand the "point" the article is trying to make. I mostly agree with your sentiment that AI can be useful. I too have seen a massive increase in productivity in my hobbies, thanks to LLMs.
As to the point of the article, is it just to say "People shouldn't hate LLMs"? My takeaway was more "This person's future isn't threatened directly so they just aren't understanding why people feel this way." but I also personally believe that, if the CEOs have their way, AI will threaten every job eventually.
So yeah I guess I'm just curious what the conclusion presented here is meant to be?
I guess in conclusion I'm saying that it's hard to build in Seattle, and that's really unfortunate.
I don't follow why it's hard to build in Seattle. Do you mean before this "AI summer" they struggled, or that with AI they have become too slow because they won't adopt it?
I get the feeling that this is supposed to be about the economics of a fairly expensive city/state and that "six-figure salary", but you don't really call it out.
If it were about the technology, then it would be no different than being a Java/C++ developer and being told that someone who does HTML and JavaScript is your equal, so pay them the same. It's not.
People get anxious when something may cause them to have to change - especially in terms of economics and the pressures that puts on people beyond just "adulting". But I don't really think you explained the why of their anxiety.
Pointing the finger at AI is like telling the Germans that all their problems are because of Jews without calling out why the Germans are feeling pressure from their problems in the first place.
It kinda seems like you're conflating Microsoft with Seattle in general. From the outside, what you say about Microsoft specifically seems to be 100% true: their leadership has gone fucking nuts and their irrational AI obsession is putting stifling pressure on leaf level employees. They seem convinced that their human workforce is now a temporary inconvenience. But is this representative of Seattle tech as a whole? I'm not sure. True, morale at Amazon is likely also suffering due to recent layoffs that were at least partly blamed on AI.
Anecdotally, I work at a different FAANMG+whatever company in Seattle that I feel has actually done a pretty good job with AI internally: providing tools that we aren't forced to use (i.e. they add selectable functionality without disrupting existing workflows), not tying ratings/comp to AI usage (seriously how fucking stupid are they over in Redmond?), and generally letting adoption proceed organically. The result is that people have room to experiment with it and actually use it where it adds real value, which is a nonzero but frankly much narrower slice than a lot of """technologists""" and """thought leaders""" are telling us.
Maybe since Microsoft and Amazon are the lion's share (are they?) of big tech employment in Seattle, your point stands. But I think you could present it with a bit of a broader view, though of course that would require more research on your part.
Also, I'd be shocked if there wasn't a serious groundswell of anti-AI sentiment in SF and everywhere else with a significant tech industry presence. I suspect you are suffering from a bit of bias due to running in differently-aligned circles in SF vs. Seattle.
I think probably the safest place to be right now, emotionally, is a smaller company. Something about the hype right now is making Microsoft/Amazon act worse. I'd be curious to hear what specifically your company is doing to give people agency.
I was under the distinct impression that Seattle was somewhat divided over 'big tech', with many long-term residents resenting Microsoft and Amazon's impact on the city (and longing for the 'artsy and free-spirited' place it used to be). Do you think those non-techies are sympathetic to the Microsofties and Amazonians? This is a genuine question, as I've never lived in Seattle, but I visit often, and live in the PNW.
> Do you think those non-techies are sympathetic to the Microsofties and Amazonians?
As somebody who has lived in Seattle for over 20 years and spent about 1/3 of it working in big tech (but not either of those companies), no, I don't really think so. There is a lot of resentment, for the same reasons as everywhere else: a substantial big tech presence puts anyone who can't get on the train at a significant economic disadvantage.
It depends on how AI affects your economy.
If you are a writer or a painter or a developer, in a city as expensive as Seattle, then one may feel a little threatened. Then it becomes the trickle-down effect: if I lose my job, I may not be able to pay for my dog walker, or my child care, or my hairdresser, or...
Are they sympathetic? It depends on how much they depend on those who are impacted. Everyone wants to get paid, but AI doesn't have kids to feed or diapers to buy.
They kind of are, though I think so many locals now work in big tech in some way that it's shifted a bit. I wish we could return to being a bit more artsy and free-spirited.
I've lived in the Seattle area most of my life and lived in San Francisco for a year.
SF embraces tech and in general (politics, etc) has a culture of being willing to try new things. Overall tech hostility is low, but the city becoming a testbed for projects like Waymo is possibly changing that. There is a continuous argument that their free-spirited culture has been cannibalized by tech.
Seattle feels like the complete opposite. Resistant to change, resistant to trying things, and if you say you work in tech you're now a "techbro" and met with eyerolls. This is in part because in Seattle if you are a "techbro" you work for one of the megacorps whereas in SF a "techbro" could be working for any number of cool startups.
As you mentioned, Seattle has also been taken over by said megacorps which has colored the impressions of everyone. When you have entire city blocks taken over by Microsoft/Amazon and the roads congested by them it definitely has some negative domino effects.
As an aside, on TV we in the Seattle area get ads about how much Amazon has been doing for the community. Definitely some PR campaign to keep local hostility low.
I'm sure the 5% employee tax in Seattle and the bill being introduced in Olympia will do more to smooth things over than some quirky blipvert will.
I think most people in Seattle know how economics works; the logic follows:

    while "techbro" don't work is true:
        if "techbro" debt > income:
            unless assets == 0:
                sellgighustle
            else:
                sellhousebeforeforeclosure
                nomoreseattleforyou("techbro")
            end
        else:
            "gigbot" isn't summoned and people don't get paid
            "techbro" health-- due to high expense of COBRA
            [etc...]
        end
    end
'How much they do for the community', like trying to buy elections so we won't tax them, the same thing Boeing and Microsoft did. Anytime our local government gets a little uppity, suddenly these big corps are looking to move, like Boeing largely did. Remember Amazon HQ2? At least part of the reasoning behind that disaster was Seattleites asking, 'What the hell is Amazon doing for us besides driving up rents and snarling traffic?'
(... and exactly how is Boeing doing since it was forced to move away from 'engineering culture' by moving out of the city where its workforce was trained and training the next generation? Oh yeah, planes are falling out of the sky and their software is pushing planes into the ground.)
Out of curiosity, is this piece just some content that you created in the hopes of boosting your company's mindshare?
I'm just really isolated right now, I've been building solo for a long time. I don't have anyone to share my thoughts with, which is something I used to really value at Microsoft.
Nope, no one does. This thread is devoid of opinion on the topic.
I think people just have a lot of frustration to get off their chest, which is fine.
Regarding "And then came the final insult: everyone was forced to use Microsoft's AI tools whether they worked or not."
As a customer, I actually had an MS account manager once yell at me for refusing to touch <latest newfangled vaporware from MS> with a ten-foot pole. Sorry, burn me a dozen times; I don't have any appendages left to care. I seriously don't get Microsoft. I am still flabbergasted anytime anyone takes Microsoft seriously.
> MS account manager once yelled at me
Presumably the account manager is under a lot of pressure internally...
Do they repeatedly yell at you?
Do you know how your <vaporware> usage was measured - what metrics was the account manager supposed to improve?
He was trying to get people to use the unnamed Azure service. I assume others like me did a trial, a POC, and immediately ran away screaming.
Would love to hear more anecdotes from former colleagues.
One fun one was that the leadership of Windows Update became obsessed with shipping AI models via Windows Update, but they can't safely ship files larger than 200 MB inside of an update.
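For anyone curious what working around a cap like that entails, here is a minimal sketch of chunking a large model into update-sized pieces with a hash manifest. Purely hypothetical: this is not how Windows Update actually ships payloads, and the 200 MB figure is just taken from the comment above.

    # Hypothetical: split a big model file into sub-200 MB chunks plus
    # a sha256 manifest, so each piece fits under an assumed per-file cap.
    import hashlib
    from pathlib import Path

    CHUNK_BYTES = 200 * 1024 * 1024  # assumed per-file cap

    def split_with_manifest(src: Path, out_dir: Path) -> list[tuple[str, str]]:
        """Write src as numbered chunks; return (filename, sha256) pairs."""
        out_dir.mkdir(parents=True, exist_ok=True)
        manifest = []
        with src.open("rb") as f:
            for i, chunk in enumerate(iter(lambda: f.read(CHUNK_BYTES), b"")):
                name = f"{src.name}.part{i:04d}"
                (out_dir / name).write_bytes(chunk)
                manifest.append((name, hashlib.sha256(chunk).hexdigest()))
        return manifest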
I like that you shared the insight. It feels like you shared a secret with the world that is not so secret if you work at Microsoft (I guess this is less about the city).
I feel bad for people who work at dystopian places where you can't just do the job, try to get ahead etc. It is set up to make people fail and play politics.
I wonder if the company is dying slowly, but with AI hype and good old foundations keeping its stock price going.
Well, I think it's interesting how much what goes on inside the major employers affects Seattle. Crappy behavior inside Microsoft is felt outside of it.
Out of curiosity, did you draft this with AI?
It has all the telltale signs: lots of em-dashes but also "punched up" paragraphs, a lot of them end with a zinger, e.g.
> Amazon folks are slightly more insulated, but not by much. The old Seattle deal—Amazon treats you poorly but pays you more—only masks the rot.
or
> Seattle has talent as good as anywhere. But in San Francisco, people still believe they can change the world—so sometimes they actually do.
Once or twice can be coincidence, but a full article of it reads a tiny bit like AI slop.
I wrote it by hand, but I had an AI do some edits. I got the em dash drilled into me by my creative writing teacher in college.
I actually think your usage is pretty different from the usual ai style, if that means anything. More traditional?
I'm not sure why you needed it for edits though, since you seem good at writing generally.
"Grabbed lunch" is an awful phrase
Oh, and there's also "grok" just few paragraphs later!
It kind of is
This person crafts quite the straw man!
> This belief system—that AI is useless and that you're not good enough to work on it anyway—hurts three groups
I don't know anyone who thinks AI is useless. In fact, I've seen quite a few places where it can be quite useful. Instead, I think it's massively overhyped to its own detriment. This article presents the author as the person who has the One True Vision, and all us skeptics are just tragically undereducated.
I'm a crusty old engineer. In my career, I've seen RAD tooling, CASE tools, no/low-code tools, SGML/XML, and Web3 not live up to the lofty claims of the devotees and therefore become radioactive despite there being some useful bits in there. I suspect AI is headed down the same path and see (and hear of) more and more projects that start out looking really impressive and then crumble after a few promising milestones.
> I don't know anyone who thinks AI is useless.
By my reading, there are several people on this discussion thread right now who think it (in the form of LLMs) is useless?
This person wrote a blog post admitting to tone-deafness in cheerleading AI and talking about all the ways AI hype has negatively impacted people's work environments. But then they wrap up by concluding that it's the anti-AI people who are the problem. That's a really weird conclusion to come to at the end of that blog post. My expectation was that the takeaway would be "We should be measured and mindful with our advocacy, read the room, and avoid aggressively pushing AI in ways that negatively impact people's lives."
AI is in the Radium phase of its world-changing discovery life cycle. It's fun and novel, so every corporate grifter in the world is cramming it into every product that they can, regardless of it making sense. The companies being the most reckless will soon develop a cough, if they haven't already.
My esteem of Seattle area engineers compared to Silicon Valley engineers has just gone up.
It's almost like the hype of AI is massively ahead of the reality, and the people being directly squeezed by that dynamic don't like how it feels.
Wanderfugl is a strange name for an "AI"-powered map. The Wandervogel movement was against industrialization and pro-nature. I'm sure they would have looked down on iPhones and centralized "AI" that gives them instructions on where to go.
Again, a somewhat positive term (if you focus on "back to nature" and ignore the nationalist parts) is taken, assimilated, and turned on its head.
The author has an unquestioned assumption that the only innovation possible is innovation with AI. That is genuinely weird. Even if one believes in AI, innovation in non-AI space should be possible, no?
Second, engineering and innovation are two different categories. Most of engineering is about... making things work. Fixing bugs, refactoring fragile code, building new features people need or want. Maybe AI products would be hated less if they were just a little less about pretending to be an innovation and just a little more about making things work.
I love AI but I find Microsoft AI to be mostly useless. You'd think that anything called Copilot can do things for you, but most of the time it just gives you text answers. Even when it is in the context of the application it can't give you better answers than ChatGPT, Claude or Perplexity. What is the point of that?
Satya has completely wasted their early lead in AI. Google is now the leader.
Finance and HR are supposed to demoralize parts of organizations asking for too many resources.
I honestly expected this to be about sanctimonious lefties complaining about a single chatgpt query using an Olympic swimming pool worth of water, but it was actually about Seattle big tech workers hating it due to layoffs and botched internal implementations which is a much more valid reason to hate it.
My buddies still or until recently still at Amazon have definitely been feeling this same push. Internal culture there has been broken since the post covid layoffs, and layering "AI" over the layoffs leaves a bad taste.
I'm stuck between feeling bad because this is my field–I spend most days worrying about not being able to pay my bills or get another job–and wanting to shake every last tech worker by the shoulders and yell "WAKE UP!" at them. If you are unhappy with what your employer is doing, you don't have to just sit there and take it because they have more power over you. You can organize.
Of course, you could also go online and sulk, I suppose. There are more options between "ZIRP boomtimes, lol, jobs for everyone!" and "I got fired and replaced with ELIZA". But are tech workers willing to explore them? That's the question.
It just feels like it's in bad taste that we have the most money and privilege and employment left (despite all of the doom and gloom), and we're sitting around feeling sorry for ourselves. If not now, when? And if not us, who?
Seattle hits a doom loop (helped by the gloom) every winter. This too shall pass.
To the extent that Microsoft pushes their employees to use all their other shitty products, Copilot seems like just another one (it can't be more miserable/broken than SharePoint).
> My former coworker—the composite of three people for anonymity—now believes she's both unqualified for AI work and *that AI isn't worth doing anyway*. *She's wrong on both counts*, but the culture made sure she'd land there.
I'm not sure they're as wrong as these statements imply?
Do we think there's more or less crap out now with the advent and pervasiveness of AI? Not just from random CEOs pushing things top down, but even from ICs doing their own gig?
Oh but we're all supposed to swoon over the author's ability to make ANOTHER AI powered mapping solution! Probably vibecoded and bloated too. Just what we need, obviously all the haters are wrong! /s
Honestly if it's using a swiss-army-knife framework it's already bloated.
I live in Seattle (well, a 20-minute ferry ride from Seattle) and I too hate AI. In fact, I have a kanji-learning app which I am trying to push onto people, and I brand it as AI-free. No AI was used to develop it, no AI was used to write content, no AI is there to "help you learn".
When I see apps like Wanderfugl, I get the same sense of disgust as OP's ex-coworker. I don't want to try this app, I don't want to see it, just get it away from me.
Seattle is going to tax the fuck out of big-tech, for better or worse.
AI, the hype-beast product and the club used on workers, is a plague that I frankly hate.
AI, the actual algorithms that generate code and analyze images, is quite an interesting underlying tech.
This isn’t really a common-folk-vs-tech-bros story. It’s about one specific part of Seattle’s tech culture reacting to AI hype. People outside that circle often have very different incentives.
Unlike Seattle, Los Angeles has few software engineers, but I would not utter "AI" at all here.
It's an infinitely moving goalpost of hate: if it's an actor, "creative", or writer, AI is a monolithic doom; next it's theoretical public policy or the lack thereof; and if nothing about it affects them, then it's about the energy use and the environment.
Nobody is going to hear about what your AI does, so don't mention anything about AI unless you're trying to earn or raise money. It's a double life.
I was just about to post that this entire story could have been completely transposed to almost every conversation I've had in Los Angeles over the past year and a half. Looks like you beat me to it!
The only difference is that I don't have the conversation, ha. I don't tell people about anything I do that's remotely close to that; I rarely even mention anything in tech. I listen to enough other conversations to catch on to how it goes; it's very easy to get roped into an AI doomer conversation that's hard to get out of.
Lots of creators (e.g., writers, illustrators, voice actors) hate "AI" too.
Not only because it's destroying creator jobs while also ripping off creators, but it's also producing shit that's offensively bad to professionals.
One thing that people in tech circles might not be aware of is that people outside of tech circles aren't thinking that tech workers are smart. They haven't thought that for a long time. They are generally thinking that tech workers are dimwit exploiter techbros, screwing over everyone. This started before "AI", but now "AI" (and tech billionaires backing certain political elements) has poured gasoline on the fire. Good luck getting dates with people from outside our field of employment. (You could try making your dating profile all about enjoying hiking and dabbling with your acoustic guitar, but they'll quickly know you're the enemy, as soon as you drive up in a Tesla, or as soon you say "actually..." before launching into a libertarian economics spiel over coffee.)
Literally everyone I know is sick of AI. Sick of it being crowbar'd into tools we already use and find value in. Sick of it being hyped at us as though it's a tech moment it simply isn't. Sick of companies playing at being forward thinking and new despite selling the same old shit but they've bolted a chatbot to it, so now it's "AI." Sick of integrations and products that just plain do not fucking work.
I wouldn't shit talk you to your face if you're making an AI thing. However I also understand the frustration and the exhaustion with it, and to be blunt, if a product advertises AI in it, I immediately do treat it more skeptically. If the features are opt-in, fine. If however it seems like the sort of thing that's going to start spamming me with Clippy-style "let our AI do your work for you!" popups whilst I'm trying to learn your fucking software, I will get aggravated extremely fast.
Oh, I will happily get in your face and tell you your AI garbage sucks. I'm not afraid of these people, and you shouldn't be, either. Bring back social pressure. We successfully shamed Google Glassholes into obscurity, we can do it again. This shit has infested entire operating systems now, all so someone can get another billion dollars, while the rest of us struggle to make rent. It's made my career miserable, for so many reasons. It's made my daily life miserable. I'm so sick and tired of it.
> "shamed Google Glassholes into obscurity"
Except it didn't stick? https://news.ycombinator.com/item?id=43088369
> This shit has infested entire operating systems now
Well, it's not the fault of a random person doing some project that may even be cool.
I'll certainly adjust my priors and start treating the person as probably an idiot. But given evidence they are not, I'm interested in what they are doing.
The thing that stops me from being outwardly hostile is that there is a minority, and it is a minor, minor minority, of applications for AI that are actually pretty interesting and useful. It's just catastrophically oversaturated with samey garbage that does nothing.
I'm all for shaming people who just link to ChatGPT and call their whatever thing AI powered. If you're actually doing some work though and doing something interesting, I'll hear you out.
Slop.
Good for them. It turns out, the common folk have more wisdom than tech bros with regard to AI.
Ah yes. The big tech employees of Amazon and Microsoft, the common folk.
This article is about how the tech bros in Seattle hate AI.
The article reports Microsoft SDEs complaining about Copilot and being forced to use it. It's "worse than competitors' tools."
No shit. But that's hardly everyone in Seattle. I'd imagine people at Amazon, or Google folks, aren't upset about being forced to use Copilot.
I wonder if I'm the guy in the bubble or if all these people are in the bubble. Everyone I know is really enjoying using these tools. I wrote a comment yesterday about how much my life has improved https://news.ycombinator.com/item?id=46131280
But also, it's not just my own. My wife's a graphic designer. She uses AI all the time.
Honestly, this has been revolutionary for me for getting things done.
I recently returned to the world of education and it's _everywhere_. I feel for those people who hate LLMs because they've already lost the war.
206dev here...
Oh yeah, call out a tech city and all the butt-hurt-ness comes out. Perfect example of "Rage Bait".
People here aren't hurt because of AI - people here are hurt because they learned they were just line items in a budget.
When the interest rates went up in 2022/2023 and the cheap money went away, businesses had to pivot their tactics while appeasing the shareholders.
Remember that time when Satya went to a company-sponsored rich-people thing with Aerosmith or whoever playing, while announcing thousands of FTEs being laid off? Yeah, that...
If your job can be done by a very small shell script, why wasn't it done before?