Using AI Generated Code Will Make You a Bad Programmer

(slopwatch.com)

52 points | by cmpit 2 days ago

53 comments

  • artemsokolov 2 days ago

    1972: Using Anything Other Than Assembly Will Make You a Bad Programmer

    1995: Using a Language with a Garbage Collector Will Make You a Bad Programmer

    2024: Using AI Generated Code Will Make You a Bad Programmer

    • dwaltrip 2 days ago

      Generalized version: Using tool to do thing makes you bad at doing thing without tool.

      I get where this is coming from and it is true sometimes (e.g. my favorite example is Google maps). But it’s quite silly to assume this for all tools and all skill sets, especially with more creative and complex skills like programming.

      Wise and experienced practitioners will stay grounded in the fundamentals while judiciously adding new tools to their kit. This requires experimentation and continual learning.

      The people whose skills will be impacted the most are those who didn’t have strong fundamentals in the first place, and only know the craft through that tool.

      Edit: forgive my frequent edits in the 10 minutes since initially posting

    • colincooke 2 days ago

      To me the issue with AI generated code, and what is different from prior innovations in software development, is that it is the wrong abstraction (or, one could argue, not even an abstraction anymore).

      Most of SWE (and much of engineering in general) is built on abstractions -- I use NumPy to do math for me, React to build a UI, or Moment to do date operations. All of these libraries offer abstractions that give me high leverage on a problem in a reliable way.

      The issue with the current state of AI tools for code generation is that they don't offer a reliable abstraction, instead the abstraction is the prompt/context, and the reliability can vary quite a bit.

      I would feel like one hand is tied behind my back without LLM tools (I use both Copilot and Gemini daily); however, the amount of code I allow these tools to write _for_ me is quite limited. I use these tools to automate small snippets (Copilot) or help me ideate (Gemini). I wouldn't trust them to write more than a contained function, as I don't trust that it'll do what I intend.

      So while I think these tools are amazing for increasing productivity, I'm still skeptical of using them at scale to write reliable software, and I'm not sure if the path we are on with them is the right one to get there.

      • danielmarkbruce a day ago

        It isn't an abstraction. Not everything is an abstraction. There is a long history of tools which are not abstractions. Linters. Static code analysis. Debuggers. Profiling tools. Autocomplete. IDEs.

        • SaucyWrong 14 hours ago

          I can’t tell if this is an argument against the parent or just a semantic correction. Assuming the former, I’ll point out that every tool classification you’ve mentioned has expected correct and incorrect behavior, and LLM tools…don’t. When LLMs produce incorrect or unexpected results, the refrain is, inevitably, “LLMs just be that way sometimes.” Which doesn’t invalidate them as a tool, but they are in a class of their own in that regard.

          • danielmarkbruce 12 hours ago

            It's not a semantic issue.

            Yeah, they are generally probabilistic. That has nothing to do with abstraction. There are good abstractions built on top of probabilistic concepts, like RNGs, crypto libraries, etc.
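
            A concrete instance of that point: Python's `secrets` module (a real stdlib API) wraps a probabilistic source in a deterministic contract — a sketch, not tied to any commenter's codebase.

```python
import secrets

# A reliable abstraction over a probabilistic source: the *contract*
# (32 hex characters of cryptographically strong randomness) is fixed,
# even though the value itself differs on every call.
token = secrets.token_hex(16)

assert len(token) == 32
assert all(c in "0123456789abcdef" for c in token)
```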

    • namaria 3 hours ago

      And we evidently have a fantastic software landscape with very few buggy products/features being introduced, superb security and transparent interoperability across diverse software domains nowadays.

    • morkalork 2 days ago

      Right, but having used assembler and C/C++ before has made me a better programmer, even if I choose to work with a higher-level language day to day so I'm more productive.

    • PeterStuer a day ago

      You forgot:

      - using a debugger will make you a bad programmer

      - using an IDE will make you a bad programmer

      - using Google will make you a bad programmer

      - using StackOverflow will make you a bad programmer

      Hint: It's not the tools, it's how you use them.

    • perihelion_zero 2 days ago

      Plato: Writing will make people forgetful.

    • m463 2 days ago

      2035: Using simplified english will make you a bad prompt engineer.

      • ted_bunny 2 days ago

        2069: Ascension will unmake you

    • rsynnott a day ago

      > Using a Language with a Garbage Collector Will Make You a Bad Programmer

      If garbage collectors only did the correct thing 90% of the time, and non-deterministically did something stupid the other 10%, then, er, yeah, it very much would!

      There's a reason that conservative GCs for C didn't _really_ catch on... (It would be unfair to describe them as being as broken as an LLM, but they certainly have their... downsides.)

    • luckman212 2 days ago

      2044: "What's a programmer?" the child asks his father innocently. "Well you see son, humans used to be needed to instruct the machines on what to do..."

      • euroderf a day ago

        "Yup, really! It used to be the other way around."

    • stonethrowaway 2 days ago

      > 1972: Using Anything Other Than Assembly Will Make You a Bad Programmer

      > 1995: Using a Language with a Garbage Collector Will Make You a Bad Programmer

      Both of these remain true today, which is why we always interview people at one layer below the requirement of the job, so they know what they're doing.

      Writing C/C++? Know what the output looks like. Using GC-based languages? Know the cleanup cycle (if any).

      I would wager the third also holds true.

      • giraffe_lady 2 days ago

        It's not true but I appreciate you leaving all of the people with experience actually building usable software for me to hire.

    • shusaku 2 days ago

      Don’t forget copy and pasting from stack overflow! I have to do it every time matplotlib cuts off part of my saved figure when using tight layout.
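
    For what it's worth, Matplotlib itself has a standard fix for that clipping: recompute the bounding box at save time. A minimal sketch (the plot contents here are made up for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is needed

import matplotlib.pyplot as plt
import os

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
ax.set_ylabel("a long label that savefig often clips")

# bbox_inches="tight" recomputes the bounding box when saving,
# which keeps labels from being cut off regardless of the layout engine.
fig.savefig("figure.png", bbox_inches="tight")

assert os.path.getsize("figure.png") > 0  # a non-empty image was written
```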

    • intelVISA 2 days ago

      2 out of 3 of those maxims are Reasonably Correct?

    • CivBase 17 hours ago

      I can trust that a higher level language will produce the correct assembly code.

      I can trust that a garbage collector will allocate and cleanup memory correctly.

      I cannot trust that an AI will generate quality code. I have to review its output. As someone who has been stuck doing nothing but review other people's code for the last few months, I can confidently say it would take me less time to code the solution myself than to read, digest, provide feedback for, and review changes for someone else's code. If I cannot write the code myself, I cannot accurately review its output. If I can write the code myself, it would be faster (and more fulfilling) to do that than review output from an AI.

      • helf 11 hours ago

        [dead]

    • BillLucky 2 days ago

      Haha, so funny

  • ErikBjare 2 hours ago

    I take issue with the first and major point: "You Rob Yourself of Learning Opportunities"

    My experience has been quite the opposite: it speeds up my rate of work as I get answers faster, and thus gives me more learning opportunities in a workday.

  • budududuroiu 2 days ago

    Copilot and AI-generated code have 100% degraded code quality, judging simply by the now-infamous metric of code churn in repositories shooting up after Copilot was introduced.

    The thing that bothers me is that your colleagues will use AI, and your bosses will see it as progress, yet not realise that the time saved now is going to be wasted down the road.

  • throw16180339 2 days ago

    My ChatGPT use falls into two categories:

    1. Having it perform mechanical refactorings where there's no creativity involved. I'm hacking on a program that was written in the early 2000s. It predates language support for formatted IO. I had ChatGPT replace many manual string concatenations with the equivalent of sprintf. It's easy enough to test the replacements at the REPL.

    2. Questions that would be unreasonable or impossibly tedious to ask a person.

    Describe in detail the changes from language version X to language version Y.

    Which functions in this module can be replaced by library functions or made tail recursive? This definitely misses things, but it's a good starting point for refactoring.

    Is there a standard library equivalent of this function? I regularly ask it this, and have replaced a number of utility functions.

    Give examples for using function.
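
    The mechanical rewrite described in (1) is the kind of thing that's easy to verify at the REPL; a sketch in Python for illustration, since the original program's language isn't named (both functions here are hypothetical):

```python
# Before: manual string concatenation, in the style the old codebase used.
def describe_before(name, count):
    return "user " + name + " has " + str(count) + " items"

# After: the sprintf equivalent (printf-style formatting) -- a purely
# mechanical rewrite whose output can be compared against the original.
def describe_after(name, count):
    return "user %s has %d items" % (name, count)

# The two forms agree, which is what makes the refactor easy to test.
assert describe_before("ada", 3) == describe_after("ada", 3)
```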

    • rerdavies a day ago

      You might want to try Claude 3.5 Sonnet instead of ChatGPT. Claude 3.5 Sonnet seems to be a generational advance over earlier AIs like ChatGPT. I find that I can reliably use it in cases where ChatGPT produces pure hallucinatory nonsense. I've only seen Claude hallucinate once, and that was after I had incorrectly told it that its answer was wrong. It's not the only AI that does code well, but at this particular snapshot in time (Oct 2024), general consensus seems to be that it is currently the best.

      It has fundamentally changed the way I write code. And I'm still exploring the boundaries of what kinds of tasks I can feed it. (45 year veteran senior programmer).

      Sorry for the long post, but I find it difficult to briefly make the case for why Claude 3.5 Sonnet (and other similarly modern and capable AIs) is fundamentally different from smaller and older AIs when it comes to use as a coding assistant.

      I do use it for simple tedious things like "Convert an ISO date string in GMT to a std::chrono::system_clock::time_point" (which requires use of three generations of C/C++/POSIX library functions and would take about 15 minutes of wading through bad documentation and several false starts to get right).

      But I have also had success with larger fragments ranging up to 100 or 200 lines of code. It still has distinct limitations (bizarre choices of functional composition, and an unpleasant predilection for hard-coded integer constants), which can be overcome with supplementary prompts. It seems to be brilliant tactically, and shows a terrifyingly broad knowledge of APIs and libraries across the three platforms I use (android/javascript, typescript/React/MUI, C++/linux), but doesn't yet have a good sense of strategic coding (functional and class decomposition, etc.), and usually requires three or four supplementary prompts to get code into a state that's ready to copy and paste (and refactor some more), e.g. "Wrap that up in a class; use CamelCase for class names, and camelCase for member names," etc.

      And I have also used it to help me find solutions to problems that I've spent months on ("android, java: unable to connect to an IoT wi-fi hotspot without internet access when the data connection is active"; Claude: "blah blah ... use connectivityManager.bindProcessToNetwork()"!!!).

      Or "C++/linux/asound library: why does this {3000 lines of code} fail to reliably recover from audio underruns".

      And had some success with "find the wild memory write in this 900 line code fragment". Doesn't always work, but I've had success with it often enough that I'm going to use it lot more often.

      And used it to write some substantial bash scripts that I just don't have the skills or literacy to deal with myself (long time Windows programmer, relative newcomer to linux).

      • sireat a day ago

        What is a good way to integrate Claude 3.5 Sonnet into Visual Studio Code based workflow?

        That is, how do you get the tight integration that Copilot offers, but with Claude?

        I've been using Github Copilot since the technical preview in mid 2021 and it too changed fundamentally how I write code. Perhaps I've gotten too used to it.

        I find that regular LLM chat interfaces break the flow for me.

        I usually use Copilot as a rubber ducky or an eager junior assistant of sorts. That is, I would write

          //Converting an ISO date string in GMT to a  std::chrono::systemclock::timepoint 
        
        and then it is Tab time.

        If the result is not so good it means my requirements were not detailed enough. Rarely will it be completely unusable.

        As a side effect I am forced to document more of my work, which is a good thing.

  • Alifatisk 2 days ago

    This has been my concern for a while: that switching from basic autocomplete to AI-generated code, Copilot, or the Cursor IDE will make me stupid and forget how to write the most basic stuff without those tools. I am very scared of losing my ability to write code by myself, thanks to AI.

    If it wasn't for that, I'd switch to Cursor or use Copilot in an instant, because honestly, I've asked some AI tools like Claude for help a couple of times, and those were tasks that I knew would normally need more than one person to complete; but with Claude, I solved them in a couple of hours. Incredible stuff!

    Also, if it wasn't obvious, I am not claiming that this is the case; these are just my feelings. I would love to be convinced otherwise, because then I might switch and try out the QoL luxury others are enjoying.

    • viraptor a day ago

      > and forget how to write the most basic stuff without needing those tools again

      This is a reality for some of us already - but it's not about tools. I'm working with 5+ different languages and likely 20+ projects. I had to look up in the docs how to (for example) lowercase a string every single time, because every language has to invent its own name/case for it. Now I'm free - just write tolower() and it gets fixed. New string array - ["foo"] and it gets fixed. Etc., etc.

      There's a huge number of things that it isn't necessary to remember, but that you just do remember if you see them consistently every day. Now I'm free of that. If I ever need to do them manually again, I'll check the docs. But today I'm enjoying the time saving.

  • marginalia_nu 2 days ago

    Yeah I don't think this is a good take.

    In a learning context, sure, you probably should not be using copilot or similar, the same way you shouldn't be using a calculator when doing your basic arithmetic homework.

    Beyond that, this just seems like a classic scrub mentality hangup. If a tool is useful, you should use it where appropriate. You'd be a fool not to. If it's not useful, then don't use it.

  • Rzor 2 days ago

    I guess you could extrapolate this in _some_ scenarios as: not using AI-generated code could get you fired. The conversation about how people are doing their work 90% faster feels exaggerated, but 20-30%? I can definitely vouch for that. Some employers will certainly want you to deliver results faster, especially if they are seeing results with your colleagues.

    Buckle up, LLMs are here to stay and will likely continue improving for a while before they plateau.

  • idopmstuff 2 days ago

    Yeah, probably true, but joke's on you - I'm just a B2B SaaS PM so making me a bad programmer is a huge step up!

    But in all seriousness, these models are getting to the point where they're really useful for me to just build one-off tools for my own use or to prototype things to show other people what I'm looking for (like an interactive mockup). That's the power of turning a non-programmer into a bad programmer, and it's certainly worth something!

    • budududuroiu 2 days ago

      This sucks so absolutely bad because even a shitty demo or POC sticks, and now you’re stuck with that, when it would’ve been faster and easier to just rewrite from scratch.

      Also exacerbates the problem of A teams that get assigned to greenfield work and B teams that thanklessly maintain and actually productionise said greenfield work

  • cranberryturkey 2 days ago

    fun fact: after 25 years as a software engineer in silicon valley, I'm convinced nobody cares about code quality and they never have.

    • everforward 2 days ago

      They do, but very few people are good enough to either write a pull request or review one well. I wouldn’t put myself in the really good category for either.

      Most people write pull requests that are scoped too poorly to tell what they’re doing. Like I get a single function with unit tests, so the best I can do for a review is check whether there are any obvious missed edge cases for a function whose purpose I don’t understand.

      On the review side, most people review by doing basically what a linter does. I joke with people that if they want to nitpick my variable names then I’ll start DMing them to ask what name they want every time I need a variable. A meaningful review would analyze whether abstractions are good, whether there is behavior that relies on an unspecified part of an abstraction (timing), etc. Nobody does those.

    • anonzzzies 2 days ago

      Some care, but most don't or can't, because talking about code quality tells managers that you could be doing more, faster and cheaper - which is what's relevant to the bottom line. In large corps, everything is slow and expensive as it is, so 'wasting time' on things that are hard/impossible to measure is not a great sell.

    • stuckinhell 2 days ago

      I have 23 years,and I'm convinced too.

      • mewpmewp2 2 days ago

        Also most attempts at code quality lead to even worse code - into overengineered abstraction layers that no one will be able to adjust once it's inevitably realized that the assumptions were all wrong.

        • fragmede 2 days ago

          Totally. YAGNI - you ain't gonna need it. That beautiful framework you spent extra weeks writing, designing from first principles, turns out to be the wrong layer of abstraction because of a bad assumption: the client left out critical business logic details, or there was a pivot to a more popular feature of the program. The only question then is how to get out of that tech debt.

      • souldeux 2 days ago

        Only a dozen but same. Just ship the shit. Quality never mattered.

    • Koshkin 2 days ago

      Yeah, no, things have changed in the last 30 years...

      • anonzzzies 2 days ago

        How/where? We see the inside of a lot of companies, as is the nature of our business; almost no one cares. Some say they care, but in the end it's money and shipping fast. If code quality doesn't come at the same price, they feel they are overpaying and could be shipping faster.

  • rerdavies a day ago

    On the other hand, not using tools that increase your productivity by up to 50% will make you an unemployed programmer. <shrugs>

  • chairmansteve 2 days ago

    Using an excavator will make you bad at shovelling.

  • BugsJustFindMe 2 days ago

    > You May Become Dependent on Your Own Eventual Replacement

    If you're going to be eventually replaced, and I absolutely believe that even the best of us will, you may as well get in on the ground floor to extract value for a bit before that happens.

    Not writing your own code doesn't need to mean turning your brain off. You still need to look at what came out, understand it, and spot where it didn't match your needs.

    • baw-bag 2 days ago

      I agree with that entirely. Using the currently available tools to make your life easier while picking up a decent pay check is a no-brainer. I expect my next job will primarily involve using said tools, followed by complete redundancy.

      By that point, having never used any of the tools makes you almost no different from anyone off the street.

      In a way I welcome it. Writing the same menial code as everyone else slightly differently becomes a pretty stale existence.

  • JohnMakin 2 days ago

    While I agree with the spirit of this post, it seems a bit misguided on several points.

    1) I do not believe AI will ever replace programming as a practice, because people will still need to read/review the code (and no, I don't personally believe LLMs are going to be able to do that themselves in the vast majority of cases)

    2) while the "script kiddie" characterization is a bit of an unfair generalization, there is some truth to this. I disagree that using AI to generate code puts you in that realm automatically, but I have seen quite a few cases of this actually happening to give this point some merit.

    3) Using AI generated code atrophies your skills no less than using someone's imported library/module/whatever. Yes, I probably couldn't write a really good merge sort in C off the top of my head anymore without thinking through it, but I don't really have to, because a bazillion people before me have solved that problem and created abstractions over it that I can use. It is not inherently bad to use other people's code, the entire software world is built on that principle. In fact, it's an extremely junior mindset (in my view) that all code you use must be written by your own hand.

    4) "code being respected" is not really a metric I'd ever go for, and I'm not sure in my career so far I've ever seen someone push a big pull request and not have a bazillion nitpicky comments about it. Respecting other people's code doesn't seem to be very common in the industry. I struggle to think why I personally would even want that. Does it work? Is it readable/maintainable by someone other than me? Is it resilient to edge cases? If all yes, good, that is all I really care about.

    5) > If you're someone who has no actual interest in learning to code, and instead see AI as more of a freelancer—telling it stuff like "make a kart racer game," and "that sucks, make it better"—then none of this really applies to you.

    I mean, sure. I have very little interest or joy in "coding." I like building, and coding is a means to that end. Again, seems like a very junior mindset. I know people do find an enormous amount of joy in it for the sake of it, I am not one of those people, and that's fine. Usually it drives me to create better abstractions and automation so I don't have to write more code than I want to.

    • cheevly 2 days ago

      For the record, some of us have tools which comprehensively solve all of your concerns (even #1). Generating 100% reliable self-analyzing deterministic code is already possible and will become mainstream within the next two months. When you see how it’s done, you’re going to smack yourself in the forehead. Programmers in particular seem extremely vulnerable to dismissing technology that they are not intimately familiar with.

      • howenterprisey a day ago

        Interesting, I would've thought that to be impossible. Would you happen to have any places I can learn more about tools that can do that?

      • JohnMakin a day ago

        What kind of code? what domains?

  • anarticle 2 days ago

    TL;DR: shipped

    More specifically, I think code quality is a luxury not everyone has if you work for dumb corpos who think that moving the Gantt chart block left will speed up development.

    The answer there is probably don't work for those people, but salaries cap out at some point and the allure of megacorps is there.

    I'm a CS old head, who has manually allocated / managed memory, and built what would be considered stupid data structures to support scientific efforts.

    For me, using AI and getting 0 to 1 experience in languages/frameworks I don't know is ultra. Combining those skills has made me some money in shipping small software, which has been fun.

  • m2024 2 days ago

    [dead]

  • mjtechguy 2 days ago

    Using AI generated code will also make me a programmer I could never be. Been an infra and cloud guy, so this has been a game changer for me to actually create MVPs of things that have only ever existed in my head. I love it.