190 comments

  • dcastonguay 5 hours ago

    My current opinion is that AI is just a thing that is going to further amplify our existing tendencies (on a person-to-person basis). I've personally found it to be extremely beneficial in research, learning, and the category of "things that take time because of the stuff you have to mechanically go through and not because of the brainpower involved". I have been able to spend so much more time on things that I feel require more human thinking and on stuff I generally enjoy more. It has been wonderful and I feel like I've been on a rocket ship of personal growth.

    I've seen many other people who have essentially become meatspace analogues for AI applications. It's sad to watch this happen while listening to many of the same people complain about how AI will take their jobs, without realizing that they've _given_ AI their jobs by ensuring that they add nothing to the process. I don't really understand yet why people don't see that they are doing this to themselves.

    • zelphirkalt 6 minutes ago

      The problem is that many people are also incapable of adding much to the process. We had that kind of situation long before "AI". There are tons of gatekeepers and other kinds of people out there who are a net negative wherever they are employed, either by doing bad work that someone else has to clean up, or by destroying work culture with silly, shortsighted dogmatism about how things must work, clueless middle management, and so on. Maybe with "AI" a few more of these people are revealed, but the problem stays the same: where do we put them all? What task can we give them that is not dehumanizing, where they will be a positive instead of a net negative to society? Or do we leave it up to chance that one day they will find something they are actually good at, something that doesn't result in a net negative for humanity? What is the future for all these people? UBI and letting them figure it out doesn't look like such a bad idea.

    • password54321 5 hours ago

      The post was largely about young people who are growing up with these tools and are still at the stage of developing their habits and tendencies. Personally I am glad I learnt programming before LLMs, even if it meant tedious searches on Stack Overflow, because I didn't feel like I was coming up against a wave of new technology when it came to future job searches. Having done so, I can understand and appreciate the intrinsic value of learning these things, but Western culture is largely about extrinsic values, which may lead to the next generation missing out on learning certain skills.

      • agentcoops 31 minutes ago

        I agree with you on the question of extrinsic values and do not envy people who are starting college right now, trying to make decisions about an extraordinarily unclear future. I recently became a father and I try to convince myself that in eighteen years we'll at least finally know whether it has all been hype or not.

        However, on the intrinsic value of these new tools when developing habits and skills, I just think about the irreplaceable role that the (early) internet played in my own development and how incredible an LLM would have been. People are still always impressed whenever a precocious high schooler YouTubes his way to an MVP SaaS launch -- I hope and expect the first batch of LLM-accompanied youth to emerge will have set their sights higher.

        • johnnyanmac 16 minutes ago

          >However, on the intrinsic value of these new tools when developing habits and skills, I just think about the irreplaceable role that the (early) internet played in my own development and how incredible an LLM would have been.

          I don't know about that. The early internet taught me I still needed to put in work to find answers. I still had to read human input (even though I lurked) and realize a lot of information was lies and trolls (if not outright scams). I couldn't just rely on a few sites to tell me everything and had to figure out how to refine my search queries. The early internet was like being thrown into a wilderness in many ways: you pick up survival skills as you go along even if no one teaches you.

          I feel an LLM would temper all the curiosity I gained in those times. I wouldn't have the discipline to use an LLM the "right way". Clearly many adults today don't either.

      • jstanley 4 hours ago

        Personally I am glad I learnt programming before StackOverflow! Precisely because it meant I had to learn to figure things out myself.

        I still use StackOverflow and LLMs, but if those things were available when I was learning I would probably not have learnt as much.

        • obscurette 3 hours ago

          My professor at uni said that people who learned to search for information before the internet came along are the best at searching for information on the internet. My personal experience agrees, and I'm very glad I'm one of those people.

        • mcny 4 hours ago

          I started programming before Stack Overflow and was never any good (still am not to this day), but I was always scared of asking questions there. I felt like there was a certain amount of homework expected when you ask a question, and by the time I had done enough work to post one, it was usually moot, because I would have solved my problem by stringing together two or more existing Stack Overflow questions to understand it better.

          The change with LLMs is that I can now just ask my hare-brained questions first and figure out why each was a stupid question later.

          • catlifeonmars an hour ago

            I originally learned programming by answering questions on StackOverflow. It was (unsurprisingly) quite brutal, but forced me to dive deep into documentation and try everything out to make sure I understood the behavior.

            I can’t speak to whether this is a good approach for anyone else (or even for myself ~15 years later) but it served to ingrain in me the habit of questioning everything and poking at things from multiple angles to make sure I had a good mental model.

            All that is to say, there is something to be said for answering “stupid” questions yourself (your own or other people’s).

          • latexr an hour ago

            Of course you’re supposed to put in work, that’s how you learn. You have to think through your problem, not just look at the back of the math book to copy the answer.

            The problem with Stack Overflow is not that it makes you do the work—that’s a good thing—but that it’s too often too pedantic and too inattentive to the question to realise the asker did put in the work, explained the problem well, and the question is not a duplicate. The reason it became such a curmudgeonly place is precisely due to a constant torrent of people treating it like you described it, not putting in the effort.

          • skydhash 4 hours ago

            I can't check, but I don't think I've ever asked a question on StackOverflow, or even Reddit. Maybe I'm lucky, but my searches have always given me enough leads to find my own solutions, or at least where to look for them. There's a lot of documentation and tips floating around the internet. And for a lot of technologies, the code is available as well (or a debugger).

            • gertlex an hour ago

              I always enjoyed documenting things, so I got great delight out of carefully asking, on Stack Overflow, the few questions I was really stuck on... and then, half the time, coming up with a solution later and adding it as a good answer.

              (Mostly this ended up being weird Ubuntu things relating to use cases specific to robots... not normal programming stuff.)

        • crazygringo an hour ago

          Good lord, I'm not glad.

          It was horrible. Because it wasn't about "figuring things out for yourself." I mean, if the answer was available in a programming language or library manual, then debugging was easy.

          No, the problem was that you spent 95% of your debugging time working around bugs and unspecified behavior in the libraries and APIs. Bugs in Windows, bugs in DLLs, bugs in everything.

          Very frequently something just wouldn't work even though it was supposed to, you'd waste an entire day trying to get the library call to work (what if you called it with less data? what if you used different flags?), and then another day rewriting your code to use a different library call, and praying that worked instead. The amount of time utterly wasted was just massive. You didn't learn anything. You just suffered.

          In contrast, today you just search for the problem you're encountering and find StackOverflow answers and GitHub issues describing your exact problem, why it's happening, and what the solution is.

          I'm so happy people today don't suffer the way we used to suffer. When I look back, it seems positively masochistic.

          • johnnyanmac 11 minutes ago

            >I'm so happy people today don't suffer the way we used to suffer.

            TBF, bugs in some framework you're using still happen. The problem wasn't eliminated, just moved to the next layer.

            Those debugging skills are the most important part of working with legacy software (which is what nearly all industry workers work on). It sucks, but it's necessary for success.

          • 1718627440 19 minutes ago

            It's true that the time spent figuring out the buggy behaviour would now be less, but a buggy API doesn't go away just because it's documented on GitHub. If you are able to change the source code now, you would have been able to fix it back then too.

        • 1718627440 4 hours ago

          > Personally I am glad I learnt programming before StackOverflow!

          I have not, but at the beginner level you don't really need it; there are tons of tutorials and language documentation that are easier to understand. Also, beginners feel absolutely discouraged from asking anything, because even if the question is not a real duplicate, you use all the terms wrong, get downvoted to hell, and then your question is marked as a duplicate of something that doesn't even answer it.

          Later it's quite nice to ask for clarification of, e.g., the meaning of something specific in a protocol or the behaviour of a particular program. But quite quickly you stop getting any satisfying answers, so you resort to just reading the source code of the actual program, and you're surprised how easy that actually is. (I mean, it's still hard every time you start with a new, unknown program, but it's easier than expected.)

          Also, when you implement a protocol, asking questions on StackOverflow doesn't scale. Not because of the time you'd wait for answers; even if that were zero, it would still take too long, and be deeply unsatisfying, to develop a holistic enough understanding to write the code. So you start reading the RFCs and quickly appreciate how logical and understandable they are. At first you curse how unstructured everything seems, and then you recognize that the order follows what you need to write, so you can just trust the text and write the algorithm down. Then you see that the order in which the protocol is described actually works quite well for async code, and you wonder what the legacy code was doing, because not deviating from the standard is actually easier.

          At some point you won't understand the standard, there will be no answer on StackOverflow, and the LLM will just agree with every conflicting interpretation you suggest, so you hate everything and start reading other implementations. So no, you still need to figure out a lot for yourself.

    • mnky9800n 5 hours ago

      It fulfills Steve Jobs' promise that a computer is a bicycle for the mind. It is crazy to me that all these people think they are going to lose their ability to think. If you didn't lose your ability to think to scrolling social media, then you aren't going to lose it to AI. However, I think a lot of people did lose their ability to think by scrolling social media, and that is problematic. What people need to realize is that they have agency over what they put in their minds, and they probably shouldn't take in massive amounts of algorithmically determined content without first considering whether it's going to push them in a particular direction of beliefs, purchases, or lifestyle choices.

      • mtillman 5 hours ago

        1. https://www.researchgate.net/publication/255603105_The_Effec...

        2. https://pubmed.ncbi.nlm.nih.gov/25509828/

        3. https://www.researchgate.net/publication/392560878_Your_Brai...

        I’m pretty convinced it should be used to do things humans can’t do instead of things humans can do well with practice. However, I’m also convinced that Capital will always rely on Labor to use it on their behalf.

        • mnky9800n 4 hours ago

          I don't think a couple of papers counts for much of anything; a couple of scientific articles isn't enough to form an opinion about a topic. The evidence shown in those papers is certainly something to think about, but there hasn't been enough time or effort put into understanding how AI technologies aid technical work and the ability to solve more complex problems to support any claim that they are inherently bad.

          Compare this body of work to the body of work that has consistently shown, over many years, that social media is bad for you. You will see a difference. Or, if you prefer to focus on something more physical: anthropogenic climate change, the evidence for the standard model of particle physics, the evidence for plate tectonics, etc.

          I'm not saying we shouldn't be skeptical that these technologies might make us lazy or unable to perform critical functions of technical work. I think there is a great danger that these technologies essentially fulfill the promise of data science across industries, that is, a completely individualized experience to guide your choices across digital environments. That is not the world I want to live in. But I also don't think my mind is turning to mush because I asked Claude Code to write some code for a catboost model, saving me the few hours it would have taken to try out some idea.

          • visarga 3 hours ago

            Time goes forward; in the future, when will you be in a situation where you can't access an LLM? Better to use LLMs as much as possible to learn the skills of controlling agents, scaffolding constraints, writing docs, and breaking problems down in such a way that AI can solve them. These are the skills that matter now.

            We don't practice using the assembler much either, or the slide rule. I also lost the skill of starting the old Renault 12 I owned 30 years ago; it is a complex process, believe me, and some owners reached artist level at it.

            • johnnyanmac 4 minutes ago

              >in the future when will you be in a situation you can't access a LLM?

              In an interview setting, while in a meeting, if you're idling on a problem while traveling or doing other work, while you are in an area with weak reception, if your phone is dead?

              There are plenty of situations where my problem solving does not involve being directly at my work station. I figured out a solution to a recent problem while at the doctor's office and after deciding to check the API docs more closely instead of bashing my head on the compiler.

              >We don't practice much using the assembler either, or the slide ruler.

              Treating your ability to research and critically think as yet another tool is exactly why I'm pessimistic about the discipline of the populace using AI. These aren't skills you use 9-5 then turn off as you head back home.

            • mnky9800n 3 hours ago

              Exactly. Why should I not learn new things and how they work? What is the point of living if not learning new things?

        • BolexNOLA 5 hours ago

          That's always been the issue for me. It's not the technology itself; it's that virtually the entire initiative is being pushed by venture capital with a "we want to make more money than God" mission, which means they call every person calling for caution a Luddite or anti-progress. That's basically how everything on the Internet has expanded over the last 20 years, and the results have been, if I'm being insanely generous, mixed at best.

          • DrewADesign 5 hours ago

            I also think that being a little more cautious would have yielded many, if not most, of the benefits we've received while avoiding many of the downfalls. Most of these companies knew their products were negatively affecting their users -- e.g. Instagram and teen girls' self-esteem -- but actively suppressed it because it would inhibit making dump trucks full of money off of it. People who stand to make that money will ALWAYS say you're going to ruin everything if you do anything at all to impede progress -- that's ridiculous. The people who use their turn signal and drive within 10 mph of the speed limit still reach their destination, and with dramatically less risk than the people who drive pedal-to-the-metal, tailgating, with that self-absorbed fuck-everybody-else attitude.

          • TheOtherHobbes 4 hours ago

            I think it's more "We want to use money to become gods", and AI is very much part of that.

            They're going to be rather surprised when this doesn't work as planned, for reasons that are both very obvious and not obvious at all. (Yet.)

      • jhbadger 5 hours ago

        Being a "bicycle for the mind" is a fine thing for technology to be. The problem is just as with bicycles that's too much work for a lot of people and they would prefer "cars for the mind" in which they have to do nothing.

        • daxfohl 3 hours ago

          Even cars are too much. Give me a Waymo. Or better yet just let me stay home and doom scroll while my life gets delivered to my doorstep.

          Geesh, doorbell again. Last mile problem? C'mon. Whoever solves the last 30 feet problem is the real hero.

      • MattRix 4 hours ago

        It's less like a bicycle for the mind, and more like a bus. Sure, you're gonna get there quickly, but you'll end up at the same place as a bunch of other people, and you won't remember the route you took to get there.

      • AlecSchueler 5 hours ago

        > If you didn't lose your ability to think to scrolling social media ...

        Didn't we?

        • jader201 5 hours ago

          > However, I think a lot of people lost their ability to think by scrolling social media and that is problematic.

          • KerrAvon 4 hours ago

            Specifically, most of the billionaires. They’re all deranged lunatics now, between COVID, social media, and being told they couldn’t say the n-word with impunity for two weeks back in 2020.

      • bgwalter 5 hours ago

        Except that generative "AI" is a tricycle for the mind that prevents you from ever learning how to ride a bicycle.

        • j4coh 4 hours ago

          It’s more like a bus where you can sit there and stare out the window if you like. But somehow it also gives people the illusion that they are driving the bus, giving them a sense of self satisfaction while robbing them of the opportunity to actually create or learn something.

    • catigula an hour ago

      I think what you will find is that many people fundamentally don't care about their job for various reasons, chief of which is most likely that they don't feel fairly compensated, and thus outsourcing their labor to AI isn't the fundamental identity transplant you think it is.

  • manmal 7 minutes ago

    The article sets out with an urban legend that doesn’t hold water:

    > More time is more tension; more pain is more gain.

    Whereas this meta-analysis [1] found that:

    > Results indicate that hypertrophic outcomes are similar when training with repetition durations ranging from 0.5 to 8s.

    Maybe the author should have chosen the amount of training resistance (perceived effort?) for the intro allegory. That would have made their point just as well.

    1: https://pubmed.ncbi.nlm.nih.gov/25601394/

  • rimeice 6 hours ago

    I'm undecided on this. Initially I was on the "this is bad, we're outsourcing our thinking" bandwagon; now, after using AI for lots of different types of tasks for a while, I feel like I've generally learnt so much, so much more quickly. Would I recall it all without my new crutch? Maybe not, but I may not have learnt it in the first place without it.

    • zdragnar 5 hours ago

      Think of it like alcohol.

      Some people benefit from the relaxing effects of a little bit. It helped humanity get through ages of unsafe hygiene by acting as a sanitizer and preservative.

      For some people, it is a crutch that inhibits developing safe coping mechanisms for anxiety.

      For others it becomes an addiction so severe, they literally risk death if they don't get some due to withdrawal, and death by cirrhosis if they keep up with their consumption. They literally cannot live without it or with it, unless they gradually taper off over days.

      My point isn't that AI addiction will kill you, but that what might be beneficial might also become a debilitating mental crutch.

      • JumpCrisscross 4 hours ago

        > Think of it like alcohol

        Better analogy is processed food.

        It makes calories cheaper, it's tasty, and in some circumstances (e.g. endurance sports or backpacking) it materially enhances what an ordinary person can achieve. But if you raise a child on it, to where it's what they reach for by default, they're fucked.

    • techjamie 5 hours ago

      It comes down to how you use it, whether you're just getting an answer and moving on, or if you're getting an answer and then increasing your understanding on why that's the correct answer.

      I was building a little roguelike-ish sort of game for myself to test my understanding of Raylib. I was using as few external resources as possible outside of the cheatsheet for functions, including avoiding AI initially.

      I ran into my first issue when trying to determine line of sight. I was naively calculating a line along the grid and tagging cells for vision if they didn't hit a solid object, but this caused very inconsistent sight. I tried a number of things on my own and realized I had to do some research.

      All of the search results I found used raycasting, but I wanted to see if my original idea had merit, and didn't want to do raycasting. Finally, I gave up my search and gave Copilot a function to fill in, and it used Bresenham's line algorithm. It was exactly what I was looking for, and it also taught me why my approach didn't work consistently: there's a small margin of error when calculating a line across a grid, which Bresenham accounts for.
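
      For reference, here's a minimal sketch of the integer-only Bresenham traversal (my own illustration, not the code Copilot generated; the names and the solid[y][x] grid convention are assumptions):

          # Sketch of Bresenham's line algorithm for grid line of sight.
          # Illustrative only: function names and the solid[y][x] convention
          # are assumptions, not Copilot's actual output.
          def bresenham(x0, y0, x1, y1):
              """Yield every grid cell on the line from (x0, y0) to (x1, y1)."""
              dx, dy = abs(x1 - x0), -abs(y1 - y0)
              sx = 1 if x0 < x1 else -1
              sy = 1 if y0 < y1 else -1
              err = dx + dy  # integer error term tracks distance from the true line
              while True:
                  yield x0, y0
                  if (x0, y0) == (x1, y1):
                      return
                  e2 = 2 * err
                  if e2 >= dy:  # horizontal step keeps us closest to the line
                      err += dy
                      x0 += sx
                  if e2 <= dx:  # vertical step
                      err += dx
                      y0 += sy

          def has_line_of_sight(solid, start, end):
              # solid[y][x] is True for cells that block vision
              return all(not solid[y][x] for x, y in bresenham(*start, *end))

      The integer error bookkeeping is the whole trick: a naive floating-point line walk accumulates rounding drift and can skip or double-count cells, which is exactly the inconsistency I was seeing.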

      Most people, however, won't take interest in why the AI answer might work. So while it can be a great learning tool, it can definitely be used in a brainless sort of way.

      • wizzwizz4 4 hours ago

        This reminds me of my experience using computer-assisted mathematical proof systems, where the computer's proof search pointed me at the Cantor–Schröder–Bernstein theorem, giving me a great deal of insight into the problem I was trying to solve.

        That system, of course, doesn't rely on generative AI at all: all contributions to the system are appropriately attributed, etc. I wonder if a similar system could be designed for software?

      • juped 5 hours ago

        Now imagine how much better

        - the code

        - your improvement in knowledge

        would have been if you had skipped copilot and described your problem and asked for algorithmic help?

        • bongodongobob 21 minutes ago

          Now imagine that he's interested in finishing his game, not the intricacies of raycasting algorithms.

    • jacquesm 6 hours ago

      You are not necessarily typical.

      • kannanvijayan 5 hours ago

        Discussing this in terms of anecdotes about whether people will use these tools to learn, or as mental crutches... seems to be the wrong framing.

        Stepping back - the way fundamental technology gets adopted by populations always has a distribution between those that leverage it as a tool, and those that enjoy it as a luxury.

        When the internet blew up, the population of people that consumed web services dwarfed the population of people that became web developers. Before that when the microcomputer revolution was happening, there were once again an order of magnitude more users than developers.

        Even old tech - such as written language - has this property. The number of readers dwarfs the number of writers. And even within the set of all "writers", if you were to investigate most text produced, you'd find that the vast majority of it falls into that long tail of insipid banter, gossip, diaries, fanfiction, grocery lists, overwrought teenage love letters, etc.

        The ultimate consequences of this tech will depend on the interplay between those two groups - the tool wielders and the product enjoyers - and how that manifests for this particular technology in this particular set of world circumstances.

        • jacquesm an hour ago

          > The number of readers dwarfs the number of writers.

          That's a great observation!

          'Literacy' is defined as the ability to both read and write. People as a rule can write: even if it isn't a novel worth publishing, they do have the ability to encode a text on a piece of paper. It's a matter of quality rather than ability (at least in most developed countries, though even there some people can not read or write).

          So I think you could fine-tune that observation to 'a limited number of people produce most of the writing'. Observing Wikipedia, or any bookstore, would seem to confirm that; if you take HN as your sample base, it holds true there too. If this goes for one of our oldest technologies, it should not be surprising that on a forum dedicated to creating businesses and writing, the ability to both read and write is taken for granted. But it shouldn't be.

          The same goes for any other tech: the number of people using electronics dwarfs the number of circuit designers, the number of people using buildings dwarfs architects and so on, all the way down to food consumption and farmers or fishers.

          Effectively this says: 'we tend to specialize' because specialization allows each to do what they are best at. Heinlein's universal person ('specialization is for insects') is an outlier, not the norm, and probably sucks at most of the things they claim to have ability for.

          • 1718627440 27 minutes ago

            > Heinlein's universal person ('specialization is for insects') is an outlier, not the norm, and probably sucks at most of the things they claim to have ability for.

            This is quoted elsewhere in this thread (https://news.ycombinator.com/item?id=45482479). Most of the things listed are stuff you will do at some point in your life, that are socially expected from every human as part of human life, or that you do daily. It also only says you should be able to do them; you don't need to be good at them. But should the case arise that you are required to do one of them, you should be able to deal with it.

      • add-sub-mul-div 5 hours ago

        Right. It doesn't matter how smart you still are if the majority of society turns into Idiocracy. Second, we're all at risk of blind spots in estimating how disciplined we're being about using the shortcut machine the right way. Smart people like me, you, and the grandparent commenter aren't immune to that.

    • drbojingle 6 hours ago

      Agreed. I've engaged with different tech since moving things along is now easier.

    • tkgally 5 hours ago

      That’s the problem, I think: Using AI will make some people stupider overall, it will make other people smarter overall, and it will make many people stupider in some ways and smarter in other ways.

      It would have been nice if the author had not overgeneralized so much:

      https://claude.ai/share/27ff0bb4-a71e-483f-a59e-bf36aaa86918

      I’ll let you decide whether my use of Claude to analyze that article made me smarter or stupider.

      Addendum: In my prompt to Claude, I seem to have misgendered the author of the article. That may answer the question about the effect of AI use on me.

      • jacquesm an hour ago

        > That’s the problem, I think: Using AI will make some people stupider overall, it will make other people smarter overall, and it will make many people stupider in some ways and smarter in other ways.

        And then:

        > It would have been nice if the author had not overgeneralized so much

        But you just fell into the exact same trap. The effect on any individual is a reflection of that person's ability in many ways, and on an individual level it may be all of those things depending on context. That's what is so problematic: you don't know to a fine degree what level of competence you have relative to the AI you are interacting with, so at any given level of competence there are things you will miss when processing an AI's output. The more competent you are, the better you are able to use it. But people turn to AI when they are not competent, and that is the problem, not that they can use it effectively when they are competent. And despite all the disclaimers, that is exactly the dream the AI peddlers are selling you: 'your brain on steroids'. But with the caveat that they don't know anything about your brain other than what can be inferred from your prompts.

        A good teacher will be able to spot their own errors; here the pupil is supposed to be continuously on the lookout for utter nonsense the teacher utters with great confidence. And the closer it gets to being good at some things, the more leeway it will get for the nonsense as well.

  • zdw 5 hours ago

    For an example of where this has already happened, look at the number of people who don't have even an inkling of how to plan a route or navigate without a GPS and mapping software.

    Sure, having a real-time data source is nice for avoiding construction and traffic, and I'd use a real-time map, but going beyond that to being spoon-fed your next action over and over leads to dependency.

    • port11 an hour ago

      I always thought I had a good sense of navigation, until I realised it was getting quite bad.

      More or less at the same time I found “Human Being: Reclaim 12 Vital Skills We’re Losing to Technology”, and the chapter on navigation hit me so hard I put the book down and refused to read any more until my navigation skills improved.

      They're quite good now. I sit on the toilet staring at a map of my city, which I now know quite well. I no longer navigate with my phone.

      I'm scared about the chapter on communication, which I'm going through right now.

      I do think we're losing those skills, and offloading more thinking to technology will further erode our own abilities. Perhaps you think you'll spend more time on high-cognition activities, but will you? Will all of us?

      • 1718627440 33 minutes ago

        > and the chapter on navigation hit me so hard I

        Don't leave us hanging, what were they saying?

        • port11 18 minutes ago

          Hmm, it's very well written and I don't mean to butcher it. It touches on how the people of Polynesia would navigate, and do it better than Western colonisers. It mentions tricks such as memorising reference points, understanding the cardinal directions, and whatnot. Print a map of your city and put it in the loo. I don't know, it's a great book and I won't do it any justice.

    • WillAdams 5 hours ago

      And for the societal cost of that see stories such as:

      https://www.npr.org/2011/07/26/137646147/the-gps-a-fatally-m...

      and for the way this mindset erodes values and skills:

      https://www.marinecorpstimes.com/news/your-marine-corps/2018...

      • IshKebab 5 hours ago

        The "societal cost" you linked literally says this happens with paper maps too. The cause is incorrect maps, not GPS.

        (And of course, idiotic behaviour... but GPS doesn't cause that.)

        Overall GPS has been an absolutely enormous benefit for society with barely any downside other than nostalgia for map reading.

    • heisenbit 5 hours ago

      GPS allowed me to go where I would have been hesitant to venture before.

      • PessimalDecimal 4 hours ago

        Can you draw this analogy out a bit? What hesitation has an LLM helped you overcome?

    • crazygringo 41 minutes ago

      And so what? Why not be dependent?

      I grew up with a glove box full of atlases in my car. On one job, I probably spent 30 minutes a day planning the ~4h of driving I'd do daily to different sites. Looking up roads in indexes, locating grid numbers, finding connecting roads spanning pages 22-23, 42-43, 62-63, and 64-65. Marking them and trying not to get confused with other markings I'd made over the past months. Getting to intersections and having no idea which way to turn because the angles were completely different from on the map (yes this is a thing with paper maps) and you couldn't see any road signs and the car behind you is honking.

      What a waste of time. It didn't make me stronger or smarter or a better person. I don't miss it the same way I don't miss long division.

      • tdrz 31 minutes ago

        > It didn't make me stronger or smarter or a better person.

        Yes, it did.

    • LPisGood 5 hours ago

      Why do you consider relying on navigation apps to be over dependency? Planning a route is basically an entirely useless skill for most people, and if they do need to for some odd reason, it’s pretty easy.

      • zdw 5 hours ago

        In my observation of others, it's not an easy skill unless you've done it before. Most people have little idea where they are and no idea what they would do next if the turn-by-turn tech failed on them. I'd argue it is useful the first time you want to take a scenic route, or optimize for things other than shortest travel time, like a loop bike route that avoids major streets.

        Not to say that apps aren't useful in replacing the paper map, or doing things like adding up the times required (which isn't new - there used to be tables in the back of many maps with distances and durations between major locations).

        • 1718627440 3 hours ago

          > In my observation of others, it's not an easy skill unless you've done it before.

          I always feel like they aren't even trying. You just mark a point where you are and a point where you want to go, draw a straight line, take the nearest streets, and then you can optimize ad libitum.

    • jncfhnb 4 hours ago

      I don’t have an inkling of how to navigate. I don’t really see the problem.

      • 1718627440 4 hours ago

        I just can't comprehend how people can accept that. I mean, sure, you can use your handheld computer, but if I didn't know where I am and what I need to do to get to where I intend to be, I would feel very alone, lost, and abandoned, like Napoleon on Elba. In a completely foreign city, I often just look at a map before the journey for the general direction and then just keep going, without thinking much. That works quite well, because street design is actually logical, and where it isn't, there are signs. I'm surprised how often you don't even need a map: you just need to look at the signs, and they will tell you.

    • AndrewKemendo 5 hours ago

      But this is just proof that we ceded this ground long ago.

      The ballad of John Henry dates back to the railroad building of the 1870s.

      "Does the engine get rewarded for its steam?" That was the anti-automation line back then.

      If you gave up everything that was previously called "AI", we would not have computers, cars, airplanes, or any type of technology whatsoever.

    • foxglacier 5 hours ago

      It seems arbitrary to set the limit of how much pointless busywork we need as just the amount you're used to. Maybe maps are already dumbing it down too much and we should work out directions from a textual description of landmarks and compass bearings that's not specific to our route? In my opinion, dependency on turn-by-turn directions is fine because we do actually have the machines to do it for us. We're equally dependent on all sorts of useful things that free us to think about something actually useful that really can't be done for us. For example, consumer law means we can walk into a shop and buy something without negotiating a contract with each seller and working out all the ways we might get cheated.

      Maybe the place to draw the line is different for each individual and depends on if they're really spending their freed-up time doing something useful or wasting it doing something unproductive and self-destructive.

  • dagmx 4 hours ago

    Atrophy has really been an issue in my recent hiring cycles for good senior engineers.

    80% of senior candidates I interview now aren’t able to do junior level tasks without GenAI helping them.

    We've had to start doing more coding tests to assess their skill sets as a result, and I try to make my coding tests as indicative as possible of our real work and the work they currently do.

    But these people are struggling to work with basic data structures without an LLM.

    So then I put coding aside, because maybe their skills are directing other folks. But no, they’ve also become dependent on LLMs to ideate.

    That 80% is no joke. It’s what I’m hitting actively.

    And before anyone says "well then let them use LLMs": no. Firstly, we're making new technologies and APIs that LLMs really struggle with, even with purpose-trained models. Furthermore, if I'm doing that, then why am I paying for a senior? How are they any different from someone more junior and cheaper, if their skills have atrophied that much?

    • sktrdie 2 hours ago

      > then why am I paying for a senior ?

      Because they know how to talk to the AI. That's literally the skill that differentiates seniors from juniors at this point. And it's a skill you gain only by knowing the problem space and having banged your head against it multiple times.

      • htrp 2 hours ago

        Except most junior devs will be better than senior devs at wholehearted AI adoption.

    • Narciss 4 hours ago

      I was actually thinking about this the other day while vibe coding for a side project.

      I am a lead engineer, but I've been using AI for much of my code recently. If you were to ask me to code anything manually right now, I could do it, but it would take a bit to acclimate to writing code line by line. By "a bit", I mean maybe a few days.

      Which means that if we were to do a coding interview without LLMs, I would probably flop without me doing a bit of work beforehand, or at least struggle. But hire me regardless, and I would get back on track in a few days and be better than most from then on.

      Be careful not to lose talent just because you are testing for little-used but latent capabilities.

      • dagmx 4 hours ago

        In your scenario though, how do you avoid hiring based on blind faith?

        How do I know you aren’t just a lead with a very good team to pick up the slack?

        How do I separate you from the 20 other people saying they’re also good?

        Why would I hire someone who can’t hit the ground running faster than someone else who can?

        Furthermore, why would I hire someone who didn’t prepare at all for an interview, even if just mentally?

        How do you avoid just hiring based on vibes? Bear in mind every candidate can claim they’re part of impressive projects so the resume is often not your differentiator.

        • htrp 2 hours ago

          We're gonna have to reinvent SWE hiring.

      • rurp 3 hours ago

        Expecting senior job applicants to have regained basic coding skills seems reasonable to me. I would be skeptical of an applicant who hadn't made the level of effort you're describing before applying.

      • OptionOfT 3 hours ago

        The problem becomes distinguishing someone like you, who has the skill but hasn't recently used it vs someone who doesn't have the skill.

      • woooooo 4 hours ago

        Leetcode was always a skill mostly practiced for interviews though, right? Arguably it's a better signal now, in the era of vibecoding, that someone can do it themselves if they have to. It used to be "yeah, of course I'm responsible in my job, I use a library for this stuff." But in this era, maybe performative leetcode has more value as a signal that you can really guide the AI.

      • roxolotl 4 hours ago

        Isn't the solution to tell the interviewee that they will have to write some code without LLM support? In the case of someone like you, I'd hope they'd take that as notice to spend a tiny bit of time getting back up to speed. If it really is just a day or two, then it shouldn't be an issue.

        • dagmx 3 hours ago

          Yes, I forewarn all candidates that they will do a coding test, with examples of similar tests.

          They are allowed to suggest the language they're most familiar with, and they're told they don't need to finish and don't need to be correct.

          It’s just about seeing how they work through something.

          If someone like the person you replied to showed up that unprepared, I would really question their judgement of their own abilities.

      • risyachka 3 hours ago

        >> and I would get back on track in a few days

        That's the issue. How can one be sure you can actually get back on track, rather than that you never were on track in the first place and are just an AI slopper?

        That's why in an interview you need to show skills. On the actual job you can use AI.

    • spaceballbat 3 hours ago

      Grade inflation has spilled over into the corporate world. I’ve interviewed people titled “principal” who would barely qualify as “senior” a few decades ago.

      • ben_w 3 hours ago

        "Senior" was already a weird title, given it could have been anything from 3-10 years of experience even back in 2021.

        I've seen people with 10 years experience blindly duplicate C++ classes rather than subclass them, and when questioned they seemed to think the mere existence of `private:` access specifiers justified it. There were two full time developers including him, and no code review, so it's not like any of the access specifiers even did anything useful.

        • spaceballbat 2 hours ago

            The jump from junior to senior means you can self-start and have created enough of a network to seek out help. Junior used to be a 1-3 year training period. Senior to principal means you have significant positive impact across the company: upper management relies on you to define the roadmap. Most people hang out in "senior" for their entire careers because they never have the drive to stand out. That's why there are titles like "staff" and "senior staff": to promote people who don't have what it takes to get to principal.

    • k__ 4 hours ago

      It's a matter of opinion what you should know and what you can easily google or ask an LLM.

      • dagmx 4 hours ago

        If someone needs to continuously google how to use the basic data structures in a language they use every day, then I worry about their ability for knowledge retention as a whole.

        • Herring 4 hours ago

          “A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.”

          ― Robert A. Heinlein

          (It's a matter of opinion)

          • 0x696C6961 3 hours ago

            I'm not sure I understand the point you're trying to make. Do you think a senior developer needs a basic understanding of the language they use or not?

            • ben_w 3 hours ago

              > Do you think a senior developer needs a basic understanding of the language they use

              4 years ago, I'd have said "obviously".

              At this point? Only for specialist languages the LLMs suck at. (Last I tried, "suck at" included going all the way from yacc etc. upwards when trying to make a custom language).

              For most of us, what's now important is: design patterns; architectures; knowing how to think through big-O consequences; and the ability to review what the AI farts out. That sounds like it should require an understanding of the programming language in question, but it's surprisingly straightforward to read code in languages you're not familiar with, and it gets even easier when your instruction to the AI includes making the code easy to review.

              • dagmx 2 hours ago

                It’s not just the language but the domain within the language.

                I see both the latest Claude and GPT models fall over on a lot of C++, Swift/ObjC and more complex Python code. They do better in codebases where there is maximal context with type hints in the local function. Rust does well, and it’s easier to know when it’s messed up since it won’t compile.

                They also tend to clog up code bases with a cacophony of different coding paradigms.

                A good developer would be able to spot these right away, but someone who doesn't understand even the basics will happily plug along, not realizing the wake they've left.

          • dagmx 4 hours ago

            Except I’m not hiring someone to do 20 things. I’m hiring them to do the one thing they say they can do.

            Would I hire a taxi driver who can’t drive to drive me somewhere?

            Why would I hire a software engineer who can't demonstrate their abilities?

            • Herring 3 hours ago

              edit: wasting my time

              • dagmx 2 hours ago

                Please re-read the original comment. All your suggestions are addressed there and I explain why this is inadequate.

        • raincole 3 hours ago

          I think it depends on what you consider "basic data structures"...

          If it's List and Dict (or whatever they're called in that language), then maybe. But I'd not expect someone to spell out the correct API by heart for Stack, Queue, Deque, etc., even though they're textbook examples of "basic" data structures.

          • dagmx 3 hours ago

            For me, in a coding test, basic data structures are (using Python terms for brevity):

            - lists

            - dictionaries

            - sets

            - nested versions of the above

            - strings (not exactly a data structure)

            Strings are tenuously on my data structures list because I let people treat them as ASCII arrays to avoid most of the string footguns.
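
            For concreteness, the level I have in mind is roughly this (an illustrative snippet I'm making up here, not an actual question from our tests):

                # Illustrative only: group page visits by user and count
                # distinct pages. Exercises lists, dicts, sets, and nesting.
                visits = [("ann", "/home"), ("bob", "/home"), ("ann", "/docs")]

                pages_by_user = {}  # user -> set of distinct pages visited
                for user, page in visits:
                    pages_by_user.setdefault(user, set()).add(page)

                counts = {user: len(pages) for user, pages in pages_by_user.items()}
                print(counts)  # {'ann': 2, 'bob': 1}

            Someone who uses these structures every day should be able to write something at that level without reaching for an LLM.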

        • jstummbillig 3 hours ago

          The question is: What are they better at? If you think the answer is "Nothing", I would be suspicious.

          • dagmx 3 hours ago

            It’s never nothing, but it’s a matter of whether that something is enough of a differentiator.

            More so, can they demonstrate that in interviews. We specifically structure our interviews so coding is only one part of them, and try and suss out other aspects of a candidate (problem solving, project leading, working with different disciplines) and then weight those against the needs of the role to avoid over indexing on any specific aspect. It also lets us see if there are other roles for them that might be better fits.

            So a weak coder has opportunities to show other qualities. But generally someone interviewing for a coding heavy role who isn’t a strong coder tends to not sufficiently demonstrate those other qualities. Though of course there are exceptions.

    • rcxdude an hour ago

      Are you sure you're not just mostly seeing the candidates who are using LLMs to pass through the earlier screening phases with flying colors despite their lack of skills? (i.e., they haven't atrophied, they just weren't very good to begin with). There's always a lot of unqualified applicants to a job but LLMs can make them way more effort to filter out.

      • dagmx an hour ago

        We have the coding test as the second phase now for specifically that reason, and might have more once they’re doing the full interview set.

    • drdaeman 2 hours ago

      > aren’t able to do junior level tasks without GenAI helping them

      I'm assuming "unable" means not a complete lack of knowledge of how to approach it, but a lack of detailed knowledge. E.g. a junior likely remembers some $algorithm in detail (from all the recent grind), while a senior may no longer do so, but only knows that it exists, what properties it has (when to use it, when not to), and how to look it up.

      If you don’t think of something regularly, memory of that fades away, becomes just a vague remembrance, and you eventually lose that knowledge - that’s just how we are.

      However, consider that not doing junior-level tasks may mean they were unnecessary for the position, and the position was about doing something else. It's literally a matter of specialization and nomenclature mismatch: "junior" and "senior" are frequently not different levels of the same skill set, but somewhat different skill sets. A simple test: if your workplace has juniors, check whether they do the same tasks the seniors do, or something different.

      Plus there's title inflation: demand shifts and a title-chasing culture have messed up the nomenclature.

      • dagmx 2 hours ago

        I don't test rote algorithmic knowledge in our coding tests. Candidates can pick their language.

        Candidates can ask for help, and can google or use an LLM as well if they can't recall methods. I just do not allow them to paste the whole problem into an LLM; I need to see them work through the problem themselves, to see how they think and approach issues.

        This therefore also requires that they know the language they picked well enough to do simple tasks, including iterating over iterables.

        • drdaeman 2 hours ago

          That’s weird. Any senior developer worth their salt surely should know that LLMs produce a lot of weird nonsense with one-shot prompts, so they need to talk design first, then code the implementation.

          This said, IMHO one-shot is worth a try because it’s typically cheap nowadays - but if it’s not good (or, for interview reasons, unavailable) any developer should have the skills to break the problem down and iterate on it, especially if all learning/memory-refreshing resources are so available. That’s the skill that every engineer should have.

          I guess I must take my words back. If that's how "seniors" are nowadays, then I don't know what's going on. My only guess is that you must've met a bunch of scammers/pretenders who don't know anything but are trying to pass as developers.

        • 1718627440 39 minutes ago

          > including iterating iterables

          I would've chosen a language without iterators, what would you do then??

    • alchemism 4 hours ago

      Pair a senior with an agent LLM, pair a junior with an agent LLM, measure the output over some cycles. You will find your answer in the data, one way or another.

      • dagmx 4 hours ago

        Truthfully, in my experience, they both end up performing near the level of the LLM. It’s an averaging factor not an uplifting one.

        • alchemism 4 hours ago

            Right now we are seeing junior hiring fall off a cliff; whether or not LLMs are responsible is hard to say. But if it does turn out that everyone performs to the level of the LLM, then a vast amount of business dollars can be saved by hiring starving English majors exclusively and dispensing with STEM majors altogether.

          • dagmx 4 hours ago

            I said it’s an averaging factor not an absolute match.

              A senior with an LLM will likely still outperform a junior. A CS major with an LLM will outperform an English major.

            But is the senior out performing the junior at a level that warrants the difference in salary?

              Then people will point to the intangibles of experience, but will forget the intangibles of being fresh and new: being malleable and gung-ho.

          • jasonthorsness 3 hours ago

            I keep hearing this; however, I was just at a career fair for University of Washington CSE new grads (representing my company) and it was packed with companies hiring. Obviously this is just an anecdote; is there any good data showing the real situation?

        • risyachka 3 hours ago

          Nope, you don't pair a senior dev with an agent. You allow them to use an LLM, but not an agent.

          If an agent starts generating code, nobody will have the time and stamina to rewrite all the slop; it will just get approved.

          Copy/paste from chat is the only way to ensure proper quality, so that the developer writes high-quality code and only outsources boring or generic tasks to the AI.

    • athrowaway3z 3 hours ago

      What do you consider senior?

      We seem to have had significant title inflation in the last 5 years, and everybody seems to be at least a senior.

      With no new junior positions opening up, I'm not even sure I blame them.

      • dagmx 3 hours ago

        A senior to me is someone who can tackle complex problems without significant supervision, differentiated from a lead in that they still need guidance on what the overall tasks are. They should be familiar enough with their tech stacks that I can send them to meetings (after the requisite onboarding time) to represent the team if needed (though I try not to overload my engineers with meetings) and answer feasibility questions for our projects. They don't need constant check-ins on their work. I should be able to readily bounce ideas off them on how to approach bigger problems.

        A junior is someone who needs more granular direction or guidance. I'd only send them to meetings paired with a senior. They need close-to-daily check-ins on their work. I include them in all the same things the seniors do for exposure, but do not expect the same level of technical strength at this point in their careers.

        I try not to focus on years of experience necessarily, partly because I was supervising teams at large companies very early in my career.

    • bongodongobob 16 minutes ago

      That's what I said about compiled languages. No one knows how to optimize assembly anymore, they just let the compiler do it.

    • erichocean 3 hours ago

      > Firstly, we’re making new technologies and APIs that LLMs really struggle

      LLMs absolutely excel at this task.

      Source: Me, been doing it since early July with Gemini Pro 2.5 and Claude Opus.

      So good, in fact, that I have no plans to hire software engineers in the future. (I have hired many over my 25 years developing software.)

      • dagmx 2 hours ago

        So you’ve made a decision based on three months of use.

        I am legitimately interested in your experience though. What are you creating where you can see the results in that time frame to make entire business decisions like that?

        I would really like to see those kinds of productivity gains myself.

        • erichocean a few seconds ago

          I'll give you an easy example. LMDB has a (very) long-standing bug where "DUPSORT" databases become corrupted over time.

          Separately, I wanted to make some changes to LMDB but the code is so opaque that it's hard to do anything with it (safely).

          So I gave the entire codebase to Gemini Pro 2.5 and had it develop a glossary of local variable renames and structure member renames. I then hand-renamed all of the structures (using my IDE's struct member refactoring tools). Then I gave the local variable glossary and each function to Gemini and had it rewrite the code. Finally, I had a separate Gemini Pro 2.5 context and a Claude Opus context validate that the new code was LOGICALLY IDENTICAL to the previous code (i.e. that only local variables were renamed, and that the renaming was consistent).

          Most of the time, GPro did the rewrite correctly the first time, but other times, it took 3-4 passes before GPro and Opus agreed. Each time, I simply pasted the feedback from one of the LLMs back into the original context and told it to fix it.

          The largest function done this way was ~560 LOC.

          Anyway, the entire process took around a day.

          However, at one point, GPro reported: "Hey, this code is logically identical BUT THERE IS A SERIOUS BUG." Turns out, it had found the cause of the DUPSORT corruption, without any prompting—all because the code was much cleaner than it was at the start of the day.

          That is wild to me! (It actually found another, less important bug too.)

          Without LLMs, I would have never even attempted this kind of refactoring. And I certainly wouldn't pay a software engineer to do it.
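
          To make the loop concrete, here is a rough sketch of the rewrite-and-cross-check cycle in Python. ask_gemini and ask_opus are placeholder stubs for whatever model clients you use, not real library calls:

            # Placeholders for the two model contexts; swap in real clients.
            def ask_gemini(prompt: str) -> str:
                raise NotImplementedError

            def ask_opus(prompt: str) -> str:
                raise NotImplementedError

            def rewrite_function(source: str, glossary: str, max_passes: int = 4) -> str:
                """Rename locals per the glossary, then have a second model
                confirm the rewrite is logically identical to the original."""
                rewritten = ask_gemini(
                    "Rename local variables in this C function per the glossary.\n"
                    f"Glossary:\n{glossary}\n\nFunction:\n{source}"
                )
                for _ in range(max_passes):
                    verdict = ask_opus(
                        "Are these two functions logically identical, differing "
                        "only in local variable names? Answer YES or explain.\n\n"
                        f"Original:\n{source}\n\nRewrite:\n{rewritten}"
                    )
                    if verdict.strip().upper().startswith("YES"):
                        return rewritten
                    # Paste the reviewer's objection back into the original
                    # context and ask for a fix, as described above.
                    rewritten = ask_gemini(f"Fix this and re-emit the function:\n{verdict}")
                return rewritten  # escalate to manual review if passes run out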

  • daxfohl 2 hours ago

    The education system is already a relic of the past. All the points mentioned in the article assume that a classroom full of students writing five-paragraph essays on The Red Badge of Courage in 12-point TNR with one-inch margins, hastily graded by an overworked, underpaid public school teacher, is still the best way to teach kids in 2025 about critical thinking.

    I picture that going forward we'll have much more personalized, AI-led curricula that students work through at their own pace. The AI systems can allow as much or as little AI autocompletion as they deem appropriate, test your understanding in real time by adding subtle mistakes or opportunities for improvement, and iterate until you get it.

    The main issue I worry about is perhaps the opposite. With education actually becoming more effective and interesting, what happens to kids' social and collaboration skills? And maybe that's where human teachers can still add value. Or in discipline and motivation, etc. IDK exactly how that plays out, but I imagine there's still a role for human teachers to play, and perhaps that aspect is even more important than "lecturer" and "grader" that takes most of their time now.

    • obscurette a minute ago

      Are you a parent? Have you ever seen a kid growing up? 90%+ of kids wouldn't care at all about any work or learning without social pressure from parents, teachers, and peers. That's the whole point of schools: to create a social pressure that puts all this knowledge into kids' heads in the hope that some of it will be useful to them. And to do it through all those awfully messy years we call childhood and adolescence.

    • port11 an hour ago

      It's ironic we assume a technology that hallucinates random garbage will be any better than underpaid and overworked teachers. The solution, to me, would be better paid and better supported teachers.

      • bongodongobob 16 minutes ago

        This comment feels like it was written in 2023.

  • mitthrowaway2 5 hours ago

    Despite the magazine being named the Argument, this article falls into the typical pattern of claiming "the problem isn't X, it's Y", and then spending the rest of the article body building support for Y, but never once making any argument that refutes X.

    • svat an hour ago

      Typically, such articles can be charitably interpreted as saying "I worry more about Y than X", rather than as literally making two separate claims (that X isn't a problem, and that Y is). So as a reader if you're trying to get value out of the article, you can focus on evaluating Y, and ignore X that the article does not address.

      In this particular case, the article is even explicit about this:

      > While we have no idea how AI might make working people obsolete at some imaginary date, we can already see how technology is affecting our capacity to think deeply right now. And I am much more concerned about the decline of thinking people than I am about the rise of thinking machines.

      So the author is already explicitly saying that he doesn't know about X (whether AI will take jobs), but prefers to focus in the article on Y (“the many ways that we can deskill ourselves”).

    • jncfhnb 4 hours ago

      Refuting X is not necessary if it’s a subjective perspective

    • slackfan 5 hours ago

      [flagged]

      • mitthrowaway2 4 hours ago

        That would only come close to working as an argument if X and Y were mutually exclusive, but this article doesn't even bother to make the case that they are, nor is there any reason why they would be.

        • 1718627440 4 hours ago

          Deadline X isn't relevant when deadline Y comes earlier. That doesn't mean that deadline X couldn't also kill the project; it just doesn't matter when you can show that deadline Y already did.

        • JumpCrisscross 4 hours ago

          > this article doesn't even bother to make the case that they are

          Because it’s trivial? Outsmarting can happen because the tortoise runs faster or the hare slows down.

  • daxfohl 3 hours ago

    The author may have hit on the answer without realizing it. He's in the gym doing pull-ups. Is his lat strength necessary for some important part of his survival? Highly unlikely. Pre-20th century, when most work was highly physical, would he have gone out after work to research and test out efficient lat exercises? Probably not either.

    If we're not using our brains for work, then maybe it'll actually increase our deliberateness about strengthening them at home. In fact, I can't imagine that not being the case. I mean, it's possible we turn into a society of mindless zombies, but fundamentally, at least among some reasonable percentage of the population, I have to believe there is an innate desire to learn and understand and build, and a relationship-building aspect to it as well.

  • PeterStuer 37 minutes ago

    Our survival, not hypothetical but actual, is tied directly, on an individual basis, to how much value we personally can extract from others.

    AI, even when it provides a net benefit, does threaten the value potentially offered by individuals.

    It's complicated.

  • arikrak 3 hours ago

    I recently wrote a short post on something similar: while AI is able to solve increasingly long tasks, people's attention spans are getting shorter. https://www.zappable.com/p/ai-vs-human-attention-spans

    Hopefully people can learn to use AI to help them, while still thinking on their own. It's not like that many of the assignments in school were that useful anyways...

  • derekcheng08 4 hours ago

    AI is (or has the potential to be) a gigantic abstraction layer, and software engineering is filled with abstraction layers. But one thing that has consistently held true is that the best engineers, while taking advantage of abstractions, also have the curiosity and intelligence to peel back the layers and at least understand the gist of what's going on behind the scenes. So while most of us will never write a TCP/IP stack, it's helpful to know the protocol. While many will simply call into a hosted distributed database, strong engineers will know broadly how it is implemented and that there are availability/consistency trade-offs, etc.
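
    As a tiny illustration of what "peeling back" one layer looks like, here is the kind of request an HTTP library makes on your behalf, written against the raw socket API (a sketch only; real clients also handle TLS, redirects, chunked encoding, and so on):

      import socket

      def fetch(host: str, path: str = "/") -> bytes:
          # What an HTTP client library does underneath: open a TCP
          # connection and speak the protocol by hand.
          with socket.create_connection((host, 80)) as sock:
              request = (
                  f"GET {path} HTTP/1.1\r\n"
                  f"Host: {host}\r\n"
                  "Connection: close\r\n"
                  "\r\n"
              )
              sock.sendall(request.encode("ascii"))
              chunks = []
              while data := sock.recv(4096):
                  chunks.append(data)
              return b"".join(chunks)

      print(fetch("example.com")[:200])  # status line and headers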

    It's the same here: if you just shut off your brain and do what AI says, copy/pasting stuff from/to chat windows, that's going to be a bad time.

  • simianwords 4 hours ago

    Why does everyone talk about this part of the issue but never the other one? Society can’t progress if we can’t delegate the boring stuff and work on complicated stuff. How else will it progress?

    Do you think we could have progressed to this level if we were still calculating stuff by hand? Log tables??

  • kristianc 4 hours ago

    I always think, for pieces like these that claim atrophy: well, yes, but what about the things you would never even have tried without it? The barrier to many things isn't becoming lazy when you're already halfway proficient; it's getting started in the first place. AI dramatically lowers the getting-started cost of almost everything.

    If the argument is that people shouldn't be able to get started on those things without having to slog through a lot of mindless drudgework, then people should be honest about that rather than dress it up in analogies.

  • artur_makly 3 hours ago

    One way schools, from middle and high schools to universities, can mitigate AI-driven cheating and cognitive decline is by radically rethinking how they conduct final exams.

    They should require in-person oral dissertations, presented before a panel of one to three teachers and lasting one to two hours. Ideally, the topic would be based on the student’s own original thesis.

    This approach restores creativity and critical thinking, because at any moment during the examination, teachers can ask probing questions, explore unexpected tangents, and even encourage real-time brainstorming.

    THIS is the kind of challenge that can help our species evolve beyond its current state of neurological atrophy.

    NYU's Gallatin School of Individualized Study has been doing this since '72:

    https://gallatin.nyu.edu/

    *It was probably the most stimulating 4 years of my life.

    • 1718627440 an hour ago

      How is this not what is currently the norm? How do dissertations work at your university, if they are not exactly this?

  • jmward01 3 hours ago

    There is some discussion here, but one counterargument was never raised: we are transitioning to new things to learn and focus on, so the old tests and measures aren't valid.

    Think of it like this: if 3D printing (finally) gets good enough, is it an issue that most people aren't good at traditional manufacturing? I think there is a discussion to be had here, but articles like this always strike me as shallow, since they keep judging the present by the standards of the past. This is a common problem. We see it in politics (make X great again, anyone?), and it is very hard to solve, since understanding "better" is a very hard thing to do even with hindsight, much less when you are in the middle of change. I do think AI has serious harms, but if all we do is look for the harms, we won't find and capture the benefits. Articles should balance these things better.

  • michaelcampbell 4 hours ago

    This feels in line with this same argument that has come along with virtually every other tech innovation to help humans work less, think less, travel less, basically do anything less than they did. So, "tools", in essence.

    Is AI special here? Maybe, if it's truly an existential risk.

  • thisisit 4 hours ago

    This seems like the age old discussion of how new technology changes our lives and makes us "lazy".

    Before the advent of smartphones, people needed to remember phone numbers and calculate on the fly. Now people don't even remember their own numbers; they save them somewhere and open the calculator app for the smallest of things.

    But there are still people who can do both. They are not Luddites; they are just not fully reliant on smartphones.

    Same thing is going to happen with LLMs.

    At some point restricting LLM usage is going to be considered good parenting, just like restricting phones is today. Use it for some purposes, but not all.

  • speak_plainly 4 hours ago

    Humans have a tendency to embed their cognition in the world; this is probably one of our greatest strengths as a species.

    AI allows you to offload a lot of cognitive effort, freeing up your mind; the only catch is that AI can be politicized and more confident than accurate.

    If AI companies can focus on improving accuracy on facts and make their AI more philosophically grounded on the rest, it would allow people to free up their minds for their immediate real lives.

    Don't mistake thinking for intelligence.

  • jacquesm 6 hours ago

    This is already happening. In spite of all the disclaimers that AI makes mistakes, the fact that it is given higher billing in, for instance, Google search results (but also lots of other similar places) means people will interpret it as having a higher reputation, because that's how we were conditioned to consume search results.

    This is doing massive damage already: essay writing, summarizing, reading comprehension, and research skills are dwindling, because teachers in high school (and lower as well, but that's where it is most visible) are not able to keep up with the ease with which good-looking slop can be produced. Schools will need to radically alter their programs if they want to continue to be able to educate, but they don't exactly turn on a dime when it comes to technology, and there are, unfortunately, lots of teachers who themselves lack skills and the ability to transfer those skills. If you graduated before AI became mainstream, your diploma will be worth more than those of people who graduated after.

    • watwut 5 hours ago

      Schools will have to rely on homework less. It is not that deep or complicated. It does not require super radical reform.

      You can still make tests and exams check what students know. There will be fewer "generate BS at home" tasks and more "come back with knowledge" ones, which will likely be an improvement.

      • jacquesm an hour ago

        The burden on teachers is already crazy high. That would only work if we multiply the number of teachers by 3 or so with a corresponding reduction in class size. The education system would collapse due to funding and recruitment issues.

      • iisan7 4 hours ago

        Except for one thing ... schools tend not to fail many people. When an entire cohort has a different level of ability, standards adjust. Possibly some proctored, standardized exams might be more comparable over time. I have read that, controlling for student demographics, SAT scores (frequently used for US college entrance) were increasing until the mid-2000s and have been flat since.

        • 1718627440 3 hours ago

          Yes, that's always true for the preceding school, but it really only means that the student needs to work harder or fail later on, as the final job-enabling exam isn't going to move.

      • 1718627440 4 hours ago

        > "generate bs at home" vs. "come back with a knowledge"

        They are the same picture!

  • kagevf 3 hours ago

    One approach would be to write code first, then run it by AI to get a critique. I think that strikes a good balance between avoiding atrophy and still getting the benefits of the tool.
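
    A minimal sketch of that workflow in Python; ask_llm is a placeholder for whichever model client you prefer, not a real API:

      from pathlib import Path

      def ask_llm(prompt: str) -> str:
          raise NotImplementedError  # swap in your model client of choice

      def critique(path: str) -> str:
          # You write the code first; the model only reviews it.
          code = Path(path).read_text()
          return ask_llm(
              "Review this code I already wrote. Point out bugs, unclear "
              "names, and missing edge cases, but do not rewrite it:\n\n" + code
          )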

  • Veedrac 5 hours ago

    A refusal to even acknowledge that AI might work isn't a very sensible refutation of the risks we're going to face.

    • DrewADesign 5 hours ago

      > A refusal to even acknowledge that AI might work isn't a very sensible refutation of the risks we're going to face.

      That's probably why they're not doing that. The core premise (that we will rely on AI so much that we will de-skill ourselves) requires acknowledging that AI works.

      • dns_snek 3 hours ago

        > The core premise (that we will rely on AI so much that we will de-skill ourselves) requires acknowledging that AI works.

        No, it doesn't require that, because the vast majority of people aren't rational actors and they don't optimize for the quality of their work; they optimize for their own comfort and emotional experience.

        They'll happily produce and defend low quality work if they get to avoid the discomfort of having to engage in cognitively strenuous work, in the same way people rationalize every other choice they make that's bad for them, the society, the environment, and anyone else.

  • raincole 5 hours ago

    When it comes to "humans collectively..." kind of grand scheme issues, I just can't take the risk of AI making us stupider too seriously, compared to:

    - Wars and violence to resolve geopolitical problems

    - The biggest trading partner of most countries is waging tariff warfare

    - Climate change

    - Declining birth rate in almost every country

    - Healthier foods are getting more expensive despite our technology and nutrition knowledge [0]

    I'm not saying there is 0 chance that AI will make people dumb, but it just doesn't seem to be such an emergency humans should collectively be worried about.

    [0] https://www.bbc.com/news/articles/cpql53p9w14o.amp

    • justonceokay 5 hours ago

      I don't want to nitpick, but the declining birthrate is almost certainly driven by us becoming smarter, not dumber. In most countries the birthrate was propped up heavily by teen pregnancy, which the internet and access to healthcare are slowly eradicating worldwide.

      If we as humans can’t maintain our current population without getting high schoolers pregnant, then so be it.

      • raincole 5 hours ago

        I didn't say declining birth rate is a result of people being dumber.

        And I also don't think higher education necessarily means smarter. I'm quite confident that in the next decade, worldwide educational attainment will keep rising, with only some temporary setbacks. If making people smart were as simple as sending them to college, we really would have nothing to worry about.

        But anyway neither was my point.

    • foxglacier 4 hours ago

      You've been reading too much news. There are global societal risks, but they're not just whatever the popular bogeymen of the hour are supposed to be. Some of your concerns might even be backwards. I doubt you've critically thought about any of these and suspect you're just regurgitating what was fed to you.

      If you lived in an earlier time, you'd be worrying about the rise of homosexuality, communism, atheism, etc.

      • raincole 4 hours ago

        The exact same argument can be made about people who claim AI will make us dumber:

        If they lived in an earlier time, they'd be worrying that the rise of Google, the internet, TV, or radio was going to make people stop using their brains. Socrates believed writing was going to make people stop memorizing things.

        I'm not a doomsayer who thinks the world is on the edge of collapsing. But I do think the issues I listed are much more 'real' than AI making people stop thinking. (Including the birth rate one - yeah, I'm well aware that many people think it's a good thing. As someone whose mother worked at a nursing home, in a country with TFR of 0.78, I just don't agree with them. I believe people hugely underestimate the manpower needed to take care of the elderly and disabled.)

        • questionableans 2 hours ago

          At a global level, a more reasonably sized human population is exactly what we need, and a decline in birth rate is the most ethical way to achieve that. We have to find a way to make it work, because otherwise our trajectory takes us beyond the carrying capacity for our planet.

          There are three things we need to deal with the temporary population imbalance:

          Older people who remain healthy continuing to work to care for their peers: not necessarily hard physical labor, but being out in their communities helping, which is rewarding and will help them stay healthy, too.

          Easy immigration for regular people so they can move to where they’re needed.

          General efficiency so we’re not wasting resources. This requires both technological and lifestyle changes, even for the rich. The more efficient we can get, the less we have to reduce our overall population.

          • 1718627440 41 minutes ago

            Okay, that sounds like a reasonable plan, but it also means that we need to stop the decline at some point, and that is not currently a solved problem, since it's what governments are trying to achieve right now. Also, the decline happens in the regions where not much decline is needed, while in the regions where it would be needed, the population is currently increasing. And population growth was how we dealt with technological shifts in the past, so we need a new way to handle that.

  • waynecochran 6 hours ago

    For the record I am a pre-AI human and I read this.

  • r_singh 4 hours ago

    AI is essentially a puppet and will always be one. If you rely on it to educate yourself about anything non-objective (that cannot be measured), it will reflect the biases it’s tuned to, presenting a version of reality shaped by the perspectives and interests of its ultimate puppet masters—the board and major (non-futile) shareholders.

  • PartiallyTyped 5 hours ago

    Our new batch of interns and new grads relies too much on AI; they have a lot of trouble thinking and writing for themselves.

  • DFHippie 6 hours ago

    Too many never decided to start using their minds. They won't notice the transition.

  • deadbabe 3 hours ago

    Hug your pre-LLM era senior software engineers tightly; they aren't making more of them. And some of them are already deteriorating after 2 years of LLM tech.

  • gdulli 4 hours ago

    We collectively used social media very responsibly, so I'm sure that at a societal level we will also be thoroughly disciplined in our use of AI.

  • jncfhnb 4 hours ago

    Can’t say the author’s fixation on reading long texts resonates with me. I’m sure Newton’s Principia is interesting and all but… no I’m not going to read that.

    Conciseness is a valuable thing. It wasn’t practical to convey knowledge in a short form previously because printing and distributing a blog post worth of info was too expensive.

    On some level long form content just seems… poorly written. It’s long for the sake of being long.

    There are things to be concerned about with students today. They are generally shockingly bad at literacy and numeracy. But I don't buy that a lack of long-form books is the culprit.

    • raincole 3 hours ago

      Reading books is good for your brain. Even just fiction [0].

      But I do think people often approach this issue wrong, especially as demonstrated by the OP article:

      > “Daniel Shore, the chair of Georgetown’s English department, told me that his students have trouble staying focused on even a sonnet,” Horowitch wrote.

      It's the wrong question to ask why people can't focus on a sonnet.

      The real question is: why do students who are not interested in literature choose to major in English? What societal and economic incentives drove them to do that?

      [0] https://pmc.ncbi.nlm.nih.gov/articles/PMC4733342/

      • NaomiLehman 3 hours ago

        I would say that reading fiction is, in general, far more important for human development than reading non-fiction.

        • GeoAtreides 2 hours ago

          Reading sets up, programs and tunes the holodeck in your brain. The better the holodeck is programmed, the better and more accurate the simulation is.

    • ineedasername 3 hours ago

      “I don’t buy that a lack of long-form books is the culprit”

      I agree that they might not, in themselves, be a necessary requirement. However, the ability to engage with material, short or long, at a level of deep focus and intentionality is important, and one of the (historically very common) stronger methods of building it is long-form content, the less passive and more challenging the better.

      It touches on the topic of generally transferable (or diffuse, neurologically speaking) skills. It's what is frustrating when speaking with folks who insist on ideas like "I shouldn't have to take all these non-STEM courses." That's a truly myopic world view that lacks a fundamental understanding of human cognition, especially in the subgroup with this sentiment that will readily affirm the inverse: non-STEM folks should nonetheless have a strong grounding in maths and sciences for the modes of thinking they impart.

      Why the difference? It's a strange, highly mechanistic and modular view of how critical-thinking faculties function. As though, even with plenty of exclusivity, there aren't still enormous overlapping structures that light up in the brain with all tasks that require concentration. The salience network in particular is critical when reading challenging material as well as during STEM-related thinking, e.g. math. Which, ironically, means the challenging courses involving analytical literature are precisely the courses that, taken seriously, would lay down neural pathways in a person's salience network that would be extremely useful in thinking about challenging math problems, with more tools at your disposal and more angles of attack.

      It really shouldn't require much of an intuitive leap to realize that reading and interpreting complex works of literary creativity, or other general-humanities topics, will help impart the ability to think in creative ways. It's in the task name. Spending 3x16 hours in class per semester, and roughly the same on work outside of class, 6 or 7 times throughout a 4-year college stretch, is a very small cost for the value.

      I think the foundational failing of education for the past decades falls into all of these gaps in understanding and the inability to engage with the learning process because too few people can even articulate the relevance of highly relevant material.

    • daxfohl an hour ago

      I agree with your main point, though with an asterisk.

      I don't think concise is necessarily better than long, nor that long is better than concise. The thing is, humanity tends to go in cycles: poems for the Babylonians, long epics for the Greeks, back to poems for Shakespeare and Goethe, then the Russians brought back epics. Kind of a mix during the 20th century, but poetry seemed to slowly fade, and novels trended generally shorter. (All of this is a very 30,000-foot view; of course there were many exceptions in every era.)

      Philip Roth predicted the end of the era of the novel at some point, long (relatively) before AI [1]. He said that, similar to poetry in the early 20th century, humanity has evolved past the meaningfulness of the long-form novel.

      This doesn't mean "the humanities are dead." It just means that we're entering another cycle, where a different form of humanities needs to take over from what we've had in the past.

      Anyone arguing that the death of the long-form novel is equivalent to the death of humanities is missing the fact that "humanities" is not a precisely-defined set of topics written in stone. Though it can seem like this is the case at any one point in time, humanities can, and must, exist in many forms that will invariably change as humanity's needs do likewise. That's why its prefix is "human".

      [1] https://www.nytimes.com/2018/05/23/books/philip-roth-apprasi...

    • JumpCrisscross 4 hours ago

      > Can’t say the author’s fixation on reading long texts resonates with me

      It’s an attention and working memory test.

      I don’t think I’ve ever prided myself on focus. But signing off social media ten years ago has absolutely left me in a competitive place when it comes to deep thinking, and that’s not because I’ve gotten better at it.

      > It wasn’t practical to convey knowledge in a short form previously because printing and distributing a blog post worth of info was too expensive

      This is entirely ahistoric. Pamphlets, and books published in volumes where each volume would today be a chapter, were the norm. The novel is a modern invention.

    • wilsonnb3 2 hours ago

      > It wasn’t practical to convey knowledge in a short form previously because printing and distributing a blog post worth of info was too expensive.

      I don't think this is true; people have been printing newspapers, pamphlets, and leaflets for hundreds of years.

      It isn't only long-form content that the printing press was good for. It's just that the long-form content tends to be remembered longer. Probably because it isn't just long for the sake of being long :p

  • more_corn 3 hours ago

    So, like right now? Writing structures your thinking and clarifies your ideas. Students just have ChatGPT write for them.

    Vibe coding takes the heavy thinking and offloads it to the machine. Even people who know how to code turn off their brains when they vibe code.

    The time is now. The effects are available for inspection today. Pair with AI on something you know well. Find the place where it confidently acts but makes an error. Check in on your ability to reactivate your mind and start solving the problem.

    It feels like waking up without coffee. Our minds are already mostly asleep when we lean on AI for anything.

  • IT4MD 4 hours ago

    The race for AI, running over anything and anyone that stands between an AI and resources like energy and water, combined with corporations shoving AI, unasked for, into anything and everything, will ensure everyone hates everything about AI, except for the Ponzi-scheme winners like Altman.

  • Trias11 5 hours ago

    AI is just a helpful tool.

    Ages ago, someone surely cried that cars would cause our legs to stop functioning.

    • WillAdams 5 hours ago

      Given the current obesity epidemic, I would argue that is exactly what happened, in marked contrast to the development of the bicycle:

      https://www.roadbikereview.com/threads/editorial-the-bicycle...

      >When man invented the bicycle he reached the peak of his attainments. Here was a machine of precision and balance for the convenience of man. And (unlike subsequent inventions for man’s convenience) the more he used it, the fitter his body became. Here, for once, was a product of man’s brain that was entirely beneficial to those who used it, and of no harm or irritation to others.

      ~ Elizabeth West, Author of Hovel in the Hills

    • majorbugger 5 hours ago

      Smartphones are just a helpful tool, yet in conjunction with social media they can be blamed for the decline of social relations and attention in children, rising levels of depression, etc.

    • watwut 5 hours ago

      Considering that some people have trouble walking even short distances, it did happen, for those who drive everywhere.

  • bamboozled 5 hours ago

    It's happened to me recently. I've relied so much on LLMs for coding, and after hours of spinning the wheels I realized it has no idea: when it fixes something, it's mostly a guess; most of the time we're just debugging by adding logging statements; and the code we've created looks like crap and is mostly wrong, full of fluff, or hard to understand.

    I've been coding without my LLM for 2 hours and it's just more productive... yes, it's good for getting things "working", but we still need to think and understand to solve harder problems.

    My initial impressions blew me away because generating new things is a lot simpler than fixing old things. Yes, it's still useful, but only when you know what you're doing in the first place.

    • fluidcruft 5 hours ago

      > the code we've created looks crap and is mostly wrong, full of fluff, or hard to understand

      I don't disagree in general, but I've had a lot of success asking the LLMs to fix these things and make the code more maintainable when specifically prompted to do so. I agree that debugging and getting things working often needs supervision, guidance, and advice. And it often gets architecture a little wrong and needs nudges to see the light.

      I'm not great at this stuff, and I got tired of reviewing things and generating suggestions to improve implementations (it seemed to be repetitive a lot), but I am having good results with my latest project using simulated ecosystems with adversarial sub-projects. So there's the core project I care about with a maintainer agent/persona; an extension with an extension-developer agent/persona (the extensions provide common features built upon the core, with the perspective of being a third-party developer); and an application developer that uses both the core and extensions.

      I have them all write reports about challenges, reviewing the sub-projects they consume and complaining about awkwardness and ways the other parties could be improved. Then the "owner" of each component reviews the feedback to develop plans for me to evaluate and approve. Very often the "user" components end up complaining about complexity and inconsistency. The "owner" developers tend to add multiple ways of doing things when asked for new features, until specifically prompted to review their own code to reduce redundancy, streamline use, and improve maintainability. But they will do it when prompted, and I've been pretty happy with the code and documentation it's generating.
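
      Sketched as plain data, that setup looks roughly like this (names are illustrative, not a real framework; ask_llm stands in for a model call):

        personas = {
            "core_maintainer": {"owns": "core", "consumes": []},
            "extension_dev": {"owns": "extension", "consumes": ["core"]},
            "application_dev": {"owns": "app", "consumes": ["core", "extension"]},
        }

        def review_cycle(ask_llm):
            """Each consumer persona complains about what it depends on;
            each owner turns the complaints into a plan for human approval."""
            reports = {}
            for name, p in personas.items():
                for dep in p["consumes"]:
                    reports.setdefault(dep, []).append(ask_llm(
                        f"As {name}, review the {dep} API you depend on; "
                        "call out awkwardness and inconsistency."))
            plans = {}
            for component, feedback in reports.items():
                owner = next(n for n, p in personas.items()
                             if p["owns"] == component)
                plans[component] = ask_llm(
                    f"As {owner}, read this feedback and propose a plan:\n"
                    + "\n".join(feedback))
            return plans  # the human reviews and approves these plans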

      • bamboozled 4 hours ago

        That's not what the article is about, though. You're doing all this "engineering" to have success with the "AI"; you're not really "depending on it", which is more in the spirit of the article and what I found myself starting to do. At some point you think, I'll either do it myself or use "AI", at which point you will start to become dependent on it.

        • fluidcruft 3 hours ago

          Ah, I see what you mean. I interpret "depending on it" to have shifted. I want to depend on a particular project for my work, and if it happened to exist on github, I would have just used that. But it doesn't exist, so I have created a fake github "community" to develop the part that I will depend on. I don't want to think about a lot of the API design for things I haven't thought about yet; I just want it to work and be ergonomic.

          But my point remains specifically about the crappy code AI writes. In my experience, it will clean it up if you tell it to. There's the simple angle of complexity, and it does an okay job with that. But there's also the API-design side, and that's what the second part is about. An LLM will just add dumb-ass hacks all over the place when a new feature is needed, and that leads to a huge, confusing integration mess. Whereas with this setup, by the time I want to build an extension or an application, the API has mostly been worked out. That's the way it would work if I ran into a project on github I wanted to depend on.

  • echelon 6 hours ago

    I feel like these doom prognostications were also written during the Industrial Revolution, the advent of electricity, the invention of television, the invention of the internet, ...

    We've turned out okay.

    • pizza234 5 hours ago

      You haven't read the article, which actually says the opposite:

      > A few weeks ago, The Argument Editor-in-Chief Jerusalem Demsas asked me to write an essay about the claim that AI systems would take all of our jobs within 18 months. My initial reaction was … no?

      [...]

      > The problem is whether we will degrade our own capabilities in the presence of new machines. We are so fixated on how technology will outskill us that we miss the many ways that we can deskill ourselves.

      [..]

      > Students, scientists, and anyone else who lets AI do the writing for them will find their screens full of words and their minds emptied of thought.

    • NoOn3 6 hours ago

      At least someone always survives. For them, everything is okay...

    • HPsquared 6 hours ago

      Socrates on writing

      "For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."

      • watwut 4 hours ago

        That prediction was right, actually. We did lose our memorization capabilities. Memorization is not just rote learning; it was a kind of technology for structuring text so that it is easy to remember.

        Second, Socrates was generally arrogant in the stories. The attitude you see there was not a special disdain for writing; it was more of his general "I am better than everyone else anyway" attitude.

      • lapcat 5 hours ago

        Technically, this was the character "Socrates" in the writings of Plato.

      • bwfan123 5 hours ago

        glad you brought up a historical note.

        Some of the best thinking in history (Euclid, Newton, Einstein) happened in the pre-computer era. So, let alone AI, even computers are not necessary. Pen, paper, and some imagination/experimentation were sufficient.

        Some in tech are fear mongering to seek attention.

  • lapcat 5 hours ago

    I feel lucky now that I grew up and attended school before ChatGPT, before smartphones, before social media, before ubiquitous internet.

    I can't say that I'm totally unaffected by contemporary technology, and my attention span seems to have suffered a little, but I think I'm mostly still intact. I read most days for pleasure and have started a book club. I deliberately take long walks without bringing my smartphone; it's a great feeling of freedom, almost like going back to a simpler time.

    • majorbugger 5 hours ago

      I feel quite similar; thankfully I grew up in the 90s/early 00s, and I still love to read.

  • brador 5 hours ago

    AI is here. AGI is here. Moving those goalposts again isn’t gonna do a thing.

    The human/machine meld will continue until completed.

    Call me when a machine can set goals.