26 comments

  • awongh 26 minutes ago

    > Automatic creation of an initial billboard: Upon starting the program, a predefined list of movies currently showing must be automatically generated, including their details (title, genre, duration, and showtimes).

    I would say that these results might be relevant for a university CS program setting, but I would make the distinction between this and actually learning to program.

    The context of this task is definitely a very contrived "Let's learn OOP" assignment that, for example, just tries to cram in class inheritance without really justifying its use in the software that's being built. It's a lazy kind of curriculum building that doesn't actually teach the students about OOP.
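    A minimal sketch of the pattern being criticized (hypothetical; the assignment's actual code isn't shown): inheritance bolted on where a plain attribute already carries the same information.

    ```python
    class Movie:
        """Base class the assignment brief implies."""
        def __init__(self, title, genre, duration_min, showtimes):
            self.title = title
            self.genre = genre              # a plain field already covers "genre"
            self.duration_min = duration_min
            self.showtimes = showtimes

    # Subclasses like these add no behavior; they exist only so the
    # assignment can say it "uses inheritance".
    class ActionMovie(Movie):
        def __init__(self, title, duration_min, showtimes):
            super().__init__(title, "Action", duration_min, showtimes)

    class ComedyMovie(Movie):
        def __init__(self, title, duration_min, showtimes):
            super().__init__(title, "Comedy", duration_min, showtimes)

    # "Automatic creation of an initial billboard" reduces to a hard-coded list.
    def initial_billboard():
        return [
            ActionMovie("Speed", 116, ["18:00", "21:00"]),
            ComedyMovie("Airplane!", 88, ["17:30", "20:30"]),
        ]
    ```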

    In that sense it's no wonder that AI is not that helpful in the context of the assignment and learning.

    I wouldn't chalk this up to "AI doesn't help you learn". I would put this in the category of, in an overly academic assignment with contrived goals, AI doesn't help the student accomplish the goals of the course. That conclusion could be equally applied to French literature 102.

    And that's very different from whether or not an AI coding assistant can help you learn to code or not. (I'm actually not sure if it can, but I think this study doesn't say anything new).

  • taurath 41 minutes ago

    I am not a student and I often wonder whether we substitute memorization for learning, as though it’s somehow more valuable to be able to write valid syntax from memory in a blank file than it is to know and practice the broader strokes of abstractions, operators, readability and core concepts which make up good software craftsmanship.

    Sometimes I’m doing something in a new-to-me language, using an LLM to give me a head start on structure and to ask questions about conventions and syntax, and wondering to myself how much I’m missing compared to having started by just reading the first half of a book on the language. I think I would probably take a lot longer to do anything useful, but I’d probably also have a deeper understanding of what I know and don’t know. But then, I can just as easily discover those fundamental concepts of a language via the right prompt. So am I learning? Am I somehow fooling myself? How?

    • thomasahle 28 minutes ago

      I'm not sure we really know how much of learning is memorization. As we memorize more stuff, we find patterns to compress it and memorize more efficiently.

    • jjmarr 38 minutes ago

      Because not everyone can truly be great at their craft, but everyone can memorize syntax.

      Schools compromise their curriculum so that every student has a chance in the interests of fairness.

      • tehjoker 28 minutes ago

        You have to know the basics to build higher-level knowledge and skills. What’s the use of high-level book learning without the ability to operationalize it?

  • calepayson a minute ago

    > Our findings reveal that students perceived AI tools as helpful for grasping code concepts and boosting their confidence during the initial development phase. However, a noticeable difficulty emerged when students were asked to work un-aided, pointing to potential over reliance and gaps in foundational knowledge transfer.

    As someone studying CS/ML this is dead on but I don't think the side-effects of this are discussed enough. Frankly, cheating has never been more incentivized and it's breaking the higher education system (at least that's my experience, things might be different at the top tier schools).

    Just about every STEM class I've taken has had some kind of curve. Sometimes individual assignments are curved, sometimes the final grade; sometimes the curve isn't a curve but some sort of extra credit. Ideally it should be feasible to score 100% in a class, but I think this actually takes a shocking amount of resources: sections, office hours, and a professor who is deeply conscious of giving out assignments that faithfully represent what students will be tested on.

    In reality, professors have research or jobs to attend to, and so do the students, and the school can only afford two hours of TA time a week. So historically the curve has been there to make up for the discrepancy between ideals and reality. It's there to make sure that great students get the grades they deserve.

    LLMs have turned the curve on its head.

    When cheating was hard, the curve was largely successful. The great students got great grades, the good students got good grades, those who were struggling usually managed a C+/B-, and those who were checked out or not putting in the time failed. The folks who cheated tended to be the struggling students, but because cheating wasn't that effective, maybe they went from a failing grade to just passing the class. A classic example is sneaking identities into a calculus test. Sure, it helps if you don't know the identities, but not knowing the identities is a great sign that you didn't practice enough. Without that practice, they still tend to do poorly on the test.

    But now cheating is easy, and I think it should change the way we look at grades. This semester, not one of my classes is curved, because there is always someone who gets a 100%. Coincidentally, that person is never who you would expect. The students who attend every class, ask questions, go to office hours, and do their assignments without LLMs tend to score in the B+/A- range on tests and quizzes. The folks who set the curve on those assignments tend to only show up for tests and quizzes, and then sit in the far back corners when they do. Just about every test I take now, there's a mad competition for those back desks. In some classes people just dispense with the desk and take a chair to the back of the room.

    Every one of the great students I know is murdering themselves to try to stay in the B+/A- range.

    A common refrain when people talk about this is "cheaters only cheat themselves," and while I think that has historically been mostly true, I think it's bullshit now. Cheating is just too easy; the folks who care are losing the arms race. My most impressive peers are struggling to get past the first round of interviews. Meanwhile, the folks who don't show up to class and casually get perfect scores are also getting perfect scores on the online assessments. Almost all the competent people I know are getting squeezed out of the pipeline before they can compete on level footing.

    We've created a system that massively incentivizes cheating and then invented the ultimate cheating tool. A 4.0 and a good score on an online assessment used to be a great signal that someone was competent. I think these next few years, until universities and hiring teams adapt to LLMs, we're going to start seeing perfect scores as a red flag.

  • borski 20 minutes ago

    Crazy idea but: what if we built an AI pair programmer that actually pair programmed? That is, sometimes it was the driver and you navigated, pretty much as it is today, but sometimes you drive and it navigates.

    I surmise that would help people learn to code better.

  • dcre an hour ago

    Interesting to see quotes but note N=20 and the methodology doesn’t seem all that rigorous. I didn’t see anything that wasn’t exactly what you would expect to hear.

  • bgwalter an hour ago

    It is notable that so many publications try to salvage "AI" ("need for new pedagogical approaches that integrate AI effectively") rather than ditch "AI" completely.

    The world worked perfectly before 2023, there is no need to outsource information retrieval or thinking.

    • Wowfunhappy an hour ago

      The world worked perfectly before 1982, there is no need for the internet.

      (…I actually kind of think this.)

      • srpinto 41 minutes ago

        Ah yes, the perfect world we had when governments could get away with anything because the press was not enough to expose their atrocities. A beautiful, perfect world, with rubella and close to 50% of the global population living in extreme poverty (compared to today's 10%).

        I see this mentality almost exclusively in Americans and/or Anglo people in general; it's incredible. If you're not that, I guess you're just too young or completely isolated from reality, and I wish you the best in the ongoing Western collapse.

        (... Though I actually hope you're joking and I just didn't catch it.)

        • daseiner1 28 minutes ago

          The last sentence in your first paragraph has nothing to do with the current state of the internet, and certainly not AI. The first sentence? Turns out governments can still get away with pretty much anything, and propaganda is easier than ever.

      • awesome_dude an hour ago

        God no.

        Speaking as someone who communicates primarily through text (high likelihood of autism), the internet was the first chance a lot of us had to... speak... and be heard.

        • deadbabe an hour ago

          That’s not a problem that generalizes to the broader population. We don’t really need the internet.

          • awesome_dude 44 minutes ago

            Screw the broader population I can speak now dammit!!!!!

          • danaris 8 minutes ago

            In other words "disabled people can suck it, because I don't care about their lives or experiences"?

            We often fall short, but as a society we do try to make sure we're accommodating disabled people when we make big changes in our systems.

    • ako an hour ago

      Where is this perfect world you’re speaking of? Surely not the one we’re living in…

    • rezz an hour ago

      Why stop there? We could do long division before the calculator and hand write before the typewriter.

      • maplethorpe an hour ago

        I do wonder if the calculator would have been as successful if it regularly delivered wrong answers.

        • analog31 26 minutes ago

          My typewriter delivered wrong answers.

        • Bootvis an hour ago

          It does if you’re a clumsy operator, and those are not rare.

          • pfortuny an hour ago

            Yes, but the machine itself is deterministic and logically sound.

            • ramesh31 43 minutes ago

              >Yes, but the machine itself is deterministic and logically sound.

              Because arithmetic itself, by definition, is.

              Human language is not. Which is why being able to talk to our computers in natural language (and have them understand us and talk back) now is nothing short of science fiction come true.

          • maplethorpe 44 minutes ago

            Even worse is if it's in the other room and your fingers can't reach the keys. It delivers no answers at all!

      • walt_grata an hour ago

        Did you learn how to do long division in school? I did, and I wasn't allowed to use calculators on a test until I was in high school, when basic math was no longer what was being taught or evaluated.

        • moregrist an hour ago

          I also learned long division in school.

          I was allowed to use a calculator from middle school onward, when we were being tested on algebra and beyond and not arithmetic.

          Some schools have ridiculous policies. Some don’t. YMMV. I don’t think that’s changed from when I was in school.