60 comments

  • 2pEXgD0fZ5cF 19 minutes ago

    I don't see the whole AI topic as a large crisis. As others have mentioned: put more emphasis on in-person tests and exams. Make it clear that homework assignments are for practice, learning, and feedback. If a person thinks that copy/pasting helps them, give them the freedom to do so, but if as a result they fail the exams and similar in-person evaluations, then so be it. Let them fail.

    I would like to hire students who actually have skills and know their material. Or even better: if AI is actually the amazing learning tool many claim, then it should enhance their learning and, as a result, help them succeed in tests without any AI assistance. If they can't, then clearly AI was a detriment to them and their learning, and they lack the ability to think critically about their own abilities.

    If everyone is supposed to use AI anyway, why should I ever prefer a candidate who is not able to do anything without AI assistance over someone who can? And if you genuinely hold the opinion that proper AI-independent knowledge is not required, then why should I hire a student at all instead of buying software solutions from AI companies (and maybe putting a random person without a relevant degree in front of them)?

    • apatheticonion 3 minutes ago

      It's a huge problem. I have several friends in university who have had assignments flagged as AI. They have had entire units failed and been forced to retake semesters, which is not cheap.

      Even if you fight it, the challenge drags into the next semester, pushing out your study timeline and adding to the associated costs.

      > put more emphasis on in-person tests and exams. Make it clear that homework assignments are for practice, learning, and feedback. If a person thinks that copy/pasting helps them

      Works for high school, not so much for university degrees. What's crazy is that universities have an incentive to flag your work as AI-generated: it forces the student to pay more money and is difficult to challenge.

      One friend now uses a dashcam to record themselves when writing an assignment so they can prove no AI was used when they are eventually flagged.

  • threemux 39 minutes ago

    In-person, proctored blue book exams are back! Sharpen those pencils, kids.

    I've been wondering lately if one of the good things to come out of heavy LLM use will be a return to mostly in-person interactions once nothing that happens online is trustworthy anymore.

    • bambax 31 minutes ago

      Yes! This "problem" is really easy to fix with in person exams and no computers in class, ever.

      • seanmcdirmid 12 minutes ago

        Until they need to start learning how to use them to get a job in the modern world?

        There should be a class that teaches you how to use AI to get things done, especially judging by how many people even on HN admit they aren’t good at it.

      • meroes 29 minutes ago

        This is the “back to office” of education. It is not a one-size-fits-all solution. There are so many remote and hybrid classes now that you guys sound outdated.

        • analog31 16 minutes ago

          That’s fair, but at the same time, expecting any learning to occur in remote classes, when fair evaluation is impossible, may also be outdated.

          • kbelder 12 minutes ago

            Learning is just as easy remotely and with AI, maybe easier. It's testing and evaluation of that learning that's difficult.

            Universities make money not by teaching, but by testing and certifying. That's why AI is so disruptive in that space.

      • idle_zealot 29 minutes ago

        But is that webscale?

  • gotrythis 13 minutes ago

    I put a day of careful thought into writing a cover letter for a job a few weeks ago. Knowing there was the potential of AI screening, I checked whether it would get flagged.

    Every detection program I tried said the letter that I personally wrote by hand was 100% AI generated!

    So, I looked for humanizer programs and ran my cover letter through a couple. Without the results in front of me at the moment, I can only fall back on my judgmental conclusion instead of solid observations...

    You need to write like an idiot to pass AI detection algorithms. The rewritten cover letter was awful, unprofessional, and embarrassing.

  • HeavyStorm an hour ago

    This ship has sailed.

    It's how it was with the internet. I grew up in the 90s, and teachers didn't know how to deal with the fact that we no longer had to go through multiple books in the library to find the information we needed. We barely needed to write it ourselves.

    Now nobody expects students not to use the internet. Same here: teachers must accept that AI can and will write papers, answer questions, and do homework. How you test students must be reinvented.

    • xeromal an hour ago

      I know a lot of teachers are going back to handwritten papers. People can still generate the text, but at least you're doing something.

      • randall an hour ago

        this is like irl cryptographic signatures for content lol

      • idiotsecant an hour ago

        This seems like just about the worst possible response. It manages to probably hurt some wrists not used to long handwriting sessions, completely avoid learning how to use and attribute AI responsibly, and still probably just results in kids handwriting AI generated slop anyway.

        • singpolyma3 an hour ago

          While I don't think it's the right solution, it will force them to at least read what they're submitting, which means some learning :)

        • ethin 22 minutes ago

          It also disadvantages people with disabilities. How exactly are they supposed to do these papers and tests? Take blindness as an example: dictate everything to someone else? Because that seems very, very inefficient and extremely error-prone.

        • BugsJustFindMe an hour ago

          > It manages to probably hurt some wrists not used to long handwriting sessions

          I'm sorry but, lmao. You cannot be serious.

          > attribute AI

          Oh no!

          > still probably just results in kids handwriting AI generated slop

          Not if they're doing it in person. And at least they then need to look at it.

        • jxf 38 minutes ago

          We've been writing with our hands for thousands of years. I suspect that on balance a Butlerian Jihad against AI slop would be perfectly fine for our hands.

    • smoyer an hour ago

      When I was in high school, we were not allowed to use a calculator for most science classes ... and certainly not for math class. In ten years, will you want to hire a student who is coming out of college without considerable experience and practice with AI?

      • AlotOfReading 33 minutes ago

        LLMs work best when the user has considerable domain knowledge of their own that they can use to guide the LLM. I don't think it's impossible to develop that experience if you've only used LLMs, but it requires a very unusual level of personal discipline. I wouldn't bet on a random new grad having that. Whereas it's pretty easy to teach people to use LLMs.

      • nkrisc 30 minutes ago

        If all they know is AI, and they supplanted all their learning with AI, why even hire them? Just use the AI.

      • ThrowawayR2 23 minutes ago

        Should I, by some miracle, be hiring, I'd be hiring those who come out of college with a solid education. As many have pointed out, AI is not immune to the "garbage in, garbage out" principle and it's education that enables the user to ask informed and precisely worded questions to the AI to get usable output instead of slop.

      • croes 34 minutes ago

        Why would I want to hire such a student? What makes him the better pick than all the other students using AI, or all the other non-students using AI?

    • croes 29 minutes ago

      This is not how AI will be in the future.

      At some point they will have to make a profit, and that will shape AI.

      Either through higher prices or through ads. Both will change how AI is used.

    • AndrewKemendo an hour ago

      I remember when websites couldn’t be considered valid sources for graded assignments

      • MattGaiser 42 minutes ago

        I was dealing with this even in 2014 when I was in high school. Even then, entire classes of government data weren’t published in any print volume.

        • AndrewKemendo 39 minutes ago

          In my case at least there was some validity to it in 1995

  • OsamaJaber an hour ago

    AI detectors punishing non-native English speakers for writing too cleanly is the part nobody talks about enough -_-

    • Rexxar 25 minutes ago

      For example, native English speakers often make phonetic spelling errors (such as its/it’s, your/you’re) that non-native English speakers usually avoid. It’s probably a sign that someone has become more fluent when they start making these types of mistakes from time to time.

      • Tade0 3 minutes ago

        Or picked up English before they learned to read and write properly.

        I'm cursed with this, as I was put in an international environment right before turning five, went back to my home country to start grade school, and only started having English classes in fifth grade.

  • zkmon an hour ago

    Teachers are also heavy users of AI. The entire staff of the academic business is using AI.

    The goals of academic assessment need to change. What are they assessing, and why? Knowledge retention skills? Knowledge correlations or knowledge expression skills? None of these is going to be useful or required from humans, just as school kids are now allowed to use calculators in the exam halls.

    The academic industry needs to redefine its purpose: identify the human abilities that will be needed in a future filled with AI and devices, then teach and assess those.

    • Espressosaurus 20 minutes ago

      A calculator is more consistent and faster at calculating than I am, but I still need to understand how to multiply, divide, add, and subtract before I can move on to more complicated math. I need to intuitively understand when I'm getting a garbage result because I did an operation wrong, moved a decimal place by accident, or made some other mistake.

      Memorization has a place, and is a requirement for having a large enough knowledge base that you can start synthesizing from different sources and determining when one source is saying something that is contradicted by what should be common knowledge.

      Unless your vision of the future is the humans in WALL-E sitting in chairs while watching screens without ever producing anything, you should care about education.

    • ThrowawayR2 11 minutes ago

      > "Knowledge retention skills? Knowledge correlations or knowledge expression skills? None of these going to be useful or required from humans."

      I'm fascinated by these claims from some LLM advocates that people will no longer need to know things, think, or express themselves properly. What value then will they bring to the table to justify their pay? Will they be like Sigourney Weaver's character in Galaxy Quest whose sole function was to repeat what the computer says verbatim? Will they be like Tom Smykowski in Office Space indignantly saying "I have people skills; I am good at dealing with people! Can't you understand that?!" Somebody, please explain.

    • kyykky 35 minutes ago

      Teaching is about moving our knowledge (the stuff we’ve collectively learned ourselves, from others and our parents [instead of everyone needing to find out on their own]) to the next generation. While some skills may become obsolete in some parts of professional life due to AI, the purpose of academia does not change much.

    • croes 31 minutes ago

      Teachers are also heavy users of solution books, but that's not a reason to give them to students.

  • grahamburger 21 minutes ago

    I've heard some teachers are assigning their students to 'grade' a paper written by an LLM. The students use an LLM to generate a paper on the topic, print it out, and then annotate in the margins by hand where it's right and wrong, including checking the sources.

  • ashleyn 2 hours ago

    I'm guessing this "humanizer" actually does two things:

    * grep to remove em dashes and emojis

    * re-run it through another LLM with a prompt to remove excessive sycophancy and invalid URL citations (rough sketch of both steps below)
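
    Concretely, something like this, purely as an illustration (the regexes and the rewrite_with_llm stub are guesses at how such a tool could look, not the actual product):

      import re

      # Pass 1: strip the obvious surface tells.
      EM_DASH = re.compile(r"\s*[\u2014\u2013]\s*")               # em and en dashes
      EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")  # rough emoji/symbol ranges

      def strip_tells(text: str) -> str:
          text = EM_DASH.sub(", ", text)   # swap dashes for plain commas
          return EMOJI.sub("", text)       # drop emoji outright

      # Pass 2 (hypothetical): hand the cleaned text to another model with a
      # rewrite prompt. Stubbed here; in practice this would be an API call.
      def rewrite_with_llm(text: str) -> str:
          prompt = ("Rewrite this without flattery or filler, and remove any "
                    "citation whose URL you cannot verify:\n\n" + text)
          return text  # placeholder: a real pass would send `prompt` to a model

      def humanize(text: str) -> str:
          return rewrite_with_llm(strip_tells(text))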

    • emmp an hour ago

      For student assignment cheating, really only the em dashes would still be in the output. But there are specific words and turns of phrase, specific constructions (e.g., 'it's not just x, but y'), and commonly used word choices. Really it's just a prim-and-proper corporate press-release voice -- not a typical university student's writing voice. I'm actually quite sure that you'd be able to easily pick out a first-pass AI-generated student assignment, with em dashes removed, from a set of legitimate assignments, especially if you are a native English speaker. You may not be able to systematically explain it, but your native-speaker intuition can do it surprisingly well.

      What AI detectors have largely done is try to formalize that intuition. They do work pretty well against simple adversaries (so basically, the laziest students), but a more sophisticated user will do a first, second, and third pass to change the voice.

    • dbg31415 2 hours ago

      You’re absolutely right!

      Ha. Every time an AI passionately agrees with me, after I’ve given it criticism, I’m always 10x more skeptical of the quality of the work.

      • glitchcrab an hour ago

        Why? The AI is just regurgitating tokens (including the sycophancy). Don't anthropomorphise it.

        • 20260126032624 an hour ago

          Because of the way regurgitation works. "You're absolutely right" primes the next tokens to treat whatever preceded that as gospel truth, leaving no room for critical approaches.

        • otikik 28 minutes ago

          Because I was only 55% sure my comment was correct and the AI made it sound like it was the revelation of the century

    • the_fall 2 hours ago

      No. No one is looking for em-dashes, except for some bozos on the internet. The "default voice" of all mainstream LLMs can be easily detected by looking at the statistical distribution of word / token sequences. AI detector tools work and have very low false negatives. They have some small percentage of false positives because a small percentage of humans pick up the same writing habits, but that's not relevant here.

      The "humanizer" filters will typically just use an LLM prompted to rewrite the text in another voice (which can be as simple as "you're a person in <profession X> from <region Y> who prefers to write tersely"), or specifically flag the problematic word sequences and ask an LLM to rephrase.

      They most certainly don't improve the "correctness" and don't verify references, though.
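
      A toy illustration of the distribution idea above, assuming you supply your own reference samples of known LLM output and known human writing (real detectors use far more sophisticated model-based statistics, e.g. per-token likelihoods, than this bigram-counting sketch):

        from collections import Counter
        import math

        def bigram_dist(text: str) -> Counter:
            # Frequency of adjacent word pairs in the text.
            words = text.lower().split()
            return Counter(zip(words, words[1:]))

        def cosine(a: Counter, b: Counter) -> float:
            # Cosine similarity between two sparse frequency vectors.
            dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
            norm = (math.sqrt(sum(v * v for v in a.values()))
                    * math.sqrt(sum(v * v for v in b.values())))
            return dot / norm if norm else 0.0

        def ai_likeness(sample: str, llm_corpus: str, human_corpus: str) -> float:
            # Positive: the sample's bigram profile sits closer to the LLM
            # reference than to the human reference. A sketch, not a detector.
            s = bigram_dist(sample)
            return cosine(s, bigram_dist(llm_corpus)) - cosine(s, bigram_dist(human_corpus))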

      • smrtinsert 9 minutes ago

        Providers are also adding hidden characters and attempting to watermark output, if memory serves.
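
        If you want to check for that sort of thing yourself, here is a minimal sketch; the character list is my guess at the usual suspects, and finding them only proves that something inserted them, not which tool:

          # Invisible characters occasionally used (or suspected) as text watermarks.
          SUSPECT = {
              "\u200b": "ZERO WIDTH SPACE",
              "\u200c": "ZERO WIDTH NON-JOINER",
              "\u200d": "ZERO WIDTH JOINER",
              "\u2060": "WORD JOINER",
              "\ufeff": "ZERO WIDTH NO-BREAK SPACE (BOM)",
              "\u00ad": "SOFT HYPHEN",
          }

          def find_hidden(text: str):
              # Return (index, name) pairs for any suspect characters found.
              return [(i, SUSPECT[ch]) for i, ch in enumerate(text) if ch in SUSPECT]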

  • postepowanieadm 2 hours ago

    The Washing-Machine Tragedy was a prophecy.

  • falloutx an hour ago

    At some point, writing 2 sentences by hand will become more acceptable than this.

    • pinnochio an hour ago

      Shortly after, AI-powered prosthetic hands that mimic your handwriting will write those 2 sentences for you.

  • mc32 an hour ago

    I get that students are using the LLM crutch - and who wouldn’t?

    What I don’t get is why they wouldn’t act like an editor and add their own voice to the writing. The heavy lifting is already done; now you just have to polish it by hand. Is that too hard to do?

    • BugsJustFindMe 43 minutes ago

      Humans tend to be both lazy and stupid and are always looking for ways to pass by with minimal effort. Kids aren't different just because they're in school.

    • MattGaiser 40 minutes ago

      It would be dull to do. Being a tone scribe would be terrible.

  • yarrowy an hour ago

    just move to 2-hour in-class writing blocks.

  • tgrowazay an hour ago

    Everyone knows about em dashes, but there is so much more!

    Here is a wiki article with all common tell-tales of AI writing: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

    • kbelder 3 minutes ago

      My fear is people will actually take that article to heart, and begin accusing people of posting AI simply for using all sorts of completely valid phrases in their writing. None of those AI tells originated with AI.

    • jurgenaut23 an hour ago

      I used em dashes heavily 15 years ago when writing my PhD thesis.

      • singpolyma3 an hour ago

        So did every author of classic literature. People who think they can spot AI writing by simple stylistic indicators alone are fooling themselves and hurting real human authors

        • A_D_E_P_T 4 minutes ago

          It's because LLMs were trained on classic literature that they began to use em-dashes in their now-famous manner.

          Seriously, highbrow literature is heavily weighted in their training data. (But the rest is Reddit, etc.) This really explains a lot, I think.

        • zeroonetwothree 33 minutes ago

          Let’s just say that when my coworkers started sending emails with tons of bold text and bullet points, having never done that before, I felt pretty justified in assuming they used AI

      • SecretDreams an hour ago

        Same, but 5 years ago. Now they're ruined for me lol.

    • AstroBen an hour ago

      I saw that someone created a skill that weaponizes that exact list to humanize the AI's output

      There are no clear signs, at least for anyone who cares to hide them