99 comments

  • greyadept 12 minutes ago

    I'd be really interested to run AI detectors on essays from years before the ChatGPT era, just to see if anything gets flagged.

  • owenpalmer 5 minutes ago

    As an engineering major who was forced to take an English class, I will say that on many occasions I purposely made my writing worse, in order to prevent suspicion of AI use.

  • selcuka 3 minutes ago

    New CAPTCHA idea: "Write a 200-word essay about birds".

  • gorgoiler 20 minutes ago

    We should have some sort of time constrained form of assessment in a controlled environment, free from access to machines, so we can put these students under some kind of thorough examination.

    (“Thorough examination” as a term is too long though — let’s just call them “thors”.)

    In seriousness, the above only really applies at the university level, where you have adults who are there with the intention to learn and then receive a final certification that they did indeed learn. Who cares if some of them cheat on their homework? They’ll fail their finals and more fool them.

    With children though, there’s a much bigger responsibility on teachers to raise them as moral beings who will achieve their full potential. I can see why high schools get very anxious about raising kids to be something other than prompt engineers.

  • prepend 12 hours ago

    My kids’ school added a new weapons scanner as kids walk in the door. It’s powered by “AI.” They trust the AI quite a bit.

    However, the AI identifies the school-issued Lenovo laptops as weapons. So every kid was flagged. Rather than stop using such a stupid tool, they just have the kids remove their laptops before going through the scanner.

    I expect people who aren't smart enough are buying “AI” products and trusting them to do the things they want, even though they don't work.

    • willvarfar 26 minutes ago

      Do you think it stupid to scan kids for weapons, or stupid to think that a metal detector will find weapons?

      • selcuka 18 minutes ago

        Not the OP, but obviously it wasn't a metal detector, otherwise it would've detected all brands of laptops as weapons. It's probably an image-based detector.

        The problem is, if it has been tested so badly that it detects Lenovo laptops as weapons, there is a good chance that it doesn't properly detect actual weapons either.

    • testfoobar 28 minutes ago

      Sometimes suboptimal tools are used to deflect litigation.

    • mazamats 7 hours ago

      I could see a student hollowing out the laptop and hiding a weapon inside to sneak it in, if that's the case.

      • hawski 35 minutes ago

        That is beyond silly. Unless students go naked they can have a weapon in a pocket.

        • setopt 24 minutes ago

          The point was that if the laptop is taken out and doesn’t go through the scanner, but the rest of the student has to go through the scanner, then the laptop is a great hiding place. Presumably that scanner can at least beep at a pocket knife.

    • tightbookkeeper an hour ago

      And they trust them more than people.

  • jmugan 12 hours ago

    My daughter was accused of turning in an essay written by AI because the school software at her online school said so. Her mom watched her write the essay. I thought it was common knowledge that it was impossible to tell whether text was generated by AI. Evidently, the software vendors are either ignorant or are lying, and school administrators are believing them.

    • clipsy 11 hours ago

      > Evidently, the software vendors are either ignorant or are lying

      I’ll give you a hint: they’re not ignorant.

    • lithos 11 hours ago

      AI does have things it consistently gets wrong, especially if you don't narrow down what it's allowed to grab from.

      The easiest for someone here to see is probably code generation. You can point at parts of it and go "this part is from a high-school level tutorial", "this looks like it was grabbed from college assignments", and "this is following 'clean code' rules in silly places" (like assuming a vector might need to be N-dimensional instead of just 3D).

    • add-sub-mul-div 11 hours ago

      Imagine how little common knowledge there will be one or two generations down the road after people decide they no longer need general thinking skills, just as they've already decided calculators free them from having to care about arithmetic skills.

    • newZWhoDis 10 hours ago

      The education system in the US is broadly staffed by the dumbest people from every walk of life.

      If they could make it elsewhere, they would.

      I don’t expect this to be a popular take here, and most replies will be NAXALT fallacies, but in aggregate it’s the truth. Sorry, your retired CEO physics teacher who you loved was not a representative sample.

      • lionkor an hour ago

        In Germany, you have to do the equivalent of a master's degree (and then some) to teach in normal public schools.

      • krick 9 hours ago

        It's not just the USA; it's pretty much universal, as far as I've seen. People like to pretend it's some sort of noble profession, but I vividly remember a conversation with recently graduated ex-classmates, where one of them was complaining that she had failed to get into every department she applied to, so she had no choice left but to apply to the department of education (I guess? I don't know the name of the American equivalent: the bachelor-level program for people who are going to be teachers). At that moment I felt suddenly validated in all my complaints about the system we had just passed through.

        • smokel 17 minutes ago

          Sounds like a self-fulfilling prophecy. We educate everyone to be the smartest person in the class, and then we don't have jobs for them. And then we complain that education is not good enough. Shouldn't we conclude that education is already a bit too good?

        • twoWhlsGud 29 minutes ago

          I went to public schools in middle class neighborhoods in California from the late sixties to the early eighties. My teachers were largely excellent. I think that was due to cultural and economic factors - teaching was considered a profession for idealistic folks to go into at the time and the spread between rich and poor was less dramatic in the 50s and 60s (when my teachers were deciding their professions). So the culture made it attractive and economics made it possible. Another critical thing we seem to have lost.

        • Gud 3 hours ago

          In some countries teaching is a highly respected profession.

          Switzerland and Finland come to mind.

          • llm_trw 44 minutes ago

            You can't eat respect.

            • benjaminfh 23 minutes ago

              In those places, salary (and good public services) follows respect.

      • JumpCrisscross 9 hours ago

        > your retired CEO physics teacher who you loved was not a representative sample

        Hey, he was Microsoft’s patent attorney who retired to teach calculus!

    • Daz1 an hour ago

      >I thought it was common knowledge that it was impossible to tell whether text was generated by AI.

      Anyone who's been around AI generated content for more than five minutes can tell you what's legitimate and what isn't.

      For example this: https://www.maersk.com/logistics-explained/transportation-an... is obviously an AI article.

      • bryanrasmussen an hour ago

        >Anyone who's been around AI generated content for more than five minutes can tell you what's legitimate and what isn't.

        to some degree of accuracy.

      • zeroonetwothree 28 minutes ago

        It’s impossible to tell AI-generated text from human writing with 100% accuracy.

  • gradus_ad 12 hours ago

    Seems like the easy fix here is move all evaluation in-class. Are schools really that reliant on internet/computer based assignments? Actually, this could be a great opportunity to dial back unnecessary and wasteful edu-tech creep.

    • dot5xdev an hour ago

      Moving everything in-class seems like a good idea in theory. But in practice, kids need more than 50 minutes of class time (assuming no lecture) to work on problems. Sometimes you will get stuck on one homework question for hours. If a student is actively working on something, yanking them away from their curiosity seems like the wrong thing to do.

      On the other hand, kids do blindly use the hell out of ChatGPT. It's a hard call: teach to the cheaters or teach to the good kids?

      I've landed on making take-home assignments worth little and making exams worth most of their grade. I'm considering making homework worth nothing and having their grade be only 2 in-class exams. Hopefully that removes the incentive to cheat. If you don't do homework, then you don't get practice, and you fail the two exams.

      (Even with homework worth little, I still get copy-pasted ChatGPT answers on homework by some students... the ones that did poorly on the exams...)

    • OptionOfT 10 hours ago

      That overall would be the right thing. Homework is such a weird concept when you think about it. Especially if you get graded on the correctness. There is no step between the teacher explaining and you validating whether you understood the material.

      Teacher explains material, you get homework about the material and are graded on it.

      It shouldn't be like that. If the work (i.e. the exercises) is important for grasping the material, it should be done in class.

      It also removes the need to hire tutors.

      • yallpendantools 10 hours ago

        > If the work (i.e. the exercises) is important for grasping the material, it should be done in class.

        I'd like to offer what I've come to realize about the concept of homework. There are two main benefits to it: [1] it could help drill in what you learned during the lecture and [2] it could be the "boring" prep work that would allow teachers to deliver maximum value in the classroom experience.

        Learning simply can't be confined to the classroom. GP's suggestion would be, in my view, detrimental to students.

        [1] can be done in class but I don't think it should be. A lot of students already lack the motivation to learn the material by themselves and hence need the space to make mistakes and wrap their heads around the concept. A good instructor can explain any topic (calculus, loops and recursion, human anatomy) well and make the demonstration look effortless. It doesn't mean, however, that the students have fully mastered the concept after watching someone do it really well. You only start to learn it once you've fluffed through all the pitfalls at least mostly on your own.

        [2] can't be done in class, obviously. You want your piano teacher to teach you rhythm and musical phrasing, hence you better come to class already having mastered notation and the keyboard and with the requisite digital dexterity to perform. You want your coach to focus on the technical aspects of your game, focus on drilling you on tactics; you don't want him having to pace you through conditioning exercises, which would be a waste of his expertise. We can better discuss Hamlet if we've all read the material and have a basic idea of the plot and the characters' motivations.

        That said, it might make sense to simply not grade homework. After all, it's the space for students to fail. Unfortunately, if it weren't graded, a lot of students would just skip it.

        Ultimately, it's a question of behavior, motivation, and incentives. I agree that the current system, even pre-AI, could only barely live up to ideals [1] and [2] but I don't have any better system in mind either, unfortunately.

    • tightbookkeeper an hour ago

      Yep. The solutions which actually benefit education are never expensive, but require higher quality teachers with less centralized control:

      - placing less emphasis on numerical grades to disincentivize cheating (hard to measure success)
      - open-response written questions (harder to teach, harder to grade)
      - reading books (hard to determine if students actually did it)
      - proof-based math (hard to teach)

      Instead we keep imagining ever more absurd surveillance systems: “what if we can track students' eyes to make sure they actually read the paragraph?”

    • radioactivist 12 hours ago

      Out-of-class evaluation doesn't mean electronic. It could be problem sets, essays, or longer-form things like projects. All of these are difficult to do in a limited time window.

      These limited time-window assessments are also (a) artificial (they don't always reflect how the person might use their knowledge later), (b) stressful (some people work better or worse with a clock ticking), and (c) subject to more variability due to the time pressure (what if you're a bit sick, have had a bad day, or are just tired during the time window?).

      • aaplok 10 hours ago

        It could also be hybrid, with an out-of-class and an in-class component. There could even be multiple steps, with in-class components aimed at both verifying authorship and providing feedback in an iterative process.

        AI makes it impossible to rely on out-of-class assignments to evaluate the kids' knowledge. How we respond to that is unclear, but relying on cheating detectors is not going to work.

    • jameslevy 12 hours ago

      The only long-term solution that makes sense is to allow students to use AI tools and to require that a log from the AI tool be submitted. Adjust the assignment accordingly, and use custom system prompts for the AI tools so that students are both learning the underlying subject and learning how to use AI tools effectively.
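
      For illustration, here is a rough sketch of how such a wrapper could be wired up, assuming the OpenAI Node SDK; the model name, prompt wording, and log file are made up for the example, not anything specified above:

      ```typescript
      import OpenAI from "openai";
      import { appendFileSync } from "node:fs";

      // Hypothetical course-specific system prompt that steers the model toward
      // tutoring rather than producing finished answers for the student.
      const SYSTEM_PROMPT =
        "You are a writing tutor. Ask guiding questions and point out weaknesses, " +
        "but never write full paragraphs for the student.";

      const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

      async function tutoredExchange(studentMessage: string): Promise<string> {
        const response = await client.chat.completions.create({
          model: "gpt-4o-mini", // placeholder model name
          messages: [
            { role: "system", content: SYSTEM_PROMPT },
            { role: "user", content: studentMessage },
          ],
        });
        const reply = response.choices[0].message.content ?? "";

        // Append every exchange to a log the student submits with the assignment.
        appendFileSync(
          "ai-usage-log.jsonl",
          JSON.stringify({ at: new Date().toISOString(), studentMessage, reply }) + "\n",
        );
        return reply;
      }
      ```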

  • SirMaster 20 minutes ago

    I guess if I were worried about this, I would just screen-record and camera-record myself doing my assignments as proof I wasn't using an LLM aid.

  • nitwit005 12 hours ago

    In some cases students have fought such accusations by showing their professor the tool flags the professor's work.

    Don't know why these companies are spending so much developing this technology, when their customers clearly aren't checking how well it works.

    • Ekaros 12 hours ago

      Aren't they making it exactly because their customers don't check it and still buy it, probably for very decent money? And always remember, the buyers are not the end users (the teachers or students) but the administrators. For them, being seen to do something about the risk of AI is more important than actually doing anything about it.

    • stouset 11 hours ago

      The companies selling these aren’t “spending so much developing the technology”. They’re following the same playbook as snake oil salesmen and people huckstering supplements online do: minimum effort into the product, maximum effort into marketing it.

  • krick 9 hours ago

    It's kinda nuts how adults have learned to trust some random algorithm in a year or two. They don't know how it works, they can't explain it, they don't care; it just works. It's magic. If it says you cheated, you cheated. You can't do anything about it.

    I want to emphasize that this isn't really about trusting magic; it's about people nonchalantly doing ridiculous stuff nowadays and apparently not being held accountable for it. For example, there were times back at school when I was "accused" of cheating, because it was the only time I liked the homework in some class and took it seriously, and it was kinda insulting to hear that there was absolutely no way I did it. But I still got my mark, because it doesn't matter what she thinks if she can't prove it, so please just sign it and fuck off, it's the last time I'm doing my homework for your class anyway.

    On the contrary, if this article is to be believed, these teachers don't have to prove anything; the fact that a coin flipped heads is considered proof enough. And everyone supposedly treats it as if it's OK. "Well, they have this system at school, what can we do!" It's crazy.

  • flappyeagle 11 hours ago

    Rather than flagging it as AI, why don't we flag whether it's good or not?

    I work with people in their 30s who cannot write their way out of a hat. Who cares if the work is AI-assisted or not? Most AI writing is super dry, formulaic, and bad. If the student doesn't recognize this, give them a poor mark for having terrible style.

    • echoangle 11 hours ago

      Because sometimes an exercise is supposed to be done under conditions that don't represent the real world. If an exam is no-calculator, you can't just use a calculator anyway on the grounds that you're going to have one when working, too. If the assignment is "write a text about XYZ, without using AI assistance", using an AI is cheating. Cheating should have worse consequences than writing something bad yourself, so detecting AI (or just not assigning unsupervised work) is still important.

    • Ekaros 10 hours ago

      Because often the goal of assessing a student is not to show that they can generate output. It is to ensure they have retained a sufficient amount of the knowledge they are supposed to retain from the course and are able to regurgitate it in a sufficiently readable format.

      Actually being able to generate good text is an entirely separate evaluation. And AI might have a place there.

  • lelandfe 12 hours ago

    The challenging thing is, cheating students also say they're being falsely accused. Tough times in academia right now. Cheating became free, simple, and ubiquitous overnight. Cheating services built on top of ChatGPT advertise to college students; Chrome extensions exist that just solve your homework for you.

    • borski 12 hours ago

      I don’t know how to break this to you, but cheating was always free, simple, and ubiquitous. Sure, ChatGPT wouldn’t write your paper, but your buddy who needed his math problem solved would. Or you could find a paper on one of countless sites on the Internet.

      • crummy 5 minutes ago

        That wasn't free; people would charge money to write essays, and essays found online would be detected as such.

      • rfrey an hour ago

        That's just not so. Most profs were in school years before the internet was ubiquitous. And asking a friend to do your work for you is simple, but far from free.

      • rahimnathwani an hour ago

        It wasn't always free. Look at Chegg's revenue trend since ChatGPT came out.

  • ec109685 an hour ago

    Y Combinator has funded at least one company in this space: https://www.ycombinator.com/companies/nuanced-inc

    It seems like a long-term losing proposition.

    • selcuka 9 minutes ago

      Nothing is a losing proposition if you can convince investors for long enough.

  • stephenbez 8 hours ago

    Are any students coming up with a process to prove their innocence when they get falsely accused?

    If I were still in school, I would write my docs in a Google Doc, which provides edit history. I could potentially also record video of myself typing the entire document, or record my screen.

    • Springtime 24 minutes ago

      I don't think there's any real way around the fundamental flaw of such systems assuming there's an accurate way to detect generated text, since even motivated cheaters could use their phone to generate the text and just iterate edits from there, using identical CYA techniques.

      That said, I'd imagine if someone resorts to using generative text their edits would contain anomalies that someone legitimately writing wouldn't have, in terms of building out the structure/drafts. Perhaps that in itself could be auto-detected more reliably.
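
      As a toy illustration of that idea (purely hypothetical thresholds and data shapes, not anything an existing tool does): a revision history dominated by a few very large insertions looks different from one built up through many small edits.

      ```typescript
      // Hypothetical heuristic: flag revision histories where most of the final
      // text arrived in a few large, paste-like insertions.
      interface RevisionEvent {
        insertedChars: number; // characters added in this revision
        deletedChars: number;  // characters removed in this revision
      }

      function looksPasteDominated(history: RevisionEvent[], finalLength: number): boolean {
        const PASTE_THRESHOLD = 500; // made-up cutoff for a "large" single insertion
        const PASTE_SHARE = 0.8;     // made-up share of the text arriving in large chunks

        const pastedChars = history
          .filter((e) => e.insertedChars >= PASTE_THRESHOLD)
          .reduce((sum, e) => sum + e.insertedChars, 0);

        return finalLength > 0 && pastedChars / finalLength >= PASTE_SHARE;
      }

      // Example: an essay built from two big pastes and a couple of tiny tweaks.
      const history: RevisionEvent[] = [
        { insertedChars: 1200, deletedChars: 0 },
        { insertedChars: 900, deletedChars: 10 },
        { insertedChars: 15, deletedChars: 5 },
      ];
      console.log(looksPasteDominated(history, 2100)); // true
      ```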

    • ec109685 an hour ago

      That’s what the person in the article did:

      “After her work was flagged, Olmsted says she became obsessive about avoiding another accusation. She screen-recorded herself on her laptop doing writing assignments. She worked in Google Docs to track her changes and create a digital paper trail. She even tried to tweak her vocabulary and syntax. “I am very nervous that I would get this far and run into another AI accusation,” says Olmsted, who is on target to graduate in the spring. “I have so much to lose.”

  • moandcompany 12 hours ago

    I'm looking forward to the dystopian sci-fi film "Minority Book Report"

    • m463 11 hours ago

      We should make an AI model called Fahrenheit 451B to detect unauthorized books.

      • moandcompany 7 hours ago

        Open Fahrenheit 451B will be in charge of detecting unauthorized books and streaming media, as well as unauthorized popcorn or bread.

  • mensetmanusman 13 hours ago

    My daughter’s 7th-grade work is 80% flagged as AI. She is a very good writer; it’s interesting to see how poorly this will go.

    Obviously we will go back to in class writing.

    • testfoobar 20 minutes ago

      I'd encourage you to examine the grading policies of the high schools in your area.

      What may seem obvious based on earlier-era measures of student comprehension and success is not the case in many schools anymore.

      Look up evidence based grading, equitable grading, test retake policies, etc.

    • unyttigfjelltol 12 hours ago

      The article demonstrates that good, simple prose is being flagged as AI-generated. Reminds me of a misguided junior high English teacher that half-heartedly claimed I was a plagiarist for including the word "masterfully" in an essay, when she knew I was too stupid to use a word like that. These tools are industrializing that attitude and rolling it to teachers that otherwise wouldn't feel that way.

    • tdeck an hour ago

      > Obviously we will go back to in class writing.

      That would be a pretty sad outcome. In my high school we did both in-class essays and homework essays. The former were always more poorly developed and more poorly written. IMO students still deserve practice doing something that takes more than 45 minutes.

    • ipaddr 12 hours ago

      She should run it through AI to rewrite it in a way that another AI doesn't detect it was written by AI.

      • testfoobar 18 minutes ago

        I've heard some students are concerned that any text submitted to an AI detector is automatically added to training sets and therefore will eventually be flagged as AI.

      • minitoar an hour ago

        Right, I thought this was just an arms race for tools that can generate output to fool other tools.

  • from-nibly 10 hours ago

    This is not something that reveals how bad AI is or how dumb administration is. It's revealing how fundamentally dumb our educational system is. It's incredibly easy to subvert. And kids don't find value in it.

    Helping kids find value in education is the only important concern here and adding an AI checker doesn't help with that.

  • kelseyfrog 10 hours ago

    The problem is that professors want a test with high sensitivity and students want a test with high specificity, and only one of them is in charge of choosing and administering the test. It's a moral hazard.
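
    To make that concrete with invented numbers (the detector's accuracy and the cheating rate below are illustrative, not measured): even a detector that sounds accurate produces a lot of false accusations once it's run over a whole class.

    ```typescript
    // Illustrative numbers only: a detector with 90% sensitivity (catches 90% of
    // cheaters) and 95% specificity (clears 95% of honest students), applied to a
    // class of 200 where 10% actually cheated.
    const students = 200;
    const cheatRate = 0.10;
    const sensitivity = 0.90;
    const specificity = 0.95;

    const cheaters = students * cheatRate;              // 20
    const honest = students - cheaters;                 // 180
    const truePositives = cheaters * sensitivity;       // 18 cheaters caught
    const falsePositives = honest * (1 - specificity);  // 9 honest students flagged

    // Probability that a flagged student actually cheated:
    const precision = truePositives / (truePositives + falsePositives);
    console.log(precision.toFixed(2)); // "0.67": a third of accusations land on honest students
    ```

    The professor sees most cheaters caught; the nine falsely flagged students experience something very different, which is exactly the asymmetry described above.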

    • ec109685 an hour ago

      Do professors really not want high specificity too? Why would they want to falsely accuse anyone?

    • tightbookkeeper an hour ago

      No. Professors want students that don’t cheat so they never have to worry about it.

      This is an ethics problem (people willing to cheat), a multicultural problem (different expectations of what constitutes cheating), and an incentive problem (credentialism makes cheating worth it).

      Those are hard problems. So a little tech that might scare students and give the professor a feeling of control is a band-aid.

  • ameister14 12 hours ago

    The article mentions 'responsible' Grammarly usage, which I think is an oxymoron in an undergraduate or high school setting. Undergrad and high school are where you learn to write coherently. Grammarly is a tool that actively works against that goal because it doesn't train students to fix their grammatical mistakes; it just fixes them for them, and they become steadily worse (and less detail-oriented) writers.

    I have absolutely no problem using it in a more advanced field where the basics are already done and the focus is on research, for example, but at lower levels I'd likely consider it dishonest.

    • borski 11 hours ago

      My wife is dyslexic; Grammarly makes suggestions, but it doesn't fix things for her. Perhaps that's a feature she doesn't have turned on?

      She loves it. It doesn’t cause her to be any less attentive to her writing; it just makes it possible to write.

      • ameister14 4 hours ago

        >It doesn’t cause her to be any less attentive to her writing; it just makes it possible to write.

        I was not really referring to accommodations under the ADA. For people that do not require accommodations, the use of them is unfair to their classmates and can be detrimental to their ability to perform without them in the future, as there is no requirement to have the accommodations available to them. This is not the case for someone with dyslexia.

        • borski an hour ago

          Fair, I can see why it looks like I confused them. I was solely using her as an example; my point is that Grammarly hasn't caused her knowledge of grammar to get worse, only better. It has taught her over time.

  • OutOfHere 10 hours ago

    I am glad I am done with schooling. I would not want to be a student in this hellscape.

    For those going to college, I strongly advise picking a department where such scanning is not performed.

    For those in public school, sue.

    • kelseyfrog 36 minutes ago

      I'm returning to complete a single class: the writing requirement. It's not that bad. You just run your paper through a third-party AI checker beforehand and then cross your fingers and hit submit. You're probably at lower risk than people who don't check. You don't have to outrun the bear, just your fellow students.

  • more_corn 13 hours ago

    A student I know texted me: the AI detector kept falsely flagging his work. “This is how I write!” I gave him some tips to sound less like AI, which is funny because we train AI with RLHF to sound more and more like humans.

  • Animats 11 hours ago

    Most of the ChatGPT-type systems have a rather blah default style. That's what you learn as a non-native speaker of the language, hence the problem for people who learned English from textbooks.

    Amusingly, you can push ChatGPT type systems into other styles of writing.

    I put in the preamble to the US constitution and asked for different styles:

    Modern:

    We, the people of the United States, come together to build a stronger, more united country. We want to create fairness for everyone, keep peace at home, ensure our safety, support each other’s well-being, and protect our freedoms for ourselves and future generations. This is why we are establishing this Constitution for the United States.

    Gun nut:

    We the People of the United States, rallying together to create a stronger, more unified nation, are all about upholding justice, keeping the peace at home, and defending our rights. We’re here to support one another, ensure our freedoms, and safeguard the blessings of liberty for ourselves and future generations. That’s why we’re putting this Constitution in place for the United States of America—because we believe in freedom, security, and our right to bear arms!

    Woke:

    We, the people of the United States, are coming together to create a more inclusive and equitable society. Our goal is to establish justice for all, promote peace within our communities, ensure safety for everyone, and uplift the well-being of every individual. We are committed to protecting our freedoms and rights, not just for ourselves, but for future generations. This is why we are enacting this Constitution for the United States of America, to foster a nation that values diversity, equality, and the dignity of all.

    Christian:

    We, the people of the United States, gather together in faith and unity to build a more perfect nation under God. Our purpose is to establish justice and promote peace in our homes and communities. We strive to protect our common security and care for one another’s well-being, guided by the principles of love and compassion. In doing so, we seek to uphold the blessings of liberty that God has granted us, not just for ourselves, but for future generations. This is why we commit to this Constitution for the United States of America, as we seek to honor His will in all that we do.

    If you're having trouble with essays being detected as AI, you can run them through an LLM to rewrite them in a different tone.

    • Animats 10 hours ago

      Xi Thought:

      We, the people of the United States, unite to forge a more harmonious society, guided by the principles of collective strength and shared responsibility. Our aim is to establish justice and maintain social stability, ensuring the well-being of all citizens through solidarity and mutual support. By safeguarding our national defense and promoting the common welfare, we embody the spirit of unity and progress. This commitment to our foundational values is why we establish this Constitution for the United States of America, as we work toward a prosperous future for all.

      MAGA:

      We, the great people of the United States, are coming together to create a stronger, more united nation. Our mission is to establish true justice, ensure peace in our communities, and defend our country with pride. We’re all about promoting the well-being of hard-working Americans and securing our freedoms for ourselves and future generations. That’s why we’re affirming this Constitution for the United States of America—because we believe in making our nation great and preserving our rights for all!

      Martin Luther King:

      We, the people of the United States, come together in a spirit of unity and purpose, striving to create a more perfect union. It is our sacred duty to establish justice and ensure tranquility in our communities, uplifting one another in love and understanding. We must provide for the common defense and promote the welfare of all, recognizing that true freedom is a collective journey. In this endeavor, we seek to secure the blessings of liberty for ourselves and future generations. Therefore, we affirm this Constitution for the United States of America as a testament to our commitment to equality, justice, and the enduring hope for a brighter tomorrow.

      Trump:

      We, the amazing people of the United States, are coming together to build a tremendous nation, folks. Our goal? To create a more perfect union—believe me, it’s going to be fantastic! We’re establishing real justice, keeping our communities safe, and defending our great country. We’re promoting the welfare of all hardworking Americans and securing our incredible freedoms for ourselves and our future generations. That’s why we’re putting this Constitution in place for the United States of America—because we’re making America great again, and nobody does it better!

      ChatGPT has automatic blithering nailed.

  • pella 11 hours ago

    related:

    Post-apocalyptic education

    What comes after the Homework Apocalypse

    by Ethan Mollick

    https://www.oneusefulthing.org/p/post-apocalyptic-education

  • rolph 13 hours ago

    Convergence will occur, measurable by an increasing frequency of false positives output by the detectors.

    • HarryHirsch 12 hours ago

      You mean model collapse, because schoolchildren will soon base their writing on the awful AI slop they have read online? That's fearsome, actually.

      We are seeing this with Grammarly already, where instead of nuance Grammarly picks the beige alternative. The forerunner was the Plain English Campaign, which succeeded in getting official documents published in imprecise language at a primary-school reading level; it's awful.

  • rowanG077 12 hours ago

    This has nothing to do with AI, but rather with proof. If a teacher told a student they cheated, the student disputed it, and in front of the dean (or whoever) the teacher could produce no proof, of course the student would be absolved. Why is some random tool (AI or not) saying they cheated, without proof, suddenly taken as truth?

    • deckiedan 12 hours ago

      The AI tool's report shown to the dean with "85% match" will be used as "proof".

      If you want more proof, you can take the essay, give it to ChatGPT, and say, "Please give me a report showing how this essay was written by AI."

      People treat AI like it's an omniscient god.

      • deepsquirrelnet 12 hours ago

        I think what you pointed out is exactly the problem. Administrators apparently don’t understand statistics and therefore can’t be trusted to utilize the outputs of statistical tools correctly.

    • JumpCrisscross 12 hours ago

      > the teacher can produce no proof

      For an assignment completed at home, on a student's device using software of a student's choosing, there can essentially be no proof. If the situation you describe becomes common, it might make sense for a school to invest in a web-based text editor that captures keystrokes and user state, and to require students to use it for at-home text-based assignments (a rough sketch of what that could look like is below).

      That, or eliminate take-home writing assignments; we had plenty of in-class writing when I went to school.
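
      A rough sketch of the kind of instrumentation such an editor might do (browser-side; the element id, endpoint, and reporting interval are invented for illustration):

      ```typescript
      // Hypothetical: attach listeners to the assignment editor and keep a
      // timestamped activity log that can be reviewed alongside the submission.
      interface EditorEvent {
        at: number;                  // ms since epoch
        kind: "keystroke" | "paste";
        length: number;              // characters added by the event
      }

      const activityLog: EditorEvent[] = [];
      const editor = document.querySelector<HTMLTextAreaElement>("#assignment")!;

      editor.addEventListener("input", (e) => {
        const ev = e as InputEvent;
        activityLog.push({
          at: Date.now(),
          kind: ev.inputType === "insertFromPaste" ? "paste" : "keystroke",
          length: (ev.data ?? "").length, // paste events may report no data here
        });
      });

      // Periodically ship the log to the school's server (endpoint is made up).
      setInterval(() => {
        navigator.sendBeacon("/api/assignment-activity", JSON.stringify(activityLog));
      }, 30_000);
      ```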

      • xnyan 11 hours ago

        >For an assignment completed at home, on a student's device using software of a student's choosing, there can essentially be no proof

        According to an undergraduate student who babysits for our child, some students are literally screen recording the entire writing process, or even recording themselves writing at their computers as a defense against claims of using AI. I don't know how effective that defense is in practice.

        • JumpCrisscross 4 hours ago

          I hate that because it implies a presumption of guilt.

    • happymellon 12 hours ago

      Unfortunately, with AI, AI detection, and schools it's all rather Judge Dredd.

      They issue the claim, the judgement and the penalty. And there is nothing you can do about it.

      Why? Because they *are* the law.

      • borski 11 hours ago

        That’s not even remotely true. You can raise it with the local board of education. You can sue the board and/or the school.

        You can sue the university, and likely even win.

        They literally are not the law, and that is why you can take them to court.

        • HarryHirsch 11 hours ago

          In real life it looks like this: https://www.foxnews.com/us/massachusetts-parents-sue-school-...

          A kid living in a wealthy Boston suburb used AI for his essay (that much is not in doubt) and the family is now suing the district because the school objected and his chances of getting into a good finishing school have dropped.

          On the other hand, you have students attending abusive online universities who are flagged by the plagiarism detector and would never think of availing themselves of the law. US law is for the rich; the purpose of a system is what it does.

          • borski 11 hours ago

            I’m not sure what “used AI” means here, and the article is unclear, but it sure does sound like he did have it write it for him, and his parents are trying to “save his college admissions” by trying to say “it doesn’t say anywhere that having AI write it is bad, just having other people write it,” which is a specious argument at best. But again: gleaned from a crappy article.

            You don’t need to be rich to change the law. You do need to be determined, and most people don’t have or want to spend the time.

            Literally none of that changes the fact that the Universities are not, themselves, the law.

            • HarryHirsch 11 hours ago

              The law is unevenly enforced. My wife is currently dealing with a disruptive student from a wealthy family background. It's a chemistry class, you can't endanger your fellow students. Ordinarily, one would throw the kid out of the course, but there would be pushback from the family, and so she is cautious, let's deduct a handful of points, maybe she gets it, and thus it continues.

              • borski 9 hours ago

                I completely agree that it is unevenly enforced. Still doesn't make universities the law.

        • zo1 11 hours ago

          That could take months of nervous waiting and who knows how many wasted hours researching, talking, and writing letters. It's the same reason most people don't return a broken $11 pot: it's cheaper and easier to just adapt and work around the problem (get a new pot) than to fix it by returning it and "fighting" for a refund.

          • borski 11 hours ago

            I agree; I am not saying I am glad this is happening. I am saying it is untrue that universities “are the law.”

            They’re not. That doesn’t make it less stressful, annoying, or unnecessary to fight them.

    • underseacables 12 hours ago

      Universities don't exactly decide guilt by proof. If their system says you're guilty, that's pretty much it.

      • borski 11 hours ago

        Source? I was accused of a couple things (not plagiarism) at my university and was absolutely allowed to present a case, and due to a lack of evidence it was tossed and never spoken of again.

        So no, you don’t exactly get a trial by a jury of your peers, but it isn’t like they are averse to evidence being presented.

        This evidence would be fairly trivial to refute, but I agree it is a burden no student needs or wants.

  • nazzzario 9 hours ago

    Guys, you can easily protect yourself from such situations by using GPTInf; here is a link: https://www.gptinf.com?fpr=join