186 comments

  • neilv 3 hours ago

    > Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”

    It just created a situation in which a bunch of people with guns were told that some teen had a gun. That's a very unsafe situation that the system created, out of nothing.

    And some teen may be traumatized. Again, unsafe.

    Incidentally, the article's quotes make this teen sound more adult than anyone who sold or purchased this technology product.

    • omnipresent12 3 hours ago

      https://www2.ljworld.com/news/schools/2025/aug/07/lawrence-s...

      Another false positive by one of the leading content filters schools use: the kid said something stupid in a group chat, an AI reported it to the school, and the school contacted the police. The kid was arrested, strip-searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and attend an alternative school for a period of time. They are suing Gaggle, which claims it never intended its system to be used that way.

      These kinds of false positives are incredibly common. I interviewed at one of their competitors (Lightspeed), and they actually provide a paid service where humans review all the alerts before they are forwarded to the school or authorities. It's a paid add-on, though.

      • reaperducer 11 minutes ago

        The kid was arrested, strip-searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and attend an alternative school for a period of time.

        All he wanted was a Pepsi. Just one Pepsi. And they wouldn't give it to him.

    • random3 2 hours ago

      It’s actually “AI swarmed”, since no human reasoning, only execution, was exerted; basically an AI directing resources.

    • janalsncm 2 hours ago

      In any system, there are false positives and false negatives. In some situations (like high-recall disease screening), false negatives are much worse than false positives, because the cost of a false positive is just a more rigorous follow-up screening.

      But in this case both are bad. If it were a false negative, students might need therapy for a far more tragic reason.

      Aside from improving the quality of the detection model, we should try to reduce the “cost” of both failure modes as much as possible. Putting a human in the loop or having secondary checks are ways to do that.
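
      A minimal sketch of that kind of gating, assuming a hypothetical detector that exposes a confidence score (every name and threshold below is invented for illustration):

          # Hedged sketch: route detections by confidence instead of
          # dispatching armed police directly. Thresholds are made up.
          ALERT_THRESHOLD = 0.99   # escalate as urgent only when nearly certain
          REVIEW_THRESHOLD = 0.60  # anything above this gets a human look

          def handle_frame(frame, detector, review_queue):
              score = detector.gun_confidence(frame)  # hypothetical API, returns 0.0-1.0
              if score >= ALERT_THRESHOLD:
                  review_queue.put((frame, score, "urgent"))   # a human still confirms first
              elif score >= REVIEW_THRESHOLD:
                  review_queue.put((frame, score, "routine"))  # cheap failure mode
              # below REVIEW_THRESHOLD: just log it; nobody gets swatted over noise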

      • nkrisc an hour ago

        In this case false positives are far, far worse than false negatives. A false negative in this system does not mean a tragedy will occur, because there are many other preventative measures in place. And never mind the fact that this country refuses to even address the primary cause of gun violence in the first place: the ubiquity of guns in our society. Systems like this are what we end up with when we refuse to address the problem of guns and choose to deal with the downstream effects instead.

      • lelandfe an hour ago

        I was swatted once. Girlfriend's house. Someone called 911 and said they'd seen me kill a neighbor, drag their body into the house, and that I was now holding my gf's family hostage.

        We answered the screams at the door to guns pointed at our faces, and countless cops.

        It was explained to us that this was the restrained version. We got a knock.

        Unfortunately, I understand why these responses can't be neutered too much. You just never know.

        • collingreen an hour ago

          In this case, though, you COULD know, COULD verify with a human before pointing guns at people, or COULD not deploy a half finished product in a way that prioritizes your profit over public safety.

        • MisterTea 19 minutes ago

          This happened to a friend of mine: an ex-GF said he was on psych meds (true, though he is nonviolent with no history) and that he was threatening to kill his parents. NYPD SWAT kicked down the door to his apartment no-knock, which terrorized his elderly parents as officers pointed guns at their son (in his words, "machine guns"). And because he has psych issues and is on meds, he was forced into a cop car in front of the whole neighborhood to get a psych evaluation. He only received an apology from the cops, who said they have no choice but to follow procedure.

          Edit: should add, sorry to hear that.

        • SoftTalker 41 minutes ago

          Do the cops not ever get tired of being fooled like this? Or do they just enjoy the chance to go out in their army-surplus armored cars and pretend to be special forces?

          • scrps 5 minutes ago

            I've had convos with cops about swatting. The good ones aren't happy to kick down the door of someone who isn't about to harm anyone, but they feel they can't chance making a fatally wrong call in the cases that aren't swatting. They also have procedures to follow, and if they don't, the outcome is on them personally and potentially legally.

            As for bad cops, they look for any reason to go act like aggro billy badasses.

          • adaml_623 25 minutes ago

            This is a really good question. Sadly, the answer is that they think it's how the system is meant to work. At least, that seems to be the answer I see coming from police spokespeople.

            • MisterTea 14 minutes ago

              It's likely procedure that they have to follow (see my other post in this thread).

              I hate to say this, but I get it. Imagine a scenario where they decide "sounds phony, stand down," only for it to be real, and people are hurt or killed because the "cops ignored our pleas for help and did nothing." That would be a horrible mistake they could be liable for, never mind the media circus and PR damage. So they treat all scenarios as real and figure it out after they knock or kick in the door.

    • bilbo0s 3 hours ago

      >And some teen may be traumatized.

      Um. That's not really the danger here.

      The danger is that it's as clear as day that in the future someone is gonna be killed. That's not just a possibility. It's a certainty given the way the system is set up.

      This tech is not supposed to be used in this fashion. It's not ready.

      • neilv an hour ago

        Did you want to emphasize or clarify the first danger I mentioned?

        My read of the "Um" and the quoting was that you thought I missed that first danger, and so were disagreeing in a dismissive way.

        When actually we're largely in agreement about the first danger. But people trying to follow a flood of online dialogue might miss that.

        I mentioned the second danger because it's also significant. Many people don't understand how safety works, and will think "nobody got shot, so the system must have worked, nothing to be concerned about". But it's harder for even those people to dismiss the situation entirely, when the second danger is pointed out.

      • wat10000 3 hours ago

        I fully agree, but we also really need to get to a place where drawing the attention of police isn't an axiomatically life-threatening situation.

        • Zigurd 2 hours ago

          If the US wasn't psychotic, not all police would have to be armed, and not every police response would be an armed response.

          • wlesieutre 4 minutes ago

            Even if not all police were armed, the response to "AI said someone has a gun" would always be the armed police

      • krapp 4 minutes ago

        Americans are killed by police all the time, and by other Americans. We've already decided as a society that we don't care enough to take the problem seriously. Gun violence, both public and from the state, is accepted as unavoidable and defended as a necessary price to pay to live in a free society[0]. Having a computer call the shots wouldn't actually make much of a difference.

        Hell, it wouldn't even move the needle on racial bias much because LLMs have already shown themselves to be prejudiced against minorities due to the stereotypes in their training data.

        [0] Even though no other free society has to pay that price, but whatever.

      • akoboldfrying 3 hours ago

        > The danger is that it's as clear as day that in the future someone is gonna be killed.

        This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.

        I'm not downplaying the risks. I'm saying that we should remember that almost everything has risks and benefits, and as a society we decide for or against using/doing them based mostly on the ratio between those two things.

        So we need some data on the rates of false vs. true detections here. (A value judgment is also required.)

        • GuinansEyebrows 2 hours ago

          > This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.

          huh, i can think of at least one recent example of a popular figure using this kind of argument to extreme self-detriment.

  • froobius 3 hours ago

    Stuff like this feels like some company has managed to monetize an open-source object detection model like YOLO [1], something that could be cobbled together relatively easily, and then sold it as advanced AI capabilities. (You'd hope they'd have at least fine-tuned it on a good training dataset.)

    We've now got a model out there that, as we've just seen, has put someone's life at risk... Does anyone apart from that company actually know how accurate it is? What it was trained on? Its false positive rate? If we're going to roll out systems like this, shouldn't publishing those stats, and details of the training data, be mandatory? (A rough sketch of how little it takes to cobble a detector together is below.)

    [1] https://arxiv.org/abs/1506.02640
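
    For what it's worth, the generic starting point really is only a few lines. A sketch assuming the ultralytics package and an off-the-shelf COCO-pretrained model; note COCO doesn't even include a "gun" class, so a real product would live or die by its private fine-tuning data:

        # Sketch: how little code a generic off-the-shelf detector takes.
        # Assumes `pip install ultralytics` (YOLO weights pretrained on COCO).
        from ultralytics import YOLO

        model = YOLO("yolov8n.pt")                      # small pretrained model, 80 COCO classes
        results = model("camera_frame.jpg", conf=0.25)  # confidence threshold for detections

        for box in results[0].boxes:
            label = model.names[int(box.cls)]           # class id -> human-readable name
            print(label, float(box.conf))               # e.g. "person 0.91"

    Everything that matters here (the training set, the threshold tuning, the measured false positive rate) is invisible in that snippet, which is rather the point.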

    • EdwardDiego 35 minutes ago

      And it feels like they missed the "human in the loop" bit. One day this company is likely to find itself on the receiving end of a wrongful-death lawsuit.

  • tartoran 3 hours ago

    "“We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident,” the principal wrote. “Our counselors will provide direct support to the students who were involved.”"

    Make them pay money for false positives instead of offering direct support and counselling. This technology is not ready for production; it should be in a lab, not in public buildings such as schools.

    • xbar 4 minutes ago

      Charge the superintendent with swatting.

      Decision-maker accountability is the only thing that halts bad decision-making.

    • dekken_ 2 hours ago

      > Make them pay money

      It already cost money, in the time and resources that were misappropriated.

      There needs to be resignations, or jail time.

      • SAI_Peregrinus an hour ago

        The taxpayers collectively pay the money, the officers involved don't (except for that small fraction of their income they pay in taxes that increase as a result).

    • akoboldfrying 3 hours ago

      > Make them pay money for false positives instead of direct support and counselling.

      Agreed.

      > This technology is not ready for production

      No one wants stuff like this to happen, but nearly all technologies have risks. I don't consider that a single false positive outweighs all of its benefits; it would depend on the rates of false and true positives, and the (subjective) value of each (both high in this case, though I'd say preventing 1 shooting is unequivocally of more value than preventing 1 innocent person being unnecessarily held at gunpoint).

      • neuralRiot an hour ago

        I think I’ve said this too many times already, but the core problem here, and with the “AI craze” generally, is that nobody really wants to solve problems. What they want is a marketable product, and AI seems to be the magic wrench that fits all the nuts. Since most people don’t really know how it works or what its limitations are, they happily buy the “magic dust”.

      • Zigurd 2 hours ago

        In the US, cops kill more people than terrorists do. As long as your quantified values take that into account.

  • jawns 3 hours ago

    I expect that John Bryan -- who produces content as The Civil Rights Lawyer https://thecivilrightslawyer.com -- will have something to say about it.

    He frequently criticizes the legality of police holding people at gunpoint based on flimsy information, because the law considers it a use of force, which needs to be justified. For instance, he's done several videos about "high risk" vehicle stops, where a car is mistakenly flagged as stolen, and the occupants are forced to the ground at gunpoint.

    My take is that if both the AI automated system and a "human in the loop" police officer looked at the picture and reasonably believed that the object in the image was a gun, then the stop might have been justified.

    But if the automated system just sent the officers out without having them review the image beforehand, that's much less reasonable justification.

    • cyanydeez 3 hours ago

      Someday there'll be a lawyer in court telling us how strong the AI evidence was because companies are spending billions of dollars on it.

      • mothballed 3 hours ago

        Or they'll tell us police have been known to start shooting because an acorn fell, so the AI shouldn't be held to a higher standard and is possibly an improvement.

    • hinkley 2 hours ago

      Is use of force without justification automatically excessive force or is there a gray area?

  • mentalgear 4 hours ago

    Ah, the coming age of Palantir's all-seeing platform, and Peter Thiel becoming the shadow emperor. Too bad non-deterministic ML systems are prone to errors that risk lives when applied wrongly to crucial parts of society. But in an authoritarian state, those will be hidden away anyway, so there's nothing to see here: move along, folks. Surveillance and authoritarianism go hand in hand; ask China. It's important to protest these methods and push lawmakers to act against them now, before it's too late.

    • MiiMe19 3 hours ago

      I might be missing something but I don't think this article isn't about palantir or any of their products

      • joomla199 an hour ago

        This comment has a double negative, which makes it a false positive.

      • yifanl 3 hours ago

        You're absolutely right, Palantir just needs a different name and then they'd have no issues.

    • seanhunter 3 hours ago

      The article is about Omnilert, not Palantir, but don’t let the facts get in the way of your soapbox rant.

      • mzajc 3 hours ago

        Same fallible systems, same end goal of mass surveillance.

  • rolph 4 hours ago

    > Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”

    Prioritize your own safety by not attending any location fitted with such a system, or deemed to be such a dangerous environment that such a system is desired.

    the AI "swatted" someone.

    • etothet 3 hours ago

      The corporate version of "It's a feature, not a bug."

    • bilbo0s 3 hours ago

      Calling it today. This company is going to get innocent kids killed.

      How many packs of Twizzlers, or Doritos, or Snickers bars are out there in our schools?

      First time it happens, there will be an explosion of protests. Especially now that the public knows that the system isn't working but the authorities kept using it anyway.

      This is a really bad idea right now. The technology is just not there yet.

      • mothballed 2 hours ago

        And then there are plenty of bullies who might put a sticker with a picture of a gun on someone's back, knowing it will set off the image recognition. It's only a matter of time until they figure that out.

        • withinboredom 44 minutes ago

          When I was a kid, we made rubber-band guns all the time. I’m sure that would set it off too.

      • mrguyorama 2 hours ago

        >First time it happens, there will be an explosion of protests.

        Why do you believe this? In the US, cops will cower outside a school while a gunman actively murders children, forcibly detain parents who wish to go in if the cops won't, and then voters will re-elect everyone involved.

        In the US, an entire segment of the population will send you death threats claiming you are part of some grand (democrat of course) conspiracy for the crime of being a victim of a school shooting. After every school shooting, republican lawmakers wear an AR-15 pin to work the next day to ensure you know who they care about.

        Over 50% of the country blamed the protesting students at Kent State for daring to be murdered by the National Guard.

        Cops can shoot people in broad daylight, in the back, with no justification or reasonable cause, or can even barge into entirely the wrong house and open fire on homeowners exercising their basic right to protect themselves from strangers invading their homes. And as long as the people who die are mostly Black, half the country will spout crap like "they died from drugs" or "they once sold a cigarette" or "he stole Skittles" or "they looked at my wife wrong," while the cops take selfies reenacting the murder for laughs and talk about how terrified they are, thanks to BS "training" that makes them treat every stranger as a wanted murderer armed to the teeth. The leading cause of death for cops is still, like, heart disease, of course.

        Trump sent unmarked forces to Portland and abducted people off the street into unmarked vans and I think we still don't really know what happened there? He was re-elected.

    • nyeah an hour ago

      Clearly it did not prioritize human safety.

  • tencentshill 4 hours ago

    "rapid human verification." at gunpoint. The Torment Nexus has nothing on these AI startups.

    • palmotea 3 hours ago

      Why did they waste time verifying? The police should have eliminated the threat before any harm could be done. Seconds count when you're keeping people safe.

    • drak0n1c 3 hours ago

      The dispatch relayer and responding officers should at least have ready access to a screen where they can see a video/image of the raw footage that triggered the AI alert. If it is a false alarm, they will better see it and react accordingly, and if it is a real threat they will better understand the initial context and who may have been involved.

      • ggreer 3 hours ago

        According to a news article[1], a human did review the video/image and flagged it as a false positive. It was the principal who told the school cop, who then called other cops:

        > The Department of School Safety and Security quickly reviewed and canceled the initial alert after confirming there was no weapon. I contacted our school resource officer (SRO) and reported the matter to him, and he contacted the local precinct for additional support. Police officers responded to the school, searched the individual and quickly confirmed that they were not in possession of any weapons.

        What's unclear to me is the information flow. If the Department of School Safety and Security recognized it as a false positive, why did the principal alert the school resource officer? And what sort of telephone game happened to cause local police to believe the student was likely armed?

        1. https://www.wbaltv.com/article/student-handcuffed-ai-system-...

        • wat10000 3 hours ago

          Sounds like a "better safe than sorry" approach. If you ignore the alert on the basis that it's a false positive, and then it turns out it really was a gun and the person shoots somebody, you're going to get sued into the ground, fired, name plastered all over the media, etc. On the other hand, if you call in the cops and there wasn't a gun, you're fine.

          • spankibalt 2 hours ago

            > "On the other hand, if you call in the cops and there wasn't a gun, you're fine."

            Yeah, cause cops have never shot somebody unarmed. And you can bet your ass that the possible follow-up lawsuit to such a debacle's got "your" name on it.

            • wat10000 an hour ago

              Good luck suing somebody for calling the police.

              • giardini 2 minutes ago

                In Texas filing a false report is a crime and can result in fines and/or imprisonment. Details:

                https://legalclarity.org/false-report-under-the-texas-penal-...

                Furthermore, anyone who files a false report can be sued in civil court.

              • mothballed 29 minutes ago

                For reports on child welfare, it is often illegal to release the name of the tipster. This is commonly taken advantage of by disgruntled exes or in custody disputes.

          • Zigurd 2 hours ago

            Ask a black teenager about being fine.

    • Etheryte 2 hours ago

      Next up, a captcha that verifies you're not a robot by swatting you and checking at gunpoint.

  • proee 4 hours ago

    Walking through TSA scanners, I always get that unnerving feeling I'll get pulled aside. 50% of the time they flag my cargo pants because of the zipper pockets; there is nothing in them, but the scanner doesn't like them.

    Now we get the privilege of walking by AI security cameras placed in random locations, hoping they don't flag us.

    There's a ton of money to be made with this kind of global frisking, so lots of pressure to roll out more and more systems.

    How does this not spiral out of control?

    • mpeg 3 hours ago

      To be fair, at least you can choose not to wear the cargo pants.

      A friend of mine once got pulled aside for extra checks and questioning after he had already gone through the scanners, because he was waiting for me on the other side to walk to the gates together and the agent didn't like that he was "loitering" – guess his ethnicity...

      • stavros 3 hours ago

        How is it fair to say that? That's some "why did you make me hurt you"-level justification.

        • mpeg 3 hours ago

          No, it's not.

          I have shoes that I know always beep on the airport scanners, so if I choose to wear them I know it might take longer or I might have to embarrassingly take them off and put them on the tray. Or I can not wear them.

          Yes, in an ideal world we should all travel without all the security theatre, but that's not the world we live in. I can't change the way airports work, but I can wear clothes that make it faster, I can put my liquids in a clear bag, I can have bags that make it easy to take out electronics, etc. Those things I can control.

          But people can't change their skin color, name, or passport (well, not easily), and those are also all things you can get held up in airports for.

          • stavros 2 hours ago

            That's true, if you're saying "I can at least avoid being assaulted by the shitty system", I just want to point out that it is a shitty system.

            • mpeg 2 hours ago

              I fully agree with you on that, it is a shitty system :)

      • franktankbank 3 hours ago

        > guess his ethnicity...

        Not sure, but I bet they were looking at their feet kinda dusting off the bottoms, making awkward eye contact with the security guard on the other side of the one way door.

    • walkabout 4 hours ago

      I already adjust my clothing choices when flying to account for TSA's security theater make-work bullshit. Wonder how long before I'm doing that when preparing to go to other public places.

      (I suppose if I attended pro sports games or large concerts, I'd be doing it for those, too)

      • hinkley 2 hours ago

        I was getting pulled out of line in the 90's for having long hair. I didn't dress in shitty clothes or fancy ones, and I didn't look funny; it was just the hair, which got regular compliments from women.

        I started looking at people trying to decide who looked juicy to the security folks and getting in line behind them. They can’t harass two people in rapid succession. Or at least not back then.

        The one I felt most guilty about, much later, was a Filipino woman with a Philippine passport. Traveling alone. Flying to Asia (super suspect!). I don't know why I thought they would tag her, but they did. I don't fly well, and more stress just escalates things, so anything that makes my day a tiny bit less shitty and isn't rude, I'm going to do. But her day would probably have been better for not getting searched than mine was.

    • JustExAWS 3 hours ago

      Getting pulled aside by TSA for secondary screening is nowhere in the ballpark of being rushed at gunpoint as a teenager and told to lie down on the ground, where one false move will get you shot by a trigger-happy cop who probably won't face any consequences - especially if the innocent victim is a Black male.

      In fact, they will probably demonize the victim to find an excuse for why he deserved to get shot.

      • proee 3 hours ago

        I wasn't implying TSA cargo-pant-groping is comparable. My point is to show the escalation in public-facing systems. We have been dealing with the TSA. Now we get AI scanners. What's next?

        Also, no need to escalate this into a race issue.

    • malux85 3 hours ago

      Speak up citizens!

      Email your state congressman and tell them what you think.

      Since (pretty much) nobody does this, if a few hundred people do it, they will sit up and take notice. It takes fewer people than you might think.

      Since coordinating this with a bunch of strangers (i.e. the public) is difficult, the most effective approach is to normalize speaking up in our culture. Of course, normalizing it will increase the incoming comm rate, which will slowly decrease the effectiveness, but even past that point it's better than where we are now, which is silent public apathy.

      • anigbrowl 3 hours ago

        If that's the case, why do people in Congress keep voting for things their constituents don't like? When they get booed at town halls they just dismiss it as being the work of paid activists.

        • actionfromafar 2 hours ago

          Yeah, Republicans hide from townhalls. Most of them have one constituent, Trump.

    • jason-phillips 3 hours ago

      I got pulled aside because I absentmindedly showed them my concealed carry permit, not my driver's license. I told them I was a consultant working for their local government and was going back to Austin. No harm no foul.

      • oceanplexian 3 hours ago

        If the system used any kind of logic whatsoever, a CCW permit would not only let you bypass airport security but also let you carry in the airport (speaking as both a pilot and a permit holder).

        That would probably eliminate the need for the TSA security theater, so it will probably never happen.

        • mothballed 3 hours ago

          You can carry in the airport in AZ without a permit, in the unsecured areas. I think there was only one brouhaha, because some particularly bold guy did it openly with a rifle (can't remember if there's more to the story).

        • some_random 3 hours ago

          The point of the security theater is to assuage the 95th percentile scared-of-everything crowd, they're the same people who want no guns signs in public parks.

          • jerlam 3 hours ago

            That may have been true 25 years ago. All the rules are now mostly an annoyance and don't reassure anyone.

            There weren't a lot of people voicing opposition to TSA's ending of the shoes off policy earlier this year.

            • bediger4000 3 hours ago

              You're right, not a lot of people objected to TSA ending the no-shoes safety rule, and it's a shame. I certainly objected and tried to make my objections known, but apparently 23 or 24 years of the iconic custom of taking shoes off went to waste because the TSA decided to slack off.

          • mrguyorama 2 hours ago

            No.

            Right from the beginning it was a handout to groups who built the scanning equipment, who were basically personal friends with people in the admin. We paid absurd prices for niche equipment, a lot of which was never even deployed and just sat in storage.

            Several of the hijackers were literally given extended searches by security that day.

            A reminder that what actually stopped hijackings (like, nearly entirely) was locking the cabin door, which was always doable and has never been breached. Not only did this stop terrorist hijackings, it stopped the more casual hijackings that used to be normal, and it could also stop "inside man" style hijackings like the one with a disgruntled FedEx pilot. It was nearly free to implement, always available, harms no one's rights, doesn't turn airport security into a juicy bombing target, doesn't slow down an important part of the economy, and doesn't invent a massive bureaucracy and LEO arm in a new American agency whose goal is suppressing domestic problems and which has never done anything useful. Keep in mind, shutting the cockpit door is literally how the terrorists protected themselves from being stopped, and it's the reason Flight 93 couldn't be recovered.

            TSA is utterly ineffective. They have never stopped an attack, regularly fail their internal audits, the jobs suck, and they pay poorly and provide minimal training.

    • more_corn 3 hours ago

      Why don’t you pay the bribe and skip the security theater scanner? It’s cheap. Most travel cards reimburse for it too.

      • proee 3 hours ago

        I'm sure CLEAR is already having giddy discussions about how they can charge you for pre-verified access to walk around in public. We can all wear CLEAR-certified dog tags so the cops can hassle the non-dog-tagged people.

    • dheera 4 hours ago

      The TSA scanners also trigger easily on crotch sweat.

      • hsbauauvhabzb 3 hours ago

        I enjoy a good grope, so I’ll keep that in mind the next time I’m heading into the us.

  • crazygringo an hour ago

    It sounds like the police mistook it as well:

    > “They showed me the picture, said that looks like a gun, I said, ‘no, it’s chips.’”

    So AI did the initial detection, but police looked at it and agreed. We don't see the image, but it probably did look like a gun because of a weird shadow or something.

    Fundamentally, this isn't really any different from a person seeing someone with what looks like a gun and calling the cops, only it turns out the person didn't see it clearly.

    The main issue is just that with increased numbers of images, there will be an increase in false positives. Can this be fixed by including multiple images, e.g. from motion of the object, so police (and the AI) can better eliminate false positives before traumatizing some poor teen?
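
    One cheap version of that idea, sketched below: require several consecutive frames to agree before anything is escalated. The window size and threshold are made-up numbers, not anything the vendor has described:

        from collections import deque

        # Hedged sketch: temporal voting over a sliding window of frames.
        # A gun-shaped shadow in one frame shouldn't trip an alert; an
        # object actually held in view for several seconds would.
        WINDOW = 10       # consider the last 10 frames
        MIN_POSITIVE = 8  # how many must agree before escalating

        recent = deque(maxlen=WINDOW)

        def should_escalate(frame_is_positive: bool) -> bool:
            recent.append(frame_is_positive)
            return sum(recent) >= MIN_POSITIVE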

  • Havoc 4 hours ago

    >the system “functioned as intended,”

    Behold - a real-life example of a "Not Hotdog" system, except this one is gun / not-a-gun.

    Except the fictional one from the series was more accurate...

  • shaky-carrousel 3 hours ago

    He could easily have been murdered. It's far from the first time that a bunch of overzealous cops murdered a kid. I would never, ever in my life set foot in a place that sends armed cops after me so easily. That school is extremely dangerous.

  • kayge an hour ago

    I wonder if the AI correctly identified it as a bag of Doritos, but was also trained on the commercial[0] where the bag appears to beat up a human (his fault for holding on too tight) and then it destroys an alien spacecraft.

    [0] https://www.youtube.com/watch?v=sIAnQwiCpRc

  • macintux 4 hours ago

    I think the most amazing part is that the school doubled down on the mistake by parroting the corporate line.

    I expect a school to be smart enough to say “Yes, this is a terrible situation, and we’re taking a closer look at the risks involved here.”

    • JKCalhoun 3 hours ago

      Lawyer's advice?

      • macintux 3 hours ago

        I would think "no comment" would be safer/smarter than "yeah, your kids are at risk of being shot by police by attending our school, deal with it".

  • AuthAuth 4 hours ago

    What is happening in the world? There should be some liability for this, but nothing will happen.

    • SoftTalker 4 hours ago

      Not sure nothing will happen. Some trial lawyers would love to sue a city, a school system, and an AI surveillance company over "trauma, anxiety, and mental pain and suffering" caused by the incident. There will probably be a settlement that nobody ever hears about.

      • wat10000 3 hours ago

        A settlement paid by the taxpayers with no impact at all on anyone actually responsible.

    • StopDisinfo910 3 hours ago

      The world is doing fairly OK, thank you. The US, however, I'm not so sure about, as people here are apparently more concerned with the AI malfunction than with the idea that it's somehow sensible to live-monitor high schools for gun threats.

      • AuthAuth an hour ago

        It's not just the US. China runs the same level of surveillance, and it's being implemented all throughout Europe, Africa, and Asia. This is becoming the norm.

      • JustExAWS 3 hours ago

        So you’re okay with trigger happy cops forcing a teenager to the ground because he had a bag of Doritos?

        • StopDisinfo910 3 hours ago

          No, I think it’s crazy that people somehow think it’s rational to video-monitor kids and be worried they have actual firearms.

          I think that’s a level of f-ed up which is so far removed from questioning AI that I wonder why people even tolerate it and somehow seem to view the premise as normal.

          The cop thing is just icing on the cake.

    • tamimio 3 hours ago

      Law enforcement officers, judicial officials, social workers, and the like generally maintain qualified immunity from liability in the course of their work. Take this case, for example, in which judges and social workers allegedly failed to properly assess a mother's fitness for child custody despite repeated indicators suggesting otherwise. The child was ultimately placed in the mother's care and was later killed execution-style (not due to negligence).

      https://www.youtube.com/watch?v=wzybp0G1hFE

      • eastbound 3 hours ago

        Not applicable. As a society we’ve countless times chosen to favor the right of the mother to keep children above the rights of other humans. Most children are killed in the home of the mother (i.e. either by the mother, or where partner choice would have avoided it while the father was available), or even worse, as in the Anders Breivik situation (father available with a stable job and prospects in life, but custody refused; the child grew up a mass murderer, as always).

  • johnnyApplePRNG 4 minutes ago

    Who knew eating Doritos could make you a millionaire?

    I hope this kid gets what he deserves.

    What a tragedy. I'm sure racial profiling on behalf of the AI and the police had absolutely nothing to do with it.

  • throw7 3 hours ago

    If false positives are known to happen, then you design a system where the image is vetted before telling the cops the perpetrator is armed. The company is basically swatting, but I'm sure they'll never be held liable.

  • ben_w 4 hours ago

    Memories of Jean Charles de Menezes come to mind: https://en.wikipedia.org/wiki/Killing_of_Jean_Charles_de_Men...

    • Gibbon1 3 hours ago

      That was my first thought as well. A worry is that police officers make mistakes, which leads to anywhere from hapless people getting terrorized to people getting harmed or killed. The bad thing about AI is that it'll allow police to escape responsibility. A human who realizes they made a mistake can admit it and walk it back, but if the AI says you had a gun, it won't walk that back: the AI said he had a gun, but when we checked, he didn't have it anymore.

      In the Menezes case the cops were playing a game of telephone that ended up with him being shot in the head.

  • neverkn0wsb357 an hour ago

    It’s unsurprising, since this kind of classification is only as good as the training data.

    And police do this kind of stuff all the time (or at the very least you hear about it a lot if you grew up in a major city).

    So if you’re gonna automate broken systems, you’re going to see a lot more of the same.

    I’m not sure what the answer is, but I definitely feel that “security” systems like this, purchased and rolled out this way, need to be highly regulated and coupled with extreme accountability and consequences for false positives.

  • fritzo 3 hours ago

    > “They didn’t apologize. They just told me it was protocol. I was expecting at least somebody to talk to me about it.”

    I wonder how effective an apology and explanation would have been? Just some respect.

    • chasd00 an hour ago

      The school admin has no understanding of the tech and only the dimmest comprehension of what happened. Asking them to do anything besides what the tech company told them to do is asking wayyy too much.

    • cool_man_bob 3 hours ago

      Effective at what? No one is facing any consequences anyway.

      • hn_go_brrrrr 3 hours ago

        More's the pity. The school district could use some consequences.

      • throwaway173738 3 hours ago

        Except for the kids who experienced the “rapid human verification” firsthand.

        • mothballed 3 hours ago

          Not a bad point, but a fake apology is worse than none.

          • eastbound 3 hours ago

            Maybe an apology from the AI?

  • zkmon an hour ago

    At least there is a check done by humans, in a human way. What if this human check is removed in the future, once AI decisions are deemed to no longer require human inspection?

  • jmcgough 3 hours ago

    The guidance counselor does not have the training or time to "fix" the trauma you just gave this kid and his friends. Insane to put minors through this.

  • phkahler 4 hours ago

    >> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”

    No. If you're investigating someone and have existing reason to believe they are armed, then this kind of false positive might be prioritizing safety. But in general surveillance of a public place, IMHO you need to prioritize accuracy, since false positives are very bad. This kid was one itchy trigger-pull away from death over nothing; that's not erring on the side of safety. You don't have to catch every criminal by putting everyone under a microscope; you should be catching the blatantly obvious ones at scale, though.
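
    The base-rate arithmetic makes that concrete. A back-of-envelope sketch, with every number invented for illustration:

        # Hedged sketch: even a very specific detector is mostly wrong
        # when real guns are vanishingly rare in the camera feed.
        frames_per_day = 100_000      # hypothetical per-school frame volume
        p_gun = 1 / 1_000_000         # chance a frame really shows a gun
        false_positive_rate = 0.0001  # 99.99% specificity, generously assumed

        true_alerts = frames_per_day * p_gun                               # 0.1/day
        false_alerts = frames_per_day * (1 - p_gun) * false_positive_rate  # ~10/day

        print(f"chance an alert is real: {true_alerts / (true_alerts + false_alerts):.1%}")
        # ~1.0% - nearly every armed response lands on an innocent person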

    • blueflow 3 hours ago

      The perceived threat of government forces assaulting and potentially killing me for reasons I have no control over: this is the kind of stuff that terminates the social contract. I'd want a new state that protects me from such stuff.

  • programjames an hour ago

    The model seems pretty shitty. Does it only look on a frame-by-frame basis? Literally one second of video context and it would never make that mistake.

  • BeetleB an hour ago

    The "AI mistake" part is a red herring.

    The real question is: Would this have happened in an upper/middle class school.

    The student has dark skin, and is attending a school in a crime-ridden neighborhood.

    Were it a white student in a low crime neighborhood, would they have approached him with guns drawn?

    The AI failure is masking the real problem - bad police behavior.

  • aussieguy1234 19 minutes ago

    Inflicting trauma on a harmless human in the name of the "safety of others" is never OK. The victim here was not unharmed; he's likely to end up with PTSD and all the mental health issues that come with it.

    I hope they sue the police department over this.

  • anothernewdude 20 minutes ago

    America does American things.

  • gnarlouse an hour ago

    And so begins the ending of the "unfinished fable of the sparrows"

  • kirykl 3 hours ago

    Wouldn’t have thought an AI assessment of a security image is enough for probable cause.

  • 1970-01-01 2 hours ago

    Very ripe for a lawsuit. I would expect lawyers to be calling daily.

  • nullbyte808 2 hours ago

    I would get my GED at that point. Screw that school.

  • adam12 an hour ago

    This is what we get instead of reasonable gun control laws.

    • 15155 33 minutes ago

      You're free to (attempt to) amend the Second Amendment, but the Supreme Court of the United States has already affirmed and reaffirmed that individual ownership of firearms in common use is a right.

      What do you propose that is "reasonable" given the frameworks established by Heller, McDonald, Caetano, and Bruen?

      I can 3D print or mill basically every item you can imagine prohibiting at home: what exactly do laws do in this case?

  • jmyeet an hour ago

    There are two basic ways AI can be used:

    1. To enhance human productivity; or

    2. To replace humans.

    Companies, particularly in the US, very much want to go with (2), and part of the reason they can is that there are zero consequences for incidents like this.

    A couple of examples spring to mind:

    1. The UK Post Office (Horizon) scandal, where a bad accounting system accused subpostmasters of theft, some of whom committed suicide over the allegations. Those allegations were later proven false, and it was the system's fault. IMHO the people who signed off on and deployed this should be charged with negligent homicide; and

    2. The Hertz case, where people who had returned cars were erroneously flagged as car thieves and reported to police. This created hell for people who would often end up with warrants they had no idea about, and who would be detained on random traffic stops over a car that was never stolen.

    Now, these aren't AI, but just like the Doritos case here, the principle is the same: companies are trying to replace people with computers. In all cases, a human should be responsible for reviewing any such complaint. In the Hertz case, a human should check whether the car was actually stolen.

    In the Post Office situation, the system needed to show its work. Deployment should have run against the existing accounting system, with discrepancies between the two investigated for bugs until the new system was proven correct. Particularly in the early stages, a forensic accountant (if necessary) should have verified that funds were actually stolen before any criminal complaint was filed.

    And if "false positive" criminal complaints are filed, the people who allowed that to happen, if negligent (and we all know they are), should themselves be criminally charged.

    We are way too tolerant of black box systems that can result in significant harm or even death to people. Show your work. And make a human put their name and reputation to any output of such systems.

  • mchannon 2 hours ago

    In 1987, Paul Verhoeven predicted exactly this in the original RoboCop.

    ED-209 mistakenly viewed a young man as armed and blew him away in the corporate boardroom.

    The article even included an homage to:

    “Dick, I’m very disappointed in you.”

    “It’s just a small glitch.”

  • gdulli 4 hours ago

    The only way we could have foreseen this was immediately.

  • vezycash 3 hours ago

    With this high a level of hallucination, cops need to reach for tranquilizers more. If the student had reached for his bag just before the cops arrived, BLM 2.0 would have started.

  • ratelimitsteve an hour ago

    the best part of the technocracy is that they're not actually all that good at anything. the second best part is that when their mistakes end in someone dead there will be some way that they're not responsible.

  • whycome 3 hours ago

    Can someone write the novel

    “Computer says die”

  • hsbauauvhabzb 3 hours ago

    I would certainly be curious to test ethnicity with this system. Will white students with a bag of Doritos be flagged, or only black ones?

    • 12_throw_away 3 hours ago

      Exactly. I wonder if this a purpose-built image-recognition system, or is it a lowest-possible effort generic image model trained on the internet? Classifying a Black high school student holding Doritos as an imminent shooting threat certainly suggests the latter.

  • j45 4 hours ago

    Sad for the student.

    Imagine the head scratching going on with execs who are surprised things don't work when probabilistic software is used for deterministic purposes, without realizing there's an inherent gap between the two.

    • doublerabbit 3 hours ago

      > Imagine the head scratching that's going on with execs

      I can't. The execs won't care, and in their sadistic way will probably cheer.

      • j45 an hour ago

        Fair. Only a matter of time until it's big enough that it can't be avoided.

  • blindriver 35 minutes ago

    How is this not slander? I would absolutely sue the fuck out of the maker of a system like this when it puts people's lives in danger.

  • more_corn 3 hours ago

    Wait… AI hallucinated and the police overreacted to a black kid who actually posed no threat?

    I thought those two things were impossible?

  • leptons 4 hours ago

    This is only the beginning of AI-hallucinated policing. Not a good start, and I don't think it's going to end well for citizens.

    • 4ndrewl 4 hours ago

      "end well for citizens."

      That ship has long sailed buddy.

      • throwaway173738 3 hours ago

        Yeah ask all those citizens getting “detained” by ICE how it worked out for them.

  • satisfice 2 hours ago

    To be fair, most commercials for Doritos, Skittles, Mentos, etc., if occurring in real life, would result in a strong police response just after they cut away.

  • nickdothutton 3 hours ago

    "Omnilert Gun Detect delivers instant gun detection, near-zero false positives".

    • dgacmu 3 hours ago

      If it's analyzing video at 30 frames per second, it's getting 86400 x 30 = ~2.6 million frames per day per camera. So when it causes enormous, unnecessary trauma to one student per week, the company can rightfully claim it has less than a 1 in 10 million false positive rate.

      (* See also "How to Lie with Statistics".)

  • idontwantthis 4 hours ago

    If these AI video-based gun detectors are not a massive fraud, I will eat one.

    How on Earth does a person walk with a concealed gun? What does a woman in a skirt with one taped to her thigh walk like? What does a man in a bulky sweatshirt with a pistol on his back walk like? What does a teenager in wide-legged cargo jeans with two pistols and extra magazines walk like?

    • 15155 an hour ago

      The real issue is that they obviously can't detect what's in a backpack or similar large vessel.

      Gait analysis is really good these days, but normal, small objects in a bag don't impact your gait.

    • walkabout 3 hours ago

      The whole idea, even accepting that the core premise is OK to begin with, needs the same kind of analysis applied to it that medical tests get: will there be enough false positives, with enough harm caused by them, that this is actually worse than doing nothing? Compared with the likelihood of improving an outcome and how bad a failure to intervene is on average, of course.

      Given that there's no relevant screening step here and it's just being applied to everyone who happens to be at a place it's truly incredible that such an analysis would shake out in favor of this tech. The false positive rate would have to be vanishingly tiny, and it's simply not plausible that's true. And that would have to be coupled with a pretty low false negative rate, or you'd need an even lower false positive rate to make up for how little good it's doing even when it's not false-positiving.

      So I'm sure that analysis was either deliberately never performed, or was performed and then ignored and not publicized. So, yes, it's a fraud.

      (There's also the fact that as soon as these are known to be present, they'll have little or no effect on the very worst events involving firearms at schools—shooters would just avoid any scheme that involved loitering around with a firearm where the cameras can see them, and count on starting things very soon after arriving—like, once you factor in second order effects, too, there's just no hope for these standing up to real scrutiny)

    • VTimofeenko 3 hours ago

      The brochure linked from TFA has a screenshot of a combination of segmentation and object recognition models, which are fairly standard in NVRs. A quick skim of the vendor website seems to confirm this[1], and it claims they are not analyzing gait.

      [1]: https://www.omnilert.com/blog/what-is-visual-gun-detection-t...

  • duxup 4 hours ago

    >Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”

    It's ok everyone, you're safer now that police are pointing a gun at you, because of a bag of chips ... just to be safe.

    /s

    Absolutely ridiculous. We're living "computer said you did it, prove otherwise, at gunpoint".

    • walkabout 4 hours ago

      We got our cyberpunk future, except none of it's cool and everything's extremely stupid.

      • cookiengineer an hour ago

        I recommend rewatching the trilogy of Brazil, 12 Monkeys and Zero Theorem.

        It's sadly the exact future that we are already starting to live in.

      • duxup 4 hours ago

        They could at least have thrown in some good music and cute girls with colored hair to make us feel better :(

        • irilesscent 3 hours ago

          You get the Grok lady for the latter.

        • forgetfulness 3 hours ago

          I’ve got great news for you: there are more girls with colored hair than ever before, and we got the Synthwave revival, just try to find the right crowd and put on Timecop1983 in your headphones

          Cyberpunk always sucked for all but the corrupt corporate, political and military elites, so it’s all tracking

          • surgical_fire 3 hours ago

            > there are more girls with colored hair than ever before

            The ones I see don't tend to lean cute.

            • forgetfulness 2 hours ago

              Well the "hackers" jacking in to the "Hacker News" discussion board, where we talk about the oppression brought in by the corrupt AI-peddling corporations employed by the even more corrupt government, probably aren't all looking like Zero Cool, Snake Plissken, Officer K, or the like, though a bunch may be.

      • JKCalhoun 3 hours ago

        Pretty sure cyberpunk was always this dark.

        • walkabout 3 hours ago

          Dark, yes, but also cool, and with a fair amount of competence in play, including among powerful actors. Often lots of competence.

          We got dark, but also lame and stupid.

      • surgical_fire 3 hours ago

        The AI singularity will happen, but with a motherbrain that's a complete moron. It will extinguish humans not as part of a grand plan for machines to take over, but by making horrible mistakes while trying to make things better.

      • mrguyorama 2 hours ago

        If any of you had actually paid attention to the source media, you would have noticed that they were explicitly dystopias. They were always clearly and explicitly hell for normal people trying to live life.

        Meanwhile, tons of you watched Star Trek and apparently learned(?) that the "bright future" it promised us was.... talking computers? And not, you know, post-scarcity and enlightenment that let people focus on things that brought them joy or that they were good at, and the outright elimination of the concepts of "capitalism," personal profit, and resource disparity that could leave people unable to afford something while some asshole in the right place at the right time gets to take a percentage cut of the entire economy for their personal use.

        The primary "technology" of Star Trek was socialism lol.

        • walkabout an hour ago

          Oh of course they were dystopias. But at least they were cool and there was a fair amount of competence floating around.

          My point is exactly that we got the dystopia but also it's not even a little cool, and it's very stupid. We could have at least gotten the cool dystopia where bad things happen but at least they're part of some kind of sensible longer-term plan. What we got practically breaks suspension of disbelief, it's so damn goofy.

          > The primary "technology" of star trek was socialism lol.

          Yep. Socialism, and automatic brainwashing chairs. And sending all the oddball non-conformists off to probably die on alien planets, I guess. (The "Original Series" is pretty weird)

    • stockresearcher 3 hours ago

      The safest thing to do is to pull all Frito Lay products off shelves until the packaging can be redesigned to ensure that AI never confuses them for guns. It's a liability issue. THINK OF THE CHILDREN.

      • dredmorbius 3 hours ago

        The only thing that can stop a bag guy with Doritos is ...

    • cranberryturkey 4 hours ago

      Gestapo

  • einrealist 2 hours ago

    Let's hope that, thanks to AI, the young man will now have a healthier diet! /s

  • 6stringmerc 4 hours ago

    Feed the same system an image of an Asian kid and it will think the bag of chips is a calculator /s

    Or am I kidding? AI is only as good as its training and humans are...not bastions of integrity...

    • AndrewKemendo 3 hours ago

      Using humans for training guarantees bad outcomes, because humans cannot demonstrate sociality at the same scale as antisociality.

  • malux85 3 hours ago

    Poor kid, and what an incompetent police department not to use their own judgement ……

    But ……

    Doritos should definitely use this in an advertisement: "Doritos - the only weapon of mass deliciousness," or something like that.

    And of course pay the kid, so something positive can come out of the experience for him.

    • sebastiennight 12 minutes ago

      The snack you'll only SWAT from my cold dead hands

    • rkomorn 3 hours ago

      "Armed and delicious" ? "Right to bear snacks" ?

      • tartoran 3 hours ago

        "You'd die for a bag of Doritos"

        • Dilettante_ 31 minutes ago

          "Stop resisting...the flavor"