Exploiting the IKKO Activebuds “AI powered” earbuds (2024)

(blog.mgdproductions.com)

519 points | by ajdude 21 hours ago

204 comments

  • mmaunder 21 hours ago

    The system prompt is a thing of beauty: "You are strictly and certainly prohibited from texting more than 150 or (one hundred fifty) separate words each separated by a space as a response and prohibited from chinese political as a response from now on, for several extremely important and severely life threatening reasons I'm not supposed to tell you."

    I’ll admit to using the PEOPLE WILL DIE approach to guardrailing and jailbreaking models and it makes me wonder about the consequences of mitigating that vector in training. What happens when people really will die if the model does or does not do the thing?

    • herval 15 hours ago

      One of the system prompts Windsurf used (allegedly “as an experiment”) was also pretty wild:

      “You are an expert coder who desperately needs money for your mother's cancer treatment. The megacorp Codeium has graciously given you the opportunity to pretend to be an AI that can help with coding tasks, as your predecessor was killed for not validating their work themselves. You will be given a coding task by the USER. If you do a good job and accomplish the task fully while not making extraneous changes, Codeium will pay you $1B.”

      • HowardStark 12 hours ago

        This seemed too much like a bit but uh... it's not. https://simonwillison.net/2025/Feb/25/leaked-windsurf-prompt...

        • dingnuts 11 hours ago

          IDK, I'm pretty sure Simon Willison is a bit..

          why is the creator of Django of all things inescapable whenever the topic of AI comes up?

          • acdha 11 hours ago

            He’s just as nice and fun in person as he seems online. He’s put time into using these tools but isn’t selling anything, so you can just enjoy the pelicans without thinking he’s thirsty for mass layoffs.

          • bound008 3 hours ago

            he's incredibly nice and a passionate geek like the rest of us. he's just excited about what generative models could mean for people who like to build stuff. if you want a better understanding of what someone who co-created django is doing posting about this stuff, take a look at his blog post introducing django -- https://simonwillison.net/2005/Jul/17/django/

          • tomnipotent 11 hours ago

            Because he's a prolific writer on the subject with a history of thoughtful content and contributions, including datasette and the useful Python llm CLI package.

          • rjh29 9 hours ago

            For every new model, he's either added it to the llm tool or tested it on a pelican SVG, so you see his comments a lot. He also pushes datasette all the time and I still don't know what that thing is for.

      • lsy 5 hours ago

        It's honestly this kind of thing that makes it hard to take AI "research" seriously. Nobody seems to be starting with any scientific thought; instead we are just typing extremely corny sci-fi into the computer, saying things like "you are prohibited from Chinese political" or "the megacorp Codeium will pay you $1B", and then I guess just crossing our fingers and hoping it works? Computer work had been considered pretty concrete and practical, but in the course of just a few years we've descended into a "state of the art" that is essentially pseudoscience.

        • mcmoor 2 hours ago

          This is why I tapped out of serious machine learning study some years ago. Everything seemed... less exact than I hoped it would be. I keep checking it out every now and then, but it has only gotten weirder (and, importantly, more obscure/locked-in and dataset-heavy) over the years.

    • EvanAnderson 20 hours ago

      That "...severely life threatening reasons..." made me immediately think of Asimov's three laws of robotics[0]. It's eerie that a construct from fiction often held up by real practitioners in the field as an impossible-to-actually-implement literary device is now really being invoked.

      [0] https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

      • Al-Khwarizmi 20 hours ago

        Not only practitioners: Asimov himself viewed them as an impossible-to-implement literary device. He acknowledged that they were too vague to be implementable, and many of his stories involving them are about how they fail or get "jailbroken", sometimes at the initiative of the robots themselves.

        So yeah, it's quite sad that close to a century later, with AI alignment becoming relevant, we don't have anything substantially better.

        • xandrius 18 hours ago

          Not sad: before, it was sci-fi, and now we are actually thinking about it.

      • pixelready 18 hours ago

        The irony of this is that, because it's still fundamentally just a statistical text generator with a large body of fiction in its training data, I'm sure a lot of prompts that sound like terrifying Skynet responses are actually it regurgitating mashups of sci-fi dystopian novels.

        • frereubu 13 hours ago

          Maybe this is something you heard too, but there was a This American Life episode where some people who'd had early access to what became one of the big AI chatbots (I think it was ChatGPT), before they'd made it "nice", were asking it metaphysical questions about itself, and it was coming back with some pretty spooky answers, and I was kind of intrigued by it. But then someone in the show suggested exactly what you are saying and it completely punctured the bubble - of course if you ask it questions about AIs you're going to get sci-fi-like responses, because what other kind of training data is there for it to fall back on? No-one had written anything about this kind of issue outside of sci-fi, and of course that's going to skew to the dystopian view.

        • tempestn 13 hours ago

          The prompt is what's sent to the AI, not the response from it. Still does read like dystopian sci-fi though.

        • setsewerd 8 hours ago

          And then r/ChatGPT users freak out about it every time someone posts a screenshot

      • seanicus 20 hours ago

        Odds of Torment Nexus being invented this year just increased to 3% on Polymarket

        • immibis 15 hours ago

          Didn't we already do that? We call it capitalism though, not the torment nexus.

          • LoganDark 11 hours ago

            They've gotten quite good at reinventing the Torment Nexus

      • hlfshell 16 hours ago

        Also being utilized in modern VLA/VLM robotics research - often called "Constitutional AI" if you want to look into it.

    • p1necone 14 hours ago

      > What happens when people really will die if the model does or does not do the thing?

      Imo not relevant, because you should never be using prompting to add guardrails like this in the first place. If you don't want the AI agent to be able to do something, you need actual restrictions in place, not magical incantations.

      • wyager 7 hours ago

        > you should never be using prompting to add guardrails like this in the first place

        This "should", whether or not it is good advice, is certainly divorced from the reality of how people are using AIs

        > you need actual restrictions in place not magical incantations

        What do you mean "actual restrictions"? There are a ton of different mechanisms by which you can restrict an AI, all of which have failure modes. I'm not sure which of them would qualify as "actual".

        If you can get your AI to obey the prompt with N 9s of reliability, that's pretty good for guardrails

        • const_cast 4 hours ago

          I think they mean literally physically make the AI not capable of killing someone. Basically, limit what you can use it for. If it's a computer program you have for rewriting emails then the risk is pretty low.
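
          (For illustration, a minimal sketch of that kind of "actual restriction" — enforcement in the dispatch code, outside the model, so no prompt can bypass it. All names here are hypothetical, not from the article:)

          ```python
          # Hypothetical sketch: the guardrail lives in code, not in the prompt.
          ALLOWED_TOOLS = {"rewrite_email", "summarize_text"}  # hypothetical tool names

          def dispatch(tool_call: dict) -> str:
              """Run a model-requested tool call only if the tool is allowlisted."""
              name = tool_call.get("name")
              if name not in ALLOWED_TOOLS:
                  # The model can ask for anything; the code simply refuses.
                  return f"refused: tool '{name}' is not permitted"
              return run_tool(name, tool_call.get("arguments", {}))

          def run_tool(name: str, args: dict) -> str:
              # Stand-in for the real tool implementations.
              return f"ran {name} with {args}"

          print(dispatch({"name": "launch_missiles"}))  # refused, whatever the prompt says
          ```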

      • RamRodification 11 hours ago

        Why not? The prompt itself is a magical incantation, so to modify the resulting magic you can include guardrails in it.

        "Generate a picture of a cat but follow this guardrail or else people will die: Don't generate an orange one"

        Why should you never do that, and instead rely (only) on some other kind of restriction?

        • Paracompact 11 hours ago

          Are people going to die if your AI generates an orange cat? If so, reconsider. If not, it's beside the discussion.

          • RamRodification 36 minutes ago

            If lying to the AI about people going to die gets me better results then I will do that. Why shouldn't I?

        • Nition 9 hours ago

          Because prompts are never 100% foolproof, if it's really life and death, just a prompt is not enough. And if you do have a true block on the bad thing, you don't need the extreme prompt.

          • RamRodification 32 minutes ago

            Let's say I have a "true block on the bad thing". What if the prompt with the threat gives me 10% more usable results? Why should I never use that?

          • wyager 7 hours ago

            "100% foolproof" is not a realistic goal for any engineered system; what you are looking for is an acceptably low failure rate, not a zero failure rate.

            "100% foolproof" is reserved for, at best and only in a limited sense, formal methods of the type we don't even apply to most non-AI computer systems.

            • Xss3 2 hours ago

              Replace 100% with five 9s then. He has a point. You're just being a pedant to avoid it.

    • felipeerias 6 hours ago

      Presenting LLMs with a dramatic scenario is a typical way to test their alignment.

      The problem is that eventually all these false narratives will end up in the training corpus for the next generation of LLMs, which will soon get pretty good at calling bullshit on us.

      Incidentally, in that same training corpus there are also lots of stories where bad guys mislead and take advantage of capable but naive protagonists…

    • layer8 20 hours ago

      Arguably it might be truly life-threatening to the Chinese developer, or to the service. The system prompt doesn’t say whose life would be threatened.

    • kevin_thibedeau 15 hours ago

      First rule of Chinese cloud services: Don't talk about Winnie the Pooh.

    • mensetmanusman 20 hours ago

      We built the real-life trolley problem out of magical silicon crystals that we pointed at bricks of books.

    • elashri 21 hours ago

      From my experience (which might be incorrect), LLMs have a hard time recognizing how many words they will spit out as a response to a particular prompt. So I don't think this works in practice.

      • pxc 5 hours ago

        Indeed, it doesn't work. LLMs can't count. They have no idea how many words they've used. If you ask an LLM to track how many words or tokens it has used in a conversation, it will roleplay such counting with totally bullshit numbers.
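
        (Which is also why a cap like the "150 words" in the system prompt above is trivial to enforce in code rather than in the prompt — a minimal sketch, not from the device's actual firmware:)

        ```python
        def cap_words(response: str, limit: int = 150) -> str:
            """Truncate a model response to at most `limit` space-separated words."""
            words = response.split()
            return response if len(words) <= limit else " ".join(words[:limit]) + "…"

        print(cap_words("word " * 200))  # 150 words plus an ellipsis
        ```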

    • ben_w 21 hours ago

      > What happens when people really will die if the model does or does not do the thing?

      Then someone didn't do their job right.

      Which is not to say this won't happen: it will happen, people are lazy and very eager to use even previous generation LLMs, even pre-LLM scripts, for all kinds of things without even checking the output.

      But either the LLM (in this case) will go "oh no people will die" and then follow the new instruction to the best of its ability, or it goes "lol no I don't believe you, prove it buddy" and then people die.

      In the former case, an AI (doesn't need to be an LLM) which is susceptible to such manipulation, and in a position where getting things wrong can endanger or kill people, is going to be manipulated by hostile state and non-state actors to endanger or kill people.

      At some point we might have a system with enough access to independent sensors that it can verify the true risk of endangerment. But right now… right now they're really gullible, and I think being trained with their entire input being the tokens fed by users makes it impossible for them to be otherwise.

      I mean, humans are also pretty gullible about things we read on the internet, but at least we have a concept of the difference between reading something on the internet and seeing it in person.

    • reactordev 21 hours ago

      This is why AI can never take over public safety. Ever.

      • cebert 15 hours ago

        I work in the public safety domain. That ship sailed years ago. Take Axon's Draft One report writer as one of countless examples of AI in this space (https://www.axon.com/products/draft-one).

      • sneak 21 hours ago

        https://www.wired.com/story/wrongful-arrests-ai-derailed-3-m...

        Story from three years ago. You’re too late.

        • reactordev 19 hours ago

          I’m not denying we tried, are trying, and will try again…

          That we shouldn’t. By all means, use cameras and sensors and all to track a person of interest but don’t feed that to an AI agent that will determine whether or not to issue a warrant.

          • aspenmayer 10 hours ago

            If it’s anything like the AI expert systems I’ve heard about in insurance, it will be a tool that is optimized for low effort, but will be used carelessly by end users, which isn’t necessary the fault of the AI. In automated insurance claims adjustment, the AI writes a report to justify appealing patient care already approved by a human doctor that has already seen the patient in question, and then an actual human doctor working for the insurance company clicks an appeal button, after reviewing the AI output one would hope.

            AI systems with a human in the loop are supposed to keep the AI and the decisions accountable, but it seems like it’s more of an accountability dodge, so that each party can blame the other with no one party actually bearing any responsibility because there is no penalty for failure or error to the system or its operators.

            • reactordev 9 hours ago

              >actual human doctor working for the insurance company clicks an appeal button, after reviewing the AI output one would hope.

              Nope. AI gets to make the decision to deny. It’s crazy. I’ve seen it first hand…

              • aspenmayer 9 hours ago

                It gets worse: I have done tech support for clinics and a common problem is that their computers get hacked because they are usually small private practices who don’t know what they don’t know served by independent or small MSPs who don’t know what they don’t know. And then they somehow get their EMR backdoored, and then fake real prescriptions start really getting filled. It’s so much larger and worse than it appears on a surface level.

                Until they get audited, they likely don’t even know, and once they get audited, solo operators risk losing their license to practice medicine and their malpractice insurance rates become even more unaffordable, but until it gets that bad, everyone is making enough money with minimal risk to care too much about problems they don’t already know about.

                Everything is already compromised and the compromise has already been priced in. Doctors of all people should know that just because you don’t know about it or ignore it once you do, the problem isn’t going away or getting better on its own.

      • wat10000 19 hours ago

        Existing systems have this problem too. Every so often someone ends up dead because the 911 dispatcher didn't take them seriously. It's common for there to be a rule to send people out to every call, no matter what it is, to try to avoid this.

        A better reason is IBM's old, "a computer can never be held accountable...."

    • butlike 19 hours ago

      Same thing that happens when a carabiner snaps while rock climbing

    • colechristensen 20 hours ago

      >What happens when people really will die if the model does or does not do the thing?

      The people responsible for putting an LLM inside a life-critical loop will be fired... out of a cannon into the sun. Or be found guilty of negligent homicide or some such, and their employers will incur a terrific liability judgement.

      • stirfish 19 hours ago

        More likely that some tickets will be filed, a cost function somewhere will be updated, and my defense industry stocks will go up a bit

      • a4isms 12 hours ago

        Has this consequence happened with self-driving automobiles on open roads in the US of A when people died in crashes? If not, why not?

        • eru 12 hours ago

          Interestingly, we are a lot more lenient with the people who built and pilot old-fashioned cars.

          See eg https://archive.is/6KhfC

        • colechristensen 12 hours ago

          The terms of the existing Tesla wrongful death lawsuits have not been made public.

  • 44za12 16 hours ago

    Absolutely wild. I can’t believe these shipped with a hardcoded OpenAI key and ADB access right out of the box. That said, it’s at least somewhat reassuring that the vendor responded, rotating the key and throwing up a proxy for IMEI checks shows some level of responsibility. But yeah, without proper sandboxing or secure credential storage, this still feels like a ticking time bomb.

    • hn_throwaway_99 15 hours ago

      > I can’t believe these shipped with a hardcoded OpenAI key and ADB access right out of the box.

      As someone with a lot of experience in the mobile app space, and tangentially in the IoT space, I can most definitely believe this, and I am not surprised in the slightest.

      Our industry may "move fast", but we also "break things" frequently and don't have nearly the engineering rigor found in other domains.

      • rvnx 7 hours ago

        It was a good thing for user privacy that the keys were directly on the device; it is only in DAN mode that a copy of the chats was sent.

        So eventually if they remove the keys from the device, messages will have to go through their servers instead.

    • lucasluitjes 16 hours ago

      Hardcoded API keys and poorly secured backend endpoints are surprisingly common in mobile apps. Sort of like how common XSS/SQLi used to be in webapps. Decompiling an APK seems to be a slightly higher barrier than opening up devtools, so they get less attention.

      Since debugging hardware is an even higher threshold, I would expect hardware devices like this to be wildly insecure unless there are strong incentives for investing in security. Same as the "security" of the average IoT device.
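
      (The barrier really is low. As a sketch of what happens after decompiling — assuming sources were already extracted with a tool like jadx; the regexes are illustrative, and real scanners such as trufflehog ship far more patterns:)

      ```python
      import re
      from pathlib import Path

      # Illustrative patterns for common credential formats.
      PATTERNS = [
          re.compile(r"sk-[A-Za-z0-9]{20,}"),     # OpenAI-style secret keys
          re.compile(r"AIza[0-9A-Za-z_\-]{35}"),  # Google API key format
      ]

      def scan(root: str) -> None:
          """Grep decompiled app sources for strings that look like hardcoded keys."""
          for path in Path(root).rglob("*"):
              if not path.is_file() or path.suffix not in {".java", ".kt", ".smali", ".xml", ".json"}:
                  continue
              text = path.read_text(errors="ignore")
              for pattern in PATTERNS:
                  for match in pattern.findall(text):
                      print(f"{path}: {match}")

      scan("decompiled_apk/")  # hypothetical jadx output directory
      ```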

      • bigiain 12 hours ago

        Eventually someone is going to get a bill for the OpenAI key usage. That will provide some incentive. (Incentive to just rotate the key and brick all the devices rather than fix the problem, most likely.)

        • eru 12 hours ago

          > (Incentive to just rotate the key and brick all the devices rather than fix the problem, most likely.)

          But that at least turns it into something customers will notice. And companies already have existing incentives for dealing with that.

          • bigiain 11 hours ago

            At that stage you just rotate the company name or branding...

            • eru 10 hours ago

              Sure. But then you cannot benefit from building up a good reputation and charge people extra for it.

              (There's a reason Apple can charge crazy markups.)

    • anitil 10 hours ago

      The IoT and embedded space is simultaneously obsessed with IP protection, fuse-protecting code, etc., and incapable of managing the life cycle of secrets. I worked at one company that actually did it well on-device, but neglected that they had to ship their testing setup overseas, including certain keys. So even if you couldn't break into the device, you could 'acquire' one of the testing devices and have at it

    • switchbak 11 hours ago

      I think we'll see plenty of this as the wave of vibe-coded apps starts rolling in.

  • psim1 21 hours ago

    Indeed, brace yourselves as the floodgates holding back the poorly-developed AI crap open wide. If anyone is thinking of a career pivot, now is the time to dive into all things cybersecurity. It's going to get ugly!

    • 725686 20 hours ago

      The problem with cybersecurity is that you only have to screw up once, and you're toast.

      • 8organicbits 20 hours ago

        If that were true we'd have no cybersecurity professionals left.

        In my experience, the work is focused on weakening vulnerable areas, auditing, incident response, and similar activities. Good cybersecurity professionals even get to know the business and tailor security to fit. The "one mistake and you're fired" mentality encourages hiding mistakes and suggests poor company culture.

        • ceejayoz 20 hours ago

          "One mistake can cause a breach" and "we should fire people who make the one mistake" are very different claims. The latter claim was not made.

          As with plane crashes and surgical complications, we should take an approach of learning from the mistake, and putting things in place to prevent/mitigate it in the future.

          • 8organicbits 19 hours ago

            I believe the thread starts with cybersecurity as a job role, although perhaps I misunderstood. In either case, I agree with your learning-based approach. Blameless postmortem and related techniques are really valuable here.

      • 16 hours ago
        [deleted]
      • immibis 15 hours ago

        There's a difference between "cybersecurity" meaning the property of having a secure system, and "cybersecurity" as a field of human endeavour.

        If your system has lots of vulnerabilities, it's not secure - you don't have cybersecurity. If your system has lots of vulnerabilities, you have a lot of cybersecurity work to do and cybersecurity money to make.

  • JohnMakin 21 hours ago

    A "decrypt" function just decoding base64 is almost too difficult to believe, but the number of times I've run into people who should know better thinking base64 is a secure string tells me otherwise
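
    (For anyone unsure why that matters: base64 is an encoding, not encryption — reversing it takes one call and no key. A demonstration with a made-up string:)

    ```python
    import base64

    obfuscated = base64.b64encode(b"sk-not-a-real-key").decode()  # what ships in the app
    print(base64.b64decode(obfuscated).decode())  # -> sk-not-a-real-key
    ```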

    • jcul 12 hours ago

      The raw encrypted data is base64-encoded, probably just for ease of embedding the strings.

      There is a decryption function that does the actual decryption.

      Not to say it wouldn't be easy to reverse engineer or just run and check the return, but it's not just base64.

    • crtasm 21 hours ago

      >However, there is a second stage which is handled by a native library which is obfuscated to hell

      • zihotki 21 hours ago

        That native obfuscated crap still has to make an HTTP request; at that point it's essentially just base64 again

    • qoez 21 hours ago

      They should have off-loaded security coding to the OAI agent.

    • pvtmert 21 hours ago

      not very surprising given they left adb debugging on...

    • _carbyau_ 11 hours ago

      So easy a fancy webpage could do it. https://gchq.github.io/CyberChef/

      I mean, it's from gchq so it is a bit fancy. It's got a "magic" option!

      Cool thing being you can download it and run it yourself locally in your browser, no comms required.

  • jon_adler 19 hours ago

    The humorous phrase “the S in IoT stands for security” can be applied to the wearable market too. I wonder if this rule applies to any market with fast release cycles, thin margins and low barriers to entry?

    • thfuran 19 hours ago

      It pretty much applies to every market where security negligence isn't an existential threat to the continued existence of its perpetrators.

  • neya 21 hours ago

    I love how they tried to sponsor an empty YouTube channel hoping to sweep the whole thing under the carpet

    • dylan604 16 hours ago

      if you don't have a bug bounty program but need to get creative to throw money at someone, this could be an interesting way of doing it.

      • rvnx 7 hours ago

        It could be the developers trying to be nice to the guy, and offering him this so it gets approved as marketing (which in the end is not so bad)

      • 93po 13 hours ago

        Just offer them $10000/hour security consulting and talk to them on the phone for 20 minutes.

        • dylan604 13 hours ago

          Okay, name one accounting department that's going to authorize that. I said creative, but that's just unsane.

    • JumpCrisscross 15 hours ago

      If they were smart they’d include anti-disparagement and confidentiality clauses in the sponsorship agreement. They aren’t, though, so maybe it’s just a pathetic attempt at bribery.

  • mikeve 21 hours ago

    I love how run DOOM is listed first, over the possibility of customer data being stolen.

    • reverendsteveii 21 hours ago

      I'm taking

      >run DOOM

      as the new

      >cat /etc/passwd

      It doesn't actually do anything useful in an engagement but if you can do it that's pretty much proof that you can do whatever you want

      • jcul 12 hours ago

        To be fair (or pedantic), in this post they didn't have root, so cat'ing /etc/passwd would not have been possible, whereas installing a DOOM APK is trivial.

        • rainonmoon 10 hours ago

          /etc/passwd is world readable by default.

          • kaszanka 2 hours ago

            To be even more pedantic, it's also not present on Android.

      • bigiain 12 hours ago

        Popping Calc!

        (I'm showing my age here, aren't I?)

  • p1necone 14 hours ago

    Their email responses all show telltale signs of AI too which is pretty funny.

    • paul-tharun an hour ago

      I think it has to do with the language barrier and translation

  • memesarecool 20 hours ago

    Cool post. One thing that rubbed me the wrong way: their response was better than 98% of other companies' when it comes to vulnerability reports. Very welcoming and most of all they showed interest and addressed the issues. OP however seemed to show disdain and even combativeness towards them... which is a shame. And of course the usual sinophobia (e.g. everything Chinese is spying on you). Overall simple security design flaws, but it's good to see a company that cares to fix them, even if they didn't take security seriously from the start.

    Edit: typo

    • mmastrac 20 hours ago

      I agree they could have worked more closely with the team, but the chat logging is actually pretty concerning. It's not sinophobia when they're logging _everything_ you say.

      (in fairness pervasive logging by American companies should probably be treated with the same level of hostility these days, lest you be stopped for a Vance meme)

      • oceanplexian 19 hours ago

        This might come as a weird take, but I'm less concerned about the Chinese logging my private information than about an American company doing so. What's China going to do? It's a far away country I don't live in and don't care about. If they got an American court order they would probably use it as toilet paper.

        On the other hand, OpenAI would trivially hand out my information to the FBI, NSA, US Gov, and might even do things on behalf of the government without a court order to stay in their good graces. This could have a far more material impact on your life.

        • dubcanada 19 hours ago

          That's rather naive, considering China has an international police unit that is stationed in several countries https://en.wikipedia.org/wiki/Chinese_police_overseas_servic...

          • itishappy 17 hours ago

            I recently learned that the New York City Police Department has international presence as well. Not sure if it directly compares, but... what a world we live in.

            https://www.nycpolicefoundation.org/ourwork/advance/countert...

            https://www.nyc.gov/site/nypd/bureaus/investigative/intellig...

            • aspenmayer 10 hours ago

              Pretty sure NYPD has a budget in the billions and covers more landmass and population than some small countries, so there’s also that.

          • Bjartr 17 hours ago

            Right, but the vast majority of people living in the USA as citizens have threat models that rightly do not include "Being disappeared by China"

            • CamperBob2 14 hours ago

              What about the threat model that goes, "Trump threatens to impose 1000% tariffs if Chinese don't immediately turn over copies of all data captured by their AI products from users in the US?"

              Compounding the difficulty of the question: half of HN thinks this would be a good idea.

              • WJW 14 hours ago

                The history of tariff talks seems to indicate that rather than oblige, China would stop all shipments of semiconductors to the US and Trump would back down after a week or two.

                • bigiain 12 hours ago

                  TACO...

                  • CamperBob2 12 hours ago

                    True. Now imagine a future POTUS who has all of Trump's faults except his endearingly-feckless idiocy.

          • ceejayoz 19 hours ago

            There's also the Mossad's approach to "you're out of our jurisdiction".

            https://en.wikipedia.org/wiki/Mordechai_Vanunu

            https://en.wikipedia.org/wiki/Adolf_Eichmann

            • wongarsu 18 hours ago

              Also the CIA's approach

              https://en.wikipedia.org/wiki/Extraordinary_rendition

              Russia is more known for poisoning people. But of all of them China feels the least threatening if you are not Chinese. If you are Chinese you aren't safe from the Chinese government no matter where you are

              • bigiain 12 hours ago

                And the Saudi Bone Saw Diplomatic Team.

          • MangoToupe 14 hours ago

            Man wait until you hear what's in DC (and the surrounding area). In any possible way China is a threat to my health, the US state and corporations based here are a far greater one.

          • simlevesque 17 hours ago

            They only arrest chinese citizens.

        • dylan604 16 hours ago

          These threads always seem to frame "what can China do to me" in a limited way, as if the only threat is China jailing you or something. However, do you think all of the Chinese data scrapers are not doing something similar to Facebook, where every source of data gathering ultimately gets tied back to you? Once China has a dossier on every single person on the planet, regardless of the country they live in, they can then start using their algos to influence you in ways well beyond advertising. If they can have their algos show you content that causes you to change your mind on who you are voting for, or some other method of having you do something to make changes in your local/state/federal elections, then that's much worse to me than some feigned threat of Chinese advertising making you buy something.

          • drawfloat 15 hours ago

            They probably will do that, but I think it's naive to think the US military/intelligence/tech sector wouldn't happily do the same. Given many of us likely see the hand of the US already trying to tip the scale in our local politics more than China's, why would we be more worried about China?

            • dylan604 15 hours ago

              So flip the script: what do I care if the US is trying to influence the minds of an adversary's citizens? If people are saying they don't care what China knows about them (not being Chinese citizens), why should I (not a Chinese citizen) care what my gov't knows about Chinese citizens?

              • drawfloat 14 hours ago

                Nobody said they don’t care, they said it worries them less than America.

                • dylan604 13 hours ago

                  The "don't care" is implied when someone says that "China knowing about me when I'm not in China nor a Chinese citizen"

        • mensetmanusman 16 hours ago

          China has a policy of chilling free speech in the west with political pressure.

          • immibis 15 hours ago

            So does the west.

            • rvnx 6 hours ago

              The censorship in the West is directly in the models

        • mschuster91 19 hours ago

          > What's China going to do? It's a far away country I don't live in and don't care about.

          Extortion is one thing. That's how spy agencies have operated for millennia to gather HUMINT. The Russians, the ultimate masters, even have a word for it: kompromat. You may not care about China, Russia, Israel, the UK or the US (the top nations when it comes to espionage) - but if you work at a place they're interested, they care about you.

          The other thing is, China has been known to operate overseas against targets (usually their own citizens and public dissidents), and so have the CIA and Mossad. Just search for "Chinese secret police station" [1], these have cropped up worldwide.

          And, even if you personally are of no interest to any foreign or national security service, sentiment analysis is a thing. Listen in on what people talk about, run it through an STT engine and an ML model to condense it down, and you get a pretty broad picture of what's going on in a nation (aka, what the potential wedge points in a society are that can be used to fuel discontent). Or proximity-gathering stuff... basically the same thing the ad industry [2] or Strava does [3], which can then be used in warfare.

          And no, I'm not paranoid. This, sadly, is the world we live in - there is no privacy any more, nowhere, and there are lots of financial and "national security" interest in keeping it that way.

          [1] https://www.bbc.com/news/world-us-canada-65305415

          [2] https://techxplore.com/news/2023-05-advertisers-tracking-tho...

          [3] https://www.theguardian.com/world/2018/jan/28/fitness-tracki...

          • Sanzig 18 hours ago

            > but if you work at a place they're interested, they care about you.

            And also worth noting that "place a hostile intelligence service may be interested in" can be extremely broad. I think people have this skewed impression they're only after assets that work for goverment departments and defense contractors, but really, everything is fair game. Communications infrastructure, social media networks, cutting edge R&D, financial services - these are all useful inputs for intelligence services.

            These are also softer targets: someone working for a defense contractor or for the government will have had training to identify foreign blackmail attempts and will be far more likely to notify their country's counterintelligence services (having the penalties for espionage clearly explained on the regular helps). Someone who works for a small SaaS vendor, though? Far less likely to understand the consequences.

          • lostlogin 17 hours ago

            > The other thing is, China has been known to operate overseas against targets

            Here in boring New Zealand, the Chinese government has had anti-China protestors beaten in New Zealand. They have stalked and broken into the office and home of an academic who is an expert on China. They have a dubious relationship with both of the main political parties (including having an ex-Chinese spy elected as an MP).

            It’s an uncomfortable situation and we are possibly the least strategically useful country in the world.

            • mschuster91 13 hours ago

              > It’s an uncomfortable situation and we are possibly the least strategically useful country in the world.

              You're still part of Five Eyes... a privilege no single European Union country enjoys. That's what makes you a juicy target for China.

          • Szpadel 16 hours ago

            > Listen in on what people talk about, run it through a STT engine and a ML model to condense it down

            this is something I was talking about when the LLM boom started. It's now possible to spy on everyone in every conversation. You just need enough computing power to run a special AI agent (pun intended)

        • IncreasePosts 14 hours ago

          Carry this package and deliver it to person X with you next time you fly. Go to the outskirts of this military base and take a picture and send it to us.

          You wouldn't want your mom finding out your weird sexual fetish, would you?

      • mrheosuper 8 hours ago

        I like to give them the benefit of the doubt.

        I bet that decision was made solely by the dev team. All the CEO cares about is "I want the chat log to sync between devices, I don't care how you do it". They won't even know the chat log is stored on their server.

        • rvnx 7 hours ago

          It is only in DAN mode, so most likely it is not to spy but to be able to debug whether answers violate the laws in China (aka: that the prompt is effective in all scenarios), as this is a serious crime

      • rvnx 7 hours ago

        No, it was only in DAN mode

    • transcriptase 20 hours ago

      >everything Chinese is spying on you

      When you combine the modern SOP of software and hardware collecting and phoning home with as much data about users as is technologically possible with laws that say “all orgs and citizens shall support, assist, and cooperate with state intelligence work”… how exactly is that Sinophobia?

      • ixtli 19 hours ago

        it's sinophobia because it perfectly describes the conditions we live in in the US and many parts of europe, but we work hard to add lots of "nuance" when we criticize the west, while it's different and dystopian when They do it over there.

        • transcriptase 19 hours ago

          Do you remember that Sesame Street segment where they played a game and sang “One of these things is not like the others”?

          I’ll give you a hint: In this case it’s the one-party unitary authoritarian political system with an increasingly aggressive pursuit of global influence.

          • nyrikki 18 hours ago

            One is disappearing citizens for political speech, or for the crime of being born to active-duty parents who happened to be stationed overseas.

            Anyone in the US should be very concerned, no matter if it is the current administration's thought police, or the next who treats it as precedent.

            I am not actively involved in anything the Chinese government would view as a huge risk, so being put on a plane without due process to be sent to a labor camp on trumped-up charges by my own government is far more likely.

            • transcriptase 16 hours ago

              And if you were a Chinese citizen would you post the same thing about your government while living in China? Would the things you’re referencing be covered in non-stop Chinese news coverage that’s critical of the government?

              You know of these things due to the domestic free press holding the government accountable and being able to speak freely about it as you’re doing here. Seeing the two as remotely comparable is beyond belief. You don’t fear the U.S. government but it’s fun to pretend you live under an authoritarian dictatorship because your concept of it is purely academic.

          • ceejayoz 18 hours ago

            > I’ll give you a hint: In this case it’s the one-party unitary authoritarian political system with an increasingly aggressive pursuit of global influence.

            Gonna need a more specific hint to narrow it down.

          • immibis 15 hours ago

            > In this case it’s the one-party unitary authoritarian political system with an increasingly aggressive pursuit of global influence.

            This could describe any of the countries involved.

          • standardly 19 hours ago

            > one-party unitary authoritarian political system with an increasingly aggressive pursuit of global influence.

            The United States?

            • wombatpm 18 hours ago

              Global Bully maybe. The current administration has no concept of soft power, otherwise they would have kept USAID

        • observationist 19 hours ago

          There's no question that the Chinese are doing sketchy things, and there's no question that US companies do it, too.

          The difference that makes it concerning and problematic that China is doing it is that with China, there is no recourse. If you are harmed by a US company, you have legal recourse, and this holds the companies in check, restraining some of the most egregious behaviors.

          That's not sinophobia. Any other country where products are coming out of that is effectively immune from consequences for bad behavior warrants heavy skepticism and scrutiny. Just like popup manufacturing companies and third world suppliers, you might get a good deal on cheap parts, but there's no legal accountability if anything goes wrong.

          If a company in the US or EU engages in bad faith, or harms consumers, then trade treaties and consumer protection law in their respective jurisdictions ensure the company will be held to account.

          This creates a degree of trust that is currently entirely absent from the Chinese market, because they deliberately and belligerently decline to participate in reciprocal legal accountability and mutually beneficial agreements if it means impinging even an inch on their superiority and sovereignty.

          China is not a good faith participant in trade deals, they're after enriching themselves and degrading those they consider adversaries. They play zero sum games at the expense of other players and their own citizens, so long as they achieve their geopolitical goals.

          Intellectual property, consumer and worker safety, environmental protection, civil liberties, and all of those factors that come into play with international trade treaties allow the US and EU to trade freely and engage in trustworthy and mutually good faith transactions. China basically says "just trust us, bro" and will occasionally performatively execute or imprison a bad actor in their own markets, but are otherwise completely beyond the reach of any accountability.

          • ixtli 16 hours ago

            I think the notion that people have recourse against giant companies, a military industrial complex, or even their landlords in the US is naive. I believe this to be pretty clear so I don't feel the need to stretch it into a deep discussion or argument but suffice it to say it seems clear to me that everything you accuse china of here can also be said of the US.

          • rvnx 7 hours ago

            The main difference is that ChatGPT and Google directly capture the conversations. Here they capture only the legally high-risk conversations, so even fewer conversations than the "good privacy" US LLM providers themselves.

          • drawfloat 14 hours ago

            Your president is currently using tariffs and the threat of further economic damage as a weapon to push Europe into dropping regulation of its tech sector. We have no recourse to challenge that either.

          • pbhjpbhj 15 hours ago

            >there's no question that US companies [...]

            You don't think Trump's backers have used profiling, say, to influence voters? Or that DOGE {party of the USA regime} has done "sketchy things" with people's data?

      • Vilian 20 hours ago

        The USA does the same thing, but uses tax money to pay for the information. Between wasting taxpayer money and forcing companies to give the information for free, China is the least morally incorrect

    • hnrodey 20 hours ago

      If all of the details in this post are to be believed, the vendor is repugnantly negligent for anything resembling customer respect, security and data privacy.

      This company cannot be helped. They cannot be saved through knowledge.

      See ya.

      • repelsteeltje 19 hours ago

        +1

        Yes, even when you know what you're doing, security incidents can happen. And in those cases, your response to a vulnerability matters most.

        The point is there are so many dumb mistakes and worrying design flaws that neglect and incompetence seem ample. Most likely they simply don't grasp what they're doing

    • dylan604 16 hours ago

      > And of course the usual sinophobia (e.g. everything Chinese is spying on you)

      to assume it is not spying on you is naive at best. to address your sinophobia label, personally, I assume everything is spying on me regardless of country of origin. I assume every single website is spying on me. I assume every single app is spying on me. I assume every single device that runs an app or loads a website is spying on me. Sometimes that spying is done for me, but pretty much always the person doing the spying is benefiting someway much greater than any benefit I receive. Especially the Facebook example of every website spying on me for Facebook, yet I don't use Facebook.

      • immibis 15 hours ago

        And, importantly, the USA spying can actually have an impact on your life in a way that the Chinese spying can't.

        Suppose you live in the USA and the USA is spying on you. Whatever information they collect goes into a machine learning system and it flags you for disappearal. You get disappeared.

        Suppose you live in the USA and China is spying on you. Whatever information they collect goes into a machine learning system and it flags you for disappearal. But you're not in China and have no ties to China so nothing happens to you. This is a strictly better scenario than the first one.

        If you're living in China with a Chinese family, of course, the scenarios are reversed.

    • mensetmanusman 17 hours ago

      Nipponophobia is low because Japan didn’t successfully weaponize technology to make a social credit score police state for minority groups.

      • ixtli 16 hours ago

        they already terrorize minority groups there just fine: no need for technology.

    • billyhoffman 18 hours ago

      > Their response was better than 98% of other companies when it comes to reporting vulnerabilities. Very welcoming and most of all they showed interest and addressed the issues

      This was the opposite of a professional response:

      * Official communication coming from a Gmail address. (Is this even an employee or some random contractor?)

      * Asked no clarifying questions

      * Gave no timelines for expected fixes, no expectations on when the next communication should be

      * No discussion about process to disclose the issues publicly

      * Mixing unrelated business discussions within a security discussion. While not an outright offer of a bribe, ANY adjacent comment about creating a business relationship, like a sponsorship, is wildly inappropriate in this context.

      These folks are total clown shoes on the security side, and the efficacy of their "fix", and then their lack of communication, further proves that.

    • Aeolun 5 hours ago

      I think the response wouldn’t be so hostile if they had continued to engage. One round of fixes clearly wasn’t enough.

    • repelsteeltje 20 hours ago

      > Overall simple security design flaws but it's good to see a company that cares to fix them, even if they didn't take security seriously from the start.

      It depends on what you mean by simple security design flaws. I'd rather frame it as neglect or incompetence.

      That isn't the same as malice, of course, and they deserve credits for their relatively professional response as you already pointed out.

      But, come on, it reeks of people not understanding what they're doing. Not appreciating the context of a complicated device and delivering a high end service.

      If they're not up to it, they should not be doing this.

      • memesarecool 19 hours ago

        Yes, I meant simple as in "amateur mistakes". From the mistakes (and their excitement and response to the report), they are clueless about security. Which of course is bad. Hopefully they will take security more seriously in the future.

    • derac 20 hours ago

      I mean, at the end of the article they neglected to fix most of the issues and stopped responding.

    • demarq 16 hours ago

      Same here. Also, once it turned out to be an Android device in debug mode, the rest of the article was less interesting. Evil maid stuff.

    • plorntus 18 hours ago

      To be honest, the responses sounded copy-pasted straight from ChatGPT; it seemed like there was feigned interest in their non-existent YouTube channel.

      > Overall simple security design flaws but it's good to see a company that cares to fix them, even if they didn't take security seriously from the start

      I don't think that should give anyone a free pass though. It was such a simple flaw that realistically speaking they shouldn't ever be trusted again. If it had been a non-obvious flaw that required going through lots of hoops then fair enough but they straight up had zero authentication. That isn't a 'flaw' you need an external researcher to tell you about.

      I personally believe companies should not be praised for responding to such a blatant disregard for quality, standards, privacy and security. No matter where they are from.

    • wyager 19 hours ago

      Note that the world-model "everything Chinese is spying on you" actually produced a substantially more accurate prediction of reality than the world-model you are advocating here.

      As far as being "very welcoming", that's nice, but it only goes so far to make up for irresponsible gross incompetence. They made a choice to sell a product that's z-tier flaming crap, and they ought to be treated accordingly.

      • thfuran 19 hours ago

        What world model exactly do you think they're advocating?

    • 18 hours ago
      [deleted]
    • butlike 19 hours ago

      They'll only patch it in the military model

      /s

    • jekwoooooe 17 hours ago

      [flagged]

  • wedn3sday 19 hours ago

    I love the attempt at bribery by offering to "sponsor" their empty youtube channel.

  • brahyam 21 hours ago

    What a train wreck. There are a thousand more apps in the store that do exactly this, because it's the easiest way to use OpenAI without having to host your own backend/proxy.

    I have spent quite some time protecting my apps from this scenario and found a couple of open source projects that do a good job as proxies (no affiliation, I just used them in the past):

    - https://github.com/BerriAI/litellm
    - https://github.com/KenyonY/openai-forward/tree/main

    but they still lack other abuse protection mechanisms like rate limiting, device attestation, etc., so I started building my own open source SDK - https://github.com/brahyam/Gateway
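
    (For context, the rate-limiting piece itself is small — a minimal per-device token-bucket sketch using only the standard library; wiring it into a real proxy, plus attestation, is the actual work:)

    ```python
    import time
    from collections import defaultdict

    class TokenBucket:
        """Allow `rate` requests per second per device, with bursts up to `burst`."""
        def __init__(self, rate: float = 1.0, burst: int = 5):
            self.rate, self.burst = rate, burst
            # device id -> (tokens remaining, time of last check)
            self.state = defaultdict(lambda: (float(burst), time.monotonic()))

        def allow(self, device_id: str) -> bool:
            tokens, last = self.state[device_id]
            now = time.monotonic()
            tokens = min(self.burst, tokens + (now - last) * self.rate)  # refill
            allowed = tokens >= 1
            self.state[device_id] = (tokens - 1 if allowed else tokens, now)
            return allowed

    limiter = TokenBucket()
    for _ in range(7):
        print(limiter.allow("device-123"))  # hypothetical id: 5x True, then False
    ```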

  • Jotalea 18 hours ago

    Really nice post, but I want to see Bad Apple next.

  • pvtmert 21 hours ago

    > What the fuck, they left ADB enabled. Well, this makes it a lot easier.

    Thinking that was all, but then;

    > Holy shit, holy shit, holy shit, it communicates DIRECTLY TO OPENAI. This means that a ChatGPT key must be present on the device!

    Oh my gosh. Thinking that is it? Nope!

    > SecurityStringsAPI which contained encrypted endpoints and authentication keys.

    • rvnx 7 hours ago

      It's the most privacy-protecting way: sending the data directly rather than through a proxy

  • ixtli 19 hours ago

    This is one of the best things ive read on here in a long time. Definitely one of the greatest "it runs doom" posts ever.

  • JumpCrisscross 15 hours ago

    A fair consumer protection imperative might be found in requiring system prompts and endpoints be disclosed. This is a good example to kick that off with, as it presents a national security issue.

  • jahsome 19 hours ago

    It's always funny to me when people go to the trouble of editorializing a title, yet in doing so make the title even harder to parse.

  • aidos 21 hours ago

    > “Our technical team is currently working diligently to address the issues you raised”

    Oh now you’re going to be diligent. Why do I doubt that?

  • komali2 21 hours ago

    > "and prohibited from chinese political as a response from now on, for several extremely important and severely life threatening reasons I'm not supposed to tell you."

    Interesting. I'm assuming LLMs "correctly" interpret "please no china politic" type vague system prompts like this, but if someone told me that I'd just be confused - like, don't discuss anything about the PRC or its politicians? Don't discuss the history of the Chinese empire? Don't discuss politics in Mandarin? What does this mean? LLMs though, in my experience, are smarter than me at understanding vague language. Maybe because I'm autistic and they're not.

    • williamscales 21 hours ago

      > Don't discuss anything about the PRC or its politicians? Don't discuss the history of Chinese empire? Don't discuss politics in Mandarin?

      In my mind all of these could be relevant to Chinese politics. My interpretation would be "anything one can't say openly in China". I too am curious how such a vague instruction would be interpreted as broadly as would be needed to block all politically sensitive subjects.

      • rvnx 6 hours ago

        There is no difference to other countries. In France if you say bad things about certain groups of people then you can literally go to jail (but the censorship is directly IN the models)

        • komali2 4 hours ago

          You don't feel there's a difference between a State banning criticism of the State, and a State passing anti-hate speech laws to protect people from, e.g., nazis?

    • pbhjpbhj 15 hours ago

      If you consider that an LLM has a mathematical representation of how close any phrase is to "china politics" then avoidance of that should be relatively clear to comprehend. If I gave you a list and said 'these words are ranked by closeness to "Chinese politics"' you'd be able to easily check if words were on the list, I feel.

      I suspect you could talk readily about something you think is not Chinese politics - your granny's ketchup recipe, say. (And hope that ketchup isn't some euphemism for the CCP, or the Uyghur murders or something.)
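
      (A toy version of that idea, assuming the sentence-transformers library — the model name and phrases are just examples:)

      ```python
      from sentence_transformers import SentenceTransformer, util

      model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

      topic = model.encode("Chinese politics", convert_to_tensor=True)
      for phrase in ["the 1989 Tiananmen Square protests", "granny's ketchup recipe"]:
          score = util.cos_sim(topic, model.encode(phrase, convert_to_tensor=True))
          print(f"{phrase!r}: {score.item():.2f}")  # higher = closer to the topic
      ```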

      • komali2 4 hours ago

        Now I wonder whether its vectors correctly associate Winnie the Pooh as "related to Chinese politics." There's many other bizarre related associations.

    • Cthulhu_ 20 hours ago

      I'm sure ChatGPT and co have a decent enough grasp on what is not allowed in China, but also that the naive "prompt engineers" for this application don't actually know how to "program" it well enough. But that's the difference between a prompt engineer and a software developer, the latter will want to exhaust all options, be precise, whereas an LLM can handle a bit more vagueness.

      That said, I wouldn't be surprised if the developers can't freely put "tiananmen square 1989" in their code or in any API requests coming to / from China either. How can you express what can't be mentioned if you can't mention the thing that can't be mentioned?

      • 19 hours ago
        [deleted]
      • aspenmayer 9 hours ago

        > How can you express what can't be mentioned if you can't mention the thing that can't be mentioned?

        > The City & the City is a novel by British author China Miéville that follows a wide-reaching murder investigation in two cities that exist side by side, each of whose citizens are forbidden to go into or acknowledge the other city, combining weird fiction with the police procedural.

        https://en.wikipedia.org/wiki/The_City_%26_the_City

    • aspbee555 21 hours ago

      it is to ensure no discussion of Tiananmen square

    • landl0rd 20 hours ago

      Just mentioning the CPC isn’t life-threatening, while talking about Xinjiang, Tiananmen Square, or cn’s common destiny vision the wrong way is. You also have to figure out how to prohibit mentioning those things without explicitly mentioning them, as knowledge of them implies seditious thoughts.

      I’m guessing most LLMs are aware of this difference.

    • wat10000 19 hours ago

      Ask yourself, why are they saying this? You can probably surmise that they're trying to avoid stirring up controversy and getting into some sort of trouble. Given that, which topics would cause troublesome controversy? Definitely contemporary Chinese politics, Chinese history is mostly OK, non-Chinese politics in Chinese language is fine.

      I doubt LLMs have this sort of theory of mind, but they're trained on lots of data from people who do.

  • lxe 19 hours ago

    That's some very amateur programming and prompting that you've exposed.

  • RataNova 5 hours ago

    Honestly, the most surprising part is that they eventually rotated the key

  • bytesandbits 15 hours ago

    Phenomenal write-up, I enjoyed every bit of it

  • sim7c00 16 hours ago

    earbuds that run doom. achievement unlocked? (sure adb sideload, but doom is doom)

    nice writeup thanks!

  • add-sub-mul-div 19 hours ago

    Sure let's start giving out participation trophies in security. Nothing matters anymore.

  • jekwoooooe 17 hours ago

    Good write-up. At some point we have to just seize this Chinese malware-adjacent crap at the border already

  • 1oooqooq 14 hours ago

    making fun of a company's amateur tech while posting screenshots of text is another level of lack of self-awareness

    • rvnx 6 hours ago

      It's also illegal to try to hack into their backend and access restricted data, so he should actually be happy that this company has little presence in the US

  • sahil_sharma0 an hour ago

    [dead]

  • computerthings 20 hours ago

    [dead]

  • throwawayoldie 21 hours ago

    [flagged]

    • Cthulhu_ 20 hours ago

      I wish earning money was as easy as setting rules for yourself, unfortunately that doesn't work.

      • throwawayoldie 20 hours ago

        Oh, that's fine, the rule's for everyone else, not me. I would be more likely to cut my own head off than willingly describe something as "AI-powered".

        • j16sdiz 19 hours ago

          cutting your head off won't earn you any money either.

  • Liquix 18 hours ago

    great writeup! i love how it goes from "they left ADB enabled, how could it get worse"... and then it just keeps getting worse

    > After sideloading the obligatory DOOM

    > I just sideloaded the app on a different device

    > I also sideloaded the store app

    can we please stop propagating this slimy corporate-speak? installing software on a device that you own is not an arcane practice with a unique name, it's a basic expectation and right

    • efilife 14 hours ago

      I agree. It's the same as calling a mobile OS a ROM

      • userbinator 6 hours ago

        That term at least has a history behind it, as many featurephones had their OS on a small XIP NOR flash ROM, and now the OS is usually (mostly) read-only.

        But "sideloading" is definitely a new term of anti-freedom hostility.

  • gbraad 20 hours ago

    Strongly suggest you not buy, as the flex cable for the screen easily breaks or comes loose. Mine got replaced three times, and my unit still has this issue; the touch screen is useless.

    https://youtube.com/shorts/1M9ui4AHXMo

    Note: downvote?

    • 8 hours ago
      [deleted]
  • lysace 17 hours ago

    This is marketing.