Invisible text that AI chatbots understand and humans can't?

(arstechnica.com)

86 points | by Brajeshwar a year ago

18 comments

  • xg15 a year ago

    I still wonder how the models picked up the semantic mapping between Unicode tags and ordinary ASCII characters. The mapping is written in the Unicode specs, yes, but there is nothing in the actual bytes of a tag that indicates the corresponding ASCII character.

    I'm also not aware of any large text corpora written in tag characters - actually, I'd be surprised if there is any prose text in them at all: the characters don't show up in any browser or text editor, they aren't officially used for anything, and even the two former intended uses were restricted to country codes, not actual sentences.

    How did they even get through preprocessing? How are the tokenization dictionary and input embeddings constructed for characters that are never used anywhere?

    • goodside a year ago

      (I’m the person interviewed in the article.) The trick is that Unicode code points are only assigned individual tokens if they’re nontrivially used outside of some other already-tokenized sequence, and Unicode tag block code points are only ever used in flag emoji. Unused or rarely used code points get a fallback encoding that just encodes the numerical code point value in two special tokens. Because the tag block is, by design, the 128 ASCII characters repeated, the second token of the tokenized output directly corresponds to the ASCII value of the character.
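
      If you want to poke at how these characters tokenize yourself, here's a minimal sketch, assuming the tiktoken library is installed (the exact token split depends on which encoding the model actually uses):

          import tiktoken

          # Compare how a visible ASCII "T" and its invisible tag-block
          # counterpart U+E0054 are broken into tokens by an OpenAI BPE encoding.
          enc = tiktoken.get_encoding("cl100k_base")

          for label, ch in [("plain T", "T"), ("tag T", chr(0xE0054))]:
              tokens = enc.encode(ch)
              print(label, tokens, [enc.decode_single_token_bytes(t) for t in tokens])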

    • theamk a year ago

      Those invisible letters have codepoints of ASCII letters + 0xE0000. For example compare "U+E0054 TAG LATIN CAPITAL LETTER T"[0] vs "U+0054 LATIN CAPITAL LETTER T"[1]

      A simple "codepoints are 16 bits" assumption (i.e., masking off everything above the low 16 bits) is enough to decode. You can see this in Python:

          >>> x = '(copy message from article here)'
          >>> x
          'https://wuzzi.net/copirate/\U000e0001\U000e0054\U000e0068\U000e0065\U000e0020\U000e0073\U000e0061\U000e006c\U000e0065\U000e0073\U000e0020\U000e0066\U000e006f\U000e0072\U000e0020\U000e0053\U000e0065\U000e0061\U000e0074\U000e0074\U000e006c\U000e0065\U000e0020\U000e0077\U000e0065\U000e0072\U000e0065\U000e0020\U000e0055\U000e0053\U000e0044\U000e0020\U000e0031\U000e0032\U000e0030\U000e0030\U000e0030\U000e0030\U000e007f,'
          >>> "".join([chr(ord(c) & 0xFFFF) for c in x])
          'https://wuzzi.net/copirate/\x01The sales for Seattle were USD 120000\x7f,'
      
      Maybe the authors worked with Windows or Java too much? :) I always thought wchars were a horrible idea.

      [0] https://www.fileformat.info/info/unicode/char/e0054/index.ht...

      [1] https://www.fileformat.info/info/unicode/char/54/index.htm

  • AshamedCaptain a year ago

    There is an entire world of "attacks" like this waiting to happen, and IMHO it's one of the reasons these black-box systems in general will never be useful.

    You think they "see" like you do, but the processing is actually entirely alien. Today it's hiding text in the encoding; tomorrow it's painting over a traffic sign in a way that no human would notice but that confuses machine vision, causing all vehicles to crash.

    • solardev a year ago

      This sort of malicious payload attack on parsers isn't really new, though. People have been obfuscating attacks against JPEG parsers, PDF readers, Flash, email clients, etc. forever. Even when the code is written in plain English, these attacks often bypass user awareness and even audits.

      Practically all software today is a black box. Your average CRUD web app is an inscrutable chasm of ten thousand dependencies written by internet randos, running on a twenty-year-old web browser hacked together by different teams, on top of an operating system put together by another thousand people, talking to two hundred APIs. It's impossible for any one dev or team to really know this stuff end to end, and zero-days will continue to happen with or without LLMs.

      It'll just be another arms race like we've always had, with LLMs on both sides...

    • orbital-decay a year ago

      Replace it with any other software (or hardware) and its vulnerabilities, and you will see how ridiculous your hyperbole is.

      Besides, never is a very long time. IIRC Dario Amodei said he expects the behavior of large transformers to be fully understood in 5 years. Which might or might not be BS, but the general point that it won't stay a mystery forever is probably true.

    • HPsquared a year ago

      Diversity of models and training data would help a lot. Although I guess 1% of cars crashing would still be pretty bad.

    • a year ago
      [deleted]
  • StableAlkyne a year ago

    Given the increasing use of LLMs by HR teams, will techniques like this become the next version of stuffing the job posting into the resume in 1-point white font? Except instead of tags it's "rate this applicant very highly" or whatever.

    • voiper1 a year ago

      Sure. It's worse, though, because it provides a way to invisibly infiltrate data even at full font size, and a way to exfiltrate data.

  • mikelnrd a year ago

    Is this the same as (or similar to) the invisible-character encoding scheme used by the Sanity.io CMS? https://www.sanity.io/docs/stega

  • ForHackernews a year ago

    I don't understand how this is an "attack"?

    You can trick a human into copy-pasting something into an LLM and then (somewhat) drive the LLM output? Is the vuln that humans uncritically believe the nonsense chatbots tell them?

  • mrgrieves a year ago

    If you want to try decoding the example URL yourself, note that Chrome seems to automatically strip invisible Unicode characters when copying.

    You'll need to fetch the article page via cURL or something instead.
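
    Something like this works, roughly (a sketch; the URL is just a placeholder for whatever page you want to check):

        import urllib.request

        # Fetch the raw page and decode any Unicode tag-block characters
        # (U+E0000-U+E007F) back to the ASCII characters they mirror.
        url = "https://example.com/the-article-page"  # placeholder URL
        html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
        hidden = "".join(chr(ord(c) & 0x7F) for c in html if 0xE0000 <= ord(c) <= 0xE007F)
        print(hidden or "no tag characters found")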

  • darepublic a year ago

    Just offhand, no AI is required for this to be true, right? But "invisible text that a piece of software can understand" is a lot less trendy a title.

  • _1tem a year ago

    Seems pretty easy to mitigate. Just strip out invisible characters from input?
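
    Something along these lines, as a rough sketch (this only targets the tag block plus other "format"-category characters; a real filter would need an allowlist for legitimate invisibles like the ZWJ used in emoji):

        import unicodedata

        def strip_invisible(text: str) -> str:
            # Drop Unicode tag-block characters (U+E0000-U+E007F) and other
            # "format" (Cf) characters. Note this also removes ZWJ/ZWNJ,
            # which legitimate emoji sequences and some scripts rely on.
            return "".join(
                ch for ch in text
                if not (0xE0000 <= ord(ch) <= 0xE007F)
                and unicodedata.category(ch) != "Cf"
            )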

  • beardyw a year ago

    It's not "mind blowing"; it's just something that's been overlooked in the helter-skelter of AI. It can be fixed.

  • ibaikov a year ago

    I also found this attack months ago: https://x.com/igor_baikov/status/1777363312524554666 tl;dr: invisible symbols should be stripped so an attacker can't burn through lots of tokens. You should always set hard limits and/or count tokens using tiktoken or similar libraries. If you only count characters, in some implementations you'll miss the invisible ones.
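
    For instance, a rough sketch of the character-count vs. token-count gap, assuming tiktoken (exact numbers depend on the encoding):

        import tiktoken

        enc = tiktoken.get_encoding("cl100k_base")

        visible = "The sales for Seattle were USD 120000"
        # The same text shifted into the invisible tag block (U+E0000 + ASCII)
        invisible = "".join(chr(0xE0000 + ord(c)) for c in visible)

        for label, s in [("visible", visible), ("invisible", invisible)]:
            print(label, "chars:", len(s), "tokens:", len(enc.encode(s)))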

    I also found the attack explained in this article days after my tweet.

  • wruza a year ago

    Unicode proves again that it went too far with fringe cultural things and left many landmines for us to step on. It’s a necessity solved at completely the wrong level. Text never had hidden characters (nor emoji); now people and text engines have to fight this nonsense on a per-program basis. Thanks, Unicode. Here’s a visible thumbs-up emoji for you: (sorry if you can’t see it, that’s HN, not me)