Show HN: Llama 3.3 70B Sparse Autoencoders with API access

(goodfire.ai)

180 points | by trq_ a day ago

43 comments

  • lukeramsden 7 hours ago

    Why are AI researchers constantly handicapping everything they do under the guise of "safety"? It's a bag of data and some math algorithms that generate text....

    • unshavedyak 41 minutes ago

      Let's reverse this - why wouldn't they do that? I agree with you, but LLMs tend to be massively expensive and thus innately tied to ROI. A lot of companies fret about advertising even near some types of content. The idea of spending millions to put a racist bot on your home page is, no surprise, not very appetizing.

      So of course if this is where the money and interest flows then the research follows.

      Besides, it's a generally useful area anyway. The ability to tweak behavior even if not done for "safety" still seems pretty useful.

    • stavros 6 hours ago

      > It's a bag of data and some math algorithms that generate text....

      I agree with the general premise of too much "safety", but this argument is invalid. Humans are bags of meat and they can do some pretty terrible things.

    • cornholio 5 hours ago

      > Why are AI researchers constantly handicapping everything

      Career and business self-preservation in a social media neurotic world. It doesn't take much to trigger the outrage machine and cancel every future prospect you might have, especially in a very competitive field flush with other "clean" applicants.

      Just look at the whole "AI racism" fustercluck for a small taste.

    • UltraSane 5 hours ago

      What if an AI model could tell you exactly how to modify a common virus to kill 50% of everyone it infects?

      • SXX 2 hours ago

        Yeah. It will start its instructions with a recommendation to buy a high-tech biolab for $100,000,000.

        Seriously. The reason we don't have mass killings everywhere is not that information on how to make explosive drones or poisons is impossible to find or access. It's also not hard to buy a car or a knife.

        Hell, you can even find YouTube videos on how exactly uranium enrichment works, step by step. Some content creators even got raided by police for that. Yet we don't see tons of random kids making dirty bombs.

        PS: Cody's Lab: Uranium Refining:

        https://archive.org/details/cl-uranium

    • ben_w 5 hours ago

      > It's a bag of data and some math algorithms that generate text....

      That describes almost every web server.

      To the extent that this particular maths produces text that causes political, financial, or legal harms to their interests, this kind of testing is just like any other acceptance testing.

      To the extent that the maths is "like a human", even in the vaguest and most general sense of "like", then it is also good to make sure that the human it's like isn't a sadistic psychopath — we don't know how far we are from "like" by any standard, because we don't know what we're doing, so this is playing it safe even if we're as far from this issue as cargo-cults were from functioning radios.

  • paraschopra a day ago

    I cranked up 'pirate talking about andromeda galaxy' feature to 1.5 and got this:

    >Yer lookin' for the Andromeda galaxy, eh? Here be the details, me hearty: *The Andromeda Galaxy (Yer looking for a fine piece o' booty, matey!):* * *A fine piece o' booty*: The Andromida Galaxy be a right fine piece o' booty, with a treasure chest o' gold doubloons... er, I mean, a mighty haul o' stars, hidden beneath the Jolly Roger! * *A barnacle on the high seas*: The Andromeda Galaxy be a right scurvy dog, with a hull full o' stars, and a keel full o' hidden treasure! It be a fine piece o' booty, but it be needin' a bit o' swabbin' the decks, or it'll be walkin' the plank, savvy? * *A chest overflowin' with gold*: The Andromeda Galaxy be a right fine piece o' booty, with a chest overflowin' with gold doubloons... er, I mean, a fine haul o' stars, and a barnacle on the high seas! It be a right scurvy dog, but it be worth keepin' an eye on, or it

    • tMcGrath a day ago

      Yes - we'd never normally turn features up this much as it breaks the model quite badly, but we put this in the post to show what that looked like in practice.
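
      For anyone curious how "turning a feature up" works mechanically: a common implementation adds a scaled copy of the feature's SAE decoder direction to the model's residual stream. A minimal toy sketch (shapes and names are illustrative, not necessarily our production code):

```python
import torch

def steer(resid, decoder, feature_idx, strength):
    # Add a scaled copy of one feature's decoder direction to the
    # residual-stream activations (broadcasts over batch/sequence).
    direction = decoder[feature_idx]        # shape: (d_model,)
    return resid + strength * direction

# toy example: d_model = 8, a single "token" of zeros
resid = torch.zeros(1, 8)
decoder = torch.eye(8)                      # stand-in decoder matrix
steered = steer(resid, decoder, feature_idx=3, strength=1.5)
```

      At strength 1.5 the feature's direction dominates the residual stream, which is why generations degrade the way the pirate sample shows.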

  • tMcGrath a day ago

    I'm one of the authors of this paper - happy to answer any questions you might have.

    • goldemerald a day ago

      Why not actually release the weights on huggingface? The popular SAE_lens repo has a direct way to upload the weights and there are already hundreds publicly available. The lack of training details/dataset used makes me hesitant to run any study on this API.

      Are images included in the training?

      What kind of SAE is being used? There have been some nice improvements in SAE architecture this last year, and it would be nice to know which one (if any) is provided.

      • tMcGrath a day ago

        We're planning to release the weights once we do a moderation pass. Our SAE was trained on LMSys (you can see this in our accompanying post: https://www.goodfire.ai/papers/mapping-latent-spaces-llama/).

        No images in training - 3.3 70B is a text-only model so it wouldn't have made sense. We're exploring other modalities currently though.

        SAE is a basic ReLU one. This might seem a little backwards, but I've been concerned by some of the high-frequency features in TopK and JumpReLU SAEs (https://arxiv.org/abs/2407.14435, Figure 14), and the recent SAEBench results (https://www.neuronpedia.org/sae-bench/info) show quite a lot of feature absorption in more recent variants (though this could be confounded by a number of things). This isn't to say they're definitely bad - I think it's quite likely that TopK/JumpReLU are an improvement, but rather that we need to evaluate them in more detail before pushing them live. Overall I'm very optimistic about the potential for improvements in SAE variants, which we talk a bit about at the bottom of the post. We're going to be pushing SAE quality a ton now that we have a stable platform to deploy them to.
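
        For concreteness, a "basic ReLU" SAE is just a linear encoder with a ReLU into an overcomplete feature space, a linear decoder, and an L1 sparsity penalty on the features. A toy sketch (dimensions are illustrative, not our actual config):

```python
import torch
import torch.nn as nn

class ReluSAE(nn.Module):
    # Minimal "basic ReLU" sparse autoencoder: linear encoder + ReLU
    # into an overcomplete feature space, linear decoder back to the
    # model's activation space.
    def __init__(self, d_model, d_hidden):
        super().__init__()
        self.enc = nn.Linear(d_model, d_hidden)
        self.dec = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        f = torch.relu(self.enc(x))   # sparse feature activations
        return self.dec(f), f         # reconstruction, features

def sae_loss(x, x_hat, f, l1_coeff=1e-3):
    # reconstruction error plus an L1 penalty that induces sparsity
    return ((x - x_hat) ** 2).mean() + l1_coeff * f.abs().sum(-1).mean()

sae = ReluSAE(d_model=16, d_hidden=64)
x = torch.randn(4, 16)
x_hat, f = sae(x)
loss = sae_loss(x, x_hat, f)
```

        TopK and JumpReLU variants replace the plain ReLU with a harder sparsity mechanism, which is where the high-frequency-feature concerns above come in.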

    • wg0 21 hours ago

      Noob question - how do we know that these autoencoders aren't hallucinating and really are mapping/clustering what they should be?

      • trq_ 15 hours ago

        Hmm, the hallucination would happen in the auto-labelling, but we review and test our labels and they seem correct!

  • trq_ a day ago

    If you're hacking on this and have questions, please join us on Discord: https://discord.gg/vhT9Chrt

  • bravura 13 hours ago

    I'd be really curious to see what happens if you use PaCMAP (https://jmlr.org/papers/volume22/20-1061/20-1061.pdf) and more recent large-scale variants (https://github.com/YingfanWang/PaCMAP).

  • swyx a day ago

    nice work. enjoyed the zoomable UMAP. i wonder if there are hparams to recluster the UMAP in interesting ways.

    after the idea that Claude 3.5 Sonnet used SAEs to improve its coding ability i'm not sure if i'm aware of any actual practical use of them yet beyond Golden Gate Claude (and Golden Gate Gemma: https://x.com/swyx/status/1818711762558198130)

    has anyone tried out Anthropic's matching SAE API yet? wondering how it compares with Goodfire's and if there's any known practical use.

    • trq_ a day ago

      We haven't yet found generalizable "make this model smarter" features, but they do let you manage the tradeoff of putting instructions in system prompts: e.g. if you have a chatbot that sometimes generates code, you can give it very specific instructions when it's coding and leave those out of the system prompt otherwise.

      We have a notebook about that here: https://docs.goodfire.ai/notebooks/dynamicprompts
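
      The idea in miniature (all names and the threshold are made up here - see the notebook for the real version):

```python
def build_system_prompt(coding_feature_activation, threshold=0.5):
    # Include coding-specific instructions only when a feature
    # indicating "the model is writing code" is firing; otherwise
    # keep the system prompt short.
    base = "You are a helpful assistant."
    if coding_feature_activation > threshold:
        return base + " When writing code, include type hints and docstrings."
    return base
```

      The feature activation plays the role of a cheap runtime classifier that decides which instructions are in scope.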

    • tMcGrath a day ago

      Thank you! I think some of the features we have like conditional steering make SAEs a lot more convenient to use. It also makes using models a lot more like conventional programming. For example, when the model is 'thinking' x, or the text is about y, then invoke steering. We have an example of this for jailbreak detection: https://x.com/GoodfireAI/status/1871241905712828711

      We also have an 'autosteer' feature that makes coming up with new variants easy: https://x.com/GoodfireAI/status/1871241902684831977 (this feels kind of like no-code finetuning).

      Being able to read features out and train classifiers on them seems pretty useful - for instance we can read out features like 'the user is unhappy with the conversation', which you could then use for A/B testing your model rollouts (kind of like Google Analytics for your LLM). The big improvements here are (a) cost - the marginal cost of an SAE is low compared to frontier model annotations, (b) a consistent ontology across conversations, and (c) not having to specify that ontology in advance, but rather discover it from data.
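
      In sketch form, the classifier idea is just a linear probe on feature activations. Here with made-up stand-in data (in practice the rows would be SAE activations from real conversations and the labels would come from your own annotations):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: one row of SAE feature activations per
# conversation, plus a binary "user is unhappy" label.
rng = np.random.default_rng(0)
X = rng.random((200, 32))                  # (n_conversations, n_features)
y = (X[:, 5] > 0.5).astype(int)            # toy label tied to feature 5

probe = LogisticRegression(max_iter=1000).fit(X, y)
```

      Because the SAE features already carry interpretable meaning, a simple linear probe like this is often enough, which is what keeps the marginal cost low relative to frontier-model annotation.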

      These are just my guesses though - a large part of why we're excited about putting this out is that we don't have all the answers for how it can be most useful, but we're excited to support people finding out.

      • swyx 17 hours ago

        sure but as you well know sentiment analysis is a BERT-scale problem, not really an SAE problem. burden of proof is on you that "read features out and train classifiers on them" is superior to "GOFAI".

        anyway i dont need you to have the answers right now. congrats on launching!

  • owenthejumper a day ago

    I am skeptical of generic sparsification efforts. After all, companies like Neural Magic spent years trying to make it work, only to pivot to 'vLLM' engine and be sold to Red Hat

    • refulgentis 20 hours ago

      Link shows this isn't sparsity as in inference speed, it's sparse autoencoders, as in interpreting the features in an LLM ("SAE Anthropic" as a search term will explain more)

  • Inviz 14 hours ago

    The app keeps logging me out after the first click. The tech seems intriguing to me as a software engineer looking to get into custom LLM stuff.

  • I_am_tiberius a day ago

    I wonder how many people or companies choose to send their data to foreign services for analysis. Personally, I would approach this with caution and am curious to see how this trend evolves.

    • tMcGrath a day ago

      We'll be open-sourcing these SAEs so you're not required to do this if you'd rather self-host.

  • ed 21 hours ago

    This is the ultimate propaganda machine, no?

    We’re social creatures, chatbots already act as friends and advisors for many people.

    Seems like a pretty good vector for a social attack.

    • echelon 21 hours ago

      The more the public has access to these tools, the more they'll develop useful scar tissue and muscle memory. We need people to be constantly exposed to bots so that they understand the new nature of digital information.

      When the automobile was developed, we had to train kids not to play in the streets. We didn't put kids or cars in bubbles.

      When photoshop came out, we developed a vernacular around edited images. "Photoshopped" became a verb.

      We'll be able to survive this too. The more exposure we have, the better.

      • ed 20 hours ago

        Early traffic laws were actually created in response to child pedestrian deaths (7000 in 1925).

        https://www.bloomberg.com/news/features/2022-06-10/how-citie...

        • echelon 12 hours ago

          Of course. The point I was making is that in the 19th century, roads were multifunctional spaces shared by merchants, horses, carts, wagons, playing children, performers, etc.

          The introduction of the automobile kicked all of these use cases off of the roads. While pedestrians have the right of way, the roads henceforth belonged to the "devil wagons".

          We also started to shift blame over to pedestrians for jaywalking. They no longer own the roads.

      • pennomi 19 hours ago

        Right. You know how your grandmother falls for those “you have a virus” popups but you don’t? That’s because society adapts to the challenges of the day. I’m sure our kids and grandchildren will be more immune to these new types of scams.

      • Steen3S 20 hours ago

        Please inform the EU about this.

      • imiric 17 hours ago

        Your analogies don't quite align with this technology.

        We've had exposure to propaganda and disinformation for many decades, long before the internet became their primary medium, yet people don't learn to become immune to them. They're more effective now than they've ever been, and AI tools will only make them more so. Arguing that more exposure will somehow magically solve these problems is delusional at best, and dangerous at worst.

        There are other key differences from past technologies:

        - Most took years to decades to develop and gain mass adoption. This time is critical for society and governments to adapt to them. This adoption rate has been accelerating, but modern AI tech development is particularly fast. Governments can barely keep up to decide how this should be regulated, let alone people. When you consider that this tech is coming from companies that pioneered the "move fast and break things" mentality, in an industry drunk on greed and hubris, it should give everyone cause for concern.

        - AI has the potential to disrupt many industries, not just one. But further than that, it raises deep existential questions about our humanity, the value of human work, how our economic and education systems are structured, etc.

        These are not problems we can solve overnight. Turning a blind eye to them and vouching for less regulations and more exposure is simply irresponsible.

        • echelon 12 hours ago

          > vouching for less regulations and more exposure is simply irresponsible.

          We let people buy 6,000 pound vehicles capable of traveling 100+ mph.

          We let people buy sharp knives and guns. And heat their homes with flammable gas. And hike up dangerous tall mountains.

          I think the LLM is the least of society's worries and this pervasive thinking that everything needs to be wrapped up in bubble wrap is what is actually dangerous.

          Can a thought be dangerous? Should we prevent people from thinking or being exposed to certain things? That sounds far more Orwellian.

          If you want to criminalize illegal use of LLMs for fraud, then do that. But don't make the technology inaccessible and patronize people by telling them they're not smart enough to understand the danger.

          This is not a "fragile world" technology in its current form. When they're embodied, walking around, and killing people, then you can sound the alarm.

          • imiric 7 hours ago

            There's a vast middle ground between completely unregulated technology and an Orwellian state. Let's not entertain absolutes.

            > We let people buy 6,000 pound vehicles capable of traveling 100+ mph.

            > We let people buy sharp knives and guns. And heat their homes with flammable gas. And hike up dangerous tall mountains.

            All of those have regulations around them, and people have gotten familiar with how they work. More importantly, they're hardly as disruptive to our lives as AI technology has the potential to be.

            We didn't invent airplanes and let people on them overnight. It took decades for the airline industry to form, and even more for flights to be accepted as a standard form of transportation. We created strict regulations that plane manufacturers and airlines must follow, which were refined over the 20th century.

            Was this unnecessary and Orwellian? Obviously the dangers of flight were very clear, so we took precautions to ensure the necessary safety. With AI, these dangers are not that clear.

            > If you want to criminalize illegal use of LLMs for fraud, then do that. But don't make the technology inaccessible and patronize people by telling them they're not smart enough to understand the danger.

            It's far from patronizing; it's just reality. People don't understand the dangers of the modern internet either, yet they're subjects of privacy violations, identity theft, scams, and all sorts of psychological manipulation from advertising and propaganda that influences how they think, vote and behave in society. Democratic institutions are crumbling, sociopolitical tensions are the highest they've been in the past 30 years in most western countries, and yet you would be fine with unleashing a technology that has a high chance of making this worse? Without any guardrails, or even some time for humanity to start addressing some of the existential questions I mentioned in my previous post? Again, this would be highly irresponsible and dangerous.

            And yet I'm sure that's what's going to happen in most countries. It's people who think like you that are pushing this technology forward, and unfortunately they have a strong influence over governments and the zeitgeist. I just hope that we can eventually climb out of the hole we're currently digging ourselves into.

            • echelon 3 hours ago

              > We created strict regulations that plane manufacturers and airlines must follow

              In response to actual incidents, not imagined ones. Regulations should not come first. We already have the biggest companies chasing after a regulatory moat to protect themselves from competition and commoditization, and that's not how this should work.

              > we took precautions to ensure the necessary safety.

              No we didn't! We used the technology, we made lots of mistakes, and learned over time. That's how it's been with every innovation cycle. If we had regulated from day one, maybe we would have slowed down and not reached the point we are at today.

              Europe is a good model for a presumptive, over-regulated society. Their comparable industries are smaller and lag behind our own because of it.

              > People don't understand the dangers of the modern internet either,

              People "don't understand" a lot of things, such as the dangers they expose themselves to when driving over 30 mph. Yet we don't take that privilege away from them unless they break the law. Laws that only bear teeth after the fact, mind you.

              Imagine if we tried to "protect society from the internet" and restricted access. The naysayers of the time wanted to, and you can find evidence if you look at old newspapers. Or imagine if we had a closed set of allowed business use cases and didn't let people build whatever they wanted without some official process. There would be so many missing pieces.

              Even laws and regulations proposed for mature technologies can be completely spurious. For instance, all the regulations being designed to "protect the children" that are actually more about tracking and censorship. If people cared about protecting the children, they'd give them free school lunches and education, not try to track who signs up for what porn website. That's just building a treasure trove of kompromat to employ against political rivals. Or projecting the puritanical dreams of some lawmakers onto the whole of society.

              > People [...] they're subjects of [...] all sorts of psychological manipulation from advertising and propaganda that influences how they think, vote and behave in society

              > Democratic institutions are crumbling

              So this is why you think this way. You think of society as a failing institution of sorts. You're unhappy with the shape of the world and you're trying to control what people are exposed to and how they think.

              I don't think that there's any amount of debate between you and I that will make us see eye to eye. I fundamentally believe you're wrong about this.

              We live as mere motes of dust within a geologically old and infinite universe. Our lives are vanishingly short. You're trying to button down what people can do and fit them into constructed plans that match your pessimistic worldview.

              We need to be free. We need more experimentation. We need more degrees of freedom with less structure and rigidity. Your prescriptive thinking lowers the energy potential of society and substitutes raw human wants with something designed, stamped, and approved by the central authority.

              We didn't evolve from star dust to adventurous thinking apes just to live within the shackles of other people's minds.

              > unleashing a technology that has a high chance of making this worse

              You are presupposing an outcome and you worry too much.

              Don't regulate technology, regulate abusive behavior using the existing legal frameworks. We will pass all the laws we need as situations arise. And it will work.

              > eventually climb out of the hole we're currently digging ourselves into.

              We're not in a hole. Stop looking down and look up.

          • cbg0 11 hours ago

            Quite a lot of whataboutisms and straw men in your post, let's stick to LLMs, as that was the original topic.

            • echelon 11 hours ago

              It's contextualization, not a logical fallacy.

              Let's stop treating LLMs as spooky.

              • ben_w 2 hours ago

                Lots of things that aren't spooky are yet dangerous. Matches, for example. Scissors. Covid and A-10 Warthogs.

      • Rastonbury 12 hours ago

        Counterpoint: look at the number of supposedly educated people falling into social media echo chambers, parroting partisan views, sharing on-ramps, recommending supplements. They obviously do not see the harm; in fact they feel superior, feeling the need to educate and lecture. The vector there was social media; the vector here is reliance on chatbots. I mildly trust the big players like Anthropic and even OpenAI, but imagine the talking-head influencers/supplement peddlers making and promoting an un-woke chatbot. People are already relying on ChatGPT to navigate medical conditions and personal/relationship issues.

        • weberer 4 hours ago

          There's nothing wrong with supplements. I live in a place where you're lucky to get an hour of sunlight per day in the winter. Vitamin D supplements have been very helpful.

          • Philpax an hour ago

            I suspect they're referring to the Infowars kind of supplements, not vitamin supplements.

        • echelon 12 hours ago

          I feel my response here [1] also applies to you.

          People are going to do these things anyway. We've had "yellow journalism" since the 1800s. It's a tale as old as time.

          What right do we have to go around policing other people's minds?

          When I grew up, the Internet was an escape from the puritanical censorship of the southern baptists I was surrounded with. It was a bastion of free information and idea exchange. If the prevailing ethos of the 2000s wasn't so anti-censorship, I wouldn't have gotten outside of my own filter bubble and found a way to explore other ideas. I would have been chased away as an undesirable member of the opposition and muted, censored, and banned. Thank god the algorithm didn't exist back then.

          The things we do to each other in today's internet are abhorrent. Both sides of the political spectrum attempt to constrain what the other side can do. We need to stop that. It's petty and increases polarization. And that's exactly what's happening with your suggestion - you're wanting to censor ideas and things you don't like that presumably this technology will be used to promote.

          Please stop thinking LLMs are an agent of the enemy to convert more people to their causes. The opposite is also true. And the impact won't be as extreme or dire as you make it out to be - heaven forbid people buy more vitamins. Oh, the humanity.

          [1] https://news.ycombinator.com/item?id=42499972

          • ben_w 2 hours ago

            If yellow journalism is bad, is not fully automated and personalised yellow journalism worse?