Tensor Product Attention Is All You Need

(arxiv.org)

116 points | by eunos 13 hours ago

80 comments

  • carbocation 12 hours ago

    My kingdom for renaming this paper to something like "Tensor Product Attention is a Memory-Efficient Approach for Long-Sequence Language Modeling"

    • Zacharias030 11 hours ago

      If you don’t like the title, wait till you see this acronym: „… we introduce the Tensor ProducT ATTenTion Transformer (T6), a new model architecture…“

      • imjonse 10 hours ago

        There is a famous transformer model named T5 from Google, and also S4, S5, and S6 (Mamba) in the LLM space, so it is not unusual naming.

        • svantana 7 hours ago

          Yes, but T5 is at least a normal acronym: Text-To-Text Transfer Transformer (albeit a bit forced)

        • TeMPOraL 5 hours ago

          That it's not unusual tells us that too many researchers in the field are chasing citations and fame at the expense of doing quality work.

          • ben_w 2 hours ago

            Mm. That, or they're all sharing a sense of humour/in-jokes: I'm sure I'm not the only one here who immediately thought of "GOTO is all you need" and "Attention considered harmful"

            • TeMPOraL 2 hours ago

              Right. But then, what they did to the title to make it collapse down to T6 is even worse than what I did to my nickname back in high school to squeeze in a long-forgotten inside joke about our city's municipal sanitation department (MPO).

        • black_puppydog 8 hours ago

          "... is all you need" isn't unusual either, and yet GGP isn't happy about it (and I understand why)

  • sva_ 11 minutes ago

    The main contribution of the paper aside, I have to say that the background section (Section 2) is very neatly and succinctly written.

  • bbcc90 9 hours ago

    (trying to move the critique beyond the title...)

    When trying to deploy LLMs with larger context windows in constrained environments, two things start to hurt: a) increased memory footprint from the longer KV cache, and b) slower decoding due to the longer context window. This paper addresses a) only, which is useful, but we are still left with b) (right?)

    • verdverm 8 hours ago

      The more meaningful contribution may be (section 3.4)

      > These variants illustrate TPA’s versatility in balancing memory cost, computational overhead, and representation power. By choosing which dimensions (heads or tokens) remain contextual and adjusting ranks (R_Q, R_K, R_V), TPA unifies multiple existing attention mechanisms—such as MHA, MQA, and GQA—under one framework, while potentially reducing the KV cache size by an order of magnitude during autoregressive inference.

      re: the title, it might be the true one if their proofs hold up

      ---

      I'm now curious if the "Element-wise Attention is All You Need" preprint can be fit into this framework. Sadly, my math is not currently up to the task. It appears to offer even better computational savings during both training and inference while maintaining accuracy, though it has only been tested with a smaller model.

      https://arxiv.org/abs/2501.05730

      • ashupadhi01 2 hours ago

        Curious to know what mathematics you are comfortable with. If you are able to understand the papers you mentioned, you must belong to the 99th percentile.

        • verdverm 2 hours ago

          I was never good at proof writing. I found group theory and algebra interesting; topology and analysis eluded me. It's just been a while since I did any serious math thinking.

    • llm_trw 6 hours ago

      It addresses b) too, since decompositions are always smaller than the original tensor. It's usually the case that memory access is also slower than matrix multiplications so this will be faster. Burning flops to save memory movement.

      • menaerus 2 hours ago

        > It's usually the case that memory access is also slower than matrix multiplications so this will be faster. Burning flops to save memory movement.

        I haven't read this paper (yet), but isn't this the case mostly for training and not so much for inference? A good example would be flash-attention: it trades higher flops for better memory utilization, but that's mostly irrelevant in inference workloads.

        • verdverm an hour ago

          They claim inference-time savings for the KV cache.

          • menaerus an hour ago

            I skimmed through the paper real quickly. There's no performance data on inference speedups in the paper, only benchmarks relevant to training.

            They also, interestingly, don't compare against flash-attention. Flash-attention outperforms all of the other attention mechanisms mentioned in the paper: MHA, MQA, GQA, and MLA.

            • verdverm 40 minutes ago

              They aren't claiming speedups; they are claiming up to an order of magnitude less space needed for the KV cache at runtime. This translates to a smaller GPU, or longer sequences on the same GPU.
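
              A rough back-of-envelope of where a factor like that could come from (the head count, head dimension, and ranks below are illustrative assumptions on my part, not the paper's configuration), comparing caching full per-head K/V per token against caching only the rank factors:

                  # Illustrative KV-cache sizing (hypothetical dimensions and TPA ranks)
                  n_heads, d_head = 32, 128   # assumed head count and head dimension
                  r_k, r_v = 2, 2             # assumed ranks for the K and V factorizations

                  # Standard MHA: cache full K and V vectors for every head, per token, per layer
                  mha_per_token = 2 * n_heads * d_head              # 8192 values

                  # Assumed TPA-style cache: per token, (r_k + r_v) pairs of
                  # (head factor of length n_heads, token factor of length d_head)
                  tpa_per_token = (r_k + r_v) * (n_heads + d_head)  # 640 values

                  print(mha_per_token / tpa_per_token)              # -> 12.8

              With numbers in that ballpark the per-token cache is roughly an order of magnitude smaller, which is where the "longer sequences on the same GPU" framing comes from.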

    • wseqyrku 5 hours ago

      > (trying to move the critique beyond the title...)

      This is kind of a theme on HN now. The top comments are completely beside the point of the article/story/etc.

      • msoad 2 hours ago

        I know. It is sad. Naming can also be seen as a way of showing respect to a hugely impactful paper if you want to be positive about it.

  • whymauri 12 hours ago

    I really can't with these paper titles anymore, man.

    • magicalhippo 12 hours ago

      There's an Ask HN thread going[1] asking about what people have done with small LLMs. This seems like a possible application. I asked Granite 3.1 MOE 3B to generate a title based on the abstract and it came up with:

      Tensor Product Attention: A Memory-Efficient Solution for Longer Input Sequences in Language Models

      Maybe a Greasemonkey script to pass arXiv abstracts to a local Ollama could be something...

      [1]: https://news.ycombinator.com/item?id=42784365

    • wisty 12 hours ago

      Clickbait paper titles considered harmful?

      • llm_trw 6 hours ago

        Only if the paper is handwritten.

      • gbnwl 11 hours ago

        OK I'll admit I chuckled

      • moffkalast 5 hours ago

        Clickbait paper titles cause cancer, study shows

    • anigbrowl 11 hours ago

      By 2038 all scientific papers will be titled 'Bruh.' While this might at first seem a recipe for confusion, the fundamental interconnectedness of all things, as demonstrated by Ollama (Googol 13), highlights the fact that pretty much any insight is as good as any other and that all are descriptions of the same underlying phenomenon. Freed from constraints like survival or the necessity to engage in economic activity, humanity in the 2030s will mainly devote itself to contemplating amusing but fundamentally interchangeable perspectives within increasingly comfy pleasure cubes.

    • ilove196884 12 hours ago

      I hate how paper titles are worded like SEO techniques.

      • LPisGood 38 minutes ago

        Having a catchy title is great for shorthand. If they didn’t have such catchy names I probably wouldn’t remember Flush+Reload, Spectre, or even Attention Is All You Need.

      • Upvoter33 an hour ago

        On the one hand, sure, it's dumb.

        But, on the other hand, it's hard to get researchers to read your paper, esp. in fast-moving areas. Every little thing might be the difference between reading the abstract or not. Reading the abstract might lead to reading the intro. And so on.

        So, for better or worse, the competition for human eyeballs is real.

        Ironically, in this case, "attention" is all that the authors want.

      • spiritplumber 12 hours ago

        Turn something into a metric and it will be misused. Ever always was

        • jampekka 7 hours ago

          Attention is all you need!

        • TeMPOraL 5 hours ago

          > Ever always was

          Always has been.

          silent gunshot

      • verdverm 11 hours ago

        This is a riff on the original "attention is all you need" paper; there have been a few of these lately.

        • Matthyze 9 hours ago

          A few? A multitude.

          • verdverm 8 hours ago

            This one might be right if they have in fact unified multiple attention approaches into a single framework

            see Section 3.4

    • jdefr89 4 hours ago

      Same. At this point “All we need” is about a thousand different things…

      • taylorius 2 hours ago

        A catchy title is all you need (to get attention).

      • philipov 3 hours ago

        All you need is love.

    • amelius 7 hours ago

      And they don't formally show that the titles are correct, so I don't think these papers belong in CS.

    • WesolyKubeczek 2 hours ago

      Preach, mate. The bloody Beatles and their bloody catchy refrains and hit songs. They sing “Love is all you need” once, and now it’s everywhere! Can’t hide from it. Even scientific papers! Especially scientific papers!

      Bloody hell and brimstone. Been crazy 57 years and a half already.

    • byyoung3 10 hours ago

      haha same

  • jdefr89 4 hours ago

    Every day there are literally tons of papers titled “XYZ is All You Need”; at this point we apparently need thousands of things…

  • esafak 11 hours ago

    Tensor decomposition has traditionally suffered from high computational complexity. Is it an issue here?

    • verdverm 11 hours ago

      My math is rusty, but it looks to have a higher complexity than the original attention. I cannot say if it is an issue. Generally it seems we are willing to spend more computation at training time if it produces better results at inference time. In this case they are reducing the resources needed at inference time (an order of magnitude for the KV cache) or enabling longer sequences given the same resources.

      There's another paper I saw yesterday, "Element-wise Attention is All You Need", which looks like an early preprint, written by a solo author with a solo A800 and tested on some smaller problems. If the results hold up on language benchmarks, it could reduce resource requirements during training as well. It looks to have lower complexity when scaling.

      https://arxiv.org/abs/2501.05730

    • davmre 7 hours ago

      They're not proposing to apply tensor decomposition to an existing collection of weights. It's an architecture in which the K, V, and Q tensors are constructed as a product of factors. The model works with the factors directly, and you just need to compute their product on the forward pass (and adjoints on the backward pass), so there's no decomposition step.
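
      If it helps to see the shape of that, here's a minimal sketch of how the keys could be assembled from factors (the names, shapes, and rank are my assumptions about the general idea, not the paper's exact parameterization):

          import torch

          # Hypothetical sizes, chosen only for illustration
          d_model, n_heads, d_head, rank = 512, 8, 64, 2

          # Small linear maps produce the factors directly from the hidden state
          to_head_factor = torch.nn.Linear(d_model, rank * n_heads)   # "which heads" factor
          to_token_factor = torch.nn.Linear(d_model, rank * d_head)   # "what content" factor

          def factored_keys(x):
              # x: (batch, seq, d_model)
              b, s, _ = x.shape
              a = to_head_factor(x).view(b, s, rank, n_heads)   # (b, s, R, h)
              c = to_token_factor(x).view(b, s, rank, d_head)   # (b, s, R, d_h)
              # K is assembled as a sum over ranks of outer products -> (b, s, h, d_h);
              # the model only ever learns the factor maps, so nothing is decomposed after the fact.
              k = torch.einsum('bsrh,bsrd->bshd', a, c)
              return k, (a, c)   # (a, c) is what a factored KV cache would store

      During autoregressive decoding, caching the factors instead of the assembled K/V is presumably where the claimed cache savings come from.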

    • absolutelastone 11 hours ago

      Looks like it's just a matrix decomposition in the paper. I'm guessing anyway. These attention papers are always a painful mix of mathematical, quasi-mathematical, and information retrieval jargon.

      There is something in the GitHub repo about higher-order decompositions. I can't find where the method for factoring is given.

    • dartos 11 hours ago

      At a sniff test it would make sense: trading computational complexity for space.

  • hangonhn 8 hours ago

    For those of us who are laypeople outside of machine learning and AI, what was the critical insight that made attention “all you need” in the original Transformer paper?

    • yorwba 7 hours ago

      The abstract https://arxiv.org/abs/1706.03762 explains it well:

      "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely."

      They did not invent attention, but while previous language models had used attention as an auxiliary mechanism, they removed everything but the attention and the models still worked. Really, the title already says it all.

    • danielbln 7 hours ago

      I believe the insight was the introduction of attention mechanisms that allow the NN to look at all words (well, embeddings) in parallel and make connections between them, instead of processing things purely sequentially.
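
      For reference, that core operation is small enough to write out; a minimal single-head sketch, ignoring masking and the multi-head projections:

          import torch

          def scaled_dot_product_attention(q, k, v):
              # q, k, v: (batch, seq, d). Every token's query is compared with every
              # token's key at once, so the whole sequence is processed in parallel
              # rather than one step at a time as in an RNN.
              d = q.shape[-1]
              scores = q @ k.transpose(-2, -1) / d ** 0.5   # (batch, seq, seq) pairwise similarities
              weights = torch.softmax(scores, dim=-1)       # how much each token attends to every other token
              return weights @ v                            # weighted mix of the value vectors

      Stack a number of these heads with feed-forward layers and positional encodings, with no recurrence at all, and you essentially have the Transformer.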

    • imtringued 3 hours ago

      I don't remember the contents of that paper, but I can give you some context based on my knowledge of "traditional" theoretical computer science.

      In theoretical CS you have state machines, pushdown automata, and Turing machines. What may surprise you is that the difference between those three does not lie in the way the algorithm for them is represented. They all use state transition diagrams!

      Pushdown automata are more powerful than state machines because they have a stack onto which they can push new data, peek at the top of the stack, or pop data off the stack.

      Now here is the kicker! How do you get a Turing machine? You take a pushdown automaton and remove the restriction that you can only push, peek, or pop at the head. You can now move the head pointing at the stack; your stack has turned into a tape!

      The key difference lies in the data structure that the transition diagram is manipulating, not the algorithm itself!

      The attention mechanism of the transformer architecture is analogous to a tape that can only be written to once per blank cell, and in theory that alone is enough to emulate a full Turing machine with read, write, rewrite, and delete semantics.
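
      To make the stack-vs-tape point concrete, here's a toy sketch (my own illustration, not from any paper): the only thing the tape adds over the stack is a movable head, and that alone is the jump in expressive power.

          class Stack:
              # A pushdown automaton's storage: access only at the top.
              def __init__(self):
                  self.cells = []
              def push(self, x):
                  self.cells.append(x)
              def peek(self):
                  return self.cells[-1]
              def pop(self):
                  return self.cells.pop()

          class Tape:
              # A Turing machine's storage: same cells, but the head can move,
              # so any cell can be read or rewritten.
              def __init__(self):
                  self.cells, self.head = [None], 0
              def move(self, step):
                  # step is -1 (left) or +1 (right); extend the "infinite" tape on demand
                  self.head = max(self.head + step, 0)
                  if self.head == len(self.cells):
                      self.cells.append(None)
              def read(self):
                  return self.cells[self.head]
              def write(self, x):
                  self.cells[self.head] = x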

    • freilanzer 7 hours ago

      That attention works and is highly parallelisable.

  • cute_boi 11 hours ago

    > a novel attention mechanism

    Why does every paper have to mention the word "novel"? And these titles are getting crazier day by day.

    • LPisGood 36 minutes ago

      This is not new at all, by the way. Bringing novel ideas and techniques is kind of the whole point of research, and explicitly describing what novel thing you did is a good thing to do in an intro/abstract.

    • NitpickLawyer 9 hours ago

      If your paper is scored/gated on a "novelty factor" by admission committees, then applicants will overuse that term.

    • verdverm 11 hours ago

      There are a number of papers which aim to improve the attention aspect of models, all being some derivation of the original "Attention is All You Need" paper. A pattern of "'blank' Attention is All You Need" has emerged

    • patrick451 11 hours ago

      Because to publish in a real journal, you typically need both novelty and for your work to be "interesting". The job of the abstract and introduction of a paper (where the word "novel" normally lives) is to sell the reviewer that the paper should be published and to sell you that you should read and cite it.

  • joshdavham 11 hours ago

    I'm sorry but can people please stop naming their papers "X is all you need"? It's super annoying.

    • edflsafoiewq 22 minutes ago

      Why? It clearly and instantly communicates the genre of result the paper presents.

    • recursive 10 hours ago

      Are you saying... you consider it harmful?

      • joshdavham 2 hours ago

        > Are you saying... you consider it harmful?

        No.

      • oliverx0 3 hours ago

        I see what you did there

      • pepinator 9 hours ago

        A more precise title would be better, so, yeah, it's harmful.

    • WithinReason 8 hours ago

      Consider submitting an article titled "all you need is considered harmful"

      • joshdavham 2 hours ago

        I don’t follow.

        • gashmol an hour ago

          They are referring to a meme a Dutch guy created.

    • Deutschland314 5 hours ago

      It's clearly a play on words, as it looks like a follow-up paper.

  • thunkingdeep 9 hours ago

    If you don’t pay to read papers, you don’t get to complain about the titles, imo.

    I hate ads, but I’m not paying for YouTube Premium either. That’s how it goes. I get ads.

    • Vampiero 8 hours ago

      > I hate ads, but I’m not paying for YouTube Premium either. That’s how it goes. I get ads.

      No, that's not how it goes. You get uBlock Origin and then you don't get ads. Simple as that.

      If you don't like ads and don't fight against them it means you accept ads and want to see more of them shoved down our collective throat. At least from the perspective of marketers and industries who rely on ads. That's how we ended up in this predicament in the first place. Lazy compliance.

      If Youtube isn't sustainable without ads, it should die so that natural selection can take over and so that a better ecosystem can finally take its place. Every single "Youtuber" hates the platform, mostly because it has zero transparency being a Google product. The viewers hate it too because it constantly takes down their favorite videos and creators and because it's full of ads.

      The only reason it's (still) the main site for hosting videos is quite literally just ad-fueled inertia due to the intrinsic cost of hosting videos. If ads didn't exist the only sustainable solution would be something less centralized like Peertube. And to me that's a desirable outcome.

      • philipov 3 hours ago

        "The forest must burn to make room for new trees."

    • jampekka 7 hours ago

      Authors or their institutions don't get a penny from paywalled papers either; oftentimes they have to pay to publish. Authors choose the titles; sometimes reviewers (not paid a dime either) can demand changes to the title, and the academic editor (paid zilch too) can require changes.

      This doesn't apply to arXiv though, as it is not peer reviewed nor edited and is funded by various institutions.

      Admittedly the academic publishing system is so corrupt that it's hard to fathom, so it's easy to misunderstand it.

      Typically you do pay for the papers (and publisher profits), either through taxes or inflated product prices.

    • black_puppydog 8 hours ago

      The authors are from, let's see, China and California. Then I guess a good chunk of the HN crowd is entitled to bitch about the title?