7 comments

  • quantadev 9 months ago

    > can reproduce the outputs of an implicit linear model with least squares loss after one step of gradient descent.

    Makes you wonder if we're training LLMs the hard way. For example, if computers had been invented before Calculus, we'd have been using "Numerical Integration" (iterating the differential squares to sum up areas, etc) and "Numerical Differentiation" (ditto for calculating slopes).

    So I wonder if we're simply in a pre-Calculus-like phase of NN/Perceptrons, where we haven't yet realized there's a mathematical way to "solve" a bunch of equations simultaneously and arrive at the best (or at least a locally optimal) set of model weights for a given NN architecture and set of training data.

    From a theoretical standpoint it IS a black-box problem like this, where the set of training data goes in and an array of model weights comes out. If I were to guess, I'd bet there'll be some kind of "random seed" we can add as input, and for each seed we'll get a different set of model weights (a different local minimum or maximum).

    But I'm not a mathematician and there may be some sort of PROOF that what I just said can definitely never be done?
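
    A minimal numpy sketch of the contrast here (illustrative only, not from the paper): for plain linear regression the "solve the equations directly" answer already exists (the least-squares / normal-equations solution), while the quoted result is about what one step of gradient descent on that same loss produces.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                      # toy training inputs
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

    # Closed-form least-squares solution: the "mathematical way to solve
    # a bunch of equations simultaneously" for this simple model class.
    w_closed = np.linalg.lstsq(X, y, rcond=None)[0]

    # One step of gradient descent from w = 0 on the same squared loss,
    # the kind of implicit update the quoted sentence refers to.
    w, lr = np.zeros(3), 0.01
    grad = X.T @ (X @ w - y) / len(y)
    w_one_step = w - lr * grad

    print(w_closed, w_one_step)
    ```

    For general nonlinear networks no such closed form is known, which is exactly the open question being raised here.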

  • billconan 9 months ago

    > We show that SSMs with local self-attention, a form of input-dependent input processing, can perform in-context learning analogously to transformers, i.e. through gradient descent steps on an implicit linear regression problem.

    I don't understand. The benefit of SSMs is better scalability than self-attention. Now this adds self-attention back?
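
    For reference, "local self-attention" usually means attention restricted to a fixed-size window, so the per-token cost stays constant and the total cost stays linear in sequence length (unlike full self-attention's quadratic cost). A rough sketch of that reading, which is an assumption about the paper's usage rather than a quote from it:

    ```python
    import numpy as np

    def local_self_attention(q, k, v, window=8):
        """Causal attention where each position sees only the last `window` tokens.

        Cost is O(T * window) instead of full attention's O(T^2), so adding
        this form of attention need not give up SSM-style scaling.
        """
        T, d = q.shape
        out = np.zeros_like(v)
        for t in range(T):
            lo = max(0, t - window + 1)
            scores = q[t] @ k[lo:t + 1].T / np.sqrt(d)
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()
            out[t] = weights @ v[lo:t + 1]
        return out

    T, d = 32, 4
    q, k, v = (np.random.randn(T, d) for _ in range(3))
    attended = local_self_attention(q, k, v, window=8)
    ```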

  • eli_gottlieb 9 months ago

    >Our key insight is that the diagonal linear recurrent layer can act as a gradient accumulator

    So they're sort of reinventing the discrete-time accumulator (integrator) from signal processing, but parameterized neurally?
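
    For concreteness, a diagonal linear recurrence with a transition near 1 is just an elementwise running sum, so if each step's input is one example's gradient contribution, the hidden state ends up holding the accumulated gradient. A tiny numpy sketch of that reading (not code from the paper):

    ```python
    import numpy as np

    def diagonal_linear_recurrence(xs, a):
        """h_t = a * h_{t-1} + x_t with a diagonal (elementwise) transition a."""
        h = np.zeros_like(xs[0])
        for x in xs:
            h = a * h + x
        return h

    # Per-example contributions to the gradient of a squared loss at w = 0.
    X, y, w = np.random.randn(16, 4), np.random.randn(16), np.zeros(4)
    contribs = [(X[i] @ w - y[i]) * X[i] for i in range(len(y))]

    # With a = 1 the recurrence is a running sum: the state ends up holding
    # the full-batch gradient, i.e. it acts as a gradient accumulator.
    accumulated = diagonal_linear_recurrence(contribs, a=np.ones(4))
    assert np.allclose(accumulated, X.T @ (X @ w - y))
    ```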

  • agnosticmantis 9 months ago

    These papers don’t explain how pretrained LLMs learn in-context, because the simplified models in these papers are either pretrained on the same task that’s tested in-context, or the weights are hand-picked by humans to do GD at inference time.

    See this video for a good discussion: https://youtu.be/-yo2672UikU

  • roger_ 9 months ago

    I'd love to see SSMs replace transformers but adapting them to non-causal, 2D+ inputs doesn't seem that straightforward.

    Is there a non-autoregressive future?

  • derefr 9 months ago

    So, I'm just a layman when it comes to AI/ML, but I do understand computability — what's possible to do with a given machine, and how we can build higher-computational-power primitives out of lower-computational-power primitives by plugging those primitives together with "glue" like parallel feed-forward chains (e.g. an ALU adder's carry bits) and loops over static sub-states of execution.

    My own mental model for what Transformers must necessarily be doing, in order to be able to compute what they compute, given:

    1. the primitives they're made of (for Transformers: matmul a learned matrix; vector-add a learned bias vector; normalize; softmax)

    2. what those primitives can compute over a single layer

    3. the low-ish total number of layers in a Transformer model

    ...is that they were already effectively "state space models" in practice. So this doesn't really surprise me!

    (To be explicit, my assertion is that, for a given latent space between layers N and N+1 in a Transformer model, that latent space encodes a set of state variables [think CPU registers] used by the Nth serial computation steps of an arbitrary set of learned algorithms — where these algorithms are limited to those where every computation step is possible to encode in the form of a fused-matmul-plus-vadd, such that the algorithm itself can be learned as a depthwise-extruded sequence of weights across the layers; and where the learned algorithms can and do share state variables, both as inputs and as outputs; and where these state variables are all attenuated by an activation probability [in a Transformer: attention] such that the algorithms' outputs form a pre-multiplied conditional probability of the output given the confidence of the inputs — in turn such that the same state variable can be a low-confidence output for one algorithm, and a high-confidence output for another algorithm, and the high-confidence component of the output will swamp the low-confidence output.)
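
    As a toy rendering of that mental model (my own sketch, with made-up shapes, and a per-register gate standing in for attention): treat the vector passed between layers as a bank of registers, and each layer as one fused matmul-plus-bias step whose write-back into the registers is scaled by a confidence gate.

    ```python
    import numpy as np

    def layer_step(state, W, b, gate):
        update = W @ state + b           # one fused matmul + vector-add "computation step"
        return state + gate * update     # confidence-weighted write-back into the registers

    d = 8
    state = np.random.randn(d)                     # "registers" carried between layers
    for _ in range(6):                             # low-ish number of layers
        W = np.random.randn(d, d) / np.sqrt(d)     # learned per-layer weights
        b = np.random.randn(d)
        gate = np.random.rand(d)                   # stand-in for per-register activation probability
        state = layer_step(state, W, b, gate)
    ```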

  • dsalaj 9 months ago

    Deep state-space models (Deep SSMs) have shown capabilities for in-context learning on autoregressive tasks, similar to transformers. However, the architectural requirements and mechanisms enabling this in recurrent networks remain unclear. This study demonstrates that state-space model architectures can perform gradient-based learning and use it for in-context learning.