A polynomial autoencoder beats PCA on transformer embeddings

(ivanpleshkov.dev)

17 points | by timvisee 3 days ago

3 comments

  • yobbo 11 minutes ago

    My understanding after scanning the code examples is that the technique expands the dimensionality of each data point with a set of quadratic terms built from its existing dimensions. That sounded like kernel PCA to me; a rough sketch of what I mean is below.
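
    (Quick sketch of the idea as I read it, not the article's actual code; the sizes and the sklearn calls are just my assumptions for illustration.)

        import numpy as np
        from sklearn.decomposition import PCA, KernelPCA
        from sklearn.preprocessing import PolynomialFeatures

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 32))   # stand-in for transformer embeddings

        # Explicit quadratic lift: x -> (x, x_i * x_j for i <= j),
        # then ordinary linear PCA on the lifted vectors.
        X_quad = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
        Z_lifted = PCA(n_components=16).fit_transform(X_quad)

        # Kernel PCA with a degree-2 polynomial kernel performs the same lift
        # implicitly, without materializing the quadratic features.
        Z_kernel = KernelPCA(n_components=16, kernel="poly", degree=2).fit_transform(X)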

  • magicalhippo an hour ago

    I'm just a casual LLM user, but your description of the anisotropy made me think of the recent work on KV-cache quantization techniques such as TurboQuant, where they apply a random rotation to each vector before quantizing, as I understood it precisely to make the distribution more isotropic (rough sketch of the idea below).

    But for RAG that might be too much work per vector?
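
    To make the cost concrete, here is a toy version of the rotate-then-quantize idea (my own sketch, not TurboQuant's actual scheme; the dimension and the int8 step are assumptions):

        import numpy as np

        rng = np.random.default_rng(0)
        dim = 128
        vectors = rng.normal(size=(10_000, dim)).astype(np.float32)

        # One random orthogonal matrix, sampled once via QR of a Gaussian
        # matrix, is reused for every stored vector and every query.
        Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
        rotated = vectors @ Q.astype(np.float32)

        # Naive per-vector symmetric int8 quantization of the rotated vectors.
        scale = np.abs(rotated).max(axis=1, keepdims=True) / 127.0
        quantized = np.round(rotated / scale).astype(np.int8)

    If I have this right, the per-vector overhead of the rotation is a single dim x dim matrix-vector product, which should be small next to the embedding model's own forward pass, so for RAG it may not be that much extra work.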

  • pleshkov 3 days ago

    Author here — questions and pushback both welcome.