SMLL: Using 200MB of Neural Network to Save 400 Bytes

(frankchiarulli.com)

15 points | by fcjr 2 days ago

3 comments

  • f_devd a day ago

    Having worked on compression algos, any NN is just way too slow for (de-)compression. A potential use for them is coarse prior estimation in something like rANS, but even then the overhead cost would need to be carefully weighed against something like Markov chains, since the relative cost is just so large.
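
    The tradeoff f_devd describes can be illustrated with a toy comparison (this sketch is not from the post or the comment; the function names and the two-pass static-counting approach are illustrative assumptions): an order-1 Markov chain conditions each symbol's probability on its predecessor, and on structured data that prior alone can cut the Shannon coding cost well below an order-0 model, at a tiny fraction of the cost of running a neural network per symbol. Note the counts here are gathered from the data itself, so a real codec would also have to transmit or adaptively rebuild the model, which is exactly the overhead being weighed.

    ```python
    from collections import Counter, defaultdict
    import math

    def order0_bits(data: bytes) -> float:
        # Shannon cost of coding `data` with a static order-0 model
        # (each symbol coded with its global frequency).
        counts = Counter(data)
        n = len(data)
        return sum(-c * math.log2(c / n) for c in counts.values())

    def markov1_bits(data: bytes) -> float:
        # Shannon cost with an order-1 Markov chain: each symbol is
        # coded with the distribution conditioned on its predecessor.
        ctx_counts: dict[int, Counter] = defaultdict(Counter)
        for prev, cur in zip(data, data[1:]):
            ctx_counts[prev][cur] += 1
        bits = 8.0  # first symbol coded raw, no context available
        for prev, cur in zip(data, data[1:]):
            counts = ctx_counts[prev]
            bits += -math.log2(counts[cur] / sum(counts.values()))
        return bits

    data = b"abababababcdcdcdcdcd" * 50
    print(f"order-0 model: {order0_bits(data):.0f} bits")
    print(f"order-1 Markov: {markov1_bits(data):.0f} bits")
    ```

    An entropy coder like rANS would then code against whichever model's probabilities win this comparison; the NN would only be worth its runtime cost if its predictions beat the Markov prior by more than the (very large) compute overhead.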

  • msephton 2 days ago

    No mention of decompression speed and validation, or did I miss something?

    • savalione 2 days ago

      It's in the post: Benchmarks -> Speed

      tl;dr: SMLL is approximately 10,000x slower than Gzip