Train Your Own LLM from Scratch

(github.com)

53 points | by kristianpaul 2 hours ago

7 comments

  • jvican 40 minutes ago

    If you're interested in this resource, I highly recommend checking out Stanford's CS336 class. It covers all of this curriculum in a lot more depth, introduces you to a lot of theoretical aspects (scaling laws, intuitions) and systems thinking (kernel optimization/profiling). For that, you have to do the assignments, of course... https://cs336.stanford.edu/
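
    If you want a taste of the scaling-laws part without committing to the assignments, here's a minimal back-of-envelope sketch (my numbers, not the course's): it uses the widely cited C ≈ 6ND approximation for total training FLOPs and a Chinchilla-style budget of roughly 20 tokens per parameter.

      # Back-of-envelope training-compute estimate for a dense transformer.
      # Assumptions (mine, not the course's): total training FLOPs ~ 6 * N * D,
      # and a Chinchilla-style compute-optimal budget of ~20 tokens/parameter.

      def training_flops(n_params: float, n_tokens: float) -> float:
          """Approximate total training FLOPs for a dense transformer."""
          return 6.0 * n_params * n_tokens

      n_params = 1.5e9              # GPT-2 XL scale
      n_tokens = 20 * n_params      # compute-optimal token budget
      flops = training_flops(n_params, n_tokens)

      sustained = 100e12            # hypothetical 100 TFLOP/s sustained throughput
      days = flops / sustained / 86_400
      print(f"{flops:.1e} FLOPs, ~{days:.0f} GPU-days at 100 TFLOP/s")

    That works out to about 2.7e20 FLOPs, or roughly a month on a single fast GPU, which is why the course spends so much time on kernel-level efficiency.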

  • hiroakiaizawa 13 minutes ago

    Nice. What scale does this realistically reach on a single machine?

  • baalimago 33 minutes ago

    Train your LM from scratch*

    I doubt you have a machine big enough to make it "Large".

    • nucleardog 9 minutes ago

      Hey now! I've got a half terabyte of RAM at my disposal! I mean, it's DDR4 but... it's RAM!

      And it's paired with 48 processor cores! I mean, they don't even support AVX512 but they can do math!

      I could totally train an LLM! Or at least my family could... might need my kid to pick up and carry on the project.

      But in all seriousness... you either missed the point, are being needlessly pedantic, or are... wrong?

      This is about learning concepts, and the rest of this is mostly moot.

      On the pedantic-or-wrong notes: what is the documented cut-off for a "large" language model? GPT-2 was and still is described as a "large" language model, and it had 1.5B parameters. These days you can just about get a consumer GPU capable of training at that scale for about $400.
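
      To put rough numbers on that: here's a back-of-envelope sketch (my assumptions, not anyone's published figures) using the commonly cited ~16 bytes of training state per parameter for mixed-precision Adam. Activations aren't counted, so real usage is higher without tricks like gradient checkpointing or optimizer offload.

        # Rough training-memory estimate for mixed-precision Adam.
        # Assumption: ~16 bytes of state per parameter (fp16 weights 2 + fp16
        # grads 2 + fp32 master weights 4 + fp32 Adam m 4 + fp32 Adam v 4).
        # Activations are excluded, so real usage is higher.

        n_params = 1.5e9                      # GPT-2 XL scale
        bytes_per_param = 2 + 2 + 4 + 4 + 4   # = 16
        gib = n_params * bytes_per_param / 2**30
        print(f"~{gib:.0f} GiB of weight + optimizer state")  # ~22 GiB

      That ~22 GiB just squeezes onto a 24 GB consumer card, which is roughly why a GPT-2-scale run is at the edge of what's doable at home.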

  • iamnotarobotman an hour ago

    This looks great as a first introduction to training LLMs, and simple enough to try locally. Great job!