I do wonder about the usefulness of this massive context-dumping exercise. 100M is a ridiculous amount. Usually, to get good results on practical tasks, you need to actually think about what you are putting into context.
I also have my gripes about the way 2-hop is mentioned here, with Figure 3 being the canonical example of what I would consider too trivial/misleading (the exact text "Eric Watts" appears in both the question and the context). It leads to the natural question of how it does compared to an LLM with a grep tool.
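To make that "grep tool" comparison concrete, here is a minimal sketch of what such a baseline could look like (the names, question, and toy context are hypothetical, not from the benchmark): when the question shares an exact string with the context, plain substring search already retrieves the relevant passage, with no long-context attention required.

```python
import re

def grep_baseline(question: str, context: str, window: int = 1):
    """Return context sentences that share a capitalized name with the question.

    A stand-in for an "LLM with a grep tool": exact-match retrieval only.
    """
    # Pull capitalized multi-word names out of the question (e.g. "Eric Watts").
    names = re.findall(r"[A-Z][a-z]+(?: [A-Z][a-z]+)+", question)
    sentences = re.split(r"(?<=[.!?])\s+", context)
    hits = []
    for i, s in enumerate(sentences):
        if any(name in s for name in names):
            # Include neighboring sentences so a second "hop" fact comes along.
            hits.extend(sentences[max(0, i - window): i + window + 1])
    return hits

# Toy two-hop setup: the answer sits right next to the exact string match.
context = ("Eric Watts works at Acme Corp. Acme Corp is based in Denver. "
           "Unrelated filler text goes here.")
print(grep_baseline("Where does Eric Watts's employer operate?", context))
```

If a 2-hop question falls to something this dumb, it is arguably measuring lookup, not reasoning, which is the gripe above.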
What I would consider more interesting is practical synthesis over such a large context, where you can't just string-lookup answers. For example, dump all of Intel's x86 manuals into context and then ask an LLM to write assembly.
The sky seems like the limit to me. 100M doesn't actually seem like that much once you get into vision models or embodied robots operating with contexts spanning several days or weeks.
The more we can drive towards selective attention over larger and larger sets of "working memory", the better, I think.
100M tokens should be enough to fit all but the very biggest codebases into a single context. It's probably also about as much as the average person in the West reads in a lifetime (make of that what you will, philosophically); all the x86 manuals should fit nicely with room to spare.
Neat. Can't wait for language- and framework-specific tools for models. I don't need my models writing Shakespeare, unless I'm working on Shakespeare.