22 comments

  • kristopolous 8 minutes ago

    That M versus B is way too subtle. 0.026B is my suggestion

  • simonw 42 minutes ago

    Suggestion: publish a live demo of the "needle playground". It's small enough that it should be pretty cheap to run this on a little VPS somewhere!

    • quantumleaper 29 minutes ago

      Should be quick and easy with WebGPU, too.

      • simonw 5 minutes ago

        That's an even better idea, I bet this could run in Transformers.js.

      • ilaksh 18 minutes ago

        Good idea. Could you make that?

    • HenryNdubuaku 35 minutes ago

      Thanks, yeah, the problem is just handling scale; we don't have the infra ready to go, but anyone can do that. It's easy for people to run on their laptops straight up. Will try the VPS route.

  • simonw an hour ago

    Looks like you need to open up access to https://huggingface.co/Cactus-Compute/datasets/needle-tokeni... - I get this error when trying to run the steps in your README:

    > Repository Not Found for url: https://huggingface.co/api/datasets/Cactus-Compute/needle-tokenizer/revision/main.

  • murkt 26 minutes ago

    Can this be a Siri-like core? Set me a timer, tell me what’s the weather, etc. Here is transcribed text and available list of tools for the model to call, and voice the output.
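
    A minimal Python sketch of that loop, with the model replaced by a hard-coded stub and hypothetical tool names — a real system would run the small model where `model_pick_tool` is:

    ```python
    import json

    # Hypothetical tool registry; a real Siri-like shell would expose
    # the device's actual capabilities here.
    TOOLS = {
        "set_timer": lambda minutes: f"Timer set for {minutes} minutes",
        "get_weather": lambda city: f"Weather for {city}: (lookup result)",
    }

    def model_pick_tool(transcript: str) -> str:
        """Stand-in for the on-device model: maps transcribed text to a
        JSON tool call. Keyword matching here is illustration only."""
        if "timer" in transcript:
            return json.dumps({"tool": "set_timer", "args": {"minutes": 5}})
        return json.dumps({"tool": "get_weather", "args": {"city": "here"}})

    def dispatch(transcript: str) -> str:
        call = json.loads(model_pick_tool(transcript))
        return TOOLS[call["tool"]](**call["args"])  # a voice layer would speak this

    print(dispatch("set me a timer for five minutes"))
    ```

    The model only has to emit a small JSON tool call; everything else is ordinary dispatch code.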

  • ilaksh an hour ago

    Hmm.. this might make it feasible to build something like a command line program where you can optionally just specify the arguments in natural language. Although I know people will object to including an extra 14 MB and the computation for "parsing", and it could be pretty bad if everyone started doing that.

    But it's really interesting to me that that may be possible now. You can include a fine-tuned model that understands how to use your program.

    E.g. `> toolcli what can you do` runs `toolcli --help summary`, `toolcli add tom to teamfutz group` = `toolcli --gadd teamfutz tom`
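
    The fallback idea above can be sketched in Python with argparse; `nl_to_argv` stands in for the bundled fine-tuned model, and `toolcli`/`--gadd` are the hypothetical names from the example:

    ```python
    import argparse
    import re

    def nl_to_argv(words):
        """Stand-in for the bundled model: translate natural language
        into this program's flags. Hard-coded patterns, illustration only."""
        text = " ".join(words)
        m = re.match(r"add (\w+) to (\w+) group", text)
        if m:
            return ["--gadd", m.group(2), m.group(1)]
        return words

    def run(argv):
        parser = argparse.ArgumentParser(prog="toolcli")
        parser.add_argument("--gadd", nargs=2, metavar=("GROUP", "USER"))
        if argv and not argv[0].startswith("-"):
            argv = nl_to_argv(argv)  # looks like natural language: translate
        return parser.parse_args(argv)

    print(run("add tom to teamfutz group".split()))
    ```

    Both invocation styles end up in the same parsed-arguments structure, so the rest of the program doesn't care which one was used.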

    • HenryNdubuaku an hour ago

      So Needle is trained for INT4; what you see in the playground is INT4, only 14MB. Same challenge, though.
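
      The 14MB figure is consistent with the 0.026B parameter count mentioned elsewhere in the thread — a rough back-of-envelope, ignoring tokenizer and metadata overhead:

      ```python
      params = 26_000_000        # 0.026B parameters, per the thread
      bits_per_weight = 4        # INT4 quantization
      size_mb = params * bits_per_weight / 8 / 1_000_000
      print(f"{size_mb:.0f} MB raw weights")  # ≈ 13 MB, close to the 14MB file
      ```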

      • ilaksh an hour ago

        Oh gotcha. Fixed my comment.

  • cmrdporcupine 44 minutes ago

    This is very cool. I'm going to try to carve out some time to try building this into my MOO system ( https://codeberg.org/timbran/moor / https://timbran.org/moor.html ) as an alternative command-parser front end.

  • ac29 26 minutes ago

    FYI, distilling Gemini is explicitly against the ToS:

    "You may not use the Services to develop models that compete with the Services (e.g., Gemini API or Google AI Studio). You also may not attempt to reverse engineer, extract or replicate any component of the Services, including the underlying data or models (e.g., parameter weights)."

    • xgulfie a few seconds ago

      This is being downvoted but it's worth noting if only for the "be careful" aspect.

    • ilaksh 20 minutes ago

      I think GLM 5.1 or Kimi 2.6 could substitute for this type of purpose.

    • ForHackernews 7 minutes ago

      So is copying all the books in the world.

    • vablings 24 minutes ago

      Oh no! They stole the model weights! Distillation "attacks" are such bullshit