17 comments

  • Philpax 3 hours ago

    This is a wrapper around WebLLM [0] and transformers.js [1]. What exactly are you offering on top of those two libraries?

    [0]: https://github.com/mlc-ai/web-llm [1]: https://huggingface.co/docs/transformers.js/en/index

    • sauravpanda 2 hours ago

      This is just the start, so at the current state we aren't offering much. If you check the GitHub repo, we don't use Transformers.js directly: we forked their code to TypeScript and removed the parts that caused build issues in frameworks like Next.js due to Node module dependencies.

      We are adding features like RAG and observability integrations so people can use these LLMs for more complicated tasks!

  • Matthyze 2 hours ago

    When I read the title, I thought the project would be an LLM browser plugin (or something of the sort) that would automatically use the current page as context. However, after viewing the GitHub project, it seems like a browser interface for local LLMs. Is my understanding correct? This is not my domain of expertise.

    • shreyash_gupta an hour ago

      Yes, it's currently a framework for running LLMs locally in the browser. A browser extension for page context is on our roadmap, but right now we're focused on optimizing multimodal LLMs to run efficiently in the browser environment, so that we can use them for a variety of use cases.

  • bazmattaz 3 hours ago

    This is great. If I were a developer, I would have two projects in mind for this:

    1. Decline cookie notices automatically with a browser extension

    2. Build a powerful autocorrect/complete browser extension to fix my poor typing skills

    • cloudking 2 hours ago
      • sauravpanda 2 hours ago

        Would love to help, wanna give BrowserAI a try? I can help fix any issues you run into, or just jump on a call!

        This seems like the perfect use case for BrowserAI!

    • sauravpanda 2 hours ago

      Haha, we are already thinking about 2, it makes sense. I would love for you to check it out!

  • hazelnut 2 hours ago

    How does it compare to WebLLM (https://github.com/mlc-ai/web-llm)?

    • sauravpanda 2 hours ago

      We use WebLLM under the hood for text-to-text generation; the model compression is awesome and RAM usage is lower. But we are still running experiments. One thing we noticed is that some models quantized with MLC sometimes start producing gibberish, so we'll get back to you on which is better after more experiments.

  • janalsncm 2 hours ago

    I don’t see any encoders (BERT family) available yet. How will you do RAG, BM25/tf-idf?

    • sauravpanda 2 hours ago

      Oh yes. Because the library was so large, we decided to start by removing some things while porting. To be honest, trying to port the JS to TS was one of the worse decisions of my life, but luckily it only took 3 days and a few headaches!

      Will add the encoders as needed; it should be easy now. Great point.
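
      Until encoders land, RAG can still work with a purely lexical ranker: BM25 needs nothing but token counts, no neural model at all. A minimal sketch (all names here are illustrative, not part of the BrowserAI API):

```typescript
// Minimal BM25 ranker in plain TypeScript: no encoder model required.
// Everything below is an illustrative sketch, not part of the BrowserAI API.

function tokenize(text: string): string[] {
  return text.toLowerCase().split(/\W+/).filter(Boolean);
}

function bm25Scores(
  query: string,
  docs: string[],
  k1 = 1.5,
  b = 0.75
): number[] {
  const corpus = docs.map(tokenize);
  const N = corpus.length;
  const avgLen = corpus.reduce((sum, d) => sum + d.length, 0) / N;

  // Document frequency: how many docs contain each term.
  const df = new Map<string, number>();
  for (const doc of corpus) {
    for (const term of Array.from(new Set(doc))) {
      df.set(term, (df.get(term) ?? 0) + 1);
    }
  }

  return corpus.map((doc) => {
    let score = 0;
    for (const term of tokenize(query)) {
      const n = df.get(term) ?? 0;
      if (n === 0) continue; // term absent from corpus
      const idf = Math.log(1 + (N - n + 0.5) / (n + 0.5));
      const tf = doc.filter((t) => t === term).length;
      score +=
        (idf * tf * (k1 + 1)) / (tf + k1 * (1 - b + doc.length / avgLen));
    }
    return score;
  });
}

// Rank three toy documents against a query.
const docs = [
  "WebLLM runs language models in the browser with WebGPU",
  "BM25 is a classical lexical ranking function",
  "Bananas are rich in potassium",
];
const scores = bm25Scores("bm25 ranking function", docs);
const best = scores.indexOf(Math.max(...scores));
console.log(best); // → 1, the BM25 document
```

      Once BERT-family encoders are ported, the same retrieve-then-generate loop could swap this scorer for embedding similarity.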

  • 3abiton 2 hours ago

    How do the performance and features compare to Pinokio?

  • oxyboy 2 hours ago

    Would it be good for language translation?

    • shreyash_gupta an hour ago

      Yes, you can perform language translation using the supported large language models.

  • astlouis44 2 hours ago

    DeepSeek R1 just got ported to WebGPU as well! Exciting future for local web AI:

    Thread - https://news.ycombinator.com/item?id=42795782

    • sauravpanda 2 hours ago

      Yes, we plan to add it soon; we are focusing on something cool right now. Stay tuned!