This is a wrapper around WebLLM [0] and transformers.js [1]. What exactly are you offering on top of those two libraries?
[0]: https://github.com/mlc-ai/web-llm [1]: https://huggingface.co/docs/transformers.js/en/index
This is just the start, so at the current state we aren't offering much. If you check the GitHub repo, we don't use Transformers.js directly; we forked their code to TypeScript and removed the parts that caused build issues in frameworks like Next.js due to Node module dependencies.
We are adding features like RAG and observability integrations so people can use these LLMs to perform more complicated tasks!
When I read the title, I thought the project would be an LLM browser plugin (or something of the sort) that would automatically use the current page as context. However, after viewing the GitHub project, it seems like a browser interface for local LLMs. Is my understanding correct? This is not my domain of expertise.
Yes, it's currently a framework for running LLMs locally in the browser. A browser extension for page context is on our roadmap, but right now we're focused on optimizing multimodal LLMs to work efficiently in the browser environment so that we can use them for a variety of use cases.
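For a concrete picture, here's a minimal TypeScript sketch of what "run an LLM locally in the browser" means in practice. The `BrowserAI` class and the `loadModel`/`generateText` methods below are assumptions for illustration, not a verified API (check the repo's README for the real interface); the key point is that the weights are downloaded once and inference then runs on-device via WebGPU, with no server round-trips.

    // Hypothetical usage sketch -- class/method names are assumptions, not the
    // verified BrowserAI API. Weights are fetched and cached by the browser;
    // generation then runs entirely on-device (WebGPU).
    import { BrowserAI } from '@browserai/browserai';

    async function demo(): Promise<void> {
      const ai = new BrowserAI();
      await ai.loadModel('llama-3.2-1b-instruct');   // one-time download + compile
      const reply = await ai.generateText('Summarize WebGPU in one sentence.');
      console.log(reply);
    }

    demo();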
This is great. If I were a developer, I would have two projects in mind for this:
1. Decline cookie notices automatically with a browser extension
2. Build a powerful autocorrect/complete browser extension to fix my poor typing skills
Related to 1: https://github.com/brave/cookiemonster
Would love to help. Wanna give BrowserAI a try? I can help fix any issues you run into, or just jump on a call!
This seems like the perfect use case for BrowserAI!
Haha, we are thinking of 2. It makes sense, but I would love for you to check it out!
How does it compare to WebLLM (https://github.com/mlc-ai/web-llm)?
We use WebLLM under the hood, and for text-to-text generation the model compression is awesome and RAM usage is also lower. But we are conducting more experiments. One thing we noticed is that some models quantized with MLC sometimes start throwing gibberish, so we'll get back to you on which is better after more experiments.
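For reference, text-to-text generation through WebLLM itself looks roughly like the sketch below (TypeScript; the model id is one of MLC's prebuilt examples and not necessarily what BrowserAI ships):

    import { CreateMLCEngine } from "@mlc-ai/web-llm";

    async function run(): Promise<void> {
      // First call downloads the quantized weights and compiles WebGPU kernels;
      // subsequent loads come from the browser cache.
      const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC", {
        initProgressCallback: (report) => console.log(report),
      });

      // OpenAI-style chat completion, executed entirely on-device.
      const reply = await engine.chat.completions.create({
        messages: [{ role: "user", content: "Explain WebGPU in one sentence." }],
      });
      console.log(reply.choices[0].message.content);
    }

    run();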
I don’t see any encoders (BERT family) available yet. How will you do RAG, BM25/tf-idf?
Oh yes, because the library was so large, we decided to start by removing some things and porting the rest. To be honest, trying to port JS to TS was one of the bad decisions of my life, but luckily it only took 3 days and a few headaches!
We'll add the encoders as needed, which should be easy now, but that's a great point.
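In the meantime, a purely lexical ranker is enough to prototype RAG in the browser with no encoder model at all. Here is a minimal BM25 sketch in TypeScript (plain functions, no library assumptions) that picks which chunks to stuff into the prompt:

    // Minimal BM25 ranking over pre-chunked text; no encoder model required.
    const K1 = 1.5;
    const B = 0.75;

    const tokenize = (s: string): string[] => s.toLowerCase().match(/[a-z0-9]+/g) ?? [];

    function rank(query: string, chunks: string[], topK = 3): string[] {
      const docs = chunks.map(tokenize);
      const avgLen = docs.reduce((n, d) => n + d.length, 0) / docs.length;

      // Document frequency per term (for IDF).
      const df = new Map<string, number>();
      for (const d of docs) {
        for (const t of new Set(d)) df.set(t, (df.get(t) ?? 0) + 1);
      }

      const scores = docs.map((d) => {
        const tf = new Map<string, number>();
        for (const t of d) tf.set(t, (tf.get(t) ?? 0) + 1);

        let score = 0;
        for (const q of new Set(tokenize(query))) {
          const n = df.get(q) ?? 0;
          if (n === 0) continue;
          const idf = Math.log(1 + (docs.length - n + 0.5) / (n + 0.5));
          const f = tf.get(q) ?? 0;
          score += (idf * f * (K1 + 1)) / (f + K1 * (1 - B + (B * d.length) / avgLen));
        }
        return score;
      });

      return scores
        .map((s, i) => [s, i] as const)
        .sort((a, b) => b[0] - a[0])
        .slice(0, topK)
        .map(([, i]) => chunks[i]);
    }

    // Usage: prepend the top-ranked chunks to the LLM prompt.
    // const context = rank("how do I decline cookie banners?", pageChunks).join("\n\n");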
How do the performance and features compare to Pinokio?
Would it be good for language translation?
Yes, you can perform language translation using the supported large language models.
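Concretely, translation is just prompting an instruction-tuned model that's already loaded in the browser. A sketch using the WebLLM-style chat API shown earlier (the target language and model id are arbitrary examples):

    import { CreateMLCEngine } from "@mlc-ai/web-llm";

    // Prompt-based translation with an on-device chat model (top-level await, ES module).
    const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC");
    const reply = await engine.chat.completions.create({
      messages: [
        { role: "system", content: "Translate the user's text to French. Output only the translation." },
        { role: "user", content: "The model runs entirely in your browser." },
      ],
      temperature: 0,
    });
    console.log(reply.choices[0].message.content);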
DeepSeek R1 just got ported to WebGPU as well! Exciting future for local web AI:
Thread - https://news.ycombinator.com/item?id=42795782
Yes, we do plan to add it soon; we are focusing on something cool right now. Stay tuned!