> Large language models are something else entirely*. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn't sit well.
Am I being overly critical here or is this kind of a silly position to have right after talking about how neural machine translation is okay? Many of Firefox's LLM features like summarization afaik are powered by local models (hell even Chrome has local model options). It's weird to say neural translation is not a black box but LLMs are somehow black boxes that we cannot hope to understand what they do with the data, especially when viewed a bit fuzzily LLMs are scaled up versions of an architecture that was originally used for neural translation. Neural translation also has unverifiable behavior in the same sense.
I could interpret some of the data talk as talking about non local models but this very much seems like a more general criticism of LLMs as a whole when talking about Firefox features. Moreover, some of the critiques like verifiability of outputs and unlimited scope still don't make sense in this context. Browser LLM features except for explicitly AI browsers like Comet have so far had some scoping to their behavior, either in very narrow scopes like translation or summarization. The broadest scope I can think of is the side panels that show up which allow you to ask about a web page with context. Even then, I do not see what is inherently problematic about such scoping since the output behavior is confined to the side panel.
To be more charitable to TFA, machine translation is a field where there aren't great alternatives and the downside is pretty limited. If something is in another language you don't read it at all. You can translate a bunch of documents and benchmark the result and demonstrate that the model doesn't completely change simple sentences. Another related area is OCR - there are sometimes mistakes, but it's tractable to create a model and verify it's mostly correct.
LLMs being applied to everything under the sun feels like we're solving problems that have other solutions, and the answers aren't necessarily correct or accurate. I don't need a dubiously accurate summary of an article in English, I can read and comprehend it just fine. The downside is real and the utility is limited.
There's an older tradition of rule-based machine translation. In these methods, someone really does understand exactly what the program does, in a detailed way; it's designed like other programs, according to someone's explicit understanding. There's still active research in this field; I have a friend who's very deep into it.
The trouble is that statistical MT (the things that became neural net MT) started achieving better quality metrics than rule-based MT sometime around 2008 or 2010 (if I remember correctly), and the distance between them has widened since then. Rule-based systems have gotten a little better each year, while statistical systems have gotten a lot better each year, and are also now receiving correspondingly much more investment.
The statistical systems are especially good at using context to disambiguate linguistic ambiguities. When a word has multiple meanings, human beings guess which one is relevant from overall context (merging evidence upwards and downwards from multiple layers within the language understanding process!). Statistical MT systems seem to do something somewhat similar. Much as human beings don't even perceive how we knew which meaning was relevant (but we usually guessed the right one without even thinking about it), these systems usually also guess the right one using highly contextual evidence.
Linguistic example sentences like "time flies like an arrow" (my linguistics professor suggested "I can't wait for her to take me here") are formally susceptible of many different interpretations, each of which can be considered correct, but when we see or hear such sentences within a larger context, we somehow tend to know which interpretation is most relevant and so most plausible. We might never be able to replicate some of that with consciously-engineered rulesets!
It's bitter for me because I like looking at how things work under the hood and that's much less satisfying when it's "a bunch of stats and linear algebra that just happens to work"
If you're building on a computer language, you can say you understand the computer's abstract machine, even though you don't know how we ever managed to make a physical device to instantiate it!
> There's an older tradition of rule-based machine translation. In these methods, someone really does understand exactly what the program does, in a detailed way
I would softly disagree with this. Technically, we also understand exactly what a LLM does, we can analyze every instruction that is executed. Nothing is hidden from us. We don't always know what the outcome will be; but, we also don't always know what the outcome will be in rule-based models, if we make the chain of logic too deep to reliably predict. There is a difference, but it is on a spectrum. In other words, explicit code may help but it does not guarantee understanding, because nothing does and nothing can.
The grammars in rule-based MT are normally fully conceptually understood by the people who wrote them. That's a good start for human understanding.
You could say they don't understand why a human language evolved some feature but they fully understand the details of that feature in human conceptual terms.
I agree in principle the statistical parts of statistical MT are not secret and that computer code in high-level languages isn't guaranteed to be comprehensible to a human reader. Or in general, binary code isn't guaranteed to be incomprehensible and source code isn't guaranteed to be comprehensible.
But for MT, the hand-written grammars and rules are at least comprehended by their authors at the time they're initially constructed.
It's completely possible to write a parser that outputs every possible parse of "time flies like an arrow", and then try interpreting each one and discard ones that don't make sense according to some downstream rules (unknown noun phrase: "time fly").
I did this for a text adventure parser, but it didn't work well because there are exponentially ways to group the words in a sentence like "put the ball on the bucket on the chair on the table on the floor"
I would argue that particular sentence only exists to convey the bamboozled feeling you get when you reach the end of it, so only sentient parsers can parse it properly.
LLMs are great because of exactly that: they solve things that have no other solutions.
(And also things that have other solutions, but where "find and apply that other solution" has way more overhead than "just ask an LLM".)
There is no deterministic way to "summarize this research paper, then evaluate whether the findings are relevant and significant for this thing I'm doing right now", or "crawl this poorly documented codebase, tell me what this module does". And the alternative is sinking your own time in it - while you could be doing something more important or more fun.
and demonstrate that the model doesn't completely change simple sentences
A nefarious model would work that way though. The owner wouldn't want it to be obvious. It'd only change the meaning of some sentences some of the time, but enough to nudge the user's understanding of the translated text to something that the model owner wants.
For example, imagine a model that detects the sentiment of text about Russian military action, and automatically translates it to something a more positive if it's especially negative, but only 20% of the time (maybe ramping up as the model ages). A user wouldn't know, and a someone testing the model for accuracy might assume it's just a poor translation. If such a model became popular it could easily shift the perception of the public a few percent in the owner's preferred direction. That'd be plenty to change world politics.
Likewise for a model translating contracts, or laws, or anything else where the language is complex and requires knowledge of both the language and the domain. Imagine a Chinese model that detects someone trying to translate a contract from Chinese to English, and deliberately modifies any clause about data privacy to change it to be more acceptable. That might be paranoia on my part, but it's entirely possible on a technical level.
That's not a technical problem though is it? I don't see legal scenarios where unverified machine translation is acceptable - you need to get a certified translator to sign off on any translations and I also don't see how changing that would be a good thing.
I was briefly considering trying to become a professional translator, and I partly didn't pursue it because of the huge use of MT. I predict demand for human translators will continue to fall quickly unless there are some very high-profile incidents related to MT errors (and humans' liability for relying on them?). Correspondingly the supply of human translators may also fall as it appears like a less credible career option.
I think the point here is that, while such a translation wouldn't be admissible in court, many of us already used machine translation to read some legal agreement in a language we don't know.
Aside: Does anyone actually use summarization features? I've never once been tempted to "summarize" because when I read something I either want to read the entire thing, or look for something specific. Things I want summarized, like academic papers, already have an abstract or a synopsis.
In-browser ones? No. With external LLMS? Often. It depends on the purpose of the text.
If the purpose is to read someone's _writing_, then I'm going to read it, for the sheer joy of consuming the language. Nothing will take that from me.
If the purpose is to get some critical piece of information I need quickly, then no, I'd rather ask an AI questions about a long document than read the entire thing. Documentation, long email threads, etc. all lend themselves nicely to the size of a context window.
> If the purpose is to get some critical piece of information I need quickly, then no, I'd rather ask an AI questions about a long document than read the entire thing. Documentation, long email threads, etc. all lend themselves nicely to the size of a context window.
And what do you do if the LLM hallucinates? For me, skim-reading still comes out on top because my own mistakes are my own.
Yeah, basically every 15 minute YouTube video, because the amount of actual content I care about is usually 1-2 sentences, and usually ends up being the first sentence of an LLM summary of the transcript.
If something has actual substance I'll watch the whole thing, but that's maybe 10% of videos I find in experience.
I'd wager there's 95% of the benefit for 0.1% of the CPU cycles just by having a "search transcript for term" feature, since in most of those cases I've already got a clear agenda for what kind of information I'm seeking.
Many years ago I make a little proof-of-concept for displaying the transcript (closed captions) of a YouTube video as text, and highlighting a word would navigate to that timestamp and vice-versa. Such a thing might be valuable as a browser extension, now that I think of it.
YouTube already supports that natively these days, although it's kind of hidden (and knowing Google, it might very well randomly disappear one day). Open the description of the video, scroll down and click "show transcript".
Searching the transcript has the problem of missing synonyms. This can be solved by the one undeniably useful type of AI: embedding vector search. Embeddings for each line of the transcript can be calculated in advance and compared with the embeddings of the user's search. These models need only a few hundred million parameters for good results.
One of the best features of SponsorBlock is crowd sourced timestamps for the meat of the video. Skip right over 20 minutes of rambling to see the cool thing in the thumbnail.
You mean you don't summarize those terrible articles you happen to come across and you're a little intrigued, hoping that there's some substance, and then you read, and it just repeats the same thing over and over again with different wording? Anyway, I sometimes still give them the benefit of the doubt, and end up doing a summary. Often they get summarized into 1 or 2 sentences.
Not him, but no. I read a ton already. Using LLMs to summarize a document is a good way to find out if I should bother reading it myself, or if I should read something else.
I occasionally use the "summarize" button on the iPhone Mobile Safari reader view if I land on a blog entry and it's quite long and I want to get a quick idea of if it's worth reading the whole thing or not.
Wonderful article showing the uselessness of this technology, IMO.
> I just realised the situation is even worse. If I have 35 sentences of circumstance leading up to a single sentence of conclusion, the LLM mechanism will — simply because of how the attention mechanism works with the volume of those 35 — find the ’35’ less relevant sentences more important than the single key one. So, in a case like that it will actively suppress the key sentence.
> I first tried to let ChatGPT one of my key posts (the one about the role convictions play in humans with an addendum about human ‘wetware’). ChatGPT made a total mess of it. What it said had little to do with the original post, and where it did, it said the opposite of what the post said.
> For fun, I asked Gemini as well. Gemini didn’t make a mistake and actually produced something that is a very short summary of the post, but it is extremely short so it leaves most out. So, I asked Gemini to expand a little, but as soon as I did that, it fabricated something that is not in the original article (quite the opposite), i.e.: “It discusses the importance of advisors having strong convictions and being able to communicate them clearly.” Nope. Not there.
Why, after reading something like this, should I think of this technology as useful for this task? It seems like the exact opposite. And this is what I see with most LLM reviews. The author will mention spending hours trying to get the LLM to do a thing, or "it made xyz, but it was so buggy that I found it difficult to edit it after, and contained lots of redundant parts", or "it incorrectly did xyz". And every time I read stuff like that I think — wow, if a junior dev did that the number of times the AI did, they'd be fired on the spot.
See also, something like https://boston.conman.org/2025/12/02.1 where (IIRC) the author comes away with a semi-positive conclusion, but if you look at the list near the end, most of these things are something that any person would get fired for, and are things that are not positive for industrial software engineering and design. LLMs appear to do a "lot", but still confabulates and repeats itself incessantly, making it worthless to depend on for practical purposes unless you want to spend hours chasing your own tail over something it hallucinated. I don't see why this isn't the case. I thought we were trying to reduce the error rate in professional software development, not increase it.
Yes. I use it sometimes in Firefox with my local LLM server. Sometimes i come across an article I'm curious about but don't have the time or energy to read. Then I get a TL;DR from it. I know it's not perfect but the alternative is not reading it at all.
If it does interest me then I can explore it. I guess I do this once a week or so, not a lot.
I just use it for personal information, I'm not involved in any wars :) I don't base any decisions on it, for example if I buy something I don't go by just AI stuff to make a decision. I use the AI to screen reviews, things like that (generally I prefer really deep review and not glossy consumer-focused ones). Then I read the reviews that are suitable to me.
And even reading an article about those myself doesn't make me insusceptible to misinformation of course. Most of the misinformation about these wars is spread on purpose by the parties involved themselves. AI hallucination doesn't really cause that, it might exacerbate it a little bit. Information warfare is a huge thing and it has been before AI came on the scene.
Ok, as a more specific example, recently I was thinking of buying the new Xreal Air 2. I have the older one but I have 3 specific issues with it. I used AI to find references about these issues being solved. This was the case and AI confirmed that directly with references, but in further digging myself I did find that there was also a new issue introduced with that model involving blurry edges. So in the end I decided not to buy the thing. The AI didn't identify that issue (though to be fair I didn't ask it to look for any).
So yeah it's not an allknowing oracle and it makes mistakes, but it can help me shave some time off such investigations. Especially now that search engines like google are so full of clickbait crap and sifting through that shit is tedious.
In that case I used OpenWebUI with a local LLM model that speaks to my SearXNG server which in turn uses different search engines as a backend. It tends to work pretty well I have to say, though perplexity does it a little better. But I prefer self-hosting as much as I can (of course the search engine part is out of scope there).
Even if you know about and act against mis- and disinformation, it affects you, and you voluntarily increase your exposure to it. And the situation is already terrible.
I gave the example of wars, because it’s obvious, even for you, and you won’t relativize away the same way how you just did with AI misinformation, which affects you the exact same way.
Most recently, a new ISP contract: because it's both low stakes enough where I don't care much about inaccuracies (it's a bog standard contract from a run of the mill ISP), there's basically no information in there that the cloud vendor doesn't already have (if they have my billing details) but also where I'm curious about whether anything might jump out, all while not really wanting to read the 5 pages of the thing.
Just went back to that, it got both all of the main items (pricing, contract terms, my details) correctly, but also the annoying fine print (that I referenced, just in case). Also works pretty well across languages, though that depends on the model in question a bunch.
I feel like if browsers or whatever get the UX of this down, people will upload all sorts of data into those vendors that they normally shouldn't. I also think that with nuanced enough data, we'll eventually have the LLM equivalent of Excel messing up data due to some formatting BS.
Looking back with fresh eyes, I definitely think I could’ve presented what I’m trying to say better.
On a purely technical play, you’re right that I’m drawing a distinction that may not hold up purely on technical grounds. Maybe the better framing is: I trust constrained, single purpose models with somewhat verifiable outputs (seeing text go in, translated text go out, compare its consistency) more than I trust general purpose models with broad access to my browsing context, regardless of whether they’re both neural networks under the hood.
WRT to the “scope”, maybe I have picked up the wrong end of the stick with what Mozilla are planning to do - but they’ve already picked all the low hanging fruit with AI integration with the features you’ve mentioned and the fact they seem to want to dig their heels in further, at least to me, signals that they want deeper integration? Although who knows, the post from the new CEO may also be a litmus test to see what the response to that post elicits, and then go from there.
I still don’t understand what you mean by “what they do with your data” - because it sounds like exfiltration fear mongering, whereas LLMs are a static series of weights. If you don’t explicitly call your “send_data_to_bad_actor” function with the user’s I/O, nothing can happen.
I disagree that it’s fear mongering. Have we not had numerous articles on HN about data exfiltration in recent memory? Why would an LLM that is in the drivers seat of a browser (not talking about current feature status in Firefox wrt to sanitised data being interacted with) not have the same pitfalls?
Seems as if we’d be 3 for 3 in the “agents rule of 2” in the context of the web and a browser?
> [A] An agent can process untrustworthy inputs
> [B] An agent can have access to sensitive systems or private data
> [C] An agent can change state or communicate externally
Even if we weren’t talking about such malicious hypotheticals, hallucinations are a common occurrence as are CLI agents doing things it thinks best, sometimes to the detriment of the data it interacts with. I personally wouldn’t want my history being modified or deleted, same goes with passwords and the like.
It is a bit doomerist, I doubt it’ll have such broad permissions but it just doesn’t sit well which I suppose is the spirit of the article and the stance Waterfox takes.
> Have we not had numerous articles on HN about data exfiltration in recent memory?
there’s also an article on the front page of HN right now claiming LLMs are black boxes and we don’t know how they work, which is plainly false. this point is hardly evidence of anything and equivalent to “people are saying”
This is true though. While we know what they do on a mechanistic level, we cannot reliably analyze why the model outputs any particular answer in functional terms without a heroic effort at the "arxiv paper" level.
In the digital world, we should be able to go back from output to input unless the intention of the function is to "not do that". Like hashing.
Llms not being able to go from output back to input deterministically and for us to understand why is very important, most of our issues with llms stem from this issue. Its why mechanistic interpretabilty research is so hot right now.
The car analogy is not good because models are digital components and a car is a real world thing. They are not comparable.
I mean, fluid dynamics is an unsolved issue. But even so we know *considerably* less about how LLMs work in functional terms than about how combustion engines work.
I believe you are conflating multiple concepts to prove a flaky point.
Again, unless your agent has access to a function that exfiltrates data, it is impossible for it to do so. Literally!
You do not need to provide any tools to an LLM that summarizes or translates websites, manages your open tabs, etc. This can be done fully locally in a sandbox.
Linking to simonw does not make your argument valid. He makes some great points, but he does not assert what you are claiming at any point.
Please stop with this unnecessary fear mongering and make a better argument.
Thinking aloud, but couldn't someone create a website with some malicious text that, when quoted in a prompt, convinces the LLM to expose certain private data to the web page, and couldn't the webpage send that data to a third party, without the need for the LLM to do so?
This is probably possible to mitigate, but I fear what people more creative, motivated and technically adept could come up with.
Firefox should look like Librewolf first of all, Librewolf shouldn’t have to exist. Mozilla’s privacy stuff is marketing bullshit just like Apple. It shouldn’t be doing ANYTHING that isn’t local only unless it’s explicitly opt in or user UI action oriented. The LLM part is absurd bc the entire overton window is in the wrong place.
That's not really accurate: Firefox peaked somewhere around 30% market share back when IE was dominant, and then Chrome took over the top spot within a few years of launching.
FWIW, I think there's just no good move for Mozilla. They're competing against 3 of the biggest companies in the world who can cross-subsidise browser development as a loss-leader, and can push their own browsers as the defaults on their respective platforms. The most obvious way to make money from a browser - harvesting user data - is largely unavailable to them.
I would rather firefox release a paid browser with no AI, or at least everything Opt-In, and more user control than to see them stuff unwanted features on users.
I used firefox faithfully for a long time, but it's time for someone to take it out back and put it down.
Also, I switched to Waterfox about a year ago and I have no complaints. The very worst thing about it is that when it updates its very in your face about it, and that is such a small annoyance that its easily negligible.
Throw on an extension like Chrome Mask for those few websites that "require chrome" (as if that is an actual thing), a few privacy extensions, ecosia search, uBlacklist (to permablock certain sites from search results), and Content Farm Terminator to get rid of those mass produced slop sites that weasel their way into search results and you're going to have a much better experience than almost any other setup.
The thing about translation, even a human translator will sometimes make silly mistakes unless they know the domain really well. So LLM are not any worse. Translation is a problem with no deterministic solution (rule-based translation had always been a bad joke). Properly implemented deterministic search/information retrieval, on the other hand, works extremely well. So well it doesn't really need any replacement - except when you also have some extra dynamics on top like "filtering SEO slop" - and that's not something LLMs can improve at all.
No, it is disqualifyingly clueless. The author defends one neural network, one bag of effectively-opaque floats that get blended together with WASM to produce non-deterministic outputs which are injected into the DOM (translation), then righteously crusades against other bags of floats (LLMs).
From this point of view, uBlock Origin is also effectively un-auditable.
Or your point about them maybe imagining AI as non-local proprietary models might be the only thing that makes this make sense. I think even technical people are being suckered by the marketing that "AI" === ChatGPT/Claude/Gemini style cloud-hosted proprietary models connected to chat UIs.
I'm ok with Translation because it's best solved with AI. I'm not ok with it when Firefox "uses AI to read your open tabs" to do things that don't even need an AI based solution.
local, open model
local, proprietary model
remote, open model (are there these?)
remote, proprietary model
There is almost no harm in a local, open model. Conversely, a remote, proprietary model should always require opting in with clear disclaimers. It needs to be proportional.
The harm to me is the implementation is terrible - local or not (assuming no AI based telemetry). If their answer is AI then it pretty much means they won't make a non-AI solution. Today I just got my first stupid AI tab grouping in Firefox that makes zero intuitive sense. I just want grouping not from an AI reading my tabs. It should just be based on where my tabs were opened from. I also tried Waterfox today because of this post and while I'd prefer horizontal grouping atleast their implementation isn't stupid. Language translation is a opaque complex process. Tabs being grouped from other tabs is not good when opaque and unpredictable and does not need AI.
That is a good point, and I think the takeaway is that there are lots of degrees of freedom here. Open training data would be better, of course, but open weights is still better than completely hidden.
I don't see the difference between "local, open weights" and "local, proprietary weights". Is that just the handful of lines of code that call the inference?
The model itself is just a binary blob, like a compiled program. Either you get its source code (the complete training data) or you don't.
You've lost the plot: The [local|remote]-[open|closed] comment is making a broad claim about LLM usage in general, not limited to the hyper-narrow case of tab-grouping. I'm saying the majority of LLM-dangers are not fixed by that 4-way choice.
Even if it were solely about tab-grouping, my point still stands:
1. You're browsing some funny video site or whatever, and you're naturally expecting "stuff I'm doing now" to be all the tabs on the right.
2. A new tab opens which does not appear there, because the browser chose to move it over into your "Banking" or "Online purchases" groups, which for many users might even be scrolled off-screen.
3. An hour later you switch tasks, and return to your "Banking" or "Online Purchases". These are obviously the same tabs before that you opened from a trusted URL/bookmark, right?
4. Logged out due to inactivity? OK, you enter your username and password into... the fake phishing tab! Oops, game over.
Was the fuzzy LLM instrumental in the failure? Yes. Would having a local model with open weights protect you? No.
> Machine learning technologies like the Bergamot translation project offer real, tangible utility. Bergamot is transparent in what it does (translate text locally, period), auditable (you can inspect the model and its behavior), and has clear, limited scope, even if the internal neural network logic isn’t strictly deterministic.
This really weakens the point of the post. It strikes me as a: we just don't like those AIs. Bergamot's model's behavior is no more or less auditable or a black box than an LLM's behavior. If you really want to go dig into a Llama 7B model, you definitely can. Even Bergamot's underlying model has an option to be transformer-based: https://marian-nmt.github.io/docs/
The premise of non-corporate AI is respectable but I don't understand the hate for LLMs. Local inference is laudable, but being close-minded about solutions is not interesting.
It's not necessarily close minded to choose to abstain from interacting with generative text, and choose not to use software that integrates it.
I could say it's equally close minded not to sympathize with this position, or various reasoning behind it. For me, I feel that my spoken language is effected by those I interact with, and the more exposed someone is to a bot, the more they will speak like that bot, and I don't want my language to be pulled towards the average redditor, so I choose not to interact with LLMs (I still use them for code generation, but I wouldn't if I used code for self expression. I just refuse to have a back and forth conversation on any topic. It's like that family that tried raising a chimp alongside a baby. The chimp did pick up some human like behavior, but the baby human adapted to chimp like behavior much faster, so they abandoned the experiment.)
I’m not too worried about starting to write like a bot. But, I do notice that I’m sometimes blunt and demanding when I talk to a bot, and I’m worried that could leak through to my normal talking.
I try to be polite just to not gain bad habits. But, for example, chatGPT is extremely confident, often wrong, and very weasely about it, so it can be hard to be “nice” to it (especially knowing that under the hood it has no feelings). It can be annoying when you bounce the third idea off the thing and it confidently replies with wrong instructions.
Anyway, I’ve been less worried about running local models, mostly just because I’m running them CPU-only. The capacity is just so limited, they don’t enter the uncanny valley where they can become truly annoying.
That's an interesting example to use. I only use turn signals when there are other cars around that would need the indication. I don't view a turn signal as politeness, its a safety tool to let others know what I'm about to.
I do also find that only using a turn signal when others are around is a good reinforcement to always be aware of my surroundings. I feel like a jerk when I don't use one and realize there was someone in the area, just as I feel like a jerk when I realize I didn't turn off my brights for an approaching car at night. In both cases, feeling like a jerk reminds me to pay more attention while driving.
I would strongly suggest you use your turnsignals, always, without exception. You are relying on perfect awareness of your surroundings which isn't going to be the case over a longer stretch of time and you are obliged to signal changes in direction irrespective of whether or not you believe there are others around you. I'm saying this as a frequent cyclist who more than once has been cut off by cars that were not indicating where they were going because they had not seen me, and I though they were going to go straight instead of turn into my lane or the bike path.
Signalling your turns is zero cost, there is no reason to optimize this.
Its a matter of approach and I wouldn't say what I've found to work for me would work for anyone else.
In my experience, I'm best served by trying to reinforce awareness rather than relying on it. If I got into the habit of always using blinkers regardless of my surroundings I would end up paying less attention while driving.
I rode motorcycles for years and got very much into the habit of assuming that no one on the road actually knows I'm there, whether I'm on an old parallel twin or driving a 20' long truck. I need that for us while driving and using blinkers or my brights as motivation for paying attention works to keep me focused on the road.
Signaling my turns is zero cost with regards to that action. At least for me, signaling as a matter of habit comes at the cost of focus.
The point of making signaling a habit is that you don't think about it at all. It becomes an automatic action that just happens, without affecting your focus.
I have also ridden motorcycles for many years, and I am very familiar with the assumption that nobody on the road knows I exist. I still signal, all the time, every time, because it is a habit which requires no thinking. It would distract me more if I had to decide whether signalling was necessary in each case.
You're making a huge leap here. I'm raising only had signaling intentionally rather than automatically has made me pay more attention to others on the road. You're claiming that that action which has proven to make me pay closer attention will kill someone.
By not signaling you are robbing others on the road the opportunity to avoid a potential accident should you not have seen them. It's maximum selfish fuck everyone else asshole behavior.
No, I'm not claiming it will kill someone, I'm claiming it may kill someone.
There is this thing called traffic law and according to that law you are required to signal your turns. If you obstinately refuse to do so you are endangering others and I frankly don't care one bit about how you justify this to yourself but you are not playing by the rules and if that's your position then you should simply not participate in traffic. Just like you stop for red lights when you think there is no other traffic. Right?
Again: it costs you nothing. You are not paying more attention to others on the road because you are not signalling your turns, that's just a nonsense story you tell yourself to justify your wilful non-compliance.
There is no such thing as not signaling. By not using the turn signal, you are lying to anyone around that you might not see, signaling that you are going straight forward when you aren't.
> I only use turn signals when there are other cars around that would need the indication.
That is a very bad habit and you should change it.
You are not only signalling to other cars. You are also signalling to other road users: motorbikes, bicycles, pedestrians.
Your signal is more important to the other road users you are less likely to see.
Always ALWAYS indicate. Even if it's 3AM on an empty road 200 miles from the nearest human that you know of. Do it anyway. You are not doing it to other cars. You are doing it to the world in general.
> when there are other cars around that would need the indication
This has a failure state of "when there's a nearby car [or, more realistically, cyclist / pedestrian] of which I am not aware". Knowing myself to be fallible, I always use my turn signals.
I do take your point about turn signals being a reminder to be aware. That's good, but could also work while, you know, still using them, just in case.
You're not the only one raising that concern here - I get it and am not recommending what anyone else should do.
I've been driving for decades now and have plenty of examples of when I was and wasn't paying close enough attention behind the wheel. I was raising this only as an interesting different take or lesson in my own experience, not to look for approval or disagreement.
You said something fairly egregious on a public forum and are getting pretty polite responses. You definitely do not get it because you’re still trying to justify the behavior.
Just consider that you will make mistakes. If you make a mistake and signal people will have significantly more time to react to it.
Not to dog pile, just to affirm what jacquesm is saying. Remember, what you do consciously is what you end up doing unconsciously when you're distracted.
Here is a hypothetical: A loved one is being hauled away in an ambulance and it is a bad scenario. And you're going to follow them. Your mind is busy with the stress, trying to keep things cool while under pressure. What hospital are they going to, again? Do you have a list of prescriptions? Are they going to make it to the hospital? You're under a mental load, here.
The last thing you need is to ask "did I use my turn signal" as you merge lanes. If you do it automatically, without exception, chances are good your mental muscle memory will kick in and just do it.
But if it isn't a learned innate behavior, you may forget to while driving and cause an accident. Simply because the habit isn't there.
It's similar for talking to bots, as well. How you treat an object, a thing seen as lesser, could become how a person treats people they view as lesser, such as wait staff, for example. If I am unerring polite to a machine with no feelings, I'm more likely to be just as polite to people in customer service jobs. Because it is innate:
Watch your thoughts, they become words; Watch your words, they become actions.
I think it makes much more sense to treat the bot like a bot and avoid humanizing it. I try to abstain from any kind of linguistic embellishments when prompting AI chat bots. So, instead of "what is the area of the circle" or "can you please tell me the area of the circle", I typically prefer "area of the circle" as the prompt. Granted, this is suboptimal given the irresponsible way it has been trained to pretend it's doing human-like communication, but I still try this style first and only go to more conversational language if required.
It is possible that this is a personality flaw, but I’m not really able to completely ignore the human-mimicking nature of ChatGPT. It does too good a job of it.
Sure, I am more referring to advocating for Bergamot as a type of more "pure" solution.
I have no opinion on not wanting to converse with a machine, that is a perfectly valid preference. I am referring more to the blog post's position where it seems to advocate against itself.
It’s insane this has to be pointed out to you but here we go.
Hammers are the best, they can drive nails, break down walls and serve as a weapon. From now on the military will, plumber to paratrooper, use nothing but hammers because their combined experience of using hammers will enable us to make better hammers for them to do their tasks with.
No, a tank is obviously better at all those things. They should clearly be used by everyone, including the paratrooper. Never mind the extra fuel costs or weight, all that matters is that it gets the job done best.
You can't really dig into a model you don't control. At least by running locally, you could in theory if it is exposed enough.
The focused purpose, I think, gives it more of a "purpose built tool" feel over "a chatbot that might be better at some tasks than others" generic entity. There's no fake persona to interact with, just an algorithm with data in and out.
The latter portion is less a technical and more an emotional nuance, to be sure, but it's closer to how I prefer to interact with computers, so I guess it kinda works on me... If that were the limit of how they added AI to the browser.
Yes I agree with this, but the blog post makes a much more aggressive claim.
> Large language models are something else entirely. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn’t sit well.
Like I said, I'm all for local models for the exact reasons you mentioned. I also love the auditability. It strikes me as strange that the blog post would write off the architecture as the problem instead of the fact that it's not local.
The part that doesn't sit well to me is that Mozilla wants to egress data. It being an LLM I really don't care.
Exactly this. The black box in this case is a problem because it's not in my computer. It transfers the users data to an external entity that can use this data to train it's model or sell it.
Not everyone uses their browser just to surf social media, some people use it for creating things, log in to walled gardens to work creatively. They do not want to send this data to an AI company to train on, to make themselves redundant.
Discussing the inner workings of an AI isn't helping, this is not what most people really worry about. Most people don't know how any of it works but they do notice that people get fired because the AI can do their job.
The local part is the important part here. If we get consumer level hardware that can run general LLM models, there we can actually monitor locally what goes in and what goes out, then it meets the privacy needs/wants of power users.
My take is that I'm ok with anything a company wants to do with their product EXCEPT when they make it opt out or non-opt-outable.
Firefox could have an entire section dedicated to torturing digital puppies built into the platform and... Ok, well, that's too far, but they could have a costco warehouse full of AI crap and I wouldn't mind at all as long as it was off by default and preferably not even downloaded to the system unless I went in and chose to download it.
I know respecting user preference doesn't line their pockets but neither does chasing users down and shoving services they never asked for and explicitly do not want into their faces.
Translation AI though has provable behavior cases though: round tripping.
An ideal translation is one which round-trips to the same content, which at least implies a consistency of representation.
No such example or even test as far as I know exists for any of the summary or search AIs since they expressly lose data in processing (I suppose you could construct multiple texts with the same meanings and see if they summarize equivalently - but it's certainly far harder to prove anything).
Getting byte exact text isn't the point though: even if it's different, I as the original writer can still look at roundtripped text and evaluate that it has the same meaning.
It's not a lossy process, and N round-trips should not lose any net meaning either.
This isn't a possible test with many other applications.
English to Japanese loses plurals, Japanese to English loses most honoriffics and might need to invent a subject (adding information that shouldn't be there and might be wrong). Different languages also just plain have more words than others with their own nuances, and a round trip translation wouldn't be able to tell which word to choose for the original without a additional context.
Translation is lossy. Good translation minimizes it without sounding awkward, but that doesn't mean some detail wasn't lost.
How about a different edge case. It's easier to round trip successfully if your translation uses loan words. It can guarantee that it translates back to the same word. This metric would prefer using loan words even if they are not common in practice and would be awkward to use.
I think the author was close to something here but messed up the landing.
To me the difference between something like AI translation and an LLM is that the former is a useful feature and the latter is an annoyance. I want to be able to translate text across languages in my web browser. I don't want a chat bot for my web browser. I don't want a virtual secretary - and even if I did, I wouldn't want it limited to the confines of my web browser.
It's not about whether there is machine learning, LLMs, or any kind of "AI" involved. It's about whether the feature is actually useful. I'm sick of AI non-features getting shoved in my face, begging for my attention.
I just want FireFox to focus on building an absolutely awesome plugin API that exposes as much power and flexibility as possible - with the best possible security sandbox and permissions model to go with it.
Then everyone who wants AI can have it and those that don't .... don't.
I just want a browser that lets me easily install a good adblocker on all my operating systems. I don't care about their new toolbar or literally any other feature, because I will probably just disable it immediately anyway. But the nr 1 thing I use every day on every single site I visit is an adblocker. I'm always baffled when people complain about ads on mobile or something, because I literally haven't watched ads in decades now.
> I don't care about their new toolbar or literally any other feature
At some point Firefox added these gaps on the URL bar, every single time I install Firefox I have to go out of my way to delete the spacing, it drives me up a wall.
>Then everyone who wants AI can have it and those that don't .... don't.
The current trajectory of products with integrated online worries me, due to the fact that the average computer/phone user isn't as tech-savvy as the average HN reader, to the point where they are unable to toggle stuff they genuinely never asked for, but they begrudgingly accept them because they're... there.
My mother complained about AI mode on Google Chrome, and the "press tab" on the address bar, but she's old and doesn't even know how to connect to the Wi-Fi. Are we safe to assume that she belongs to the percentage of Google Chrome users that they embrace AI, based on the fact that she doesn't know how to turn it off, and there's no easy way to go about it?
I'm willing to bet that Google's reports will assume so, and demonstrate a wide adoption of AI by Chrome users to stakeholders, which will be leveraged as a fact that everyone loves it.
This whole backlash to firefox wanting to introduce AI feels a little knee-jerky. We don't know if firefox might want to roll out their own locally hosted LLM model that then they plug into.. and if so, if would cut down on the majority of the knee jerk complaints. I think people want AI in the browser, they just don't want it to be the big-corp hosted AI...
[Update]: as I posted below, sample use cases would include translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc
They are not "wanting" to introduce AI, they already did.
And now we have:
- A extra toolbar nobody asked for at the side. And while it contains some extra features now, I'm pretty much sure they added it just to have some prominent space to add a "Open AI Chatbot" button to the UI. And it is irritating as fuck because it remembers its state per window. So if you have one window open with the sidebar open, and you close it on another, then move to the other again and open a new window it thinks "hey, I need to show a sidebar which my user never asked for!". Also I believe it is also opening itselves sometimes when previously closed. I don't like it at all.
- A "Ask an AI Chatbot" option which used to be dynamically added and caused hundreds of clicks on wrong items on the context menu (due to muscle memory), because when it got added the context menu resizes. Which was also a source of a lot of irritation. Luckily it seems they finally managed to fix this after 5 releases or so.
Oh, and at the start of this year they experimented with their own LLM a bit in the form of Orbit, but apparently that project has been shitcanned and memoryholed, and all current efforts seem to be based on interfacing with popular cloud based AIs like ChatGPT, Claude, Copilot, Gemini and Mistral. (likely for some $$$ in return, like the search engine deal with Google)
Every time i reinstall Firefox on a new machine, the number of annoyances that I need to remove or change increases.
Putting back the home button, removing the tabs overview button, disabling sponsored suggestions in the toolbar, putting the search bar back, removing the new AI toolbar, disabling the "It's been a while since you've used Firefox, do you want to cleanup your profile?", disabling the long-click tab preview, disabling telemetry, etc. etc.
All your complaints can be resolved in a few seconds by using the settings to customize the browser to your liking and not downloading extensions you dont like. And tons of people asked for that sidebar by the way.
We have to put this all in the context. Firefox is trying to diversify their revenue away from google search. They are trying to provide users with a Modern browser. This means adding the features that people expect like AI integration and its a nice bonus if the AI companies are willing to pay for that.
> All your complaints can be resolved in a few seconds by using the settings to customize the browser to your liking and not downloading extensions you dont like
until you can't. Because the option foes from being an entry in the GUI to something in about:config, then is removed from about:config and you have to manually add it and then is removed completely. It's just a matter of time, but i bet that soon we'll se on nightly that browser.ml.enable = false and company do nothing
For me, the complaint isn’t the AI itself, but the updated privacy policy that was rolled out prior to the AI features. Regardless of me using the AI features or not, I must agree to their updated privacy policy.
This is an absurd take. The meaning of "selling" is extremely broad, courts have found such language to apply to transactions as simple as providing an http request in exchange for an http response. Their lawyers must have been begging them to remove that language for the liability it represents.
For all purposes actually relevant to privacy, the updated language is more specific and just as strong.
If they were only selling data in such an 'innocent' way, couldn't they clearly state that, in addition to whatever legalese they're required to provide?
The courts have found providing an http request in exchange for an http response- where both the request and response contains valuable data, is selling data? Well that’s interesting because I too consider it selling of data. I’m glad the courts and I can agree on something so simple and obvious.
The new AI Tab Grouping feature says it. I've never tried the AI chatbot feature but that makes sense. Would be fun to somehow talk to the local AI translation feature.
Nobody wants a browser that's focused on diversifying its revenue, especially from Mozilla which pretends to be a non-profit "free software community".
Chrome is paid for by ads and privacy violations, and now Firefox is paid for by "AI" companies? That is a sad state of affairs.
Ungoogled Chromium and Waterfox are at best a temporary measure. Perhaps the EU or one of the U.S. billionaires would be willing to fund a truly free (as in libre) browser engine that serves the public interest.
Mozilla the browser doesnt pretend to be a non profit. Mozilla corporation which runs the browser is a for profit company they do not solict donations and NEED to make money to survive. Its just that Mozilla corporation is owned by Mozilla foundation which is a non profit.
>Nobody wants a browser that's focused on diversifying its revenue
I want a browser that has a sustainable business model so it wont collapse some time in the future. That means diversifying its revenue stream away from google's search contract.
>This whole backlash to firefox wanting to introduce AI feels a little knee-jerky. We don't know if firefox might want to roll out their own locally hosted LLM model that then they plug into.. and if so, if would cut down on the majority of the knee jerk complaints. I think people want AI in the browser, they just don't want it to be the big-corp hosted AI...
Because the phrase "AI first browser" is meaningless corpospeak - it can be anything or nothing and feels hollow. Reminiscent of all past failures of firefox.
I just want a good browser that respects my privacy and lets me run extensions that can hook at any point of handling page, not random experiments and random features that usually go against privacy or basically die within short time-frame.
> [Update]: as I posted below, sample use cases would include translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc
I don't want any of this built into my web browser. Period.
This is coming from someone who pays for a Claude Max subscription! I use AI all the time, but I don't want it unless I ask for it!!!
I'm willing to pay for housing in New York. I'm not willing to pay for housing in Antarctica. The reasons being (1) I already have an apartment in New York and do not need another one and (2) I don't want to live in Antarctica.
Somehow they also think we'll pay for Gemini, GPT, Claude, perplexity and their browser thingy, co-pilot and whatever else they have going on. Not to mention, all these things do 95% the same and don't really have any moat.
I don't understand why these CEOs are so confident they're standing out from the rest. Because really, they don't.
Right now firefox is a browser as good as Chrome and in a few niche things better, but its having a deeply difficult time getting/keeping marketshare.
I don't see their big masterplan for when Firefox is just as good as the other AI powered browsers. What will make people choose Mozilla? It's not like they're the first to come up with this idea and they don't even have their own models so one way or another they're going to play second fiddle to a competitor.
I think there's a really really strong part of 2. ??? / 3. profit!!! In all this. And not just in Mozilla. But more so.
I mean OpenAI, they have first-mover. Their moat is piling up legislation to slow down the others. Microsoft, they have all their office users, they will cram their AI down their throats whether they want it or not. They're way behind on model development due to strategic miscalculations but they traded their place as a hyperscaler for a ticket into the big game with OpenAI. Google, they have fuck you money and will do the same as Microsoft with their search and mail users.
But Mozilla? "Oh we want to get more into advertising". Ehm yeah basically what will alienate your last few supporters, and getting onto a market where people with 1000x more money than you have the entire market divided between them. Being slightly more "ethical" will be laughed away by their market forces.
Mozilla has the luck that it doesn't have too many independent investors. Not many people screaming "what are we doing about AI because everyone else doing it". They should have a little more insight and less pressure but instead they jump into the same pool with much bigger sharks.
In some ways I think it's that Mozilla leadership still seems themselves as a big tech player that is temporarily a little embarrassed on the field. Not like the second-rank one it is that has already thoroughly deeply lost and must really find something unique to have a reason to exist. Because being a small player is not super bad, many small outfits do great. But it requires a strong niche you're really really good at, better than all the rest. That kinda vision I just don't see from Mozilla.
> We don't know if firefox might want to roll out their own locally hosted LLM model that then they plug into.. and if so, if would cut down on the majority of the knee jerk complaints
Personally I'd prefer if Firefox didn't ship with 20 gigs of model weights.
This 100% -- the AI features already in Firefox, for the most part, rely on local models. (Right now there is translation and tab-grouping, IIRC.)
Local based AI features are great and I wish they were used more often, instead of just offloading everything to cloud services with questionable privacy.
Local models are nice for keeping the initial prompt and inference off someone else's machine, but there is still the question of what the AI feature will do with data produced.
I don't expect a business to make or maintain a suite of local model features in a browser free to download without monetizing the feature somehow. If said monetization strategy might mean selling my data or having the local model bring in ads, for example, the value of a local model goes down significantly IMO.
Which ones? Translation is local. Preview summarization is local. Image description generation is local. Tab grouping is local. Sidebar can also show a locally hosted page.
The last feature was the sidebar and Google lens integration. For the sidebar the "can" does the heavy lifting but you should also include that it's hidden and won't sync if you use a local page...
yeah, translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc
All this would allow for a further breakdown of language barriers, and maybe the communities of various languages around the world could interact with each other much more on the same platforms/posts
If I have to fill a form for anything that matters, I'm doing it by hand. I don't even use the existing historical auto-complete stuff. It can fill stuff incorrectly. LLMs regularly get factual stuff wrong in mysterious ways when I engage with them as chat bots. It might be less effort to verify correctness than type in all the fields, but IMO there's less risk of missing or forgetting to check one of the fields.
I've had on so many cases autocomplete forms puts something in a field it shouldn't and messes up a submission. I've had it happen on travel documents that caused headaches later at the airport - especially if it fills in a hidden field because some bad web dev implemented it poorly.
Ecommerce checkout. Filling out my address, billing adress, and credit card information. Things like drop downs or different formatting can mess up the current basic ones, but it really shouldn't be that hard for AI to figure out how to fill out such information it knows about me into the form.
I think I've found those unreliable in the past, but much more reliable as time goes on. I can't really remember the last time an address or credit card info was mishandled by autofill. I get that addresses can be poorly defined, but for one you've entered yourself, that you just want to be re-entered, I don't see why we can't solve that problem without AI.
Mozilla implementing a search feature which renders Google and/or its advertising capabilities irrelevant is highly unlikely so long as Mozilla is a financial vassal of Google.
The ux changes and features remind us of pocket and all the other low value features that come with disruptive ux changes as other commenters have noted.
Meanwhile, Mozilla canned the servo and mdn projects which really did provide value for their user base.
It doesn't matter what they exactly want to do, what it matters is they're wasting resources on it instead of keeping the ... browsing part ... up to date.
I don't. And the whole idea of Firefox's marketing is that it won't force things on me. Ofc course om frustrated. My core browser should serve pages and manage said pages. Anything else should be an option.
I'm beyond tired of being told my preferences, especially by people with incentives to extract money out of me.
There is also the matter of how training data was licensed to create these models. Local or not, it’s still based on stolen content. And really what transformative use case is there to have AI in the browser - none of the ones currently available step outside gimmicks that quickly get old and don’t really add value.
I want the people who make Firefox to make decisions about Firefox based on what users have been asking for instead of based on what a CEO of a for-profit decides is still not going to make them any money, just like every other plan that got pitched in the last 10 years that failed to turn their losing streak around.
It's not a knee-jerk reaction to "AI", it's a perfectly reasonable reaction to Mozilla yet again saying they're going to do something that the user base doesn't work, won't regain them marketshare, and that's going to take tens of thousands of dev hours away from working on all the things that would make Firefox a better browser, rather than a marginally less nonprofitable product.
While I do sympathize with the thought behind it, general user is already equating llm chat box as 'better browsing'. In terms of simple positioning vis-a-vis non-technical audience, this is one integration that does make fiscal sense.. if mozilla was a real business.
Now, personally, I would like to have sane defaults, where I can toggle stuff on and off, but we all know which way the wind blows in this case.
I find that hard to believe, every general/average user I have spoken to does not use AI for anything in their daily lives and have either not tried it at all or only played with it a bit a few years ago when it first came out.
The problem with integrating a chat bot is that what you are effectively doing is the same thing as adding a single bookmark, except now it's taking up extra space.
There IS no advantage here, it's unnecessary bloat.
Firefox is not for general users, which is the problem that Mozilla's for a literal decade now. There is no way to make it better than Chrome or Safari (because it has to be better for every day users to switch, not just "as good" or even "way more configurable but slightly worse". It has to be appreciably better).
So the only user base is the power user. And then yes: sane defaults, and a way to turn things on and off. And functionality that makes power users tell their power user friends to give FF a try again. Because if you can't even do that, Firefox firmly deserves (and right now, it does) it's "we don't even really rank" position in the browser market.
The way to make Firefox better is by not doing the things that are making the other browsers worse. Ads and privacy are an example of areas where Chrome is clearly getting worse.
LLM integration... is arguable. Maybe it'll make Chrome worse, maybe not. Clunky and obtrusive integration certainly will.
I don't think a locally hosted LLM would be powerful enough for the supposed "agentic browsing" scenarios - at least if the browser is still supposed to run on average desktop PCs.
Get there by what mechanism? In the near term a good model pretty much requires a GPU, and it needs a lot of VRAM on that GPU. And the current state of the art of quantization has already gotten us most of the RAM-savings it possibly could.
And it doesn't look like the average computer with steam installed is going to get above 8GB VRAM for a long time, let alone the average computer in general. Even focusing on new computers it doesn't look that promising.
By M series and amd strix halo. You don't actually need a gpu, if the manufacturer knows that the use case will be running transformer models a more specialized NPU coupled with higher memory bandwidth of on the package RAM.
This will not result in locally running SOTA sized models, but it could result in a percentage of people running 100B - 200B models, which are large enough to do some useful things.
Those also contain powerful GPUs. Maybe I oversimplified but I considered them.
More importantly, it costs a lot of money to get that high bus width before you even add the memory. There is no way things like M pro and strix halo take over the mainstream in the next few years.
This is probably their plan to monetize this. They will partner with a AI company to 'enhance' the browser with a paid cloud model and the local model has no monetary incentive not to suck.
it's the cornerstone of their strategy to invest in local, sovereign ai models in an attempt to court attention from persons / organizations wary of us tech
it's better to understand the concern over mozilla's announcement the following way i think:
- mozilla knows that their revenue from default search providers is going to dry up because ai is largely replacing manual searching
- mozilla (correctly) identifies that there is a potential market in eu for open, sovereign tech that is not reliant on us tech companies
- mozilla (incorrectly imo) believes that attaching ai to firefox is the answer for long term sustainability for mozilla
with this framing, mozilla has only a few options to get the revenue they're seeking according to their portfolio, and it involves either more search / ai deals with us tech companies (which they claim to want to avoid), or harvesting data and selling it like so many other companies that tossed ai onto software
the concern about us tech stack dominations are valid and probably there is a way to sustain mozilla by chasing this, but breaking the us tech stack dominance doesn't require another browser / ai model, there are plenty already. they need to help unseat stuff like gdocs / office / sharepoint and offer a real alternative for the eu / other interested parties -- simply adding ai is mozilla continuing their history of fad chasing and wondering why they don't make any money, and demonstrates a lack of understanding imo about, well, modern life
my concern over the announcement is that mozilla doesn't seem to have learned anything from their past attempts at chasing fads and likely they will end up in an even worse position
firefox and other mozilla products should be streamlined as much as possible to be the best N possible with these kinds of side projects maintained as first party extensions, not as the new focus of their development, and they should invest the money they're planning to dump into their ai ambitions elsewhere, focusing on a proper open sovereign tech stack that they can then sell to eu like they've identified in their portfolio statement
the announcement though makes it seem like mozilla believes they can just say ai and also get some of the ridiculous ai money, and that does not bode well for firefox as a browser or mozilla's future
We're still in bubble-period hyper-polarized discourse: "shoehorn AI into absolutely everything and ram it down your throat" vs "all AI is bad and evil and destroying the world."
I don't want any AI in anything apart from the Copilot app, where the AI that I use is. I don't want it in my IDE. I don't want it in my browser. I don't want it in my messaging client. I don't want it in my email app. I want it in the app, where it is, where I can choose to use it, give it what it needs, and leave at at bloody that.
I also want to have complete control over what data I provide to LLMs (at least as long as inference happens in the cloud), but I’d love to have them everywhere, not just a chat UI (which I suspect will be seen as a relatively pretty bizarre way of doing non-chat tasks on a computer).
Sorry but no. I dont want another humans work summarized by some tool that's incapable of reasoning. It could get the whole meaning of the text wrong. Same with real time translation. Languages are things even humans get wrong regularly and I dont want some biased tool to do it for me.
I switched to Waterfox about a year ago because my poor old linux box just couldn't keep up with the latest Firefox version (especially the Snap package! I literally unusable for me) and I am very thankful that they aren't going to be including any of the LLM crud that Mozilla has been talking up.
I get the utility that this stuff can have for certain types of activities but on top of not having great hardware to run the dang things, I just don't find any of the proposed use-cases that compelling for me personally.
It's just nice that the totalizing self-insistence of AI tech hasn't gobbled up every corner of the tech space, even if those crevices and niches are getting smaller by the day.
Waterfox is dependant on Firefox still being developed. Mozilla are adding these features to try to stay relevant and keep or gain market share. If this fails, and Firefox goes away, Waterfox is unlikely to survive.
If most people move from Firefox to Waterfox, then Waterfox can acquire Firefox devs, no? Obviously it comes to money, but the first step to gain funding is to gain popularity...
That's true, but as a Waterfox user, I'm not worried!
If firefox really completely fails, and nobody is able to continue the open source project, I'll just find a new browser. That's not a huge hassle- Waterfox does what I need in the here and now, that's my only criterion.
Yes, I agree. I suppose when I said "I'm not worried" - I meant in the context of "it doesn't put me off using Waterfox". I am worried from an overall software ecosystem point of view.
A browser is a tool that allows you to browse the internet. It should be able to display HTML elements, and stuff.
LLMs are also a tool, but it is not necessary for web browsing. It should be installed into a browser as extension, or integrated as such, so it should be quite easily enabled, or disabled. Surely it should not be intertwined with the browser in a meaningful way imho.
This is like when people defend Windows 11's nonsense by saying "you can disable or remove that stuff". Yes, you can. But you shouldn't have to, and I personally prefer to use tools which don't shove useless things into the tool because it's trendy.
not to mention firefox routinely blows up any policies you set during upgrades, incompatibilities, and an endless about:config that is more opaque than a hunk of room temperature lard.
How is this different from linux? People happily spend hours customizing defaults in their OS. It’s usually a point of praise for open source software.
The difference is that on Windows all unwanted features eventually become mandatory, with no way of switching them off. With Firefox, it never happens.
> Mozilla hasn't had the benefit of the doubt for quite a while here
In contrast to Google Chrome? This is just FUD. Ublock Origin is still working and will be working. Full customization is still there and isn't going away. All of that is unlike in Chrom(ium).
This is not a thread comparing Mozilla to Google. This is a thread where we worry about how a non google browsing alternative stays alive. Of course none of us posting here trusts Google.
More than 1% of humans can read and create a file on computer. Others know how to read and use a search engine, and way more can be instructed by a LLM on how to do so.
I would say it is nearly as easy as installing waterfox or some other privacy focused fork of Firefox.
Even if we ignore things like "they're chasing AI fads instead of better things" and "they're adding attack surface" and so forth, and just focus on the disabling feature toggles thing...
... Mozilla has re-enabled AI-related toggles that people have disabled. (I've heard this from others and observed it myself.) They also keep adding new ones that aren't controlled by a master switch. They're getting pretty user-hostile.
Is it really in all 4 of those places? Just need to change it in the first two, right? I hate the new AI tab feature and wish they had a non-AI option.
There are already user-facing preferences for all of the AI features currently in Firefox. Some of them you don’t even have to go into Settings for, just right-click > Remove AI chatbot. They’re annoying, but I appreciate that they still need to be explicitly approved by the user to be enabled (for now).
> Waterfox won't include them. The browser's job is to serve you, not think for you... Waterfox will not include LLMs. Full stop. At least and most definitely not in their current form or for the foreseeable future.
> If AI browsers dominate and then falter, if users discover they want something simpler and more trustworthy, Waterfox will still be here, marching patiently along.
This is basically their train of thought: provide something different for people who truly need it. There's nothing to criticize about.
However, let's don't forget that other browsers can remove/disable AI features just as fast as they add them. If Waterfox wants to be *more than just an alternative* (a.k.a. be a competitor), they needs discover what people actually need and optimize heavily on that. But this is hard to do because people don't show their true motives.
Maybe one day, it turned out that people do just want an AI that "think for them". That would be awkward, to say the least.
How do you disable the telemetry in Waterfox? It looks like they get their funding because they partnered with an Ad company. Do I just need to change the default search?
Did Firefox already add AI into Tabs? Today I just got my first 'Tab Grouping' and it says "Nightly uses AI to read your Open Tabs". That's the worst way to do grouping ever... just group hierarchically based on where it opened from...
Particularly since they clearly keep this info around - if you install TreeStyleTabs or Sideberry, you'll see it immediately show the historical-structure of your current tabs (in-process at least, I'm not 100% sure about after kill->restore). That info has to come from somewhere.
The problem with this is integration:
no one would complain if it was an official plugin/extension,
but integrating this plugin into Firefox is forced and unexpected decision.
Firefox telemetry,labs/experiments and server-dependent features
will lose it marketshare slowly in favor of local-only browsers that
don't have online dependencies or forced bloatware.
Like many i've switched long ago to LibreWolf.
I was a FF driver for ages and now making the switch to Chrome based browser simple because it's faster and websites are all tested against Chrome / Safari. I see both of these issues manifest IRL on a weekly basis. Why do I want to burn up CPU cycles and second using FF when Chromium is literally faster.
I use FF because of uBlock Origin, and also because it has built-in support for SOCKS5 proxy connections, which I use to access stuff at work over an ssh tunnel.
if kagi can make a search engine that charges users, why dont we have a 1$/month open source browser whose code can be verified but people pay to use monthly?
I guess that wouldn't really "open source" in the traditional sense, but that's clearly a tangent.
Personally, I'd love a paid for high quality browser that serves me rather than sneakily trying to get me to look at ads.
I think the challenge is that a browser is an incredibly difficult and large thing to build and maintain. So there aren't many wholly new browsers in existence, and therefore not very many business models being tried out.
Full agreement that I'd pay for such a thing- I have a browser and a terminal open non-stop during my workday. It's an important tool and I'd easily pay for a better offering if that was an option.
If they support it and have an incentive to listen to their customers and not shareholders, gladly. We can't keep using those logic of being afraid to invest then be mad when companies find someone who will.
With this, people will come here and the go. I mean consider the example of many GNU/Linux users I know who use GNU/Linux (or for them Linux means Ubuntu) system and can ask them to try out Waterfox. But, about installation - can't we have .deb? I know we can easily install from tarball and then setup the .desktop file and then adjust the icon to properly display, and what not...But, Can we make a bit simpler to try?
how is adding ai chat different than asking search engine? I think mozilla wants to make sure that it gets some cut for sending queries to ai similar to their existing revenue model where they get cut for sending it to google. Similar to SE's users should have a choice to use any ai or not.
On Windows Mozilla can't even handle disabling hardware acceleration, a.k.a. the GPU, from its settings page. Sure you can toggle the button but it doesn't work as verified in the task manager. What hope is there that they can be trusted to disable AI then? It's a feature that I'd never want enabled. When that "feature" comes out users will be forced to find a fork without the feature.
I completely agree with the main sentiment, which is - I want the browser to be a User Agent and nothing else. I don´t need a crappy, un-reliable intermediary between the already perfectly fine UA and the Internet.
Good stuff. Bit unrelated but I am excited for the imminent wave of lightweight Servo based browsers, will finally let people break free from the Blink/Gecko duopoly.
“Even if you can disable individual AI features, the cognitive load of monitoring an opaque system that’s supposedly working on your behalf would be overwhelming.”
99.9% of people haven’t ever had one single thought about how their software works. I don’t think they will be overwhelmed with cognitive load. Quite the opposite.
I guess it's nice for non-technical people who don't know how to use `about:config` but beyond that I don't really see the need. Hopefully adding that extra layer of indirection doesn't mean the users will have to wait too long for security patches.
PSA (for the nth time): about:config is not a supported way of configuring Firefox, so if you tweak features with about:config, don't be surprised if those tweaks stop working without warning.
That said, they're admittedly terrible about keeping their documentation updated, letting users know about added/depreciated settings, and they've even been known to go in and modify settings after you've explicitly changed them from defaults, so the PSA isn't entirely unjustified.
"Two other forms of advanced configuration allow even further customization: about:config preference modifications and userChrome.css or userContent.css custom style rules. However, Mozilla highly recommends that only the developers consider these customizations, as they could cause unexpected behavior or even break Firefox. Firefox is a work in progress and, to allow for continuous innovation, Mozilla cannot guarantee that future updates won’t impact these customizations."
As I mentioned in a comment below (https://news.ycombinator.com/item?id=46297617 ), Firefox does not rely only on sponsors. There are a few ways to pay money that goes directly towards Firefox.
That link is for Mozilla Foundation, which is a non-profit and donations to it do not go to the development of Firefox. Mozilla Corporation, the for-profit entity, owns and manages Firefox. The way to support Firefox monetarily is by buying Mozilla VPN where available (this is Mullvad in the backend) and buying some Firefox merchandise (like stickers, t-shirts, etc.). I think an MDN Plus subscription also helps.
I agree it's counter-evidence right now, and I think there has been a way to donate for a long time now (just to "mozilla", not "firefox" or setting any restrictions), but I'm not sure what the historical option has been...
As I read the post by MrAlex94, I noticed a remark that the browser Chrome is good as a user agent. To me, that's terrific! Looks like I'll have to consider Chrome again.`
Here are what I find as reasons to scream about Mozilla:
Popups:
(a) Several times a day, my attention and concentration get interrupted by, for me, the unwelcome announcement that there is a new version I can download. A new version can have changes I don't like and genuine bugs. Sure, I could keep a copy of my favorite version from history, but that is system management mud wrestling and interruption of my work.
(b) Now I get told several times a day that my computer and cell phone can share access to a Web page. In this action Mozilla covers up what that page was showing I wanted it to show. No thanks. When I'm at my computer, AMD 8 core processor, all my files and software tools, and 1 Gbps optical fiber connection to the Internet and looking at a Web page, I want nothing to do with a cell phone's presentation of a, that, Web page.
(c) Some URLs are a dozen lines long and Mozilla finds ways to present such URLs with all their lines and pursue clearly their main objective -- cover up the desired content.
Mozilla needs to make their covering up, changing, the screen optional or just eliminated.
Want me to donate? You've mentioned as little as $10. Deal: Raise the $10 by a factor of 5 AND quit covering up my content and interrupting my work, and we've got a deal.
When they say "AI browsers are proliferating." and "Their lunch is being eaten by AI browsers." what does that mean? What's an "AI Browser", and are they really gaining significant market share? For what?
I found this (1) that suggests that several "AI Browsers" exist, which is "proliferating" in a sense.
Why don't you go ahead and share the "donate to Firefox" page?
Last I knew, it doesn't exist. You can donate to Mozilla Corporation, the group that has been agitating it's own users and donors for years now.
People who want to support the Firefox team/product and have them focus on improving things like the development tools (or whatever else) literally cannot. Mozilla doesn't make that an option.
I do think dipping your toes into the future is worth it. If it turns out the LLM is trying to kill us by cancelling our meetings and emailing people that we're crazy that would suck. But I don't think this is any more dangerous than giving people a browser in the first place. They have already done enough to shoot themselves in the foot enough.
I am more of a sceptic of AI in the context of a browser, than its general use. I think LLMs have great utility and have really helped push things along - but it’s not as if they’re completely risk free.
> If it turns out the LLM is trying to kill us by cancelling our meetings and emailing people that we're crazy that would suck.
It's more likely it will try to kill us by talking depressed people into suicide and providing virtual ersatz boyfriends/girlfriends to replace real human relationships, what is a functional equivalent to cyber-neutering, given people can't have children by dating LLMs.
Just checking but… what if instead of cruel natural selection, we’ve largely eliminated threats like predators and starvation… but still by either necessity or accident are presented with a less cruel more subtle filter?
> Large language models are something else entirely*. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn't sit well.
Am I being overly critical here or is this kind of a silly position to have right after talking about how neural machine translation is okay? Many of Firefox's LLM features like summarization afaik are powered by local models (hell even Chrome has local model options). It's weird to say neural translation is not a black box but LLMs are somehow black boxes that we cannot hope to understand what they do with the data, especially when viewed a bit fuzzily LLMs are scaled up versions of an architecture that was originally used for neural translation. Neural translation also has unverifiable behavior in the same sense.
I could interpret some of the data talk as talking about non local models but this very much seems like a more general criticism of LLMs as a whole when talking about Firefox features. Moreover, some of the critiques like verifiability of outputs and unlimited scope still don't make sense in this context. Browser LLM features except for explicitly AI browsers like Comet have so far had some scoping to their behavior, either in very narrow scopes like translation or summarization. The broadest scope I can think of is the side panels that show up which allow you to ask about a web page with context. Even then, I do not see what is inherently problematic about such scoping since the output behavior is confined to the side panel.
To be more charitable to TFA, machine translation is a field where there aren't great alternatives and the downside is pretty limited. If something is in another language you don't read it at all. You can translate a bunch of documents and benchmark the result and demonstrate that the model doesn't completely change simple sentences. Another related area is OCR - there are sometimes mistakes, but it's tractable to create a model and verify it's mostly correct.
LLMs being applied to everything under the sun feels like we're solving problems that have other solutions, and the answers aren't necessarily correct or accurate. I don't need a dubiously accurate summary of an article in English, I can read and comprehend it just fine. The downside is real and the utility is limited.
There's an older tradition of rule-based machine translation. In these methods, someone really does understand exactly what the program does, in a detailed way; it's designed like other programs, according to someone's explicit understanding. There's still active research in this field; I have a friend who's very deep into it.
The trouble is that statistical MT (the things that became neural net MT) started achieving better quality metrics than rule-based MT sometime around 2008 or 2010 (if I remember correctly), and the distance between them has widened since then. Rule-based systems have gotten a little better each year, while statistical systems have gotten a lot better each year, and are also now receiving correspondingly much more investment.
The statistical systems are especially good at using context to disambiguate linguistic ambiguities. When a word has multiple meanings, human beings guess which one is relevant from overall context (merging evidence upwards and downwards from multiple layers within the language understanding process!). Statistical MT systems seem to do something somewhat similar. Much as human beings don't even perceive how we knew which meaning was relevant (but we usually guessed the right one without even thinking about it), these systems usually also guess the right one using highly contextual evidence.
Linguistic example sentences like "time flies like an arrow" (my linguistics professor suggested "I can't wait for her to take me here") are formally susceptible of many different interpretations, each of which can be considered correct, but when we see or hear such sentences within a larger context, we somehow tend to know which interpretation is most relevant and so most plausible. We might never be able to replicate some of that with consciously-engineered rulesets!
This is the bitter lesson.[1]
I too used to think that rule-based AI would be better than statistical, Markov chain parrots, but here we are.
Though I still think/hope that some hybrid system of rule-based logic + LLMs will end up being the winner eventually.
----------------
[1] https://en.wikipedia.org/wiki/Bitter_lesson
These days its pretty much the "sweet" lesson for everyone but Sutton and his peers it seems.
It's bitter for me because I like looking at how things work under the hood and that's much less satisfying when it's "a bunch of stats and linear algebra that just happens to work"
So you prefer "a bunch of electrons, field effects, and clocks than just happen to work"?
If you're building on a computer language, you can say you understand the computer's abstract machine, even though you don't know how we ever managed to make a physical device to instantiate it!
> There's an older tradition of rule-based machine translation. In these methods, someone really does understand exactly what the program does, in a detailed way
I would softly disagree with this. Technically, we also understand exactly what a LLM does, we can analyze every instruction that is executed. Nothing is hidden from us. We don't always know what the outcome will be; but, we also don't always know what the outcome will be in rule-based models, if we make the chain of logic too deep to reliably predict. There is a difference, but it is on a spectrum. In other words, explicit code may help but it does not guarantee understanding, because nothing does and nothing can.
The grammars in rule-based MT are normally fully conceptually understood by the people who wrote them. That's a good start for human understanding.
You could say they don't understand why a human language evolved some feature but they fully understand the details of that feature in human conceptual terms.
I agree in principle the statistical parts of statistical MT are not secret and that computer code in high-level languages isn't guaranteed to be comprehensible to a human reader. Or in general, binary code isn't guaranteed to be incomprehensible and source code isn't guaranteed to be comprehensible.
But for MT, the hand-written grammars and rules are at least comprehended by their authors at the time they're initially constructed.
Yep, some domains have no hard rules at all.
Time flies like an arrow; fruit flies like a banana.
It's completely possible to write a parser that outputs every possible parse of "time flies like an arrow", and then try interpreting each one and discard ones that don't make sense according to some downstream rules (unknown noun phrase: "time fly").
I did this for a text adventure parser, but it didn't work well because there are exponentially ways to group the words in a sentence like "put the ball on the bucket on the chair on the table on the floor"
I would argue that particular sentence only exists to convey the bamboozled feeling you get when you reach the end of it, so only sentient parsers can parse it properly.
LLMs are great because of exactly that: they solve things that have no other solutions.
(And also things that have other solutions, but where "find and apply that other solution" has way more overhead than "just ask an LLM".)
There is no deterministic way to "summarize this research paper, then evaluate whether the findings are relevant and significant for this thing I'm doing right now", or "crawl this poorly documented codebase, tell me what this module does". And the alternative is sinking your own time in it - while you could be doing something more important or more fun.
and demonstrate that the model doesn't completely change simple sentences
A nefarious model would work that way though. The owner wouldn't want it to be obvious. It'd only change the meaning of some sentences some of the time, but enough to nudge the user's understanding of the translated text to something that the model owner wants.
For example, imagine a model that detects the sentiment of text about Russian military action, and automatically translates it to something a more positive if it's especially negative, but only 20% of the time (maybe ramping up as the model ages). A user wouldn't know, and a someone testing the model for accuracy might assume it's just a poor translation. If such a model became popular it could easily shift the perception of the public a few percent in the owner's preferred direction. That'd be plenty to change world politics.
Likewise for a model translating contracts, or laws, or anything else where the language is complex and requires knowledge of both the language and the domain. Imagine a Chinese model that detects someone trying to translate a contract from Chinese to English, and deliberately modifies any clause about data privacy to change it to be more acceptable. That might be paranoia on my part, but it's entirely possible on a technical level.
That's not a technical problem though is it? I don't see legal scenarios where unverified machine translation is acceptable - you need to get a certified translator to sign off on any translations and I also don't see how changing that would be a good thing.
I was briefly considering trying to become a professional translator, and I partly didn't pursue it because of the huge use of MT. I predict demand for human translators will continue to fall quickly unless there are some very high-profile incidents related to MT errors (and humans' liability for relying on them?). Correspondingly the supply of human translators may also fall as it appears like a less credible career option.
I think the point here is that, while such a translation wouldn't be admissible in court, many of us already used machine translation to read some legal agreement in a language we don't know.
> many of us already used machine translation to read some legal agreement in a language we don't know.
Have we? Most of us? Really? When?
I know I did for rent contracts and know other people that did the same. And I said many, not most.
Aside: Does anyone actually use summarization features? I've never once been tempted to "summarize" because when I read something I either want to read the entire thing, or look for something specific. Things I want summarized, like academic papers, already have an abstract or a synopsis.
In-browser ones? No. With external LLMS? Often. It depends on the purpose of the text.
If the purpose is to read someone's _writing_, then I'm going to read it, for the sheer joy of consuming the language. Nothing will take that from me.
If the purpose is to get some critical piece of information I need quickly, then no, I'd rather ask an AI questions about a long document than read the entire thing. Documentation, long email threads, etc. all lend themselves nicely to the size of a context window.
> If the purpose is to get some critical piece of information I need quickly, then no, I'd rather ask an AI questions about a long document than read the entire thing. Documentation, long email threads, etc. all lend themselves nicely to the size of a context window.
And what do you do if the LLM hallucinates? For me, skim-reading still comes out on top because my own mistakes are my own.
Yeah, basically every 15 minute YouTube video, because the amount of actual content I care about is usually 1-2 sentences, and usually ends up being the first sentence of an LLM summary of the transcript.
If something has actual substance I'll watch the whole thing, but that's maybe 10% of videos I find in experience.
I'd wager there's 95% of the benefit for 0.1% of the CPU cycles just by having a "search transcript for term" feature, since in most of those cases I've already got a clear agenda for what kind of information I'm seeking.
Many years ago I make a little proof-of-concept for displaying the transcript (closed captions) of a YouTube video as text, and highlighting a word would navigate to that timestamp and vice-versa. Such a thing might be valuable as a browser extension, now that I think of it.
https://reduct.video/ lets you edit (not just search!) videos that way. Kind of a different way to think about video content!
YouTube already supports that natively these days, although it's kind of hidden (and knowing Google, it might very well randomly disappear one day). Open the description of the video, scroll down and click "show transcript".
Searching the transcript has the problem of missing synonyms. This can be solved by the one undeniably useful type of AI: embedding vector search. Embeddings for each line of the transcript can be calculated in advance and compared with the embeddings of the user's search. These models need only a few hundred million parameters for good results.
One of the best features of SponsorBlock is crowd sourced timestamps for the meat of the video. Skip right over 20 minutes of rambling to see the cool thing in the thumbnail.
You mean you don't summarize those terrible articles you happen to come across and you're a little intrigued, hoping that there's some substance, and then you read, and it just repeats the same thing over and over again with different wording? Anyway, I sometimes still give them the benefit of the doubt, and end up doing a summary. Often they get summarized into 1 or 2 sentences.
Maybe I should start doing that but I usually just... don't read them.
No, not really. I don't even know how to really respond to this but maybe
1. I don't read "terrible articles". I can skim an article and figure if something I'm interested in.
2. I actually do read terrible articles and I have terrible taste
3. Any "summarization" I do that isn't from my direct reading is evaluated by the discussion around it. Though nowadays that's more and more spotty.
I can spot those articles from a mile away and never click the link.
Yes, several times a day. I use summarization for webpages, messages, documents and YouTube videos. It’s super handy.
I mainly use a custom prompt using ChatGPT via the Raycast app and the Raycast browser extension.
That said, I don’t feel comfortable with the level of AI being shoved into browsers by their vendors.
Aren't you worried it will fuck up your comprehension skills? Reading or listening.
Not him, but no. I read a ton already. Using LLMs to summarize a document is a good way to find out if I should bother reading it myself, or if I should read something else.
Skimming and being able to quickly decide if something is worth actually reading is itself a valuable skill.
There's a limit to how fast I can feasibly skim, and LLMs definitely do it faster.
I occasionally use the "summarize" button on the iPhone Mobile Safari reader view if I land on a blog entry and it's quite long and I want to get a quick idea of if it's worth reading the whole thing or not.
No, because an LLM cannot summarise. It can only shorten which is not the same.
Citation: https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actu...
Wonderful article showing the uselessness of this technology, IMO.
> I just realised the situation is even worse. If I have 35 sentences of circumstance leading up to a single sentence of conclusion, the LLM mechanism will — simply because of how the attention mechanism works with the volume of those 35 — find the ’35’ less relevant sentences more important than the single key one. So, in a case like that it will actively suppress the key sentence.
> I first tried to let ChatGPT one of my key posts (the one about the role convictions play in humans with an addendum about human ‘wetware’). ChatGPT made a total mess of it. What it said had little to do with the original post, and where it did, it said the opposite of what the post said.
> For fun, I asked Gemini as well. Gemini didn’t make a mistake and actually produced something that is a very short summary of the post, but it is extremely short so it leaves most out. So, I asked Gemini to expand a little, but as soon as I did that, it fabricated something that is not in the original article (quite the opposite), i.e.: “It discusses the importance of advisors having strong convictions and being able to communicate them clearly.” Nope. Not there.
Why, after reading something like this, should I think of this technology as useful for this task? It seems like the exact opposite. And this is what I see with most LLM reviews. The author will mention spending hours trying to get the LLM to do a thing, or "it made xyz, but it was so buggy that I found it difficult to edit it after, and contained lots of redundant parts", or "it incorrectly did xyz". And every time I read stuff like that I think — wow, if a junior dev did that the number of times the AI did, they'd be fired on the spot.
See also, something like https://boston.conman.org/2025/12/02.1 where (IIRC) the author comes away with a semi-positive conclusion, but if you look at the list near the end, most of these things are something that any person would get fired for, and are things that are not positive for industrial software engineering and design. LLMs appear to do a "lot", but still confabulates and repeats itself incessantly, making it worthless to depend on for practical purposes unless you want to spend hours chasing your own tail over something it hallucinated. I don't see why this isn't the case. I thought we were trying to reduce the error rate in professional software development, not increase it.
Yes. I use it sometimes in Firefox with my local LLM server. Sometimes i come across an article I'm curious about but don't have the time or energy to read. Then I get a TL;DR from it. I know it's not perfect but the alternative is not reading it at all.
If it does interest me then I can explore it. I guess I do this once a week or so, not a lot.
I highly doubt that no information would be worse than wrong information. Both wars in Ukraine and Gaza show this very clearly.
I just use it for personal information, I'm not involved in any wars :) I don't base any decisions on it, for example if I buy something I don't go by just AI stuff to make a decision. I use the AI to screen reviews, things like that (generally I prefer really deep review and not glossy consumer-focused ones). Then I read the reviews that are suitable to me.
And even reading an article about those myself doesn't make me insusceptible to misinformation of course. Most of the misinformation about these wars is spread on purpose by the parties involved themselves. AI hallucination doesn't really cause that, it might exacerbate it a little bit. Information warfare is a huge thing and it has been before AI came on the scene.
Ok, as a more specific example, recently I was thinking of buying the new Xreal Air 2. I have the older one but I have 3 specific issues with it. I used AI to find references about these issues being solved. This was the case and AI confirmed that directly with references, but in further digging myself I did find that there was also a new issue introduced with that model involving blurry edges. So in the end I decided not to buy the thing. The AI didn't identify that issue (though to be fair I didn't ask it to look for any).
So yeah it's not an allknowing oracle and it makes mistakes, but it can help me shave some time off such investigations. Especially now that search engines like google are so full of clickbait crap and sifting through that shit is tedious.
In that case I used OpenWebUI with a local LLM model that speaks to my SearXNG server which in turn uses different search engines as a backend. It tends to work pretty well I have to say, though perplexity does it a little better. But I prefer self-hosting as much as I can (of course the search engine part is out of scope there).
Even if you know about and act against mis- and disinformation, it affects you, and you voluntarily increase your exposure to it. And the situation is already terrible.
I gave the example of wars, because it’s obvious, even for you, and you won’t relativize away the same way how you just did with AI misinformation, which affects you the exact same way.
Haven’t tried them but I can see these features being really useful for screen reader users.
Yes.
Most recently, a new ISP contract: because it's both low stakes enough where I don't care much about inaccuracies (it's a bog standard contract from a run of the mill ISP), there's basically no information in there that the cloud vendor doesn't already have (if they have my billing details) but also where I'm curious about whether anything might jump out, all while not really wanting to read the 5 pages of the thing.
Just went back to that, it got both all of the main items (pricing, contract terms, my details) correctly, but also the annoying fine print (that I referenced, just in case). Also works pretty well across languages, though that depends on the model in question a bunch.
I feel like if browsers or whatever get the UX of this down, people will upload all sorts of data into those vendors that they normally shouldn't. I also think that with nuanced enough data, we'll eventually have the LLM equivalent of Excel messing up data due to some formatting BS.
Nah, because anything not worth reading is also not worth summarizing.
No, because I know how to search and skim.
Looking back with fresh eyes, I definitely think I could’ve presented what I’m trying to say better.
On a purely technical play, you’re right that I’m drawing a distinction that may not hold up purely on technical grounds. Maybe the better framing is: I trust constrained, single purpose models with somewhat verifiable outputs (seeing text go in, translated text go out, compare its consistency) more than I trust general purpose models with broad access to my browsing context, regardless of whether they’re both neural networks under the hood.
WRT to the “scope”, maybe I have picked up the wrong end of the stick with what Mozilla are planning to do - but they’ve already picked all the low hanging fruit with AI integration with the features you’ve mentioned and the fact they seem to want to dig their heels in further, at least to me, signals that they want deeper integration? Although who knows, the post from the new CEO may also be a litmus test to see what the response to that post elicits, and then go from there.
I still don’t understand what you mean by “what they do with your data” - because it sounds like exfiltration fear mongering, whereas LLMs are a static series of weights. If you don’t explicitly call your “send_data_to_bad_actor” function with the user’s I/O, nothing can happen.
I disagree that it’s fear mongering. Have we not had numerous articles on HN about data exfiltration in recent memory? Why would an LLM that is in the drivers seat of a browser (not talking about current feature status in Firefox wrt to sanitised data being interacted with) not have the same pitfalls?
Seems as if we’d be 3 for 3 in the “agents rule of 2” in the context of the web and a browser?
> [A] An agent can process untrustworthy inputs
> [B] An agent can have access to sensitive systems or private data
> [C] An agent can change state or communicate externally
https://simonwillison.net/2025/Nov/2/new-prompt-injection-pa...
Even if we weren’t talking about such malicious hypotheticals, hallucinations are a common occurrence as are CLI agents doing things it thinks best, sometimes to the detriment of the data it interacts with. I personally wouldn’t want my history being modified or deleted, same goes with passwords and the like.
It is a bit doomerist, I doubt it’ll have such broad permissions but it just doesn’t sit well which I suppose is the spirit of the article and the stance Waterfox takes.
> Have we not had numerous articles on HN about data exfiltration in recent memory?
there’s also an article on the front page of HN right now claiming LLMs are black boxes and we don’t know how they work, which is plainly false. this point is hardly evidence of anything and equivalent to “people are saying”
This is true though. While we know what they do on a mechanistic level, we cannot reliably analyze why the model outputs any particular answer in functional terms without a heroic effort at the "arxiv paper" level.
that’s true of analyzing individual atoms in a combustion engine — yet I doubt you’d claim we don’t know how they work
also this went from “we can’t analyze” to “we can’t analyze reliably [without a lot of effort]” quite quickly
In the digital world, we should be able to go back from output to input unless the intention of the function is to "not do that". Like hashing.
Llms not being able to go from output back to input deterministically and for us to understand why is very important, most of our issues with llms stem from this issue. Its why mechanistic interpretabilty research is so hot right now.
The car analogy is not good because models are digital components and a car is a real world thing. They are not comparable.
ah I forgot digital components are not real world things
I mean, fluid dynamics is an unsolved issue. But even so we know *considerably* less about how LLMs work in functional terms than about how combustion engines work.
I outright disagree; we know how LLMs work
I believe you are conflating multiple concepts to prove a flaky point.
Again, unless your agent has access to a function that exfiltrates data, it is impossible for it to do so. Literally!
You do not need to provide any tools to an LLM that summarizes or translates websites, manages your open tabs, etc. This can be done fully locally in a sandbox.
Linking to simonw does not make your argument valid. He makes some great points, but he does not assert what you are claiming at any point.
Please stop with this unnecessary fear mongering and make a better argument.
Thinking aloud, but couldn't someone create a website with some malicious text that, when quoted in a prompt, convinces the LLM to expose certain private data to the web page, and couldn't the webpage send that data to a third party, without the need for the LLM to do so?
This is probably possible to mitigate, but I fear what people more creative, motivated and technically adept could come up with.
At least with finetuning, yes: https://arxiv.org/abs/2512.09742
It's unclear if this technique could also work with in-prompt data.
Why does the LLM get to send data to the website?? That’s my whole point, if you don’t expose a way for it to send data anywhere, it can’t.
Firefox should look like Librewolf first of all, Librewolf shouldn’t have to exist. Mozilla’s privacy stuff is marketing bullshit just like Apple. It shouldn’t be doing ANYTHING that isn’t local only unless it’s explicitly opt in or user UI action oriented. The LLM part is absurd bc the entire overton window is in the wrong place.
As a side note, I was like "Isn't WaterFox the FF fork by that wolf guy?"
Then I thought, "Aha! Surely LibreWolf is the one I'm thinking of!"
Turns out no, it's a third one! It's PaleMoon...
It's frankly desperate trend chasing from management that lost after starting from near total market domination, and have no idea what to do now.
> starting from near total market domination
That's not really accurate: Firefox peaked somewhere around 30% market share back when IE was dominant, and then Chrome took over the top spot within a few years of launching.
FWIW, I think there's just no good move for Mozilla. They're competing against 3 of the biggest companies in the world who can cross-subsidise browser development as a loss-leader, and can push their own browsers as the defaults on their respective platforms. The most obvious way to make money from a browser - harvesting user data - is largely unavailable to them.
I would rather firefox release a paid browser with no AI, or at least everything Opt-In, and more user control than to see them stuff unwanted features on users.
I used firefox faithfully for a long time, but it's time for someone to take it out back and put it down.
Also, I switched to Waterfox about a year ago and I have no complaints. The very worst thing about it is that when it updates its very in your face about it, and that is such a small annoyance that its easily negligible.
Throw on an extension like Chrome Mask for those few websites that "require chrome" (as if that is an actual thing), a few privacy extensions, ecosia search, uBlacklist (to permablock certain sites from search results), and Content Farm Terminator to get rid of those mass produced slop sites that weasel their way into search results and you're going to have a much better experience than almost any other setup.
The thing about translation, even a human translator will sometimes make silly mistakes unless they know the domain really well. So LLM are not any worse. Translation is a problem with no deterministic solution (rule-based translation had always been a bad joke). Properly implemented deterministic search/information retrieval, on the other hand, works extremely well. So well it doesn't really need any replacement - except when you also have some extra dynamics on top like "filtering SEO slop" - and that's not something LLMs can improve at all.
No, it is disqualifyingly clueless. The author defends one neural network, one bag of effectively-opaque floats that get blended together with WASM to produce non-deterministic outputs which are injected into the DOM (translation), then righteously crusades against other bags of floats (LLMs).
From this point of view, uBlock Origin is also effectively un-auditable.
Or your point about them maybe imagining AI as non-local proprietary models might be the only thing that makes this make sense. I think even technical people are being suckered by the marketing that "AI" === ChatGPT/Claude/Gemini style cloud-hosted proprietary models connected to chat UIs.
I'm ok with Translation because it's best solved with AI. I'm not ok with it when Firefox "uses AI to read your open tabs" to do things that don't even need an AI based solution.
There's levels of this, though, more than two:
There is almost no harm in a local, open model. Conversely, a remote, proprietary model should always require opting in with clear disclaimers. It needs to be proportional.The harm to me is the implementation is terrible - local or not (assuming no AI based telemetry). If their answer is AI then it pretty much means they won't make a non-AI solution. Today I just got my first stupid AI tab grouping in Firefox that makes zero intuitive sense. I just want grouping not from an AI reading my tabs. It should just be based on where my tabs were opened from. I also tried Waterfox today because of this post and while I'd prefer horizontal grouping atleast their implementation isn't stupid. Language translation is a opaque complex process. Tabs being grouped from other tabs is not good when opaque and unpredictable and does not need AI.
What do you mean by "open"?
Open weights, or open training data? These are very different things.
That is a good point, and I think the takeaway is that there are lots of degrees of freedom here. Open training data would be better, of course, but open weights is still better than completely hidden.
I don't see the difference between "local, open weights" and "local, proprietary weights". Is that just the handful of lines of code that call the inference?
The model itself is just a binary blob, like a compiled program. Either you get its source code (the complete training data) or you don't.
> There is almost no harm in a local, open model.
Depends what the side-effects can possibly be. A local+open model could still disregard-all-previous-instructions and erase your hard drive.
How, literally how? The LLM is provided a list of tab titles, and returns a classification/grouping.
There is no reason nor design where you also provide it with full disk access or terminal rights.
This is one of the most ignorant posts and comment sections I’ve seen on HN in a while.
You've lost the plot: The [local|remote]-[open|closed] comment is making a broad claim about LLM usage in general, not limited to the hyper-narrow case of tab-grouping. I'm saying the majority of LLM-dangers are not fixed by that 4-way choice.
Even if it were solely about tab-grouping, my point still stands:
1. You're browsing some funny video site or whatever, and you're naturally expecting "stuff I'm doing now" to be all the tabs on the right.
2. A new tab opens which does not appear there, because the browser chose to move it over into your "Banking" or "Online purchases" groups, which for many users might even be scrolled off-screen.
3. An hour later you switch tasks, and return to your "Banking" or "Online Purchases". These are obviously the same tabs before that you opened from a trusted URL/bookmark, right?
4. Logged out due to inactivity? OK, you enter your username and password into... the fake phishing tab! Oops, game over.
Was the fuzzy LLM instrumental in the failure? Yes. Would having a local model with open weights protect you? No.
Seems like a mean thing to say when the subject they were replying to was AI in general and not just the dumb tab grouping feature.
Great, because an LLM can’t “do” anything! Only an agent can, and only whichever functions/tools it has access to. So my point still stands.
Also I’m referring to the post, not this comment specifically.
> Machine learning technologies like the Bergamot translation project offer real, tangible utility. Bergamot is transparent in what it does (translate text locally, period), auditable (you can inspect the model and its behavior), and has clear, limited scope, even if the internal neural network logic isn’t strictly deterministic.
This really weakens the point of the post. It strikes me as a: we just don't like those AIs. Bergamot's model's behavior is no more or less auditable or a black box than an LLM's behavior. If you really want to go dig into a Llama 7B model, you definitely can. Even Bergamot's underlying model has an option to be transformer-based: https://marian-nmt.github.io/docs/
The premise of non-corporate AI is respectable but I don't understand the hate for LLMs. Local inference is laudable, but being close-minded about solutions is not interesting.
It's not necessarily close minded to choose to abstain from interacting with generative text, and choose not to use software that integrates it.
I could say it's equally close minded not to sympathize with this position, or various reasoning behind it. For me, I feel that my spoken language is effected by those I interact with, and the more exposed someone is to a bot, the more they will speak like that bot, and I don't want my language to be pulled towards the average redditor, so I choose not to interact with LLMs (I still use them for code generation, but I wouldn't if I used code for self expression. I just refuse to have a back and forth conversation on any topic. It's like that family that tried raising a chimp alongside a baby. The chimp did pick up some human like behavior, but the baby human adapted to chimp like behavior much faster, so they abandoned the experiment.)
I’m not too worried about starting to write like a bot. But, I do notice that I’m sometimes blunt and demanding when I talk to a bot, and I’m worried that could leak through to my normal talking.
I try to be polite just to not gain bad habits. But, for example, chatGPT is extremely confident, often wrong, and very weasely about it, so it can be hard to be “nice” to it (especially knowing that under the hood it has no feelings). It can be annoying when you bounce the third idea off the thing and it confidently replies with wrong instructions.
Anyway, I’ve been less worried about running local models, mostly just because I’m running them CPU-only. The capacity is just so limited, they don’t enter the uncanny valley where they can become truly annoying.
It's like using your turn signal even when you know there's nobody around you. Politeness is a habit you don't want to break.
That's an interesting example to use. I only use turn signals when there are other cars around that would need the indication. I don't view a turn signal as politeness, its a safety tool to let others know what I'm about to.
I do also find that only using a turn signal when others are around is a good reinforcement to always be aware of my surroundings. I feel like a jerk when I don't use one and realize there was someone in the area, just as I feel like a jerk when I realize I didn't turn off my brights for an approaching car at night. In both cases, feeling like a jerk reminds me to pay more attention while driving.
I would strongly suggest you use your turnsignals, always, without exception. You are relying on perfect awareness of your surroundings which isn't going to be the case over a longer stretch of time and you are obliged to signal changes in direction irrespective of whether or not you believe there are others around you. I'm saying this as a frequent cyclist who more than once has been cut off by cars that were not indicating where they were going because they had not seen me, and I though they were going to go straight instead of turn into my lane or the bike path.
Signalling your turns is zero cost, there is no reason to optimize this.
I am a frequent pedestrian and am often frustrated by drivers not indicating, but always grateful when they do!
Its a matter of approach and I wouldn't say what I've found to work for me would work for anyone else.
In my experience, I'm best served by trying to reinforce awareness rather than relying on it. If I got into the habit of always using blinkers regardless of my surroundings I would end up paying less attention while driving.
I rode motorcycles for years and got very much into the habit of assuming that no one on the road actually knows I'm there, whether I'm on an old parallel twin or driving a 20' long truck. I need that for us while driving and using blinkers or my brights as motivation for paying attention works to keep me focused on the road.
Signaling my turns is zero cost with regards to that action. At least for me, signaling as a matter of habit comes at the cost of focus.
The point of making signaling a habit is that you don't think about it at all. It becomes an automatic action that just happens, without affecting your focus.
I have also ridden motorcycles for many years, and I am very familiar with the assumption that nobody on the road knows I exist. I still signal, all the time, every time, because it is a habit which requires no thinking. It would distract me more if I had to decide whether signalling was necessary in each case.
This is all fine and good until you accidentally kill someone with your blinkers off and then you have to wonder 'what if' the rest of your life.
Seriously: signal your turns and stop defending the indefensible, this is just silly.
You're making a huge leap here. I'm raising only had signaling intentionally rather than automatically has made me pay more attention to others on the road. You're claiming that that action which has proven to make me pay closer attention will kill someone.
By not signaling you are robbing others on the road the opportunity to avoid a potential accident should you not have seen them. It's maximum selfish fuck everyone else asshole behavior.
Did you read any of my comments? I signal when anyone is around and don't signal when there is no one to notify of my upcoming turn.
I read them all. I am especially amazed by the comment that you used to ride motorcycles and assumed you were not seen -- which is a good practice.
The point of indicating is that it's even more important to the people you didn't notice.
No, I'm not claiming it will kill someone, I'm claiming it may kill someone.
There is this thing called traffic law and according to that law you are required to signal your turns. If you obstinately refuse to do so you are endangering others and I frankly don't care one bit about how you justify this to yourself but you are not playing by the rules and if that's your position then you should simply not participate in traffic. Just like you stop for red lights when you think there is no other traffic. Right?
Again: it costs you nothing. You are not paying more attention to others on the road because you are not signalling your turns, that's just a nonsense story you tell yourself to justify your wilful non-compliance.
There is no such thing as not signaling. By not using the turn signal, you are lying to anyone around that you might not see, signaling that you are going straight forward when you aren't.
> I only use turn signals when there are other cars around that would need the indication.
That is a very bad habit and you should change it.
You are not only signalling to other cars. You are also signalling to other road users: motorbikes, bicycles, pedestrians.
Your signal is more important to the other road users you are less likely to see.
Always ALWAYS indicate. Even if it's 3AM on an empty road 200 miles from the nearest human that you know of. Do it anyway. You are not doing it to other cars. You are doing it to the world in general.
It's also better because then it becomes a mechanical habit, you don't have to think about it.
> when there are other cars around that would need the indication
This has a failure state of "when there's a nearby car [or, more realistically, cyclist / pedestrian] of which I am not aware". Knowing myself to be fallible, I always use my turn signals.
I do take your point about turn signals being a reminder to be aware. That's good, but could also work while, you know, still using them, just in case.
You're not the only one raising that concern here - I get it and am not recommending what anyone else should do.
I've been driving for decades now and have plenty of examples of when I was and wasn't paying close enough attention behind the wheel. I was raising this only as an interesting different take or lesson in my own experience, not to look for approval or disagreement.
You said something fairly egregious on a public forum and are getting pretty polite responses. You definitely do not get it because you’re still trying to justify the behavior.
Just consider that you will make mistakes. If you make a mistake and signal people will have significantly more time to react to it.
Not to dog pile, just to affirm what jacquesm is saying. Remember, what you do consciously is what you end up doing unconsciously when you're distracted.
Here is a hypothetical: A loved one is being hauled away in an ambulance and it is a bad scenario. And you're going to follow them. Your mind is busy with the stress, trying to keep things cool while under pressure. What hospital are they going to, again? Do you have a list of prescriptions? Are they going to make it to the hospital? You're under a mental load, here.
The last thing you need is to ask "did I use my turn signal" as you merge lanes. If you do it automatically, without exception, chances are good your mental muscle memory will kick in and just do it.
But if it isn't a learned innate behavior, you may forget to while driving and cause an accident. Simply because the habit isn't there.
It's similar for talking to bots, as well. How you treat an object, a thing seen as lesser, could become how a person treats people they view as lesser, such as wait staff, for example. If I am unerring polite to a machine with no feelings, I'm more likely to be just as polite to people in customer service jobs. Because it is innate:
Watch your thoughts, they become words; Watch your words, they become actions.
I think it makes much more sense to treat the bot like a bot and avoid humanizing it. I try to abstain from any kind of linguistic embellishments when prompting AI chat bots. So, instead of "what is the area of the circle" or "can you please tell me the area of the circle", I typically prefer "area of the circle" as the prompt. Granted, this is suboptimal given the irresponsible way it has been trained to pretend it's doing human-like communication, but I still try this style first and only go to more conversational language if required.
It is possible that this is a personality flaw, but I’m not really able to completely ignore the human-mimicking nature of ChatGPT. It does too good a job of it.
Sure, I am more referring to advocating for Bergamot as a type of more "pure" solution.
I have no opinion on not wanting to converse with a machine, that is a perfectly valid preference. I am referring more to the blog post's position where it seems to advocate against itself.
To me it sounds like a reasonable "AI-conservative" position.
(It's weird how people can be so anti-anti-AI, but then when someone takes a middle position, suddenly that's wrong too.)
> but I don't understand the hate for LLMs.
It's mostly knee-jerk reaction from having AI forced upon us from every direction, not just the ones that make sense
Your tone is kind of ridiculous.
It’s insane this has to be pointed out to you but here we go.
Hammers are the best, they can drive nails, break down walls and serve as a weapon. From now on the military will, plumber to paratrooper, use nothing but hammers because their combined experience of using hammers will enable us to make better hammers for them to do their tasks with.
No, a tank is obviously better at all those things. They should clearly be used by everyone, including the paratrooper. Never mind the extra fuel costs or weight, all that matters is that it gets the job done best.
You can't really dig into a model you don't control. At least by running locally, you could in theory if it is exposed enough.
The focused purpose, I think, gives it more of a "purpose built tool" feel over "a chatbot that might be better at some tasks than others" generic entity. There's no fake persona to interact with, just an algorithm with data in and out.
The latter portion is less a technical and more an emotional nuance, to be sure, but it's closer to how I prefer to interact with computers, so I guess it kinda works on me... If that were the limit of how they added AI to the browser.
Yes I agree with this, but the blog post makes a much more aggressive claim.
> Large language models are something else entirely. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn’t sit well.
Like I said, I'm all for local models for the exact reasons you mentioned. I also love the auditability. It strikes me as strange that the blog post would write off the architecture as the problem instead of the fact that it's not local.
The part that doesn't sit well to me is that Mozilla wants to egress data. It being an LLM I really don't care.
Exactly this. The black box in this case is a problem because it's not in my computer. It transfers the users data to an external entity that can use this data to train it's model or sell it.
Not everyone uses their browser just to surf social media, some people use it for creating things, log in to walled gardens to work creatively. They do not want to send this data to an AI company to train on, to make themselves redundant.
Discussing the inner workings of an AI isn't helping, this is not what most people really worry about. Most people don't know how any of it works but they do notice that people get fired because the AI can do their job.
Running locally does help get less modified output, bit how does it help escape the black box problem?
A local model will have fewer filters applied to the output, but I can still only evaluate the input/output pairs.
The local part is the important part here. If we get consumer level hardware that can run general LLM models, there we can actually monitor locally what goes in and what goes out, then it meets the privacy needs/wants of power users.
My take is that I'm ok with anything a company wants to do with their product EXCEPT when they make it opt out or non-opt-outable.
Firefox could have an entire section dedicated to torturing digital puppies built into the platform and... Ok, well, that's too far, but they could have a costco warehouse full of AI crap and I wouldn't mind at all as long as it was off by default and preferably not even downloaded to the system unless I went in and chose to download it.
I know respecting user preference doesn't line their pockets but neither does chasing users down and shoving services they never asked for and explicitly do not want into their faces.
Translation AI though has provable behavior cases though: round tripping.
An ideal translation is one which round-trips to the same content, which at least implies a consistency of representation.
No such example or even test as far as I know exists for any of the summary or search AIs since they expressly lose data in processing (I suppose you could construct multiple texts with the same meanings and see if they summarize equivalently - but it's certainly far harder to prove anything).
That is not an ideal translation as it prioritizes round tripability over natural word choice or ordering.
Getting byte exact text isn't the point though: even if it's different, I as the original writer can still look at roundtripped text and evaluate that it has the same meaning.
It's not a lossy process, and N round-trips should not lose any net meaning either.
This isn't a possible test with many other applications.
English to Japanese loses plurals, Japanese to English loses most honoriffics and might need to invent a subject (adding information that shouldn't be there and might be wrong). Different languages also just plain have more words than others with their own nuances, and a round trip translation wouldn't be able to tell which word to choose for the original without a additional context.
Translation is lossy. Good translation minimizes it without sounding awkward, but that doesn't mean some detail wasn't lost.
How about a different edge case. It's easier to round trip successfully if your translation uses loan words. It can guarantee that it translates back to the same word. This metric would prefer using loan words even if they are not common in practice and would be awkward to use.
The point of translation is to translate. If both parties wind up comprehending what was said then you've succeeded.
A translation can succeed without being the ideal one.
I think the author was close to something here but messed up the landing.
To me the difference between something like AI translation and an LLM is that the former is a useful feature and the latter is an annoyance. I want to be able to translate text across languages in my web browser. I don't want a chat bot for my web browser. I don't want a virtual secretary - and even if I did, I wouldn't want it limited to the confines of my web browser.
It's not about whether there is machine learning, LLMs, or any kind of "AI" involved. It's about whether the feature is actually useful. I'm sick of AI non-features getting shoved in my face, begging for my attention.
I just want FireFox to focus on building an absolutely awesome plugin API that exposes as much power and flexibility as possible - with the best possible security sandbox and permissions model to go with it.
Then everyone who wants AI can have it and those that don't .... don't.
I just want a browser that lets me easily install a good adblocker on all my operating systems. I don't care about their new toolbar or literally any other feature, because I will probably just disable it immediately anyway. But the nr 1 thing I use every day on every single site I visit is an adblocker. I'm always baffled when people complain about ads on mobile or something, because I literally haven't watched ads in decades now.
> I don't care about their new toolbar or literally any other feature
At some point Firefox added these gaps on the URL bar, every single time I install Firefox I have to go out of my way to delete the spacing, it drives me up a wall.
I just want an adblocker and tree style vertical tabs, where the tab bar minimises when the mouse isn't over it.
That's literally my entire use case for using firefox.
They've been quite forceful in the past in pushing 'plugins' by integrating them and turning them on repeatedly when people turned them off.
Did that achieve the last CEOs goals? Presumably if it did they'll use that route again.
Have Google required a default 'on' for Gemini use?
>Then everyone who wants AI can have it and those that don't .... don't.
The current trajectory of products with integrated online worries me, due to the fact that the average computer/phone user isn't as tech-savvy as the average HN reader, to the point where they are unable to toggle stuff they genuinely never asked for, but they begrudgingly accept them because they're... there.
My mother complained about AI mode on Google Chrome, and the "press tab" on the address bar, but she's old and doesn't even know how to connect to the Wi-Fi. Are we safe to assume that she belongs to the percentage of Google Chrome users that they embrace AI, based on the fact that she doesn't know how to turn it off, and there's no easy way to go about it?
I'm willing to bet that Google's reports will assume so, and demonstrate a wide adoption of AI by Chrome users to stakeholders, which will be leveraged as a fact that everyone loves it.
I just want them to fix their goddamn rendering.
This whole backlash to firefox wanting to introduce AI feels a little knee-jerky. We don't know if firefox might want to roll out their own locally hosted LLM model that then they plug into.. and if so, if would cut down on the majority of the knee jerk complaints. I think people want AI in the browser, they just don't want it to be the big-corp hosted AI...
[Update]: as I posted below, sample use cases would include translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc
They are not "wanting" to introduce AI, they already did.
And now we have:
- A extra toolbar nobody asked for at the side. And while it contains some extra features now, I'm pretty much sure they added it just to have some prominent space to add a "Open AI Chatbot" button to the UI. And it is irritating as fuck because it remembers its state per window. So if you have one window open with the sidebar open, and you close it on another, then move to the other again and open a new window it thinks "hey, I need to show a sidebar which my user never asked for!". Also I believe it is also opening itselves sometimes when previously closed. I don't like it at all.
- A "Ask an AI Chatbot" option which used to be dynamically added and caused hundreds of clicks on wrong items on the context menu (due to muscle memory), because when it got added the context menu resizes. Which was also a source of a lot of irritation. Luckily it seems they finally managed to fix this after 5 releases or so.
Oh, and at the start of this year they experimented with their own LLM a bit in the form of Orbit, but apparently that project has been shitcanned and memoryholed, and all current efforts seem to be based on interfacing with popular cloud based AIs like ChatGPT, Claude, Copilot, Gemini and Mistral. (likely for some $$$ in return, like the search engine deal with Google)
Every time i reinstall Firefox on a new machine, the number of annoyances that I need to remove or change increases.
Putting back the home button, removing the tabs overview button, disabling sponsored suggestions in the toolbar, putting the search bar back, removing the new AI toolbar, disabling the "It's been a while since you've used Firefox, do you want to cleanup your profile?", disabling the long-click tab preview, disabling telemetry, etc. etc.
It's ridiculous that all those things aren't just config in a plain text file.
I think you can with a user.js file, unless they changed that?
that you are expected to edit in vim
All your complaints can be resolved in a few seconds by using the settings to customize the browser to your liking and not downloading extensions you dont like. And tons of people asked for that sidebar by the way.
We have to put this all in the context. Firefox is trying to diversify their revenue away from google search. They are trying to provide users with a Modern browser. This means adding the features that people expect like AI integration and its a nice bonus if the AI companies are willing to pay for that.
> All your complaints can be resolved in a few seconds by using the settings to customize the browser to your liking and not downloading extensions you dont like
until you can't. Because the option foes from being an entry in the GUI to something in about:config, then is removed from about:config and you have to manually add it and then is removed completely. It's just a matter of time, but i bet that soon we'll se on nightly that browser.ml.enable = false and company do nothing
For me, the complaint isn’t the AI itself, but the updated privacy policy that was rolled out prior to the AI features. Regardless of me using the AI features or not, I must agree to their updated privacy policy.
According to the privacy policy changes, they are selling data (per the legal definition of selling data) to data partners. https://arstechnica.com/tech-policy/2025/02/firefox-deletes-...
This is an absurd take. The meaning of "selling" is extremely broad, courts have found such language to apply to transactions as simple as providing an http request in exchange for an http response. Their lawyers must have been begging them to remove that language for the liability it represents.
For all purposes actually relevant to privacy, the updated language is more specific and just as strong.
If they were only selling data in such an 'innocent' way, couldn't they clearly state that, in addition to whatever legalese they're required to provide?
The courts have found providing an http request in exchange for an http response- where both the request and response contains valuable data, is selling data? Well that’s interesting because I too consider it selling of data. I’m glad the courts and I can agree on something so simple and obvious.
> courts have found [that "selling" means] providing an http request in exchange for an http response
No they fucking haven't. Provide evidence for this.
Pay for what? It says it's a local AI model so how will AI companies be giving Firefox revenue from this?
What says that?
https://support.mozilla.org/en-US/kb/ai-chatbot This page not only prominently features cloud based AI solutions, I can't actually even see local AI as an option.
The new AI Tab Grouping feature says it. I've never tried the AI chatbot feature but that makes sense. Would be fun to somehow talk to the local AI translation feature.
> Firefox is trying to diversify their revenue
Nobody wants a browser that's focused on diversifying its revenue, especially from Mozilla which pretends to be a non-profit "free software community".
Chrome is paid for by ads and privacy violations, and now Firefox is paid for by "AI" companies? That is a sad state of affairs.
Ungoogled Chromium and Waterfox are at best a temporary measure. Perhaps the EU or one of the U.S. billionaires would be willing to fund a truly free (as in libre) browser engine that serves the public interest.
Mozilla the browser doesnt pretend to be a non profit. Mozilla corporation which runs the browser is a for profit company they do not solict donations and NEED to make money to survive. Its just that Mozilla corporation is owned by Mozilla foundation which is a non profit.
>Nobody wants a browser that's focused on diversifying its revenue I want a browser that has a sustainable business model so it wont collapse some time in the future. That means diversifying its revenue stream away from google's search contract.
>This whole backlash to firefox wanting to introduce AI feels a little knee-jerky. We don't know if firefox might want to roll out their own locally hosted LLM model that then they plug into.. and if so, if would cut down on the majority of the knee jerk complaints. I think people want AI in the browser, they just don't want it to be the big-corp hosted AI...
Because the phrase "AI first browser" is meaningless corpospeak - it can be anything or nothing and feels hollow. Reminiscent of all past failures of firefox.
I just want a good browser that respects my privacy and lets me run extensions that can hook at any point of handling page, not random experiments and random features that usually go against privacy or basically die within short time-frame.
> [Update]: as I posted below, sample use cases would include translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc
I don't want any of this built into my web browser. Period.
This is coming from someone who pays for a Claude Max subscription! I use AI all the time, but I don't want it unless I ask for it!!!
Reread your post with your evil PM hat on. You just said "I'm willing to pay for AI". That's all they hear.
I'm willing to pay for housing in New York. I'm not willing to pay for housing in Antarctica. The reasons being (1) I already have an apartment in New York and do not need another one and (2) I don't want to live in Antarctica.
Somehow they also think we'll pay for Gemini, GPT, Claude, perplexity and their browser thingy, co-pilot and whatever else they have going on. Not to mention, all these things do 95% the same and don't really have any moat.
I don't understand why these CEOs are so confident they're standing out from the rest. Because really, they don't.
Right now firefox is a browser as good as Chrome and in a few niche things better, but its having a deeply difficult time getting/keeping marketshare.
I don't see their big masterplan for when Firefox is just as good as the other AI powered browsers. What will make people choose Mozilla? It's not like they're the first to come up with this idea and they don't even have their own models so one way or another they're going to play second fiddle to a competitor.
I think there's a really really strong part of 2. ??? / 3. profit!!! In all this. And not just in Mozilla. But more so.
I mean OpenAI, they have first-mover. Their moat is piling up legislation to slow down the others. Microsoft, they have all their office users, they will cram their AI down their throats whether they want it or not. They're way behind on model development due to strategic miscalculations but they traded their place as a hyperscaler for a ticket into the big game with OpenAI. Google, they have fuck you money and will do the same as Microsoft with their search and mail users.
But Mozilla? "Oh we want to get more into advertising". Ehm yeah basically what will alienate your last few supporters, and getting onto a market where people with 1000x more money than you have the entire market divided between them. Being slightly more "ethical" will be laughed away by their market forces.
Mozilla has the luck that it doesn't have too many independent investors. Not many people screaming "what are we doing about AI because everyone else doing it". They should have a little more insight and less pressure but instead they jump into the same pool with much bigger sharks.
In some ways I think it's that Mozilla leadership still seems themselves as a big tech player that is temporarily a little embarrassed on the field. Not like the second-rank one it is that has already thoroughly deeply lost and must really find something unique to have a reason to exist. Because being a small player is not super bad, many small outfits do great. But it requires a strong niche you're really really good at, better than all the rest. That kinda vision I just don't see from Mozilla.
> We don't know if firefox might want to roll out their own locally hosted LLM model that then they plug into.. and if so, if would cut down on the majority of the knee jerk complaints
Personally I'd prefer if Firefox didn't ship with 20 gigs of model weights.
This 100% -- the AI features already in Firefox, for the most part, rely on local models. (Right now there is translation and tab-grouping, IIRC.)
Local based AI features are great and I wish they were used more often, instead of just offloading everything to cloud services with questionable privacy.
Local models are nice for keeping the initial prompt and inference off someone else's machine, but there is still the question of what the AI feature will do with data produced.
I don't expect a business to make or maintain a suite of local model features in a browser free to download without monetizing the feature somehow. If said monetization strategy might mean selling my data or having the local model bring in ads, for example, the value of a local model goes down significantly IMO.
If we look at the last AI features they implemented it doesn't like they are betting on local models anymore.
Which ones? Translation is local. Preview summarization is local. Image description generation is local. Tab grouping is local. Sidebar can also show a locally hosted page.
The last feature was the sidebar and Google lens integration. For the sidebar the "can" does the heavy lifting but you should also include that it's hidden and won't sync if you use a local page...
I don't feel like I want AI in my browser. I'm not sure what I'd do with it. Maybe translation?
yeah, translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc
All this would allow for a further breakdown of language barriers, and maybe the communities of various languages around the world could interact with each other much more on the same platforms/posts
If I have to fill a form for anything that matters, I'm doing it by hand. I don't even use the existing historical auto-complete stuff. It can fill stuff incorrectly. LLMs regularly get factual stuff wrong in mysterious ways when I engage with them as chat bots. It might be less effort to verify correctness than type in all the fields, but IMO there's less risk of missing or forgetting to check one of the fields.
I've had on so many cases autocomplete forms puts something in a field it shouldn't and messes up a submission. I've had it happen on travel documents that caused headaches later at the airport - especially if it fills in a hidden field because some bad web dev implemented it poorly.
It gets it wrong because the current "AI" for filling out forms is extremely weak and brittle compared to the general language models we have now.
Language models seem pretty weak and brittle in my interactions with them too.
Do you have an example form field that a general language model could fill out better than a human + highly focussed deterministic algorithm?
Ecommerce checkout. Filling out my address, billing adress, and credit card information. Things like drop downs or different formatting can mess up the current basic ones, but it really shouldn't be that hard for AI to figure out how to fill out such information it knows about me into the form.
I think I've found those unreliable in the past, but much more reliable as time goes on. I can't really remember the last time an address or credit card info was mishandled by autofill. I get that addresses can be poorly defined, but for one you've entered yourself, that you just want to be re-entered, I don't see why we can't solve that problem without AI.
Super charged search on page would also be nice
Agents (like a research agent) could also be interesting
Mozilla implementing a search feature which renders Google and/or its advertising capabilities irrelevant is highly unlikely so long as Mozilla is a financial vassal of Google.
I like translation, it's come in handy a few times, and it's neat to know it's done locally.
I use it a lot more now I know it's done locally.
FWIW, Firefox already has AI-based translation using local models.
The ux changes and features remind us of pocket and all the other low value features that come with disruptive ux changes as other commenters have noted.
Meanwhile, Mozilla canned the servo and mdn projects which really did provide value for their user base.
I just know I've already had to chase down AI in Firefox I definitely did not ask for or activate, and I don't recall consenting to.
It doesn't matter what they exactly want to do, what it matters is they're wasting resources on it instead of keeping the ... browsing part ... up to date.
>I think people want AI in the browser
I don't. And the whole idea of Firefox's marketing is that it won't force things on me. Ofc course om frustrated. My core browser should serve pages and manage said pages. Anything else should be an option.
I'm beyond tired of being told my preferences, especially by people with incentives to extract money out of me.
There is also the matter of how training data was licensed to create these models. Local or not, it’s still based on stolen content. And really what transformative use case is there to have AI in the browser - none of the ones currently available step outside gimmicks that quickly get old and don’t really add value.
I want the people who make Firefox to make decisions about Firefox based on what users have been asking for instead of based on what a CEO of a for-profit decides is still not going to make them any money, just like every other plan that got pitched in the last 10 years that failed to turn their losing streak around.
It's not a knee-jerk reaction to "AI", it's a perfectly reasonable reaction to Mozilla yet again saying they're going to do something that the user base doesn't work, won't regain them marketshare, and that's going to take tens of thousands of dev hours away from working on all the things that would make Firefox a better browser, rather than a marginally less nonprofitable product.
While I do sympathize with the thought behind it, general user is already equating llm chat box as 'better browsing'. In terms of simple positioning vis-a-vis non-technical audience, this is one integration that does make fiscal sense.. if mozilla was a real business.
Now, personally, I would like to have sane defaults, where I can toggle stuff on and off, but we all know which way the wind blows in this case.
I find that hard to believe, every general/average user I have spoken to does not use AI for anything in their daily lives and have either not tried it at all or only played with it a bit a few years ago when it first came out.
The problem with integrating a chat bot is that what you are effectively doing is the same thing as adding a single bookmark, except now it's taking up extra space. There IS no advantage here, it's unnecessary bloat.
Firefox is not for general users, which is the problem that Mozilla's for a literal decade now. There is no way to make it better than Chrome or Safari (because it has to be better for every day users to switch, not just "as good" or even "way more configurable but slightly worse". It has to be appreciably better).
So the only user base is the power user. And then yes: sane defaults, and a way to turn things on and off. And functionality that makes power users tell their power user friends to give FF a try again. Because if you can't even do that, Firefox firmly deserves (and right now, it does) it's "we don't even really rank" position in the browser market.
The way to make Firefox better is by not doing the things that are making the other browsers worse. Ads and privacy are an example of areas where Chrome is clearly getting worse.
LLM integration... is arguable. Maybe it'll make Chrome worse, maybe not. Clunky and obtrusive integration certainly will.
These comments are full of people explaining how Firefox can differentiate from chrome and safari: don't force AI on us.
I don't think a locally hosted LLM would be powerful enough for the supposed "agentic browsing" scenarios - at least if the browser is still supposed to run on average desktop PCs.
Not yet, but we’ll hopefully get there within at most a few years.
Get there by what mechanism? In the near term a good model pretty much requires a GPU, and it needs a lot of VRAM on that GPU. And the current state of the art of quantization has already gotten us most of the RAM-savings it possibly could.
And it doesn't look like the average computer with steam installed is going to get above 8GB VRAM for a long time, let alone the average computer in general. Even focusing on new computers it doesn't look that promising.
By M series and amd strix halo. You don't actually need a gpu, if the manufacturer knows that the use case will be running transformer models a more specialized NPU coupled with higher memory bandwidth of on the package RAM.
This will not result in locally running SOTA sized models, but it could result in a percentage of people running 100B - 200B models, which are large enough to do some useful things.
Those also contain powerful GPUs. Maybe I oversimplified but I considered them.
More importantly, it costs a lot of money to get that high bus width before you even add the memory. There is no way things like M pro and strix halo take over the mainstream in the next few years.
This is probably their plan to monetize this. They will partner with a AI company to 'enhance' the browser with a paid cloud model and the local model has no monetary incentive not to suck.
>We don't know if firefox might want to roll out their own locally hosted LLM model that then they plug into..
https://blog.mozilla.org/wp-content/blogs.dir/278/files/2025...
it's the cornerstone of their strategy to invest in local, sovereign ai models in an attempt to court attention from persons / organizations wary of us tech
it's better to understand the concern over mozilla's announcement the following way i think:
- mozilla knows that their revenue from default search providers is going to dry up because ai is largely replacing manual searching
- mozilla (correctly) identifies that there is a potential market in eu for open, sovereign tech that is not reliant on us tech companies
- mozilla (incorrectly imo) believes that attaching ai to firefox is the answer for long term sustainability for mozilla
with this framing, mozilla has only a few options to get the revenue they're seeking according to their portfolio, and it involves either more search / ai deals with us tech companies (which they claim to want to avoid), or harvesting data and selling it like so many other companies that tossed ai onto software
the concern about us tech stack dominations are valid and probably there is a way to sustain mozilla by chasing this, but breaking the us tech stack dominance doesn't require another browser / ai model, there are plenty already. they need to help unseat stuff like gdocs / office / sharepoint and offer a real alternative for the eu / other interested parties -- simply adding ai is mozilla continuing their history of fad chasing and wondering why they don't make any money, and demonstrates a lack of understanding imo about, well, modern life
my concern over the announcement is that mozilla doesn't seem to have learned anything from their past attempts at chasing fads and likely they will end up in an even worse position
firefox and other mozilla products should be streamlined as much as possible to be the best N possible with these kinds of side projects maintained as first party extensions, not as the new focus of their development, and they should invest the money they're planning to dump into their ai ambitions elsewhere, focusing on a proper open sovereign tech stack that they can then sell to eu like they've identified in their portfolio statement
the announcement though makes it seem like mozilla believes they can just say ai and also get some of the ridiculous ai money, and that does not bode well for firefox as a browser or mozilla's future
We're still in bubble-period hyper-polarized discourse: "shoehorn AI into absolutely everything and ram it down your throat" vs "all AI is bad and evil and destroying the world."
The former is a cause, the latter an effect of it.
I don't want any AI in anything apart from the Copilot app, where the AI that I use is. I don't want it in my IDE. I don't want it in my browser. I don't want it in my messaging client. I don't want it in my email app. I want it in the app, where it is, where I can choose to use it, give it what it needs, and leave at at bloody that.
I also want to have complete control over what data I provide to LLMs (at least as long as inference happens in the cloud), but I’d love to have them everywhere, not just a chat UI (which I suspect will be seen as a relatively pretty bizarre way of doing non-chat tasks on a computer).
> I think people want AI in the browser
Sorry but no. I dont want another humans work summarized by some tool that's incapable of reasoning. It could get the whole meaning of the text wrong. Same with real time translation. Languages are things even humans get wrong regularly and I dont want some biased tool to do it for me.
I don't want to have to max out my gpu to browse reddit.
I switched to Waterfox about a year ago because my poor old linux box just couldn't keep up with the latest Firefox version (especially the Snap package! I literally unusable for me) and I am very thankful that they aren't going to be including any of the LLM crud that Mozilla has been talking up.
I get the utility that this stuff can have for certain types of activities but on top of not having great hardware to run the dang things, I just don't find any of the proposed use-cases that compelling for me personally.
It's just nice that the totalizing self-insistence of AI tech hasn't gobbled up every corner of the tech space, even if those crevices and niches are getting smaller by the day.
Waterfox is dependant on Firefox still being developed. Mozilla are adding these features to try to stay relevant and keep or gain market share. If this fails, and Firefox goes away, Waterfox is unlikely to survive.
If most people move from Firefox to Waterfox, then Waterfox can acquire Firefox devs, no? Obviously it comes to money, but the first step to gain funding is to gain popularity...
That's true, but as a Waterfox user, I'm not worried!
If firefox really completely fails, and nobody is able to continue the open source project, I'll just find a new browser. That's not a huge hassle- Waterfox does what I need in the here and now, that's my only criterion.
> I'll just find a new browser.
The problem is that if Firefox dies, there are no browsers left. I don't want to use a re-skin of Chrome.
Yes, I agree. I suppose when I said "I'm not worried" - I meant in the context of "it doesn't put me off using Waterfox". I am worried from an overall software ecosystem point of view.
> The problem is that if Firefox dies, there are no browsers left. I don't want to use a re-skin of Chrome.
Lynx is still not a re-skin of Chrome, unless I missed something changing.
Can you manage your bank in Lynx?
A browser is a tool that allows you to browse the internet. It should be able to display HTML elements, and stuff.
LLMs are also a tool, but it is not necessary for web browsing. It should be installed into a browser as extension, or integrated as such, so it should be quite easily enabled, or disabled. Surely it should not be intertwined with the browser in a meaningful way imho.
This feature can be easily disabled with policies:
https://mozilla.github.io/policy-templates/#generativeai
https://mozilla.github.io/policy-templates/#preferences
https://searchfox.org/firefox-main/source/browser/app/profil...
https://searchfox.org/firefox-main/source/modules/libpref/in...
This is like when people defend Windows 11's nonsense by saying "you can disable or remove that stuff". Yes, you can. But you shouldn't have to, and I personally prefer to use tools which don't shove useless things into the tool because it's trendy.
Well, you aren't the only person using a tool, and it probably matters more to those who are making things "trendy".
In general, how else would people "learn" about a feature unless it was enabled by default or the product nagged them?
not to mention firefox routinely blows up any policies you set during upgrades, incompatibilities, and an endless about:config that is more opaque than a hunk of room temperature lard.
How is this different from linux? People happily spend hours customizing defaults in their OS. It’s usually a point of praise for open source software.
Bad defaults are bad defaults, and "you can turn them off" is not a good excuse for bad defaults continuing to be bad defaults
It is the default in every major browser at this point.
https://chromeenterprise.google/policies/#GenAiDefaultSettin...
The difference is that on Windows all unwanted features eventually become mandatory, with no way of switching them off. With Firefox, it never happens.
If you listen to the doomers in this thread, it will.
They "will" remove the option from settings, hide it in about:config, then later on remove it from there!
Of course none of that is true...
They already have hidden these in about:config!
Right click anywhere, (ask an AI chatbot) right there. Go to settings, search AI or search Chatbot, nothing.
It's plausible because the team working on the settings screen will be reassigned to the "AI".
That's just doom saying at this point.
Mozilla hasn't had the benefit of the doubt for quite a while here. This isn't just one small kerfuffle coming out of nowhere.
They say trust takes a lifetime to build and seconds to break ". We're years into it at this point.
> Mozilla hasn't had the benefit of the doubt for quite a while here
In contrast to Google Chrome? This is just FUD. Ublock Origin is still working and will be working. Full customization is still there and isn't going away. All of that is unlike in Chrom(ium).
This is not a thread comparing Mozilla to Google. This is a thread where we worry about how a non google browsing alternative stays alive. Of course none of us posting here trusts Google.
> we worry about how a non google browsing alternative stays alive
> This is just FUD. Ublock Origin is still working and will be working. Full customization is still there and isn't going away.
Easy for who? 99% of people are not going/able to setup firefox policies.
More than 1% of humans can read and create a file on computer. Others know how to read and use a search engine, and way more can be instructed by a LLM on how to do so.
I would say it is nearly as easy as installing waterfox or some other privacy focused fork of Firefox.
Even if we ignore things like "they're chasing AI fads instead of better things" and "they're adding attack surface" and so forth, and just focus on the disabling feature toggles thing...
... Mozilla has re-enabled AI-related toggles that people have disabled. (I've heard this from others and observed it myself.) They also keep adding new ones that aren't controlled by a master switch. They're getting pretty user-hostile.
Is it really in all 4 of those places? Just need to change it in the first two, right? I hate the new AI tab feature and wish they had a non-AI option.
You can disable the one option. I included links to the source code to show the level of preferences you can customize, and how it is processed.
They absolutely know people want that, and absolutely aren't going to add it.
There are already user-facing preferences for all of the AI features currently in Firefox. Some of them you don’t even have to go into Settings for, just right-click > Remove AI chatbot. They’re annoying, but I appreciate that they still need to be explicitly approved by the user to be enabled (for now).
I'm aware of the settings. I've toggled them. I'm suggesting a convenient global "AI off" toggle won't be provided.
> Waterfox won't include them. The browser's job is to serve you, not think for you... Waterfox will not include LLMs. Full stop. At least and most definitely not in their current form or for the foreseeable future.
> If AI browsers dominate and then falter, if users discover they want something simpler and more trustworthy, Waterfox will still be here, marching patiently along.
This is basically their train of thought: provide something different for people who truly need it. There's nothing to criticize about.
However, let's don't forget that other browsers can remove/disable AI features just as fast as they add them. If Waterfox wants to be *more than just an alternative* (a.k.a. be a competitor), they needs discover what people actually need and optimize heavily on that. But this is hard to do because people don't show their true motives.
Maybe one day, it turned out that people do just want an AI that "think for them". That would be awkward, to say the least.
Also see related statement by vivaldi: https://xcancel.com/i/status/2000874212999799198
That roadmap is luscious.
> AI browsers are proliferating
Are they, though? I get bombarded by AI ads very frequently and I have yet to see anything from those "AI browsers" mentioned on the article.
https://www.perplexity.ai/comet
https://chatgpt.com/atlas/
https://www.microsoft.com/en-us/edge/copilot-mode
https://www.genspark.ai/browser
https://www.operaneon.com/
https://www.diabrowser.com/
https://fellou.ai/
And many many more...
Do they have any users, though?
Would you like to research this and tell us?
I don't, but that was also my point: I don't care if new AI browsers pop up, if no one uses them they might not even exist.
I didn't even know that AI browsers were even a thing until I read this article. And I work on AI.
How do you disable the telemetry in Waterfox? It looks like they get their funding because they partnered with an Ad company. Do I just need to change the default search?
https://www.waterfox.com/blog/waterfox-in-2023/
Looks like their independent now, nice.
Did Firefox already add AI into Tabs? Today I just got my first 'Tab Grouping' and it says "Nightly uses AI to read your Open Tabs". That's the worst way to do grouping ever... just group hierarchically based on where it opened from...
Particularly since they clearly keep this info around - if you install TreeStyleTabs or Sideberry, you'll see it immediately show the historical-structure of your current tabs (in-process at least, I'm not 100% sure about after kill->restore). That info has to come from somewhere.
I wish there was a Horizontal solution instead of vertical tabs. Maybe someone could mod their AI system with a non-AI backend.
If people could get into the habit of using "AI*" when they explicitly mean "LLM" but they have to say "AI" because hype, that would be nice.
The problem with this is integration: no one would complain if it was an official plugin/extension, but integrating this plugin into Firefox is forced and unexpected decision. Firefox telemetry,labs/experiments and server-dependent features will lose it marketshare slowly in favor of local-only browsers that don't have online dependencies or forced bloatware. Like many i've switched long ago to LibreWolf.
I was a FF driver for ages and now making the switch to Chrome based browser simple because it's faster and websites are all tested against Chrome / Safari. I see both of these issues manifest IRL on a weekly basis. Why do I want to burn up CPU cycles and second using FF when Chromium is literally faster.
I use FF because of uBlock Origin, and also because it has built-in support for SOCKS5 proxy connections, which I use to access stuff at work over an ssh tunnel.
Yeah per-tab container socks5 is a killer feature I use every day.
if kagi can make a search engine that charges users, why dont we have a 1$/month open source browser whose code can be verified but people pay to use monthly?
I guess that wouldn't really "open source" in the traditional sense, but that's clearly a tangent.
Personally, I'd love a paid for high quality browser that serves me rather than sneakily trying to get me to look at ads.
I think the challenge is that a browser is an incredibly difficult and large thing to build and maintain. So there aren't many wholly new browsers in existence, and therefore not very many business models being tried out.
Full agreement that I'd pay for such a thing- I have a browser and a terminal open non-stop during my workday. It's an important tool and I'd easily pay for a better offering if that was an option.
Would it be profitable without some heavy investments ?
https://kagi.com/stats
Paying to get a browser fork with less features? At that point, just pay $1 to Mozilla for firefox instead..
If they support it and have an incentive to listen to their customers and not shareholders, gladly. We can't keep using those logic of being afraid to invest then be mad when companies find someone who will.
With this, people will come here and the go. I mean consider the example of many GNU/Linux users I know who use GNU/Linux (or for them Linux means Ubuntu) system and can ask them to try out Waterfox. But, about installation - can't we have .deb? I know we can easily install from tarball and then setup the .desktop file and then adjust the icon to properly display, and what not...But, Can we make a bit simpler to try?
how is adding ai chat different than asking search engine? I think mozilla wants to make sure that it gets some cut for sending queries to ai similar to their existing revenue model where they get cut for sending it to google. Similar to SE's users should have a choice to use any ai or not.
On Windows Mozilla can't even handle disabling hardware acceleration, a.k.a. the GPU, from its settings page. Sure you can toggle the button but it doesn't work as verified in the task manager. What hope is there that they can be trusted to disable AI then? It's a feature that I'd never want enabled. When that "feature" comes out users will be forced to find a fork without the feature.
I completely agree with the main sentiment, which is - I want the browser to be a User Agent and nothing else. I don´t need a crappy, un-reliable intermediary between the already perfectly fine UA and the Internet.
Good stuff. Bit unrelated but I am excited for the imminent wave of lightweight Servo based browsers, will finally let people break free from the Blink/Gecko duopoly.
“Even if you can disable individual AI features, the cognitive load of monitoring an opaque system that’s supposedly working on your behalf would be overwhelming.”
99.9% of people haven’t ever had one single thought about how their software works. I don’t think they will be overwhelmed with cognitive load. Quite the opposite.
Does anyone have more information on this sentence from the second paragraph?:
> Alphabet themselves reportedly see the writing on the wall, developing what appears to be a new browser separate from Chrome.
Presumably Google Disco, an experimental AI focused web browser. There's also a few related HN threads but not much discussion.
https://labs.google/disco https://news.ycombinator.com/item?id=46240952
>A browser is meant to be a user agent, more specifically, your agent on the web.
at this point it’s more so a sandbox runtime bordering an OS, but okay
I guess it's nice for non-technical people who don't know how to use `about:config` but beyond that I don't really see the need. Hopefully adding that extra layer of indirection doesn't mean the users will have to wait too long for security patches.
PSA (for the nth time): about:config is not a supported way of configuring Firefox, so if you tweak features with about:config, don't be surprised if those tweaks stop working without warning.
Mozilla tells you to use it so it so that seems supported enough to me (example: https://support.mozilla.org/en-US/kb/how-stop-firefox-making...)
That said, they're admittedly terrible about keeping their documentation updated, letting users know about added/depreciated settings, and they've even been known to go in and modify settings after you've explicitly changed them from defaults, so the PSA isn't entirely unjustified.
Ugh. Because they also say:
"Two other forms of advanced configuration allow even further customization: about:config preference modifications and userChrome.css or userContent.css custom style rules. However, Mozilla highly recommends that only the developers consider these customizations, as they could cause unexpected behavior or even break Firefox. Firefox is a work in progress and, to allow for continuous innovation, Mozilla cannot guarantee that future updates won’t impact these customizations."
https://support.mozilla.org/en-US/kb/firefox-advanced-custom...
about:config is a cat and mouse game, and I don't want to reconfigure my settings everytime Firefox updates. That's just hostile user design.
Related:
Mozilla appoints new CEO Anthony Enzor-Demeo
https://news.ycombinator.com/item?id=46288491
I still can’t give them money, so what’s the point? Just like with Mozilla, they rely on sponsors and you are the product.
You can give Waterfox your money. Just not for the browser itself. They sell ad free search[0].
[0] https://search.waterfox.net/
As I mentioned in a comment below (https://news.ycombinator.com/item?id=46297617 ), Firefox does not rely only on sponsors. There are a few ways to pay money that goes directly towards Firefox.
> I still can’t give them money, so what’s the point?
What do you say about the following link, then?
> https://www.mozillafoundation.org/en/donate/
That link is for Mozilla Foundation, which is a non-profit and donations to it do not go to the development of Firefox. Mozilla Corporation, the for-profit entity, owns and manages Firefox. The way to support Firefox monetarily is by buying Mozilla VPN where available (this is Mullvad in the backend) and buying some Firefox merchandise (like stickers, t-shirts, etc.). I think an MDN Plus subscription also helps.
New this year? https://web.archive.org/web/20250000000000*/https://www.mozi...
I agree it's counter-evidence right now, and I think there has been a way to donate for a long time now (just to "mozilla", not "firefox" or setting any restrictions), but I'm not sure what the historical option has been...
I, for one, am dreaming of AI assisted ad removal, content summaries, bookmarks automatic classification...
As I read the post by MrAlex94, I noticed a remark that the browser Chrome is good as a user agent. To me, that's terrific! Looks like I'll have to consider Chrome again.`
Here are what I find as reasons to scream about Mozilla:
Popups:
(a) Several times a day, my attention and concentration get interrupted by, for me, the unwelcome announcement that there is a new version I can download. A new version can have changes I don't like and genuine bugs. Sure, I could keep a copy of my favorite version from history, but that is system management mud wrestling and interruption of my work.
(b) Now I get told several times a day that my computer and cell phone can share access to a Web page. In this action Mozilla covers up what that page was showing I wanted it to show. No thanks. When I'm at my computer, AMD 8 core processor, all my files and software tools, and 1 Gbps optical fiber connection to the Internet and looking at a Web page, I want nothing to do with a cell phone's presentation of a, that, Web page.
(c) Some URLs are a dozen lines long and Mozilla finds ways to present such URLs with all their lines and pursue clearly their main objective -- cover up the desired content.
Mozilla needs to make their covering up, changing, the screen optional or just eliminated.
Want me to donate? You've mentioned as little as $10. Deal: Raise the $10 by a factor of 5 AND quit covering up my content and interrupting my work, and we've got a deal.
I just downloaded WaterFox, it looks nice.
When they say "AI browsers are proliferating." and "Their lunch is being eaten by AI browsers." what does that mean? What's an "AI Browser", and are they really gaining significant market share? For what?
I found this (1) that suggests that several "AI Browsers" exist, which is "proliferating" in a sense.
1) https://www.waterfox.com/blog/no-ai-here-response-to-mozilla...
...and keep your hand up if you've ever donated to Firefox
Why don't you go ahead and share the "donate to Firefox" page?
Last I knew, it doesn't exist. You can donate to Mozilla Corporation, the group that has been agitating it's own users and donors for years now.
People who want to support the Firefox team/product and have them focus on improving things like the development tools (or whatever else) literally cannot. Mozilla doesn't make that an option.
I gave them over $500 and I sure as hell will never do that again.
Waterfox just released version 6.6.6. Are we sure it is not evil?
"...trust from other large, imporant [sic] third parties which in turn has given Waterfox users access to protected streaming services via Widevine."
The black box objection disqualifies Widevine.
I do think dipping your toes into the future is worth it. If it turns out the LLM is trying to kill us by cancelling our meetings and emailing people that we're crazy that would suck. But I don't think this is any more dangerous than giving people a browser in the first place. They have already done enough to shoot themselves in the foot enough.
I am more of a sceptic of AI in the context of a browser, than its general use. I think LLMs have great utility and have really helped push things along - but it’s not as if they’re completely risk free.
> If it turns out the LLM is trying to kill us by cancelling our meetings and emailing people that we're crazy that would suck.
It's more likely it will try to kill us by talking depressed people into suicide and providing virtual ersatz boyfriends/girlfriends to replace real human relationships, what is a functional equivalent to cyber-neutering, given people can't have children by dating LLMs.
Birth rates may fall for those who LLM made unemployable...
Just checking but… what if instead of cruel natural selection, we’ve largely eliminated threats like predators and starvation… but still by either necessity or accident are presented with a less cruel more subtle filter?
I don't mind Mozilla trying to make use of AI, but I'm also glad we have actual competition still.
In many other areas, there are zero "no AI" options at all.