This is amazing. My entire web browser session state for every private and personal website I sign onto every day will be used for training data. It's great! I love this. This is exactly the direction humans should be going in to not self-destruct. The future is looking bright, while the light in our brains dims to eventual darkness. Slowly. Tragically. And for what purpose exactly. So cool.
The movie that doesn't get enough credit at predicting the future, or what is now the present, is Captain America: The Winter Soldier. DOGE, Palantir, Larry Ellison's vision of nonstop AI surveillance, and all the data-sucking tech companies swearing fealty to the orange authoritarian are bringing the plot of that movie directly into reality, and I'm always surprised that it never gets mentioned.
Ha. That's the most outlandish part of the plot. In terms of enforcement and control, Black Mirror's Metalhead episode seems the more likely vision, where the robotic dogs are comparable to drones.
When I ask 20-somethings whether they’ve seen The Matrix, the answer is usually ‘no’. They have little idea what they’re working towards, but are happy to be doing it so they have something to eat.
Yet they have seen Black Mirror and the like, which also portrays the future we’re heading towards. I’d argue even better, because The Matrix is still far off.
But also, it’s not the 20-somethings building this; the people making decisions are in their 40s and 50s.
The Matrix was inspired by the Gnostic schools of thought. The authors obviously knew loads about esoteric spirituality and the occult sciences. People have been suggesting that we are trapped in a simulacrum / matrix for over two thousand years. I personally believe The Matrix was somewhat of a documentary. I'm curious: why do you think a concept such as the one presented in The Matrix is still far off?
I think we are close to WALL-E or Fifteen Million Merits, maybe even almost at the Oasis (as seen by IOI). But we have made little progress in direct brain stimulation of senses. We are also extremely far from robots that can do unsupervised complex work (work that requires a modicum of improvisation).
Of course we might already be in the Matrix or a simulation, but if that’s the case it doesn’t really change much.
The difference is that we don't have credits the way the characters do in Brooker's universe; we have social clout in the form of upvotes, likes, hearts, retweets, streaming subs, etc. most of which are monetised in some form or are otherwise a path to a sponsorship deal.
The popularity contest this all culminates in is, in reality, much larger in scale than what was imagined in Black Mirror. The platform itself is the popularity contest.
Some would argue that most stories in Western societies are echoing the Bible. The Matrix is in many ways the story of Jesus (Morpheus is John the Baptist).
Brain/computer interface that completely simulates inputs which drive perceptions which are indistinguishable from reality. At least, that’s what is portrayed in the movie. I’m not OP but this to me seems far off.
Fair point and thank you for sharing it! It definitely does feel far off in that aspect. I suppose though, that if we are all trapped in a false reality it is impossible to know (without escaping the false reality) how advanced base reality actually is. I always interpreted the whole jacking into the Matrix thing, metaphorically, but with a literal interpretation the OP's comment makes much more sense to me. Thanks again!
The Matrix was a direct rip-off of the Ghost in the Shell series, which did a much better job of capturing the essence of the issue in depth (the writers have all but admitted it, and there are videos out there that do scene-by-scene comparisons). Ghost in the Shell is majorly influenced by Buddhism. While there are obvious overlaps with Platonism (which forms the core of Gnosticism: salvation through knowledge of the real world, with the current world ~= suffering and not real), it wouldn't be correct to attribute Gnosticism as the influence behind The Matrix.
I enjoyed Silo, but I think in the real world, completely destroying the world's ecosystem and a fraction of mankind surviving in tiny isolated bunkers for generations is more fantasy than sci-fi...
Perplexity released theirs earlier, and as far as I know, they do not use any of your data like that for training. It's really a shame if that's how OpenAI is using your data. I was going to try their coding solution, but now I'm just flat out blacklisting them and I'll stick to Claude. For whatever reason Claude Code just understands me fully.
I think it's more like: investors permanently unhappy because they were promised ownership of God and now we're built out they're getting a few percent a year instead at best. Squeeze extra hard this quarter to get them off the Board's backs for another couple of months.
Investors are never happy long term because even if you have a fantastic quarter, they'll throw a tantrum if you don't do it even better every single time.
I have no plans to download Atlas either, but I think your browsing isn't used for training unless you opt in.
> By default, we don’t use the content you browse to train our models. If you choose to opt-in this content, you can enable “include web browsing” in your data controls settings. Note, even if you opt into training, webpages that opt out of GPTBot, will not be trained on.
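For reference, the GPTBot opt-out the quote mentions happens on the website's side, via robots.txt. Per OpenAI's published crawler documentation, a site that wants its pages excluded from training can add something like:

```text
User-agent: GPTBot
Disallow: /
```

So that part of the policy is enforced per-site by publishers, not per-user in your settings.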
Knowing this is the direction things were headed, I have been trying to get Firefox and Google to create a feature that archives your browser history and pipes a stream of it in real time so that open-source personal AI engines can ingest it and index it.
AFAICS this has nothing to do with "open-source personal AI engines".
The recorded history is stored in a SQLite database and is quite trivial to examine[0][1]. A simple script could extract the information and feed them to your indexer of choice. Developing such a script isn't the task for an internet browser engineering team.
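As a sketch of how trivial that extraction is: Firefox keeps history in the `moz_places` table of `places.sqlite`, and a few lines of Python can stream it out (the `index_document` call at the end is a hypothetical stand-in for whatever indexer you choose):

```python
import sqlite3

# Work on a copy of places.sqlite: Firefox holds the live file locked
# while running. The path here is illustrative.
DB_PATH = "places.sqlite"

def recent_history(db_path, limit=100):
    """Yield (url, title, visited_at_unix_seconds) from moz_places."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT url, title, last_visit_date FROM moz_places "
            "WHERE last_visit_date IS NOT NULL "
            "ORDER BY last_visit_date DESC LIMIT ?",
            (limit,),
        )
        for url, title, ts in rows:
            # last_visit_date is stored as microseconds since the Unix epoch
            yield url, title, ts / 1_000_000
    finally:
        conn.close()

# Feed each entry to your indexer of choice (hypothetical function):
# for url, title, visited_at in recent_history(DB_PATH):
#     index_document(url, title, visited_at)
```

Run on a schedule, this already gets you near-real-time ingestion without any browser-side feature.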
The question remains whether the indexer would really benefit from real-time ingestion while browsing.
Due to the dynamic nature of the Web, URLs don't map to what you've seen. If I visit a URL at a certain time, the content I see is different from the content you see, or even from what I'd see visiting the same URL later. For example, if we want to know whether the tweets I'm seeing are the same as the tweets you're seeing and haven't been subtly modified by an AI, how do you do that? In the age of AI programming people, this will be important.
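One minimal (and admittedly partial) answer is to snapshot what each viewer actually received and hash the bytes, so two views of the same URL can at least be compared after the fact. A sketch, with the fetch step left out:

```python
import hashlib
import time

def snapshot(url, body):
    """Record what *this* viewer saw at *this* moment: the URL alone
    identifies nothing, so pair it with a digest of the bytes served."""
    return {
        "url": url,
        "fetched_at": time.time(),
        "sha256": hashlib.sha256(body).hexdigest(),
    }

def same_content(a, b):
    """Two snapshots of the same URL match only if the served bytes did."""
    return a["url"] == b["url"] and a["sha256"] == b["sha256"]

mine = snapshot("https://example.com/feed", b"<html>tweets A</html>")
yours = snapshot("https://example.com/feed", b"<html>tweets B</html>")
print(same_content(mine, yours))  # different bytes -> False
```

This only proves two sessions diverged, not which one was "real", but without some such record the question isn't even answerable.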
I understand GP to mean they want to browse normally and have that session's history feed into another indexing process via some IPC like D-Bus. It's meant to receive human events from the browser.
Chrome Devtools MCP on the other hand is a browser automation tool. Its purpose is to make it trivial to send programmed events/event-flows to a browser session.
Personally I think it would be awesome if we could browse a 1999 version of the web. Better than the crap we have today, even if it is all just AI generated.
Hey now, don’t forget how they will just be able to hand over everything you’ve ever done to the government! We know no government or power would ever abuse that.
Hate to be the dum-dum, but what's leading to humanity's self-destruction here? Loss of privacy? Outsized corporate power? Or, is this an extreme mix of hyperbole and sarcasm?
Well, you could always focus on the ridiculous environmental impact of LLMs. I read once that asking ChatGPT used 250x as much energy as just googling. But now Google has incorporated LLMs into search, so…
I grew up on the banks of the Hudson River, polluted by corporations dumping their refuse into it while reaping profits. Anthropic/openai/etc are doing the same thing.
That's clearer. I can see how that can be a problem, but destruction of humanity? I think of this as a fun change in circumstance at best and a challenge at worst, rather than a disaster.
Asymmetry of power creates rulers and the ruled. Widespread availability of firearms helped to partly balance out one aspect (monopoly on violence) and the wide availability of personal computers plus the Internet balanced out another (monopoly on information). Only part left is the control of resources (food, housing, etc.).
AI is destabilizing the current balance of knowledge/information which creates the high potential for violence.
Consider a society where everyone has a different reality about something shared normally.
Societies are built upon unspoken but shared truths and information (i.e. the social contract). Dissolve this information, dissolve or fragment the society.
This, coupled with profiling and targeting will enable fragmentation of the societies, consolidation of power and many other shenanigans.
This also enables continuous profiling, opening the door for "preemptive policing" (Minority Report style) and other dystopian things.
Think about Cambridge Analytica or election manipulation, but on steroids.
This. Power and control are only viable at scale when the aforementioned tactics are wielded with precision by "invisible hands".
History has proved that keeping society stupid and disenfranchised is essential to control.
Did you know that in the 1600s the King of England banned coffee?
Simple: fear of better ideas evolving and propagating, and of a more intense social fraternity.
"Patrons read and debated the news of the day in coffeehouses, fueled by caffeine; the coffeehouse became a core engine of the new scientific and philosophical thought that characterized the era. Soon there were hundreds of establishments selling coffee."
(the late 1600s was something of a fraught time for England and especially for Charles II, who had spent some time in exile due to the monarchist loss of the English Civil War)
But the impact of AI is going to be even worse than that.
For virtually all of human history, there weren't anywhere near so many of us as there are now, and the success and continuation of any one group of humans wasn't particularly dependent on the rest. Sure, there were large-scale trade flows, but there were no direct dependencies between farmers in Europe, farmers in China, farmers in India, etc. If one society collapsed, others kept going.
The worst historical collapses I'm familiar with - the Late Bronze Age Collapse and the fall of the Roman Empire - were directly tied to larger-scope trade, and were still localized beyond comparison with our modern world.
Until very recently, total human population at any given point in history has been between 100 and 400 million. We're now past 8 billion. And those 8 billion people depend on an interconnected global supply chain for food. A supply chain that, in turn, was built with a complex shared consensus on a great many things.
AI, via its ability to cheaply produce convincing BS at scale, even if it also does other things is a direct and imminent threat to the system of global trade that keeps 8 billion human beings fed (and that sustains the technology base which allows for AI, along with many other things).
I don't want to invalidate your viewpoint: I'll just share mine.
The shared truth that holds us together, that you mentioned, in my eyes is love of humanity, as cliche as that might sound. Sure it wavers, we have our ups and downs, but at the end, every generation is kinder and smarter than the previous. I see an upward spiral.
Yes, there are those of us who might feel inclined to subdue and deceive, out of feelings of powerlessness, no doubt. But, then there are many of us who don't care for anything less than kindness. And, every act of oppression inches us toward speaking and acting up. It's a self-balancing system: even if one falls asleep at the wheel, that only makes the next wake-up call more intense.
As to the more specific point about fragmented information spaces: we always had that. At all points in history we had varying ways to mess with how information, ideas and beliefs flowed: for better and for worse. The new landscape of information flow, brought about by LLMs, is a reflection of our increasing power, just as a teenager is more powerful than a pre-teen, and that brings its own "increased" challenges. That's part of the human experience. It doesn't mean that we have to ride the bad possibilities to the complete extreme, and we won't, I believe.
Thanks for your kind reply. I wanted to put some time aside to reply the way your comment deserves.
My personal foundations are not very different from yours. I don't care about many things other people care about. Being a human being and having your heart in the right place is a good starting point for me, too.
On the other hand, we need to make a distinction between people who live (ordinary citizens) and people who lead (people inside government and managers of influential corporations). There's the saying "power corrupts", now this saying has scientific basis: https://www.theatlantic.com/magazine/archive/2017/07/power-c...
So, the "ruling class", for the lack of better term, doesn't think like us. I strive to be kinder every day. They don't (or can't) care. They just want more power, nothing else.
For the fragmented spaces, the challenge is different from the past. We humans are social animals and have always lived in social groups (tribes, settlements, towns, cities, countries, etc.); we felt we belonged. As the system got complex, we evolved as a result. But the change was slow, so we were able to adapt over a couple of generations. From the '80s to the '00s, it was faster, but we managed it somehow. Now it's exponentially faster, and the more primitive parts of our brains can't handle it as gracefully. Our societies, ideas and systems are strained.
Another problem is that, unfortunately, not all societies, or all parts of the same society, evolve at the same pace into the same kinder, more compassionate human beings. Radicalism is on the rise. It doesn't have to be violent, but some parts of the world are becoming less tolerant. We can't ignore these things. See world politics. It's... complicated.
So, while I share your optimism and light, I also want to underline that we need to stay vigilant. Because humans are complicated. Some are naive, some are defenseless and some just want to watch the world burn.
Instead of believing that everything's gonna be alright eventually, we need to do our part to nudge our planet in that direction. We need to feed the wolf which we want to win: https://en.wikipedia.org/wiki/Two_Wolves
Argh, I lost my reply due to a hiccup with my distraction-blocking browser extension. I'll try and summarize what I wanted to say. I'll probably be more terse than I originally would have been.
I appreciate your thoughtful reply. I too think that our viewpoints are very similar.
I think you hit the nail on the head about how it's important that positivity doesn't become an excuse for inaction or ignorance. What I want is a positivity that's a rally, not a withdrawal.
Instead of thinking of power as something that imposes itself on people (and corrupts them), I like to think that people tend to exhibit their inner-demons when they're in positions of power (or, conversely, in positions of no-power). It's not that the position does something to them, but it's that they prefer to express their preexisting disbalance (inner conflict) in certain ways when they're in those circumstances. When in power, the inner disbalance manifests as a villain; when out-of-power, it manifests as a victim.
I think it's important to say "we", rather than "us and them". I don't see multiple factions with fundamentally incompatible needs. Basically, I think that conflict is always a miscommunication. But, in no way do I mean that one should cede to tyranny or injustice. It's just that I want to keep in mind, that whenever there's fighting, it's always in-fighting. Same for oppression: it's not them hurting us, but us hurting us: an orchestration between villains and victims. I know it's triggering for people when you humanize villains and depassify victims, but in my eyes we're all human and all powerful, except we pretend that the 1% is super powerful, while the 99% are super powerless.
I had a few more points I wanted to share, but I have to run. Thanks for the conversation.
Google gave us direct access to much of the world's knowledge base, then snatched it away capriciously and put a facsimile of it behind an algorithmic paywall that they control at the whims of their leadership, engineering, or benefactors.
The despair any rational person will feel upon realizing that they lobotomized the overmind that drove Information Age society might just be traumatic enough, in aggregate, to set off a collapse.
So, yes. Destruction of humanity (at least, as we know it) incoming. That's without the super AI.
Why convince them? If they never go outside, they’ll just be inside anyway. You won’t interact with them. Metaphorically. Real life is a place, not an idea.
You're interacting with real people who don't see your face or hear your voice all day, and you affect each other.
Real life is a place encompassing "cyberspace", too. They're not separate but intertwined. You argue that the people affecting your life are the ones closest to you, yet you continuously interact with the people who are farthest from you distance-wise, and they affect your day in the moment.
People who want billions of people to be inside and compliant, want those people's vote to go a certain way (at least, while that is even still a thing). Once that part stops being a thing, you stop being allowed to be outside, as that could be a problem.
I certainly can and do that. Can you please convince the remaining 8 billion to do the same?
Based on, e.g., the election behavior of populations, what you describe is naivety on the level of maybe my 5-year-old son. Can you try to be a bit more constructive?
Ok, let’s break it down for you: What all 8 billion people in the world think does not matter to you. There are people out there cutting heads off in the name of religion, or people who think their dictator is a divine being.
People outside your country have little effect on your daily life.
Even people within your country have a weak effect on your daily life.
What other people believe only really matters for economic reasons. Still, unless you are very dependent on social safety nets even they don’t matter that much. You just find more money and carry on.
You might think that more propaganda will result in people voting for bad politicians, but it is actually possible to have too much propaganda. If people become aware of how easily fake content is generated, which they are rapidly realizing in the age of AI, the result is they become skeptical of everything, and come up with their own wild theories on what the truth really is.
The people whose thoughts matter most are the people you interact with on a daily basis, as they have the most capability to alter your daily life physically and immediately. Fortunately you can control better who you surround yourself with or how you interact with them.
If you turn off the conversation, the world will appear pretty indifferent even to things that seem like a big deal on social media.
You said: "You might think that more propaganda will result in people voting for bad politicians"
In the US at least, the people who vote the most are typically older people, 40+, and those people have very little experience with tech and AI and are easily tricked by fake crap. Add AI to the mix, and they literally have no perception of the real world.
40s have very little experience with tech? Those were the people who practically invented tech as we know it today. Most AI researchers are in their 40s and 50s, and have been experimenting with machine learning and AI for the past decades.
I think your comment is just very ageist. You stereotype everyone who is middle age and above as barely lucid nursing home seniors.
Ironically I would say it is young 20 somethings and below who have no clue how a computer or software even works. Just a magic sheet of glass or black box that spits out content and answers, and sometimes takes pictures and video.
This is kind of true - the media environment can be both overwhelming and irrelevant. But eventually it hits. I have some friends who are trans and very familiar with what a hostile propaganda campaign can do to your healthcare.
tl;dr: The more close friends people have, the more polarized societies become.
It's easy to profile people extensively, and pinpoint them to the same neighborhood, home or social circle.
Now, what happens when you feed "what you want" to these groups of people? You can plant polarization with neighbourhood or home precision and control people en masse.
Knowing or seeing these people doesn't matter. After some time you might find yourself proverbially cutting heads off in the name of what you believe. We used to call these flame wars back then. Now this is called doxxing and swatting.
The people you don't know can make life very miserable in a slow-burning way. You might not be feeling it, but this is the core principle of slowly cooking the frog.
A lot of assumptions, some could be correct, some are plainly not.
Your idea of living in society is something very different from my idea, or the European idea (and reality). Not seeing how everything is interconnected, how ripple effects and secondary/tertiary effects come back again and again... I guess you don't have kids. Well, you do your life, if you think money can push through and solve everything important. I'd call it a sad, shortsighted life if I cared, but that's just me.
Absolute control over what people think and know, which is sort of absolute control overall with power to normalize anything, including what we consider evil now.
Look at the really powerful people of this world: literally every single one of them is a badly broken piece of shit (to be extremely polite), a control freak with a fucked-up childhood, overcompensating for a missing/broken father figure, often malicious, petty, vengeful, feeling above the rest of us.
The whole reason for democracy since ancient times is to limit how much power such people have. The competent, sociopathic sort described above will always rise to the top of society regardless of the type of system, so we need good mechanisms to prevent them from becoming absolute lifelong dictators (and we need to prevent them from attaining immortality, since that would be our doom on another level).
People haven't changed over the past few thousand years, and any society that failed at the above eventually collapsed in very bad ways. We should not want the same for the current global civilization, and should for a change learn from past mistakes, unless you like the idea of a few decades of global warfare and a few billion deaths. I certainly don't.
This thing is an absolute security nightmare. The concept of opening up the full context of your authenticated sessions in your email, financial, healthcare or other web sites to ChatGPT is downright reckless. Aside from personal harm, the way they are pushing this is going to cause large scale data breaches at companies that harbour sensitive information. I've been the one pushing against hard blocking AI tools at my org so far but this may have turned me around for OpenAI at least.
Yeah, I think there are profound security issues, but I think many folks dug into the prompt injection nightmare scenarios with the first round of “AI browsers”, so I didn’t belabor that here; I wanted to focus on what I felt was less covered.
It's bad too, yes. But not as bad, because MS is a profitable company with real enterprise products, so they have some reputation and compliance to maintain. SamAI is a deeply unprofitable company, mostly B2C oriented, with no other products to fall back on except the LLM. So it is more probable that Sam will exploit user data. But in general both are bad; that's why people need to use Firefox, but never actually do, due to some misconception from a decade ago.
>MS is a profitable company with real enterprise products, so they have some reputation and compliance to maintain.
On the contrary: it could be the case that Microsoft ritually sacrifices a dozen babies each day in their offices, and it would still be used, because Office.
No, I'm talking about the general concept of having ChatGPT passively able to read sensitive data / browser session state. Apart from the ever-present risk that they suck your data in for training, the threat of prompt injection or model inversion stealing secrets or executing transactions without your knowledge is extreme.
There are 2 dimensions to it: determinism and discoverability.
In the Adventure example, the UX is fully deterministic but not discoverable. Unless you know what the correct incantation is, there is no inherent way to discover it besides trial and error. Most CLIs are like that (and IMHO phones with 'gestures' are even worse). That does not make a CLI inefficient, unproductive or bad. I use CLIs all the time, as I'm sure Anil does; it just makes them more difficult to approach for the novice.
But the second aspect of Atlas is its non-determinism. There is no 'command' that predictably always 'works'. You can engineer towards phrasings that are more often successful, but you can't reach fidelity.
This leeway is not without merit. In theory the system is thus free to 'improve' over time without the user needing to. That is something you might find desirable or not.
I opened ChatGPT on my Mac this morning and there was an update.
I updated ChatGPT and a little window popped up asking me to download Atlas. I declined as I already have it downloaded.
There was another window, similar to the update-available window, in my chat asking me to download Atlas again. I went to hit the 'X' to close it and somehow triggered it; it opened my browser to the Atlas page and started a download of Atlas.
This was not cool and has further shaken my already low confidence in OpenAI.
The only confidence I have in OpenAI at this point is that they will be using scummy tricks like that all the time. What have they ever done to earn confidence in the other direction?
They can't even lie and blame it on a programming fuckup because they'd have to say AI driven code is buggy.
I can't speak to the particular browser application. I haven't installed it and probably never will, but the language around text interfaces makes the OP sound... uninformed.
Graphical applications can be more beautiful and discoverable, but they limit the user to only actions the authors have implemented and deployed.
Text applications are far more composable and expressive, but they can be extremely difficult to discover and learn.
We didn't abandon the shell or text interfaces. Many of us happily live in text all day every day.
There are many tasks that suffer little by being limited and benefit enormously by being discoverable. These are mostly graphical now.
There are many tasks that do not benefit much by spatial orientation and are a nightmare when needlessly constrained. These tasks benefit enormously by being more expressive and composable. These are still often implemented in text.
The dream is to find a new balance between these two modes and our recent advances open up new territory for exploring where they converge and diverge.
Am I the only one who interpreted OP as not being opposed to CLIs, TUIs, or GUIs at all? The topic wasn't "textual interface vs. graphical interface" but "undocumented/natural language vs. documented/query language" for navigating the internet.
In addition to the analogy of the textual interface used in Zork, we could say that it'd be like interacting with a REST API without knowledge of its specification: guessing endpoints, methods, and parameters while assuming best practices (of the "natural-ness" kind). Do we really want to explore an API like that, through naive hacking? Does a natural-language wrapper make this hacking any better? It can make it more fun as it breaks patterns, sure, but is that really what we want?
I'm not focused on this particular browser or the idea of using LLMs as a locus of control.
I haven't used it and have no intention of using it.
I'm reacting to the OP articulating clearly dismissive and incorrect claims about text-based applications in general.
As one example, a section is titled with:
> We left command-line interfaces behind 40 years ago for a reason
This is immediately followed by an anecdote that is probably true for OP, but doesn't match my recollection at all. I recall being immersed and mesmerized by Zork. I played it endlessly on my TRS-80 and remember the system supporting reasonable variation in the input commands.
At any rate, it's strange to hold up text based entertainment applications while ignoring the thousands of text based tools that continue to be used daily.
They go on with hyperbolic language like:
> ...but people were thrilled to leave command-line interfaces behind back in the 1990s
It's 2025. I create and use GUI applications, but I live in the terminal all day long, every day. Many of us have not left the command line behind and would be horrified if we had to.
It's not either/or, but the OP makes incorrect claims that text based interfaces are archaic and have long been universally abandoned.
They have not, and at least some of us believe we're headed toward a new golden age of mixed mode (Text & GUI) applications in the not-so-distant future.
CLIs are still powerful and enjoyable because their language patterns settled over the years. I wouldn't enjoy using one of these undiscoverable CLIs that use --wtf instead of --help, or be in a terminal session without autocomplete and zero history. I build scripts around various CLIs and like it, but I also love to install TUI tools on my servers for quick insights.
All of that doesn't change the fact that computer usage moved on to GUIs for the general audience. I'd also use a GUI for cutting videos, editing images, or navigating websites. The author was a bit tongue-in-cheek, but in general I'd agree with them, and I'd also agree with you.
Tbh, I also think the author would agree with you, as all they did was make the point that
s/Take/Pick up/
is not that much less annoying to its users than
s/atlas design/search web history for a doc about atlas core design/
is. And that's for a product that wants to rethink the web browser interface, mind you.
We are more rapidly heading towards (or already in) a future where the average household doesn't regularly use or even have a "computer" in the traditional sense, and a CLI is not just unused but entirely non-existent.
I think the charitable interpretation is that the author is referring to particular use-cases which stopped being served by CLIs.
Heck, just look at what's happening at this very moment: I'm using a textarea with a submit button. Even as a developer/power-user, I have zero interest in replacing that with:
echo "I think the..." | post_to_hn --reply-to=45742461
A hybrid will likely emerge. I work on a chat application, and it's pretty normal for the LLM to print custom UI as part of the chat. Things like sliders, dials, selects, and calendars are just better as a GUI in certain situations.
I once saw a demo of an AI photo-editing app that displays sliders next to light sources in a photo, letting you dim/brighten the intensity of individual light sources that way. This feels to me like the next level of user interface.
Some TUI programs can embed a small CLI, like Midnight Commander and others. Or they can call commands and shells externally, or even pipe the output. Ed itself, vi, slrn...
Also, some commenters here at HN state that the CLI/TUI is just the fallback option... that's ridiculous. Nvi/vim, entr, make... can autocompile (and autocomplete, too, with some tools) a project upon writing any file in a directory, thanks to the entr tool.
But the article is coming from a decidedly antagonistic angle, though.
>This was also why people hated operating systems like MS-DOS, and why even all the Linux users reading this right now are doing so in a graphical user interface.
What is the significance of "even all the Linux users"? First of all, it's probably incorrect, because of the "all" quantifier. I went out of my way to look at the website via the terminal to disprove the statement. It's clearly factually incorrect now.
Second, what does hate have anything to do with this? Graphical user interfaces serve different needs than text interfaces. You can like graphical user interfaces and specifically use Linux precisely because you like KDE or Gnome. You can make a terrible graphical user interface for something that ought to be a command line interface and vice versa. Vivado isn't exactly known for being easy to automate.
Third, why preemptively attack people as nerds?
>There were a tiny handful of incredible nerds who thought this was fun, mostly because 3D graphics and the physical touch of another human being hadn't been invented yet.
I mean, not only does this come off as an incredible strawman; after all, who wouldn't be excited by computers in an era when they were the hot new thing? Computers were exciting not because they had text interfaces. They were fun because they were computers. It's like learning how to drive on the highway for the first time.
The worst part by far is the unnecessary insinuation though. It's the standard anti-intellectual anti-technology stereotype. It creates hostility for absolutely no reason.
If we take the total number of computer users globally and look at who uses a GUI vs. the CLI, the latter will be a teeny tiny fraction.
But most of those will likely be developers, that use the CLI in a very particular way.
If we now subdivide further, and look at the people that use the CLI for things like browsing the web, that's going to be an even smaller number of people. Negligible, in the big picture.
Don't forget to count people who require screen readers; this is a less vocal minority that often depends on CLI tools for interacting with the computer.
It's actually an interesting example, because unlike Warp that tries to be a CLI with AI, Claude defaults to the AI (unless you prefix with an exclamation mark). Maybe it says more about me, but I now find myself asking Claude to write for me even relatively short sed/awk invocations that would have been faster to type by hand. The uncharitable interpretation is that I'm lazy, but the charitable one I tell myself is that I don't want to context-switch and prefer to keep my working memory at the higher level problem.
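For context, the kind of "relatively short sed/awk invocation" I mean is a throwaway one-liner like this (a made-up example, not anything specific from the thread):

```shell
# Sum the second column of a small CSV: trivial, but still a context
# switch to write out by hand mid-task.
printf 'a,1\nb,2\nc,3\n' | awk -F, '{ sum += $2 } END { print sum }'
# prints 6
```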
In any case, Claude Code is not really CLI, but rather a conversational interface.
Claude Code is a TUI (with "text"), not a CLI (with "command line"). The very point of CC is that you can replace a command line with human-readable texts.
You may think that's pedantic but it really isn't. Half-decent TUIs are much closer to GUIs than they are to CLIs because they're interactive and don't suffer from discoverability issues like most CLIs do. The only similarity they have with CLIs is that they both run in a terminal emulator.
"htop" is a TUI, "ps" is a CLI. They can both accomplish most of the same things but the user experience is completely different. With htop you're clicking on columns to sort the live-updating process list, while with "ps" you're reading the manual pages to find the right flags to sort the columns, wrapping it in a "watch" command to get it to update periodically, and piping into "head" to get the top N results (or looking for a ps flag to do the same).
But that's not how it's typically used; it's predominantly used in TUI mode, so the popularity of CC doesn't tell us anything about the popularity of the CLI.
Hi, sorry for the unrelated reply, but I wanted to ask you about a comment you made 6months back about archiving Gamasutra posts. I came across it while searching HN for "gamasutra".
I'd bookmarked a lot of Gamasutra articles over the years and am kinda bummed out that I can't find any of them now that the site has shifted. You mentioned having a collection of their essays? Is there any way to share or access them?
Also, to be clear, I’m mostly goofing around about CLIs, and — as I mentioned in the piece — I use one every day. But yes, there are four or five billion internet users who don’t and never will. And CLIs are a poor user interface for 99+% of the tasks that people accomplish on computing devices, or with browsers, which is pertinent for the point I was making.
If I’d anticipated breaching containment and heading towards the orange site, I may not have risked the combination of humor and anything that’s not completely literal in its language. Alas.
Well, it's not just that (not that I disagree with the author in general, but I do on this point). Point-and-click interfaces _are_ objectively worse for power use. I've been stuck selecting a group of files, moving them, double-clicking and renaming, selecting another group, etc., all the while knowing that I could write a script to do all of this much faster if I had access to a command line interface. Point-and-click is just easier to get started with.
The comparison with Zork and the later comment about having to "guess" what to input to get a CLI to work were also bizarre. He's obviously stretching really hard to make the analogy work.
Don't get me wrong, I'm not arguing that the expansion of GUI-based interfaces wasn't a good thing. There are plenty of things I prefer to interact with that way, and the majority of people wouldn't use computers if CLIs were still the default method. But what he's describing is literally not how anyone ever used the command line.
Also, most of the Infocom games of the 80's were pretty big improvements over Zork.
In the 90's, the amateur Inform6 games basically made the Infocom originals look like the amateur ones, because a lot of them were outstanding, and they could run on underpowered 16-bit machines like nothing.
Ask your non-dev peers if they know what the command line is and whether they have ever used or even seen one, especially now that most people use the web on their smartphones.
I recently demonstrated ripgrep-all with fzf to search through thousands of PDFs in seconds on the command line and watched my colleague’s mind implode.
I am still confused in what way this is "anti-web". Is it actually harming the current web, or just providing a bad interface to it?
> The amount of data they're gathering is unfathomable.
The author suggests that GPT continuously reads information like typed keystrokes, etc., but I don't see why that's implied. And it wouldn't be new either, with tools like Windows Recall.
> And it would go on like this for hours while you tried in vain to guess what the hell it wanted you to type, or you discovered the outdoors, whichever came first.
The part about Zork doesn't make sense to me. As I understand it, text-based adventure games are actually quite lenient with the input you can give, offering multiple options for the same action. Additionally, certain keywords are "industry standard" in the same way that you walk using "wasd" in FPS games, so much so that one became the title of the documentary "Get Lamp". Given the player's presumed knowledge of similar mechanics in other games, you can even argue that providing these familiar commands is part of the game design.
It seems to me that the author never played a text-based adventure game and is just echoing whatever he heard. Projects like 1964's ELIZA prove that text-based interfaces have been able to feel natural for a long time.
Text has a high information density, but natural language is a notoriously bad interface for certain things, like giving commands; that's why we invented command lines with different syntax, and programming languages, for instructing computers what to do.
Have you actually played these games? I put in some hours on Hitchhiker's Guide, and it was anything but natural. Maybe once you get far enough into the game and learn the language that works it gets easier, but I never got there. You wake up in the dark and have to figure out how to even turn on the light. Then you have to do a series of actions in a very specific order before you can get out of your bedroom.
Figuring it all out is part of the fun, but outside the context of a game it would be maddening.
As for Eliza, she mostly just repeats back the last thing you said as a question. “My dog has fleas.” “How does your dog having fleas make you feel?”
Which is why it's done that way. Other text-based games where the focus is not on puzzling out what to do next (like roleplaying MUDs) have a more strict and easily discoverable vocabulary.
This would be like saying using programming languages is terrible because Brainfuck is a terrible programming language.
It seems to me that the author never played a text based adventure game and is just echoing whatever he heard
Indeed. And this makes his judgmental pettiness about people who like these games all the shittier for it. I don't know why extremely-online bloggers think unrequited snark is a glide path to being funny.
I loved text based adventure games when I was growing up, but I also thought this comparison was incredibly apt and also found it very funny. I’m a bit surprised people are so offended by this article, have we lost the ability to read something with nuance?
It's a bit weasel-y to refer to criticism as just "people being offended".
> thought this comparison was incredibly apt and also found it very funny
I'm happy for you, but I didn't.
To "read something with nuance" is to be open to nuance that is already present in the writing. This writing is not nuanced!
Perhaps you're asking us to make an effort to be more tolerant of weak writing. That's a fair request when the writer is acting in good faith. But to mock nerds for liking text adventures when you clearly do not like them yourself is not acting in good faith.
I understood this to be a comment on the fact that a text interface has lower affordance than a graphical interface. A command line doesn't suggest what you can do the way a graphical interface can. So even with industry-standard keywords, a user has to know/learn them. I see it as similar to the buttons-versus-screens debate in car interfaces.
It was certainly jarring reading that immediately after accusing someone else of having never played the games.
(Who is clearly playfully poking fun at something he enjoyed playing but can recognize as being constrained by computers of the era and no longer a common format of games thanks to 3D rendering).
The Atlas implementation isn't great, but I'll pick something that tries to represent my interests every time. The modern commercial web is an adversarial network of attention theft and annoyance. Users need something working on their behalf to mine through the garbage to pull out the useful bits. An AI browser is the next logical step after uBlock.
It seems naive to expect a product by a company that desperately needs a lot of revenue to cover even a tiny part of investor money that it burned—where said product offers unprecedented opportunity to productize users in ways never possible before, and said company has previously demonstrated its disregard for ethics—to represent user’s interests.
It’s unlikely LLM operators can break even by charging per use, and it should be expected that they’ll race to capture the market by offering “free” products that in reality are ad serving machines, a time-tested business model that has served Meta and friends very well. The fact that Atlas browser is (and they don’t even hide it) a way to work around usage limits of ChatGPT should ring alarm bells.
Completely agree. Consumers won’t pay for anything online, which means every business model is user-hostile. Use the web for five minutes without an ad blocker and it’s obvious.
Atlas may not be the solution but I love the idea of an LLM that sits between me and the dreck that is today’s web.
Completely unrelated to what you actually wrote about, but... this is the second time in a week that I've heard "dreck" used in English. Before, I never noticed it anywhere. First was in the "singing chemist" scene in Breaking Bad, and now in your writing. I wasn't aware English had adopted this word from German as well. Weird that I never heard it until just now, while the scene I watched is already 15 or so years old...
As another commenter noted, it was loaned from Yiddish rather than German, although the two languages are very closely related. There are many Yiddish words in English that have come from the Jewish diaspora. Common ones I can think of are schlep (carry something heavy) and kvetch (complain loudly). Since this is Hacker News, the Yiddish word I think we use the most is glitch. Of course there are also words from Hebrew origin entering English the same way, like behemoth, kosher, messiah...
Well, while I can never be sure about my own biases, I submit this particular instance is/was not an "illusion". I truly never heard the word "dreck" used in English before. Why do I know? I am a German native speaker, and I submit that I would have noticed if I had ever heard it earlier, simply because I know its meaning in my native language, which would have made spotting it rather easy.
I also believe noticing Baader-Meinhof in the 90s is rather unsurprising, since the RAF was just "a few years" back. However, "dreck", as someone else noted, has been documented since the early 20th century. So I don't think my noticing it just recently is a bias; rather, a true coincidence.
Pro Spotify: existing playlists and history, better artists info, better UI.
YouTube Music is both better and worse: UI has some usability issues and unfortunately it shares likes and playlists with the normal YouTube account, as a library it has lots of crap uploaded by YouTube users, often wrong metadata, but thanks to that it also has some niche artists and recordings which are not available on other platforms.
To stop the same two companies from owning everything? YouTube Music and Apple Music are shameless anticompetitive moves, leveraging market dominance to move into other existing markets. (I'll afford more lenience to Apple Music, since iTunes was already huge, being the undisputed king of music sales before streaming subscriptions took off.)
I've also been using Spotify since before YouTube Music, or its predecessor that Google killed (as they do periodically), even existed.
Where do I cut the $10/month? No like seriously, I'd easily pay $10/month to never see another ad, cookie banner, dark pattern, or have my information resold again. As long as that $10 is promised to never increase, other than adjustments for inflation.
But I can't actually make that payment - except maybe by purchasing a paid adblocker - where ironically the best open source option (uBlock Origin) doesn't even accept donations.
Meta publishes some interesting data along these lines in their quarterly reports.
I think the most telling is the breakdown of Average Revenue Per User per region for Facebook specifically [1]. The average user brought in about $11 per quarter while the average US/CA user brought in about $57 per quarter during 2023.
Setting up Kagi is as big an improvement to search as an ad blocker is to your general internet experience. After about a week you forget how bad the bare experience is, and after a month you'll never go back.
I'm definitely behind some of my peers on adopting LLMs for general knowledge questions and web search, and I wonder if this is why. Kagi does have AI tools, but their search is ad free and good enough that I can usually find what I'm looking for with little fuss.
It's a lot more than that. The U.S. online ad market is something like $400-500 billion, so that's about $100/mo per person. The problem is that some people are worth a lot more to advertise to than others. Someone who uses the internet a lot and has a lot of disposable income might be more like $500+ a month.
The more visible and annoying ads are, the more effort (and money) I will spend buying from competitors and actively dissuading other people from buying the product.
Ads are not the only problem with the modern web. Accessibility (or rather, the lack thereof) is more of an issue for me. 20 years ago, we were still hopeful the Internet would bring opportunities to blind people. These days, I am convinced the war has been lost, and modern web devs, with their willingness to adopt every new nonsense, are the people who hurt me most in current society.
One reason I now often go to ChatGPT instead of many Google queries is that the experience is ads free, quick and responsive.
That said, don't be lured: you know they're already working on ways to put ads and trackers and whatnot inside ChatGPT and Atlas. Those $20 subscriptions won't be enough to recoup all that investment and cost and maximize profits.
So I think we should be careful what we wish for here.
This is kind of surprising, because those are precisely the ways I would say that a Web search is better than ChatGPT. Google is generally sub second to get to results, and quite frequently either page 1 or 2 will have some relevant results.
With ChatGPT, I get to watch as it processes for an unpredictable amount of time, then I get to watch it "type".
> ads free
Free of ads where ChatGPT was paid to deliver them. Because it was trained on the public Internet, it is full of advertising content.
Update: Example query I just did for "apartment Seville". Google completed in under a second. All the results above the fold are organic, with sponsored way down. Notably the results include purchase, long-term and vacation rental sites. The first 3 are listing sites. There's an interactive map in case I know where I want to go; apartments on the map include links to their websites. To see more links, I click "Next."
ChatGPT (MacOS native app) took ~9 seconds and recommended a single agency, to which it does not link. Below that, it has bullet points that link to some relevant sites, but the links do not include vacation rentals. There are 4 links to apartment sites, plus a link to a Guardian article about Seville cracking down on illegal vacation rentals. To see more links, I type a request to see more.
For all the talk about Google burying the organic links under a flood of ads, ChatGPT shows me far fewer links. As a person who happily pays for and uses ChatGPT daily, I think it's smart to be honest about its strengths and shortcomings.
Google is SEOed a lot. And while "apartment Seville" is a case where Google is probably very good, for many things it gives me very bad results; e.g., searching for an affordable haircut always gives me a Yelp link (there is a Yelp link for every adjective + physical-storefront SMB).
That being said, I've never really come across good general ways to get Google to give me good results.
I know some tricks, e.g. filetype:pdf, using Scholar for academic search, or "site:..." queries, like "site:reddit.com/r/Washington quiet cafe" for most things people would want to do in a city, because people generally ask about those things on community forums.
But I have a poor time with dev-related queries, because half the time it's SEO'd content, and when I don't know enough about a subject, LLMs generally give me a lot of lines of inquiry (be careful of X, and also consider Y) that I would not have thought to ask about, because I don't know what I don't know.
I use Google and ChatGPT for totally different reasons: ChatGPT is generally far better for unknown topics, while Google is better if I know exactly what I'm after.
If I’m trying to learn about a topic (for example, how a cone brake works in a 4WD winch), then ChatGPT gives me a great overview with “ Can you explain what a cone brake is and how it works, in the context of 4WD winches?” while google, with the search “4wd winch cone brake function explained”, turns up a handful of videos covering winches (not specifically cone brakes) and some pages that mention them without detailing their function. ChatGPT wins here.
If I were trying to book a flight I’d never dream of even trying to use ChatGPT. That sort of use case is a non-starter for me.
> One reason I now often go to ChatGPT instead of many Google queries is that the experience is ads free, quick and responsive.
Google used to be like that, and if ChatGPT is better right now, it won't remain that way for long. They're both subject to the same incentives and pressures, and OpenAI is, if anything, less ethical and idealistic than Google was.
My mistake, you're completely correct, perhaps even more-correct than the wonderful flavor of Mococoa drink, with all-natural cocoa beans from the upper slopes of Mount Nicaragua. No artificial sweeteners!
This sentiment has been rolling around in my head for a while. I assume one day I'll be using some hosted model to solve a problem, and suddenly I won't be able to get anything out beyond "it would actually work a lot better if you redeployed your application on Azure infra with a bunch of licensed Windows server instances. Here's 15 paragraphs about why.."
Ublock Origin allows me to control what I see while that information is still in its original context so that I can take that into account when doing research, making decisions, etc.
But isn't this, instead, letting a third party strip that context away and give it its own context so that you can't make those choices and decisions properly? Information without context is, to me, nearly worthless.
And even if you believe they are neutral parties and have your own interests at heart (which, quite frankly, I think is naive), once companies like that know everything about you, you don't think they'll abuse that knowledge?
uBlock just takes stuff off of a page that shouldn't be there in the first place. All the content that should be there is still there, unchanged.
An AI browser is choosing to send all the stuff you browse, to a third party without a demonstrated interest in keeping it all private, and getting back stuff that might or might not be true to the original content. Or maybe not even true at all.
Oh and - Atlas will represent your interests, right up until OpenAI decides it's not in their financial interest to do so. What do you do when the entire web browser UI gets enshittified?
I think Atlas sounds and acts pretty terrible, but the Dia browser has been a pretty nice experience for me. You still have access to the web, favorite links in the sidebar, etc.; and the option (when you want or need) to "chat" with the current website you are on using an LLM.
This and the new device that OpenAI is working on is more of a general strategy to make a bigger moat by having more of an ecosystem so that people will keep their subscriptions and also get pro.
- Atlas slurps the web to get more training data, bypassing Reddit blocks, Cloudflare blocks, paywalls, etc. It probably enriches the data with additional user signals that are useful.
- Atlas is an attempt to build a sticky product that users won't switch away from. An LLM or image model doesn't really have sticky attachment, but if it starts storing all of your history and data, the switching costs could become immense. (Assuming it actually provides value and isn't a gimmick.)
- Build pillars of an interconnected platform. Key "panes of glass" for digital lives, commerce, sales intent, etc. in the platformization strategy. The hardware play, the social network play -- OpenAI is trying to mint itself as a new "Mag 7", and Atlas could be a major piece in the puzzle.
- Eat into precious Google revenue. Every Atlas user is a decrease in Google search/ads revenue.
One could make the case that the web of 2025 is anti-human. AI clients are one of the very few exits we have from enshittification. ChatGPT can read all the ads so you don't have to. The whole point of those annoying CAPTCHAs and those stupid anime girls on all the kernel web sites is slamming the door on any exit from Google's world that we live in.
Wow! Amazing post! You really nailed the complexities of AI browsers in ways most people don't think about. I think there's also a doom paradox: if more people search with AI, it disincentivizes people from posting on their own blogs and websites, where ad revenue usually helps support them. If AI is crawling and then spitting back information from your blog (and you get no revenue), is there a point to posting at all?
One possibility I like to imagine is a future where knowledge sources are used kind of like tools, i.e. the model never uses any preexisting knowledge from its training data (beyond what’s necessary to be fluent in English, coherent, logical, etc.), a “blank” intelligent being, tabula rasa. And for answering questions it uses various sources dynamically, like an agent would use tools.
I think this will let models be much smaller (and cheaper), but it would also enable a mechanism for monetizing knowledge. This would make knowledge sharing profitable.
For example, a user asks a question, the model asks knowledge sources if they have relevant information and how much it costs (maybe some other metadata like perceived relevance or quality or whatever), and then it decides (dynamically) which source(s) to use in order to compile an answer (decision could be based on past user feedback, similarly to PageRank).
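The dispatch step described above could be sketched like this (a toy model of the idea; all names and the greedy selection rule are hypothetical illustrations, and a real system would drive this with model-issued tool calls):

```python
# Toy model of "knowledge sources as priced tools": the agent asks each
# source for (cost, relevance) metadata, then buys only the best ones.
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    cost: float       # price per query, in arbitrary credits
    relevance: float  # self-reported relevance score in [0, 1]
    answer: str

def pick_sources(sources, budget):
    """Greedily buy the most relevant-per-credit sources within budget."""
    chosen = []
    for s in sorted(sources, key=lambda s: s.relevance / s.cost, reverse=True):
        if s.cost <= budget and s.relevance > 0.5:
            chosen.append(s)
            budget -= s.cost
    return chosen

sources = [
    Source("encyclopedia", cost=1.0, relevance=0.9, answer="General overview."),
    Source("paywalled-journal", cost=5.0, relevance=0.95, answer="Deep detail."),
    Source("random-blog", cost=0.1, relevance=0.3, answer="Hot take."),
]

for s in pick_sources(sources, budget=2.0):
    print(s.name)  # only "encyclopedia": the journal is over budget, the blog irrelevant
```

The user-feedback/PageRank-style weighting mentioned above would replace the self-reported `relevance` score with a learned one.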
One issue is that this incentivizes content users want to hear versus content they don’t want to hear but is true. But this is a problem we already have, long before AI or even the internet.
> If you post for ad revenue, I truly feel sorry for you.
I think this is a bit dismissive towards people who create content because they enjoy doing it but also could not do it to this level without the revenue, like many indie Youtubers.
If I could press a button and remove money from the internet, I'd do it in a heartbeat.
I absolutely do enjoy content that is financed through ads. I really, REALLY do like some of the stuff, honest. But it is also the case that the internet has been turning into a marketing hellscape for the last couple decades, we've gotten to a point in which engagement is optimized to such a degree that society itself is hurting from it. Politically and psychologically. The damage is hard to quantify, yet I can't fathom the opportunity cost not being well into the billions.
I tested Google Search, Google Gemini and Claude with "Taylor Swift showgirl". Gemini and Claude gave me a description plus some links, both organized better than the Google search page. If I didn't like the description that Claude or Gemini gave me, I could click on the Wikipedia link. Claude gave me a link to Spotify to listen, while Gemini gave me a link to YouTube to watch and listen.
The complaint about the OpenAI browser seems to be it didn't show any links. I agree, that is a big problem. If you are just getting error prone AI output then it's pretty worthless.
It sounds like a great opportunity to poison the well. Create a bot farm that points the browser to every dank and musty corner of the web ad nauseam: old Yahoo, Myspace and Geocities pages, 4chan, 8chan, etc.
Let's flood the system with junk and watch it implode.
Yes, if I type Taylor Swift Showgirls I get some helpful information and a lot of links, but not her website. It isn't very different than what Google Gemini displays at the top of Google search results.
Is it a webpage? Well... it displays in a browser...
But if I type Taylor Swift, I get links to her website, instagram, facebook, etc.
> We left command-line interfaces behind 40 years ago for a reason
Man I still love command-line so much
> And it would go on like this for hours while you tried in vain to guess what the hell it wanted you to type, or you discovered the outdoors, whichever came first. [...] guess what secret spell they had to type into their computer to get actual work done
The games he is talking about deliberately didn't have docs or help because that WAS the game, to guess.
I think the same applies here: while there are docs for "please show me the correct way to do X," the surface area is so large that the analogy still holds up, in that you might as well just be guessing for the right command.
Am I missing something here? I used it a few days ago and it does actually act like a web browser and give me the link. This seems to be a UI expectation issue rather than a "real philosophy".
> There were a tiny handful of incredible nerds who thought this was fun, mostly because 3D graphics and the physical touch of another human being hadn't been invented yet.
Tangent but related: if only Google Search would make a serious comeback instead of finding nothing anymore, we would have a tool to compare AI against. Sure, Gemini integration might still be a thing, but with actual working search results.
'Modern' Z-Machine games (v5 version compared to the original v3 one from Infocom) will allow you to do that and far more.
By 'modern' I meant from the early 90's and up.
Even more with v8 games.
>This was also why people hated operating systems like MS-DOS, and why even all the Linux users reading this right now are doing so in a graphical user interface.
The original v3 Z-Machine parser (the raw one) was pretty limited compared to the v5 one, even more so against the games made with Inform6 and the Inform6 English library targeting the v5 version.
Go try yourself. Original Zork from MIT (Dungeon) converted into a v5 ZMachine game:
Spot the differences.
For instance, you could both say 'take the rock' and, later, say 'drop it'.
>take mat
Taken.
>drop it
Dropped.
>take the mat
Taken.
>drop mat
Dropped.
>open mailbox
You open the mailbox, revealing a small leaflet.
>take leaftlet
You can't see any such thing.
>take leaflet
Taken.
>drop it
Dropped.
Now, v5 games are from late 80's/early 90's. There's Curses, Jigsaw, Spider and Web ... v8 games are like Anchorhead, pretty advanced for its time:
You can either download the Z8 file and play it with Frotz (Linux/BSD), WinFrotz, or Lectrote under Android and anything else. Also, online with the web interpreter.
Now, the 'excuses' about the terseness of the original Z3 parser are nearly void, because with PunyInform a lot of Z3-targeting games (for 8086 DOS PCs, C64s, Spectrums, MSXs...) have a slightly improved parser compared to the original Zork game.
>I had typed "Taylor Swift" in a browser, and the response had literally zero links to Taylor Swift's actual website. If you stayed within what Atlas generated, you would have no way of knowing that Taylor Swift has a website at all.
Sounds like the browser did you a favor. Wonder if she'll be suing.
It's really crazy that there is an entire AI-generated internet. I have zero clue what the benefit of using this would be to me. Even if we argue that it has fewer ads and such, that would only last until they garner enough users to start pushing charges, probably through even more obtrusive ads.
I also have to laugh. Wasn't OpenAI just crying about people copying them not so long ago?
The purpose is total control. You never leave their platform, there are no links out. You get all of your information and entertainment from their platform.
The SV playbook is to create a product, make it indispensable and monopolise it. Microsoft did it with office software. Payment companies want to be monopolies. Social media are of course the ultimate monopolies - network effects mean there is only one winner.
So I guess the only logical next step for Big AI is to destroy the web, once they have squeezed every last bit out of it. Or at least make it dependent on them. Who needs news sites when OpenAI can do it? Why blog when you can just prompt your BlogLLM with an idea? Why comment on blogs when your agent will do it for you? All while avoiding child porn with 97% accuracy - something human-curated content surely cannot be trusted to do.
I’m not sure. I think we’ll live through a few years of AI slop before human created content becomes very popular again.
I imagine a future where websites (like news outlets or blogs) will have something like a “100% human created” label on it. It will be a mark of pride for them to show off and they’ll attract users because of it
I normally don't waste a lot of energy on politics.
But this feels truly dystopian. We here on HN are all in our bubble, we know that AI responses are very prone to error and just great in mimicking. We can differentiate when to use and when not (more or less), but when I talk to non-tech people in a normal city not close to a tech hub, most of them treat ChatGPT as the all-knowing factual instance.
They have no idea of the conscious and unconscious bias in the responses, based on how we ask the questions.
Unfortunately I think these are the majority of the people.
If you combine all that with a shady Silicon Valley CEO under historic pressure to make OpenAI profitable after 64 billion in funding, regularly flirting with the US president, it has always seemed like a logical consequence to me that exactly what the author described is the goal. No matter the cost.
As we all feel that AI progress is stagnating and it is mainly the cost of producing AI responses that is going down, this almost seems like the only way out for OpenAI to win.
>There were a tiny handful of incredible nerds who thought this was fun, mostly because 3D graphics and the physical touch of another human being hadn't been invented yet.
I can barely stomach it when John Oliver does it, but reading this sort of snark without hearing a British voice is too much for me.
Also, re: "a tiny handful of incredible nerds" - page 20 of this [0] document lists the sales figures for Infocom titles from 1981 to 1986: it sums up to over 2 million shipped units.
Granted, that number does not equal the number "nerds" who played the games because the same player will probably have bought multiple titles if they enjoyed interactive fiction.
However, also keep in mind that some of the games in that table were only available after 1981, i.e., at a later point during the 1981-1986 time frame. Also, the 80s were a prime decade for pirating games, so more people will have played Infocom titles than the sales figures suggest - the document itself mentions this because they sold hint books for some titles separately.
> The fake web page had no information newer than two or three weeks old.
What irks me the most about LLMs is when they lie about having followed your instructions to browse a site. And they keep lying, over and over again. For whatever reason, the ONE model that consistently does this is Gemini.
I think we're returning to CLIs mostly because typing remains one of the fastest ways we can communicate with our computers. The traditional limitation was that CLIs required users to know exactly what they wanted the computer to do. This meant learning all commands, flags etc.
GUIs emerged to make things easier for users to tell their computers what to do. You could just look at the screen and know that File > Save would save the file instead of remembering :w or :wq. They minimized friction and were polished to no end by companies like MSFT and AAPL.
Now that technology has got to a point where our computers can bridge the gap between what we said and what we meant reasonably well, we can go back to CLIs. We keep the speed and expressiveness of typing but without the old rigidity. I honestly can't wait for the future where we evolve interfaces into things we previously only dreamt of.
It’s less rigid than a command line but much less predictable than either a CLI or a GUI, with the slightest variation in phrasing sometimes producing very different results even on the same model.
Particularly when you throw in agentic capabilities where it can feel like a roll of the dice if the LLM decides to use a special purpose tool or just wings it and spits out its probabilistic best guess.
True the unpredictability sucks right now. We're in a transition stage where the models can understand intent but cannot constrain the output within some executable space reliably.
The bridge would come from layering natural languages interfaces on top of deterministic backends that actually do the tool calling. We already have models fine-tuned to generate JSON schemas. MCP is a good example of this kind of stuff. It discovers tools and how to use them.
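A sketch of that layering: the model is only trusted to emit JSON describing a tool call, and a deterministic dispatcher validates it against a schema before any real code runs. This is a minimal illustration, not MCP itself; the tool name, registry shape, and schema format here are hypothetical.

```python
import json

# Hypothetical tool registry: the deterministic backend owns these functions.
# The model never executes anything; it only emits a JSON tool call.
TOOLS = {
    "get_weather": {
        "required": {"city": str},
        "fn": lambda args: f"weather for {args['city']}",
    },
}

def dispatch(model_output: str) -> str:
    """Validate the model's JSON against the tool's schema, then call the tool."""
    call = json.loads(model_output)          # must be valid JSON at all
    spec = TOOLS[call["tool"]]               # unknown tools raise KeyError
    args = call.get("arguments", {})
    for name, typ in spec["required"].items():
        if not isinstance(args.get(name), typ):
            raise ValueError(f"bad or missing argument: {name}")
    return spec["fn"](args)                  # only now does deterministic code run

print(dispatch('{"tool": "get_weather", "arguments": {"city": "Oslo"}}'))
```

The point of the split is that the LLM's unpredictability is quarantined to the JSON it emits; anything malformed is rejected before a tool fires, rather than the model "winging it".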
Of course, the real bottleneck would be running a model capable of this locally. I can't run any of the models actually capable of this on a typical machine. Till then, we're effectively digital serfs.
That being said, asking ChatGPT to do research in 30 seconds that might otherwise require me to set aside an hour or two is causing me to make decisions about where to tinker, and which ideas to chase down, much faster.
It’s not so much a conspiracy theory as it is a perfect alignment of market forces. Which is to say, you don’t need a cackling evil mastermind to get conspiracy-like outcomes, just the proper set of deleterious incentives.
Atlas confuses me. Firefox already puts Claude or ChatGPT in my sidebar and has integrations so I can have it analyze or summarize content or help me with something on the page. Atlas looks like yet another Chromium fork that should have been a browser extension, not a revolutionary product that will secure OpenAI's market dominance.
This article is deep, important, and easily misinterpreted. The TL;DR is that a plausible business model for AI companies is centered around surveillance advertising and content gating like Google or Meta, but in a much more insidious and invasive form.
I found the article to be no more than ranting about something they are just projecting. The browser may not be for everyone, but I think there's a lot of value in an AI tool that helps you find what you're looking for, without shoving as many ads as possible down your throat, while summarizing content to your needs. Supposing OpenAI is not the monster that is trying to kill the web and lock you up, can't you see how that may be a useful tool?
Me too, and as the number and maturity of my projects have grown, improving and maintaining them all together has become harder by a factor I haven’t encountered before
At this point, my adoption of AI tools is motivated by fear of missing out or being left behind. I’m a self-taught programmer running my own SaaS.
I have memory and training enabled. What I can objectively say about Atlas is that I’ve been using it and I’m hooked. It’s made me roughly twice as productive — I solved a particular problem in half the time because Atlas made it easy to discover relevant information and make it actionable. That said, affording so much control to a single company does make me uneasy.
Not sure why this got downvoted, but to clarify what I meant:
With my repo connected via the GitHub app, I asked Atlas about a problem I was facing. After a few back-and-forth messages, it pointed me to a fork I might eventually have found myself — but only after a lot more time and trial-and-error. Maybe it was luck, but being able to attach files, link context from connected apps, and search across notes and docs in one place has cut a lot of friction for me.
Every professional involved in SaaS, the web, and online content creation thinks the web is a beautiful thing.
In reality, the rise of social media means the web failed a long time ago; it only serves a void not taken by mobile apps, and now LLM agents.
Why do I need to read everything about Taylor Swift on her website if I don't know a single song of hers? (I actually do.)
I don't want a screaming website telling me about her best new album ever, and her tours, if the LLM knows I don't like pop music. And the other way around: if you like her, you'd want a different set of information. A website can't do that for you.
OpenAI should be 100% required to rev share with content creators (just like radio stations pay via compulsory licenses for the music they play), but this is a weird complaint:
> “sometimes this tool completely fabricates content, gives me a box that looks like a search box, and shows me the fabricated content in a display that looks like a web page when I type in the fake search box”
If a human wrote that same article about Taylor Swift, would you say it completely fabricates content? Most “articles” on the web are just rewrites of someone else’s articles anyway and nobody goes after them as bad actors (they should).
Dash’s entire identity is bound up with narrating technology, translating its cultural shifts into moral parables. His record at actually building things is, at best, spotty. Now the LLM takes that role, absorbing, summarising, editorialising. And like Winer, he often reads like a guy who has never really made peace with the modern era and who isn't content to declare the final draft of history unless it has his fingerprints on it.
The machine is suddenly the narrator, and it doesn’t cite him. When he calls Atlas “anti-web,” he’s really saying it is “anti-author”.
In a way though, how much do we need people to narrate these shifts for us? Isn't the point of these technologies that we are able to do things for ourselves rather than rely on mediators to interpret it for us? If he can be outcompeted by LLMs, does that not just show how shallow his shtick is?
This is amazing. My entire web browser session state for every private and personal website I sign onto every day will be used for training data. It's great! I love this. This is exactly the direction humans should be going in to not self-destruct. The future is looking bright, while the light in our brains dims to eventual darkness. Slowly. Tragically. And for what purpose exactly. So cool.
Nobody likes the Torment Nexus [0] but everyone has to use it because that's where all the eyeballs are. Sometimes still attached.
[0] https://knowyourmeme.com/memes/torment-nexus
Who gets to decide on the exact definition of a “Torment Nexus”?
Presupposing whether everyone reading HN likes or dislikes something not even agreed on yet seems silly.
Seeing people work tirelessly to make The Matrix a reality is great. I can't wait!
To be fair, the matrix as presented in the films is pretty much unarguably way better than the real world in the films.
The movie that doesn't get enough credit at predicting the future, or what is now the present, is Captain America: The Winter Soldier. DOGE, Palantir, Larry Ellison's vision of nonstop AI surveillance, and all the data-sucking tech companies swearing fealty to the orange authoritarian are bringing the plot of that movie directly into reality, and I'm always surprised that it never gets mentioned.
I'm surprised there's no startup building a helicarrier
Ha. That's the most outlandish part of the plot. In terms of enforcement and control, Black Mirror's Metalhead episode seems the more likely vision, where the robotic dogs are comparable to drones.
On the other hand, Helicarrier (YC class of 2018) went under this spring.
When I ask 20-somethings whether they’ve seen The Matrix, the answer is usually ‘no’. They have little idea what they’re working towards, but are happy to be doing it so they have something to eat.
Yet they have seen Black Mirror and the like, which also portrays the future we’re heading towards. I’d argue even better, because The Matrix is still far off.
But also, it’s not the 20-somethings building this; the people making decisions are in their 40s and 50s.
The Matrix was inspired by the Gnostic schools of thought. The authors obviously knew loads about esoteric spirituality and the occult sciences. People have been suggesting that we are trapped in a simulacrum / matrix for over two-thousand years. I personally believe The Matrix was somewhat of a documentary. I'm curious - why do you think a concept such as presented in The Matrix, is still far off?
As the siblings said.
I think we are close to WALL-E or Fifteen Million Credits, maybe even almost at the Oasis (as seen by IOI). But we have made little progress in direct brain stimulation of the senses. We are also extremely far from robots that can do unsupervised complex work (work that requires a modicum of improvisation).
Of course we might already be in the Matrix or a simulation, but if that’s the case it doesn’t really change much.
Fifteen Million Credits is already here.
The difference is that we don't have credits the way the characters do in Brooker's universe; we have social clout in the form of upvotes, likes, hearts, retweets, streaming subs, etc. most of which are monetised in some form or are otherwise a path to a sponsorship deal.
The popularity contest this all culminates in is, in reality, much larger in scale than what was imagined in Black Mirror. The platform itself is the popularity contest.
Some would argue that most stories in Western societies are echoing the Bible. The Matrix is in many ways the story of Jesus (Morpheus is John the Baptist).
Brain/computer interface that completely simulates inputs which drive perceptions which are indistinguishable from reality. At least, that’s what is portrayed in the movie. I’m not OP but this to me seems far off.
Fair point and thank you for sharing it! It definitely does feel far off in that aspect. I suppose though, that if we are all trapped in a false reality it is impossible to know (without escaping the false reality) how advanced base reality actually is. I always interpreted the whole jacking into the Matrix thing, metaphorically, but with a literal interpretation the OP's comment makes much more sense to me. Thanks again!
The Matrix was a direct rip-off of the Ghost in the Shell series, which did a much better job of capturing the essence of the issue in depth (the writers almost admit to it, and there are videos out there that do scene-by-scene comparisons). Ghost in the Shell is majorly influenced by Buddhism. While there are obvious overlaps with Platonism (which forms the core of Gnosticism: salvation through knowledge of the real world, while the current world ~= suffering and not real), it wouldn't be correct to attribute Gnosticism as the influence behind The Matrix.
I haven't watched Black Mirror, but Silo is the next best thing I've seen after The Matrix, and the scenario doesn't seem far off either.
I enjoyed Silo, but I think in the real world, completely destroying the world's ecosystem and a fraction of mankind surviving in tiny isolated bunkers for generations is more fantasy than scifi...
> and for what purpose exactly.
The end goal for AI companies has always been to insert themselves into every data flow in the world.
Why provide a service when there is rent to be extracted by being a middleman?
They also need an outlet for all the garbage they generate, hence the transformation of Sora into a shitty social network.
You're absolutely right!
This is what the real Voight-Kampff test turned out to be :)
Perplexity released theirs earlier, and as far as I know, they do not use any of your data like that for training. It's really a shame if that's how OpenAI is using your data. I was going to try their coding solution, but now I'm just flat out blacklisting them and I'll stick to Claude. For whatever reason Claude Code just understands me fully.
Yes, luckily the Perplexity browser doesn't use your data for training, only:
"Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads"
https://techcrunch.com/2025/04/24/perplexity-ceo-says-its-br...
Burn them, burn them all to hell!
Option 1: Training data
Option 2: Ad data
Option 3: None of the above
I'm going with the first two because I like to contribute my data to help out a trillion dollar company that doesn't even have customer support :)
Option 3 is probably "engagement numbers go up -> investors happy"
I think it's more like: investors permanently unhappy because they were promised ownership of God, and now that we're built out, they're getting a few percent a year at best. Squeeze extra hard this quarter to get them off the Board's backs for another couple of months.
Investors are never happy long term because even if you have a fantastic quarter, they'll throw a tantrum if you don't do it even better every single time.
I have no plans to download Atlas either, but I think your browsing isn't used for training unless you opt in.
> By default, we don’t use the content you browse to train our models. If you choose to opt-in this content, you can enable “include web browsing” in your data controls settings. Note, even if you opt into training, webpages that opt out of GPTBot, will not be trained on.
https://openai.com/index/introducing-chatgpt-atlas/
Until the next update, when they conveniently have a "bug" that enables it by default
Knowing this is the direction things were headed, I have been trying to get Firefox and Google to create a feature that archives your browser history and pipes a stream of it in real time so that open-source personal AI engines can ingest it and index it.
https://connect.mozilla.org/t5/ideas/archive-your-browser-hi...
AFAICS this has nothing to do with "open-source personal AI engines".
The recorded history is stored in a SQLite database and is quite trivial to examine[0][1]. A simple script could extract the information and feed it to your indexer of choice. Developing such a script isn't a task for an internet browser engineering team.
The question remains whether the indexer would really benefit from real-time ingestion while browsing.
[0] Firefox: https://www.foxtonforensics.com/browser-history-examiner/fir...
[1] Chrome: https://www.foxtonforensics.com/browser-history-examiner/chr...
Due to the dynamic nature of the Web, URLs don't map to what you've seen. If I visit a URL at a certain time, the content I see is different than the content you see or even if I visit the same URL later. For example, if we want to know the tweets I'm seeing are the same as the tweets you're seeing and haven't been subtly modified by an AI, how do you do that? In the age of AI programming people, this will be important.
So you're one of those people trying to attach history to everything!
Yeah I am sure lots of people want their pornhub history integrated into AI...
If that is the "future" (gag), we better be able to opt out
Why not Chrome Devtools MCP?
I understand GP as wanting to browse normally and have that session's history feed into another indexing process via some IPC like D-Bus. It's meant to receive human events from the browser.
Chrome Devtools MCP on the other hand is a browser automation tool. Its purpose is to make it trivial to send programmed events/event-flows to a browser session.
Personally I think it would be awesome if we could browse a 1999 version of the web. Better than the crap we have today, even if it is all just AI generated.
https://wiby.org/
Hey now, don’t forget how they will just be able to hand over everything you’ve ever done to the government! We know no government or power would ever abuse that.
"You're absolutely right!"
Hate to be the dum-dum, but what's leading to humanity's self-destruction here? Loss of privacy? Outsized corporate power? Or, is this an extreme mix of hyperbole and sarcasm?
Well, you could always focus on the ridiculous environmental impact of LLMs. I read once that asking ChatGPT used 250x as much energy as just googling. But now Google has incorporated LLMs into search, so…
I grew up on the banks of the Hudson River, polluted by corporations dumping their refuse into it while reaping profits. Anthropic/openai/etc are doing the same thing.
Creating an impermeable barrier between truth and real-time slop generation.
How can you know what you're reading is true when you can't verify what's happening out there?
This is true from global events to a pasta recipe.
That's clearer. I can see how that can be a problem, but destruction of humanity? I think of this as a fun change in circumstance at best and a challenge at worst, rather than a disaster.
Asymmetry of power creates rulers and the ruled. Widespread availability of firearms helped to partly balance out one aspect (monopoly on violence) and the wide availability of personal computers plus the Internet balanced out another (monopoly on information). Only part left is the control of resources (food, housing, etc.).
AI is destabilizing the current balance of knowledge/information which creates the high potential for violence.
Consider a society where everyone has a different reality about something shared normally.
Societies are built upon unspoken but shared truths and information (i.e. the social contract). Dissolve this information, dissolve or fragment the society.
This, coupled with profiling and targeting will enable fragmentation of the societies, consolidation of power and many other shenanigans.
This also enables continuous profiling, opening the door for "preemptive policing" (Minority Report style) and other dystopian things.
Think about Cambridge Analytica or election manipulation, but on steroids.
This is dangerous. Very dangerous.
This. Power and control are only viable at scale when the aforementioned tactics are wielded with precision by "invisible hands".
History has proved that keeping society stupid and disenfranchised is essential to control.
Did you know that in the 1600s the King of England banned coffee?
Simple: fear of coffeehouses propagating better ideas and a more intense social fraternity.
"Patrons read and debated the news of the day in coffeehouses, fueled by caffeine; the coffeehouse became a core engine of the new scientific and philosophical thought that characterized the era. Soon there were hundreds of establishments selling coffee."
https://worldhistory.medium.com/why-the-king-of-england-bann...
Accountwalled, but somehow even this is subject to disputation of the detail. https://coffeeinquirer.com/was-coffee-ever-illegal-in-the-uk...
(the late 1600s was something of a fraught time for England and especially for Charles II, who had spent some time in exile due to the monarchist loss of the English Civil War)
I know Ottomans did it at one time but England? Now that's new to me.
Thanks for sharing!
But the impact of AI is going to be even worse than that.
For virtually all of human history, there weren't anywhere near so many of us as there are now, and the success and continuation of any one group of humans wasn't particularly dependent on the rest. Sure, there were large-scale trade flows, but there were no direct dependencies between farmers in Europe, farmers in China, farmers in India, etc. If one society collapsed, others kept going.
The worst historical collapses I'm familiar with - the Late Bronze Age Collapse and the fall of the Roman Empire - were directly tied to larger-scope trade, and were still localized beyond comparison with our modern world.
Until very recently, total human population at any given point in history has been between 100 and 400 million. We're now past 8 billion. And those 8 billion people depend on an interconnected global supply chain for food. A supply chain that, in turn, was built with a complex shared consensus on a great many things.
AI, via its ability to cheaply produce convincing BS at scale, even if it also does other things, is a direct and imminent threat to the system of global trade that keeps 8 billion human beings fed (and that sustains the technology base which allows for AI, along with many other things).
I don't want to invalidate your viewpoint: I'll just share mine.
The shared truth that holds us together, that you mentioned, in my eyes is love of humanity, as cliché as that might sound. Sure it wavers, we have our ups and downs, but in the end, every generation is kinder and smarter than the previous. I see an upward spiral.
Yes, there are those of us who might feel inclined to subdue and deceive, out of feelings of powerlessness, no doubt. But, then there are many of us who don't care for anything less than kindness. And, every act of oppression inches us toward speaking and acting up. It's a self-balancing system: even if one falls asleep at the wheel, that only makes the next wake-up call more intense.
As to the more specific point about fragmented information spaces: we always had that. At all points in history we had varying ways to mess with how information, ideas and beliefs flowed: for better and for worse. The new landscape of information flow, brought about by LLMs, is a reflection of our increasing power, just as a teenager is more powerful than a pre-teen, and that brings its own "increased" challenges. That's part of the human experience. It doesn't mean that we have to ride the bad possibilities to the complete extreme, and we won't, I believe.
Thanks for your kind reply. I wanted to put some time aside to reply the way your comment deserves.
My personal foundations are not very different from yours. I don't care about many of the things people care about. Being a human being and having your heart in the right place is a good starting point for me, too.
On the other hand, we need to make a distinction between people who live (ordinary citizens) and people who lead (people inside government and managers of influential corporations). There's the saying "power corrupts", now this saying has scientific basis: https://www.theatlantic.com/magazine/archive/2017/07/power-c...
So, the "ruling class", for the lack of better term, doesn't think like us. I strive to be kinder every day. They don't (or can't) care. They just want more power, nothing else.
For the fragmented spaces, the challenge is different from the past. We humans are social animals, and we were always in social groups (tribes, settlements, towns, cities, countries, etc.); we felt we belonged. As the system got complex, we evolved as a result. But the change was slow, so we were able to adapt over a couple of generations. From the 80s to the 00s it was faster, but we managed it somehow. Now it's exponentially faster, and the more primitive parts of our brains can't handle it as gracefully. Our societies, ideas and systems are strained.
Another research studying the effects of increasing connectivity found that this brings more polarization: https://phys.org/news/2025-10-friends-division-social-circle...
Another problem is that, unfortunately, not all societies, or all parts of the same society, evolve at the same pace into the same kinder, more compassionate human beings. Radicalism is on the rise. It doesn't have to be violent, but some parts of the world are becoming less tolerant. We can't ignore this. See world politics. It's... complicated.
So, while I share your optimism and light, I also want to underline that we need to stay vigilant. Because humans are complicated. Some are naive, some are defenseless and some just want to watch the world burn.
Instead of believing that everything's gonna be alright eventually, we need to do our part to nudge our planet in that direction. We need to feed the wolf which we want to win: https://en.wikipedia.org/wiki/Two_Wolves
Argh, I lost my reply due to a hiccup with my distraction-blocking browser extension. I'll try and summarize what I wanted to say. I'll probably be more terse than I originally would have been.
I appreciate your thoughtful reply. I too think that our viewpoints are very similar.
I think you hit the nail on the head about how it's important that positivity doesn't become an excuse for inaction or ignorance. What I want is a positivity that's a rally, not a withdrawal.
Instead of thinking of power as something that imposes itself on people (and corrupts them), I like to think that people tend to exhibit their inner demons when they're in positions of power (or, conversely, in positions of no power). It's not that the position does something to them; it's that they prefer to express their preexisting imbalance (inner conflict) in certain ways when they're in those circumstances. When in power, the inner imbalance manifests as a villain; when out of power, it manifests as a victim.
I think it's important to say "we", rather than "us and them". I don't see multiple factions with fundamentally incompatible needs. Basically, I think that conflict is always a miscommunication. But in no way do I mean that one should cede to tyranny or injustice. It's just that I want to keep in mind that whenever there's fighting, it's always in-fighting. Same for oppression: it's not them hurting us, but us hurting us: an orchestration between villains and victims. I know it's triggering for people when you humanize villains and depassify victims, but in my eyes we're all human and all powerful, except we pretend that the 1% is super powerful while the 99% are super powerless.
I had a few more points I wanted to share, but I have to run. Thanks for the conversation.
Google gave us direct access to much of the world's knowledge base, then snatched it away capriciously and put a facsimile of it behind an algorithmic paywall that they control at the whims of their leadership, engineering, or benefactors. The despair any rational person will feel upon realizing that they lobotomized the overmind that drove Information Age society might just be traumatic enough, in aggregate, to set off a collapse. So, yes. Destruction of humanity (at least as we know it) incoming. That's without the super AI.
You turn off your computer and go outside.
I already do that a lot, and not use any generative AI tools for any reasons to begin with.
Let's convince the remaining 8 billion people.
Why convince them? If they never go outside, they’ll just be inside anyway. You won’t interact with them. Metaphorically. Real life is a place, not an idea.
You're interacting with real people who don't see your face or hear your voice all day, and you affect each other.
Real life is a place encompassing "cyberspace", too. They're not separate but intertwined. You argue that the people affecting your life are the ones closest to you, yet you continuously interact with the ones farthest from you distance-wise, and they affect your day in the moment.
Maybe this is worth thinking about.
That would be fine if they didn't vote.
As bad as the current US administration is, a day in my life today is not really any worse from what it has been under past Presidents.
People who want billions of people to be inside and compliant, want those people's vote to go a certain way (at least, while that is even still a thing). Once that part stops being a thing, you stop being allowed to be outside, as that could be a problem.
I certainly can and do that. Can you please convince the remaining 8 billion to do the same?
Based on, e.g., the election behavior of populations, what you describe is naivety on the level of maybe my 5-year-old son. Can you try to be a bit more constructive?
More constructive!?
Ok, let’s break it down for you: What all 8 billion people in the world think does not matter to you. There are people out there cutting heads off in the name of religion, or people who think their dictator is a divine being.
People outside your country have little effect on your daily life.
Even people within your country have a weak effect on your daily life.
What other people believe only really matters for economic reasons. Still, unless you are very dependent on social safety nets even they don’t matter that much. You just find more money and carry on.
You might think that more propaganda will result in people voting for bad politicians, but it is actually possible to have too much propaganda. If people become aware of how easily fake content is generated, which they are rapidly realizing in the age of AI, the result is they become skeptical of everything, and come up with their own wild theories on what the truth really is.
The people whose thoughts matter most are the people you interact with on a daily basis, as they have the most capability to alter your daily life physically and immediately. Fortunately you can control better who you surround yourself with or how you interact with them.
If you turn off the conversation, the world will appear pretty indifferent even to things that seem like a big deal on social media.
You said: "You might think that more propaganda will result in people voting for bad politicians"
In the US at least, the people who vote the most are typically older people, 40+, and those people have very little experience with tech and AI and are easily tricked by fake crap. Add AI to the mix, and they literally have no perception of the real world.
40s have very little experience with tech? Those were the people who practically invented tech as we know it today. Most AI researchers are in their 40s and 50s, and have been experimenting with machine learning and AI for the past decades.
I think your comment is just very ageist. You stereotype everyone who is middle age and above as barely lucid nursing home seniors.
Ironically I would say it is young 20 somethings and below who have no clue how a computer or software even works. Just a magic sheet of glass or black box that spits out content and answers, and sometimes takes pictures and video.
This is kind of true - the media environment can be both overwhelming and irrelevant. But eventually it hits. I have some friends who are trans and very familiar with what a hostile propaganda campaign can do to your healthcare.
(also, has everyone forgotten COVID?)
It's ironic that the front page of HN had a related article:
- Study finds growing social circles may fuel polarization: https://phys.org/news/2025-10-friends-division-social-circle...
tl;dr: The more close friends people have, the more polarized societies become.
It's easy to profile people extensively, and pinpoint them to the same neighborhood, home or social circle.
Now, what happens when you feed "what you want" to these groups of people? You can plant polarization with neighbourhood or household precision and control people en masse.
Knowing or seeing these people doesn't matter. After some time you might find yourself proverbially cutting heads off in the name of what you believe. We used to call these flame wars back then. Now it's called doxxing and swatting.
The people you don't know can make life very miserable in a slow-burning way. You might not be feeling it, but this is the core principle of slowly boiling the frog.
A lot of assumptions, some could be correct, some are plainly not.
Your idea of living in society is something very different from my idea, or the European idea (and reality). You don't see how everything is interconnected, how ripple effects and secondary/tertiary effects come back again and again; I guess you don't have kids. Well, live your life, if you think money can push through and solve everything important. I'd call it a sad, shortsighted life if I cared, but that's just me.
Absolute control over what people think and know, which is sort of absolute control overall with power to normalize anything, including what we consider evil now.
Look at the really powerful people of this world: literally every single one of them is a badly broken piece of shit (to be extremely polite), a control freak with a fucked-up childhood, overcompensating for a missing or broken father figure, often malicious, petty, vengeful, and feeling above the rest of us.
The whole reason for democracy, since ancient times, is to limit how much power such people have. The competent, sociopathic subset of the above will always rise to the top of society regardless of the type of system, so we need good mechanisms to prevent them from becoming absolute lifelong dictators (and we need to prevent them from attaining immortality, since that would be our doom on another level).
People haven't changed over the past few thousand years, and any society that failed at the above eventually collapsed in very bad ways. We should not want the same for the current global civilization and should, for a change, learn from past mistakes, unless you like the idea of a few decades of global warfare and a few billion deaths. I certainly don't.
Being "anti-web" is the least of its problems.
This thing is an absolute security nightmare. The concept of opening up the full context of your authenticated sessions in your email, financial, healthcare or other web sites to ChatGPT is downright reckless. Aside from personal harm, the way they are pushing this is going to cause large scale data breaches at companies that harbour sensitive information. I've been the one pushing against hard blocking AI tools at my org so far but this may have turned me around for OpenAI at least.
Let’s make a not-for-profit, we can make rainbows and happiness.
Yay!! Let’s all make a not-for-profit!!
Oh, but hold on a minute, look at all the fun things we can do with lots of money!
Ooooh!!
I totally agree.
Clearly, an all-local implementation is safer, and using less powerful local models is the reasonable tradeoff. Also make it open source for trust.
All that said, I don’t need to have everything automated, so we also have ‘why even build it’ legitimate questions to ask.
Yeah, I think there are profound security issues, but I think many folks dug into the prompt injection nightmare scenarios with the first round of “AI browsers”, so I didn’t belabor that here; I wanted to focus on what I felt was less covered.
I mean... Edge has had Copilot integrated for years, and Edge actually has users, unlike Atlas. Not sure why people are getting shocked now...
It's bad too, yes. But not as bad, because MS is a profitable company with real enterprise products, so it has some reputation and compliance to maintain. SamAI is a deeply unprofitable, mostly B2C-oriented company, with no other products to fall back on except the LLM. So it's more probable that Sam will be exploiting user data. But in general both are bad; that's why people need to use Firefox, but never actually do, due to some misconception from a decade ago.
>MS is a profitable company with real enterprise products, so they have some reputation and compliance to maintain.
On the contrary, it could be the case that Microsoft ritually sacrifices a dozen babies each day in their offices and it would still be used because office.
Microsoft calls everything copilot. It is unclear what they had back then under that name, and what they will have under it.
"This bad no good thing is already happening, so why are you complaining"
Is this the security flaw thingy that stores OAuth or Auth0 tokens in an SQLite database with overly permissive read privileges on it?
no I'm talking about the general concept of having ChatGPT passively able to read sensitive data / browser session state. Apart from the ever present risk they suck your data in for training, the threat of prompt injection or model inversion to steal secrets or execute transactions without your knowledge is extreme.
Right, the software is inherently a flaming security risk even if the vendor were perfectly trustworthy and moral.
Well, unless the scenario is moot because such a vendor would never have released it in the first place.
People are misinterpreting the gui/cli thing.
There's 2 dimensions to it: determinism and discoverability.
In the Adventure example, the UX is fully deterministic but not discoverable. Unless you know the correct incantation, there is no inherent way to discover it besides trial and error. Most CLIs are like that (and IMHO phones with 'gestures' are even worse). That does not make a CLI inefficient, unproductive, or bad. I use CLIs all the time, as I'm sure Anil does; it just makes them more difficult for a novice to approach.
But the second aspect of Atlas is its non-determinism. There is no 'command' that predictably always 'works'. You can engineer towards phrasings that are more often successful, but you can't reach full fidelity.
This leeway is not without merit. In theory the system is thus free to 'improve' over time without the user needing to. That is something you might find desirable or not.
> In theory the system is thus free to 'improve' over time without the user needing to.
It could just as well degrade, improvement is not the only path.
Yes, agreed! Autocomplete, or search suggestions, is the ultimate discoverability feature of the typing interaction.
Exactly. Equating CLI and LLM text input is completely wrong. Hard to see past that huge mistake.
Although there is one aspect in which the LLM interface still isn't discoverable - what interface does it have to the world.
Can I asked Alexa+ to send a WhatsApp? I have no idea because it depends if the programmers gave it that interface.
I opened ChatGPT on my Mac this morning and there was an update.
I updated ChatGPT and a little window popped up asking me to download Atlas. I declined as I already have it downloaded.
There was another window, similar to the update available window, in my chat asking me to download Atlas again...I went to hit the 'X' to close it and I somehow triggered it, it opened my browser to the Atlas page and triggered a download of Atlas.
This was not cool and has further shaken my already low confidence in OpenAI.
I don't think I've ever encountered a technology pushed quite as hard on unwilling users as AI.
Perplexity is doing something similar: it tried to force-install Perplexity Comet, Mac edition, onto my iPad... and then failed and exited.
They are quite aggressive at making people install this.
The only confidence I have in OpenAI at this point is that they will be using scummy tricks like that all the time. What have they ever done to earn confidence in the other direction?
They can't even lie and blame it on a programming fuckup because they'd have to say AI driven code is buggy.
I can't speak to the particular browser application. I haven't installed it and probably never will, but the language around text interfaces makes the OP sound... uninformed.
Graphical applications can be more beautiful and discoverable, but they limit the user to only actions the authors have implemented and deployed.
Text applications are far more composable and expressive, but they can be extremely difficult to discover and learn.
We didn't abandon the shell or text interfaces. Many of us happily live in text all day every day.
There are many tasks that suffer little by being limited and benefit enormously by being discoverable. These are mostly graphical now.
There are many tasks that do not benefit much by spatial orientation and are a nightmare when needlessly constrained. These tasks benefit enormously by being more expressive and composable. These are still often implemented in text.
The dream is to find a new balance between these two modes and our recent advances open up new territory for exploring where they converge and diverge.
Am I the only one who interpreted OP as not being opposed to CLIs, TUIs, or GUIs at all? The topic wasn't "textual interface vs graphical interface" but "undocumented natural language vs documented query language" for navigating the internet.
In addition to the analogy of the textual interface used in Zork, we could say that it'd be like interacting with any REST API without knowledge about its specification - guessing endpoints, methods, and parameters while assuming best practices (of "natural-ness" kind). Do we really want to explore an API like that, through naive hacking? Does a natural language wrapper make this hacking any better? It can make it more fun as it breaks patterns, sure, but is that really what we want?
I'm not focused on this particular browser or the idea of using LLMs as a locus of control.
I haven't used it and have no intention of using it.
I'm reacting to the OP articulating clearly dismissive and incorrect claims about text-based applications in general.
As one example, a section is titled with:
> We left command-line interfaces behind 40 years ago for a reason
This is immediately followed by an anecdote that is probably true for OP, but doesn't match my recollection at all. I recall being immersed and mesmerized by Zork. I played it endlessly on my TRS-80 and remember the system supporting reasonable variation in the input commands.
At any rate, it's strange to hold up text based entertainment applications while ignoring the thousands of text based tools that continue to be used daily.
They go on with hyperbolic language like:
> ...but people were thrilled to leave command-line interfaces behind back in the 1990s
It's 2025. I create and use GUI applications, but I live in the terminal all day long, every day. Many of us have not left the command line behind and would be horrified if we had to.
It's not either/or, but the OP makes incorrect claims that text based interfaces are archaic and have long been universally abandoned.
They have not, and at least some of us believe we're headed toward a new golden age of mixed mode (Text & GUI) applications in the not-so-distant future.
Oh it's 2025 alright.
CLIs are still powerful and enjoyable because their language patterns settled over the years. I wouldn't enjoy using one of these undiscoverable CLIs that use --wtf instead of --help, or be in a terminal session without autocomplete and zero history. I build scripts around various CLIs and like it, but I also love to install TUI tools on my servers for quick insights.
All of that doesn't change the fact that computer usage moved on to GUIs for the general audience. I'd also use a GUI for cutting videos, editing images, or navigating websites. The author used a bit of tongue-in-cheek, but in general I'd agree with them, and I'd also agree with you.
Tbh, I also think the author would agree with you, as all they did was make an anecdote about an interface that is not far off from being as annoying to its users as Atlas is. And that's for a product that wants to rethink the web browser interface, mind you.
We are rapidly heading towards (or are already in) a future where the average household doesn't regularly use or even own a "computer" in the traditional sense, and a CLI is not just unused but entirely non-existent.
>We left command-line interfaces behind 40 years ago for a reason
No we didnt.
I think the charitable interpretation is that the author is referring to particular use-cases which stopped being served by CLIs.
Heck, just look at what's happening at this very moment: I'm using a textarea with a submit button. Even as a developer/power-user, I have zero interest in replacing that with:
He mentions this as a bad example of UX: "search web history for a doc about atlas core design"
I have the opposite view. I think text (and speech) is actually a pretty good interface, as long as the machine is intelligent enough (and modern LLMs are).
A hybrid will likely emerge. I work on a chat application, and it's pretty normal for the LLM to print custom UI as part of the chat. Things like sliders, dials, selects, and calendars are just better as a GUI in certain situations.
I once saw a demo of an AI photo-editing app that displays sliders next to the light sources in a photo, letting you dim or brighten the intensity of each individual light source. That feels to me like the next level of user interface.
A TUI client (with some embedded CLI) would perfectly work for HN.
That's still not a command-line interface. An ed-based mailer is a command-line interface: what you're describing sounds more like *shudder* vi.
That just doesn't register with some people.
Some TUI programs can embed a small CLI, like Midnight Commander and others. Or they can externally call commands and shells, or even pipe output. Ed itself, vi, slrn...
Also, some commenters here at HN state that the CLI/TUI is just the fallback option... that's ridiculous. Nvi/vim, entr, make... can autocompile (and autocomplete too, with some tools) a project upon writing any file in a directory, thanks to the entr tool.
But the article is coming from a decisively antagonistic angle though.
>This was also why people hated operating systems like MS-DOS, and why even all the Linux users reading this right now are doing so in a graphical user interface.
What is the significance of "even all the Linux users"? First of all, it's probably incorrect because of the "all" quantifier. I went out of my way to look at the website via the terminal to disprove the statement, so it's now clearly factually incorrect.
Second, what does hate have anything to do with this? Graphical user interfaces serve different needs than text interfaces. You can like graphical user interfaces and specifically use Linux precisely because you like KDE or Gnome. You can make a terrible graphical user interface for something that ought to be a command line interface and vice versa. Vivado isn't exactly known for being easy to automate.
Third, why preemptively attack people as nerds?
>There were a tiny handful of incredible nerds who thought this was fun, mostly because 3D graphics and the physical touch of another human being hadn't been invented yet.
I mean, this comes off as an incredible strawman. After all, who wouldn't be excited by computers in an era when they were the hot new thing? Computers were exciting not because they had text interfaces; they were fun because they were computers. It's like learning to drive on the highway for the first time.
The worst part by far is the unnecessary insinuation though. It's the standard anti-intellectual anti-technology stereotype. It creates hostility for absolutely no reason.
If we take the total number of computer users globally and look at who uses a GUI vs a CLI, the latter will be a teeny-tiny fraction.
But most of those will likely be developers, that use the CLI in a very particular way.
If we now subdivide further, and look at the people that use the CLI for things like browsing the web, that's going to be an even smaller number of people. Negligible, in the big picture.
Don't forget to count people who require screen readers; this less vocal minority often depends on CLI tools for interacting with the computer.
The web wasn't made for cli. Gopher was.
If anything, Claude Code's success disproved this.
It's actually an interesting example, because unlike Warp that tries to be a CLI with AI, Claude defaults to the AI (unless you prefix with an exclamation mark). Maybe it says more about me, but I now find myself asking Claude to write for me even relatively short sed/awk invocations that would have been faster to type by hand. The uncharitable interpretation is that I'm lazy, but the charitable one I tell myself is that I don't want to context-switch and prefer to keep my working memory at the higher level problem.
In any case, Claude Code is not really CLI, but rather a conversational interface.
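To make the "relatively short sed/awk invocations" concrete, here's a hypothetical example of the kind of one-liner being delegated to Claude (the sample data and the task are made up for illustration):

```shell
# Sample task on made-up data: swap the two comma-separated fields on each line.
printf 'alice,42\nbob,7\n' | awk -F, '{ print $2 "," $1 }'
# → 42,alice
#   7,bob

# The sed spelling of the same transformation, capturing both fields:
printf 'alice,42\nbob,7\n' | sed -E 's/^([^,]*),([^,]*)$/\2,\1/'
```

Trivial to type by hand, which is exactly the point: the cost isn't the typing, it's the context switch away from the higher-level problem.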
Claude Code is a TUI (with "text"), not a CLI (with "command line"). The very point of CC is that you can replace a command line with human-readable texts.
Let's not be overly reductive, Claude Code is a TUI with a CLI for all input including slash commands.
You may think that's pedantic but it really isn't. Half-decent TUIs are much closer to GUIs than they are to CLIs because they're interactive and don't suffer from discoverability issues like most CLIs do. The only similarity they have with CLIs is that they both run in a terminal emulator.
"htop" is a TUI, "ps" is a CLI. They can both accomplish most of the same things but the user experience is completely different. With htop you're clicking on columns to sort the live-updating process list, while with "ps" you're reading the manual pages to find the right flags to sort the columns, wrapping it in a "watch" command to get it to update periodically, and piping into "head" to get the top N results (or looking for a ps flag to do the same).
Claude Code is a Terminal User Interface, not a Command Line Interface.
Well, it is if you just run
claude -p "Question goes here"
As that will print the answer only and exit.
But that's not how it's typically used, it's predominantly used in TUI mode so the popularity of CC doesn't tell us anything about popularity of the CLI.
Hi, sorry for the unrelated reply, but I wanted to ask you about a comment you made 6months back about archiving Gamasutra posts. I came across it while searching HN for "gamasutra".
I'd bookmarked a lot of Gamasutra articles over the years and am kinda bummed out that I can't find any of them now that the site has shifted. You mentioned having a collection of their essays? Is there any way to share or access them?
You know of many people who browse the web using CLI?
Yeah, maybe he did, but I didn't. I use GUIs under protest.
I think there is a misunderstanding who is meant by "we"
I mean, it's clear he means for the majority of users and OSes... not the HN crowd specifically.
Also, to be clear, I'm mostly goofing about CLIs, and, as I mentioned in the piece, I use one every day. But yes, there are four or five billion internet users who don't and never will. And CLIs are a poor user interface for 99+% of the tasks that people accomplish on computing devices, or with browsers, which is pertinent to the point I was making.
If I’d anticipated breaching containment and heading towards the orange site, I may not have risked the combination of humor and anything that’s not completely literal in its language. Alas.
Anyone normal knew what you meant
99% of people did, that's the context here
Long live Doug McIlroy!
I think that take is pretty out of touch since "command-line" interfaces are seeing a massive resurgence now that we have LLMs.
Came here to say this. As a software dev I'm deeply offended lol
Exactly, the whole world runs on CLI based software.
Well, it's not just that (not that I disagree with the author in general, but I do on this point). Point-and-click interfaces _are_ objectively worse interfaces for power use. I've been stuck selecting a group of files, moving them, double-clicking and renaming, selecting another group, and so on, all the while knowing that I could write a script to do all of this much faster if I had access to a command-line interface. Point-and-click is just easier to get started with.
The comparison with Zork and the later comment about having to "guess" what to input to get a CLI to work were also bizarre. He's obviously stretching really hard to make the analogy work.
Don't get me wrong, I'm not arguing that the expansion of GUI-based interfaces wasn't a good thing. There are plenty of things I prefer to interact with that way, and the majority of people wouldn't use computers if CLIs were still the default method. But what he's describing is literally not how anyone ever used the command line.
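As a sketch of the kind of script meant above (the paths and naming scheme are made up for the demo), one loop replaces the whole select/move/double-click/rename cycle:

```shell
# Batch version of "select files, move them, rename one by one":
# archive every report-*.txt with a suffix, leaving other files untouched.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/report-1.txt" "$src/report-2.txt" "$src/notes.md"

for f in "$src"/report-*.txt; do
    base=$(basename "$f" .txt)
    mv "$f" "$dst/${base}-archived.txt"
done

ls "$dst"   # the two renamed report files; notes.md stays put in $src
```

Two files or two thousand, the script costs the same; the GUI route scales linearly with your patience.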
Also, most of the Infocom games had improved well past Zork by the 80s. In the 90s, the amateur Inform 6 games made the Infocom originals look almost like the amateur ones, because a lot of them were outstanding and could run on underpowered 16-bit machines like nothing.
Ask your non-dev peers if they know what the command line is and if they have ever used or even seen one, especially when most people use the web on their smartphone.
Network Engineers, Systems Engineers, Devops.
Anyone who deals with any kind of machine with a console port.
CLIs are current technology, that receive active development alongside GUI for a large range of purposes.
Heck, Windows currently ships with 3 implementations: Command Prompt, PowerShell, AND Terminal.
And how do you open those three terminals? Or do you boot in DOS mode?
The CLI is ALWAYS the fallback when nothing else works (except as a fetish for people on HN). Even most devs use IDEs most of the time.
I recently demonstrated ripgrep-all with fzf to search through thousands of PDFs in seconds on the command line and watched my colleague’s mind implode.
Run iomenu instead of fzf, it might run faster than the Go binary.
I'm aware. Just having a bit of fun. Obviously the vast majority of computer users don't even know what a command line is.
I am still confused in what way this is "anti-web". Is it actually harming the current web, or just providing a bad interface to it?
> The amount of data they're gathering is unfathomable.
The author suggests that GPT continuously reads information like typed keystrokes, etc. I don't see why that's implied. And it wouldn't be new either, with tools like Windows Recall.
> And it would go on like this for hours while you tried in vain to guess what the hell it wanted you to type, or you discovered the outdoors, whichever came first.
The part about Zork doesn't make sense to me. As I understand it, text-based adventure games are actually quite lenient with the input you can give, offering multiple options for the same action. Additionally, certain keywords are "industry standard" in the same way that you walk using "WASD" in FPS games, so much so that one became the title of the documentary "Get Lamp". Given players' knowledge of similar mechanics in other games, you can even argue that providing these familiar commands is part of the game design.
It seems to me that the author never played a text-based adventure game and is just echoing whatever he heard. Projects like ELIZA from the mid-1960s prove that text-based interfaces have been able to feel natural for a long time.
Text has a high information density, but natural language is a notoriously bad interface for certain things. Like giving commands, therefore we invented command lines with different syntax and programming languages for instructing computers what to do.
Have you actually played these games? I put in some hours on Hitchhiker's Guide, and it was anything but natural. Maybe once you get far enough in the game and learn the language that's effective it gets easier, but I never got there. You wake up in the dark and have to figure out how even to turn on the light. Then you have to do a series of actions in a very specific order before you can get out of your bedroom.
Figuring it all out is part of the fun, but outside the context of a game it would be maddening.
As for Eliza, she mostly just repeats back the last thing you said as a question. “My dog has fleas.” “How does your dog having fleas make you feel?”
> Figuring it all out is part of the fun,
Which is why it's done that way. Other text-based games where the focus is not on puzzling out what to do next (like roleplaying MUDs) have a more strict and easily discoverable vocabulary.
This would be like saying using programming languages is terrible because Brainfuck is a terrible programming language.
I loved text based adventure games when I was growing up, but I also thought this comparison was incredibly apt and also found it very funny. I’m a bit surprised people are so offended by this article, have we lost the ability to read something with nuance?
It's a bit weasel-y to refer to criticism as just "people being offended".
> thought this comparison was incredibly apt and also found it very funny
I'm happy for you, but I didn't.
To "read something with nuance" is to be open to nuance that is already present in the writing. This writing is not nuanced!
Perhaps you're asking us to make an effort to be more tolerant of weak writing. That's a fair request when the writer is acting in good faith. But to mock nerds for liking text adventures when you clearly do not like them yourself is not acting in good faith.
I understood this as a comment on the fact that a text interface has lower affordance than a graphical interface. A command line doesn't suggest what you can do in the way a graphical interface can. So even with industry-standard keywords, a user has to know/learn them. I see it as similar to the buttons-versus-screens debate in car interfaces.
> As I understand it
You should actually try and play zork and report back.
https://classicreload.com/zork-i.html
It was certainly jarring reading that immediately after accusing someone else of having never played the games.
(Who is clearly playfully poking fun at something he enjoyed playing but can recognize as being constrained by computers of the era and no longer a common format of games thanks to 3D rendering).
I don't think 3D rendering replaces text adventures any more than movies replace books.
I mean, for some people they do, but those people never liked books to begin with; they just didn't have an alternative.
The Atlas implementation isn't great, but I'll pick something that tries to represent my interests every time. The modern commercial web is an adversarial network of attention theft and annoyance. Users need something working on their behalf to mine through the garbage to pull out the useful bits. An AI browser is the next logical step after uBlock.
It seems naive to expect a product by a company that desperately needs a lot of revenue to cover even a tiny part of investor money that it burned—where said product offers unprecedented opportunity to productize users in ways never possible before, and said company has previously demonstrated its disregard for ethics—to represent user’s interests.
It’s unlikely LLM operators can break even by charging per use, and it should be expected that they’ll race to capture the market by offering “free” products that in reality are ad serving machines, a time-tested business model that has served Meta and friends very well. The fact that Atlas browser is (and they don’t even hide it) a way to work around usage limits of ChatGPT should ring alarm bells.
Yes. I don't and won't use any OpenAI products, but the product category of "AI browser" is sorely needed.
Well articulated!
Completely agree. Consumers won’t pay for anything online, which means every business model is user-hostile. Use the web for five minutes without an ad blocker and it’s obvious.
Atlas may not be the solution but I love the idea of an LLM that sits between me and the dreck that is today’s web.
Completely unrelated to what you actually wrote about, but... this is the second time in a week that I've heard "dreck" used in English. Before, I never noticed it anywhere. The first was the "singing chemist" scene in Breaking Bad, and now in your writing. I wasn't aware English adopted this word from German as well. Weird that I never heard it until just now, while the scene I watched is already 15 or so years old...
As another commenter noted, it was loaned from Yiddish rather than German, although the two languages are very closely related. There are many Yiddish words in English that have come from the Jewish diaspora. Common ones I can think of are schlep (carry something heavy) and kvetch (complain loudly). Since this is Hacker News, the Yiddish word I think we use the most is glitch. Of course there are also words from Hebrew origin entering English the same way, like behemoth, kosher, messiah...
Very interesting, thanks! "Quetschen" in German actually means "to squeeze". So while apparently related, it has changed its meaning.
Merriam Webster dates the English word "dreck" to 1922, though it seems to come from the Yiddish drek and is therefore much older.
Baader-Meinhof phenomenon
Well, while I can never be sure about my own biases, I submit this particular instance is/was not an "illusion". I truly never heard the word "dreck" used in English before. Why do I know? I am a German native speaker, and I would have noticed it if I had ever heard it earlier, simply because I know its meaning in my native language, which would have made spotting it rather easy.
I also believe noticing the Baader-Meinhof phenomenon in the 90s is rather unsurprising, since the RAF was just "a few years" prior. However, "dreck", as someone else noted, has been documented since the early 20th century. So I don't think me noticing this only recently is a bias; rather, it's a true coincidence.
https://en.wikipedia.org/wiki/Frequency_illusion
>> The modern commercial web is an adversarial network of attention theft and annoyance
It feels like $10 / month would be sufficient to solve this problem. Yet, we've all insisted that everything must be free.
I now pay for:
- Kagi
- YouTube Premium
- Spotify Premium
- Meta ad-free
- A bunch of substacks and online news publications
- Twitter Pro or whatever it’s called
On top of that I aggressively ad-block with extensions and at DNS level and refuse to use any app with ads. I have most notifications disabled, too.
It is a lot better, but it’s more like N * $10 than $10 per month.
Certain Kagi LLM models neither store nor use conversation history for training. See their LLMs privacy policy.
https://help.kagi.com/kagi/ai/llms-privacy.html#llms-privacy
I'm not familiar with YouTube or Spotify premium, so this may be a dumb question.
But, doesn't Youtube Premium include Youtube Music? So why pay for Spotify premium too?
Pro Spotify: existing playlists and history, better artist info, better UI.
YouTube Music is both better and worse: the UI has some usability issues, and unfortunately it shares likes and playlists with the normal YouTube account. As a library it has lots of crap uploaded by YouTube users, often with wrong metadata, but thanks to that it also has some niche artists and recordings which are not available on other platforms.
To stop the same two companies from owning everything? YouTube Music and Apple Music are shameless anticompetitive moves, leveraging market dominance to move into other existing markets. (I'll afford more lenience to Apple Music, since iTunes was already huge, being the undisputed king of music sales before streaming subscriptions took off.)
I've also been using Spotify since before YouTube Music, or its predecessor that Google killed (as they do periodically), even existed.
Where do I cut the $10/month? No like seriously, I'd easily pay $10/month to never see another ad, cookie banner, dark pattern, or have my information resold again. As long as that $10 is promised to never increase, other than adjustments for inflation.
But I can't actually make that payment - except maybe by purchasing a paid adblocker - where ironically the best open source option (uBlock Origin) doesn't even accept donations.
You'd need to pay a lot more, because advertisers pay way more than $10 per month per user; you'd have to outpay the advertisers.
How much do advertisers pay per customer, and where can I find this analysis?
Meta publishes some interesting data along these lines in their quarterly reports.
I think the most telling is the breakdown of Average Revenue Per User per region for Facebook specifically [1]. The average user brought in about $11 per quarter while the average US/CA user brought in about $57 per quarter during 2023.
[1] https://s21.q4cdn.com/399680738/files/doc_financials/2023/q4... (page 15)
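For a rough sense of scale, the quarterly ARPU figures cited above translate to monthly numbers like this (a back-of-envelope sketch using the numbers as quoted in the comment, not independently verified):

```python
# Convert Meta's reported quarterly Facebook ARPU (as cited above)
# into approximate monthly figures.
arpu_worldwide_quarterly = 11.0  # USD per user per quarter, worldwide average
arpu_us_ca_quarterly = 57.0      # USD per user per quarter, US/Canada

monthly_worldwide = round(arpu_worldwide_quarterly / 3, 2)
monthly_us_ca = round(arpu_us_ca_quarterly / 3, 2)
print(monthly_worldwide, monthly_us_ca)  # ~3.67 and 19.0
```

So for a US/CA Facebook user, "outpaying the advertisers" would start around $19/month for Facebook alone, before counting any other ad-funded service.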
Now that Meta does paid ad-free for my private Instagram account, I feel like my online world is pretty close to ad-free.
It’s closer to $100 than $10 though, for all the services I pay for to avoid ads, and you still need ad blockers for the rest of the internet.
Kagi.com
Setting up Kagi is as big an improvement to search as an ad blocker is to your general internet experience. After about a week you forget how bad the bare experience is, and after a month you'll never go back.
I'm definitely behind some of my peers on adopting LLMs for general knowledge questions and web search, and I wonder if this is why. Kagi does have AI tools, but their search is ad free and good enough that I can usually find what I'm looking for with little fuss.
Add actual accessibility on top, and I'd happily pay 20 EUR/month.
Yes please!
It's a lot more than that. The U.S. online ad market is something like $400-500 billion, so that's about $100/mo per person. The problem is that some people are worth a lot more to advertise to than others. Someone who uses the internet a lot and has a lot of disposable income might be more like $500+ a month.
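The per-capita arithmetic behind that "$100/mo" figure, sketched out (both inputs are the comment's own ballpark estimates, not verified data):

```python
# Divide estimated US online ad spend evenly across the population.
ad_market_yearly = 450e9  # USD, midpoint of the $400-500 billion estimate above
us_population = 340e6     # approximate US population (assumption)

per_person_monthly = ad_market_yearly / us_population / 12
print(round(per_person_monthly))  # on the order of $110/month per person
```

The even split is of course unrealistic, which is the comment's point: high-income, heavy internet users carry a disproportionate share of that spend.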
There is no way that spending $100 per month on advertising to me is good value.
The more visible and annoying ads are, the more effort (and money) I will spend buying from competitors and actively dissuading other people from buying the product.
$10/mo, paid to whom?
Ads are not the only problem with the modern web. Accessibility (or the lack thereof) is more of an issue for me. 20 years ago, we were still hopeful the Internet would bring opportunities to blind people. These days, I am convinced the war has been lost, and modern web devs and their willingness to adopt every new nonsense are the people who hurt me most in current society.
So you believe this browser is attempting to represent your interests, and work on your behalf?
One reason I now often go to ChatGPT instead of many Google queries is that the experience is ads free, quick and responsive.
That said, don't be lured; you know they're already working on ways to put ads and trackers and whatnot inside ChatGPT and Atlas. That $20 won't be enough to recoup all that investment and cost and maximize profits.
So I think we should be careful what we wish for here.
> quick and responsive
This is kind of surprising, because those are precisely the ways I would say that a Web search is better than ChatGPT. Google is generally sub second to get to results, and quite frequently either page 1 or 2 will have some relevant results.
With ChatGPT, I get to watch as it processes for an unpredictable amount of time, then I get to watch it "type".
> ads free
Free of ads where ChatGPT was paid to deliver them. Because it was trained on the public Internet, it is full of advertising content.
Update: Example query I just did for "apartment Seville". Google completed in under a second. All the results above the fold are organic, with sponsored way down. Notably the results include purchase, long-term and vacation rental sites. The first 3 are listing sites. There's an interactive map in case I know where I want to go; apartments on the map include links to their websites. To see more links, I click "Next."
ChatGPT (MacOS native app) took ~9 seconds and recommended a single agency, to which it does not link. Below that, it has bullet points that link to some relevant sites, but the links do not include vacation rentals. There are 4 links to apartment sites, plus a link to a Guardian article about Seville cracking down on illegal vacation rentals. To see more links, I type a request to see more.
For all the talk about Google burying the organic links under a flood of ads, ChatGPT shows me far fewer links. As a person who happily pays for and uses ChatGPT daily, I think it's smart to be honest about its strengths and shortcomings.
Google is heavily SEOed. And while "apartment Seville" is a subset where Google is probably very good, for many things it gives me very bad results, e.g. searching for an affordable haircut always gives me a Yelp link (there is a Yelp link for every adjective + physical storefront SMB).
That being said, I've never really come across good general ways to get Google to give me good results.
I know some tricks, e.g. filetype:pdf, use Scholar for academic search, use "site:...", something like "site:reddit.com/r/Washington quiet cafe" for most things people would want to do in a city, because people generally ask about those things on community forums.
But I have a poor time with dev-related queries, because half the time it's SEO'd content, and when I don't know enough about a subject, LLMs generally give me a lot of lines of inquiry (be careful of X, and also consider Y) that I would not have thought to ask about, because I don't know what I don't know.
I use Google and ChatGPT for totally different reasons - ChatGPT is generally far better for unknown topics, while Google is better if I know exactly what I'm after.
If I’m trying to learn about a topic (for example, how a cone brake works in a 4WD winch), then ChatGPT gives me a great overview with “ Can you explain what a cone brake is and how it works, in the context of 4WD winches?” while google, with the search “4wd winch cone brake function explained”, turns up a handful of videos covering winches (not specifically cone brakes) and some pages that mention them without detailing their function. ChatGPT wins here.
If I were trying to book a flight I’d never dream of even trying to use ChatGPT. That sort of use case is a non-starter for me.
> One reason I now often go to ChatGPT instead of many Google queries is that the experience is ads free, quick and responsive.
Google used to be like that, and if ChatGPT is better right now, it won't remain that way for long. They're both subject to the same incentives and pressures, and OpenAI is, if anything, less ethical and idealistic than Google was.
> on ways to put ads
My mistake, you're completely correct, perhaps even more-correct than the wonderful flavor of Mococoa drink, with all-natural cocoa beans from the upper slopes of Mount Nicaragua. No artificial sweeteners!
(https://www.youtube.com/watch?v=MzKSQrhX7BM&t=0m13s)
I’d probably agree with you if I didn’t have Kagi.
As it is, I find there are some things LLMs are genuinely better for but many where a search is still far more useful.
As bad as AI experiences often are, I speculate that we are actually in a golden age before they are fully enshittified.
This sentiment has been rolling around in my head for a while. I assume one day I'll be using some hosted model to solve a problem, and suddenly I won't be able to get anything out beyond "it would actually work a lot better if you redeployed your application on Azure infra with a bunch of licensed Windows server instances. Here's 15 paragraphs about why.."
I found myself avoiding google lately because of their AI responses at the top. But you can block those and now google is much nicer.
yeah I found a setting in Brave to block them the other day and life is much better since
Ublock Origin allows me to control what I see while that information is still in its original context so that I can take that into account when doing research, making decisions, etc.
But isn't this, instead, letting a third party strip that context away and give it its own context so that you can't make those choices and decisions properly? Information without context is, to me, nearly worthless.
And even if you believe they are neutral parties and have your best interests at heart (which, quite frankly, I think is naive), once companies like that know everything about you, you don't think they'll abuse that knowledge?
Gopher. Gemini (the protocol not the AI). IRC.
Oh sweet summer lamb
not sure about that. I'll be happy with ublock thanks
uBlock just takes stuff off of a page that shouldn't be there in the first place. All the content that should be there is still there, unchanged.
An AI browser is choosing to send all the stuff you browse, to a third party without a demonstrated interest in keeping it all private, and getting back stuff that might or might not be true to the original content. Or maybe not even true at all.
Oh and - Atlas will represent your interests, right up until OpenAI decides it's not in their financial interest to do so. What do you do when the entire web browser UI gets enshittified?
Same author's impact on web preservation https://news.ycombinator.com/item?id=44064230
He's a great talker. Delivery record more mixed.
What’s your beef with him?
I think Atlas sounds and acts pretty terrible, but the Dia browser has been a pretty nice experience for me. You still have access to the web, favorite links in the sidebar, etc.; and the option (when you want or need) to "chat" with the current website you are on using an LLM.
This and the new device that OpenAI is working on is more of a general strategy to make a bigger moat by having more of an ecosystem so that people will keep their subscriptions and also get pro.
Which hopefully will keep it free of ads (even for free users) that destroyed the current ad-riddled web.
Atlas strategy:
- Atlas slurps the web to get more training data, bypassing Reddit blocks, Cloudflare blocks, paywalls, etc. It probably enriches the data with additional user signals that are useful.
- Atlas is an attempt to build a sticky product that users won't switch away from. An LLM or image model doesn't really have sticky attachment, but if it starts storing all of your history and data, the switching costs could become immense. (Assuming it actually provides value and isn't a gimmick.)
- Build pillars of an interconnected platform. Key "panes of glass" for digital lives, commerce, sales intent, etc. in the platformization strategy. The hardware play, the social network play -- OpenAI is trying to mint itself as a new "Mag 7", and Atlas could be a major piece in the puzzle.
- Eat into precious Google revenue. Every Atlas user is a decrease in Google search/ads revenue.
Ycombinator Application: Replace entire World Wide Web with my own WWW
Response: Already achieved by OpenAI!
https://stockanalysis.com/list/magnificent-seven/
I guess Mag 7 is the new FAANG, not the mag-7 shotgun
One could make the case that the web of 2025 is anti-human. AI clients are one of the very few exits we have from enshittification. ChatGPT can read all the ads so you don't have to. The whole point of those annoying CAPTCHAs and those stupid anime girls on all the kernel websites is slamming the door on any exit from Google's world that we live in.
Speaking of anti-web:
https://i.postimg.cc/br7F8NLd/chat-GPT.png
I wonder when webmasters will take their gloves off and just start feeding AI crawlers porn and gore.
This would be amazing. Such a shame it would only work if a large percentage does it.
Wow! Amazing post! You really nailed the complexities of AI browsers in ways that most people don't think about. I think there's also a doom paradox: if more people search with AI, this disincentivizes people from posting on their own blogs and websites, where ad revenue could usually help support them. If AI is crawling and then spitting back information from your blog (and you get no revenue), is there a point to posting at all?
One possibility I like to imagine is a future where knowledge sources are used kind of like tools, i.e. the model never uses any preexisting knowledge from its training data (beyond what’s necessary to be fluent in English, coherent, logical, etc.), a “blank” intelligent being, tabula rasa. And for answering questions it uses various sources dynamically, like an agent would use tools.
I think this will let models be much smaller (and cheaper), but it would also enable a mechanism for monetizing knowledge. This would make knowledge sharing profitable.
For example, a user asks a question, the model asks knowledge sources if they have relevant information and how much it costs (maybe some other metadata like perceived relevance or quality or whatever), and then it decides (dynamically) which source(s) to use in order to compile an answer (decision could be based on past user feedback, similarly to PageRank).
One issue is that this incentivizes content users want to hear versus content they don’t want to hear but is true. But this is a problem we already have, long before AI or even the internet.
The point to posting anything is to share with your fellow kind new knowledge that lifts them, entertains them, or teaches them.
If you post for ad revenue, I truly feel sorry for you. How sad.
> If you post for ad revenue, I truly feel sorry for you.
I think this is a bit dismissive towards people who create content because they enjoy doing it but also could not do it to this level without the revenue, like many indie Youtubers.
If I could press a button and remove money from the internet, I'd do it in a heartbeat.
I absolutely do enjoy content that is financed through ads. I really, REALLY do like some of the stuff, honest. But it is also the case that the internet has been turning into a marketing hellscape for the last couple decades, we've gotten to a point in which engagement is optimized to such a degree that society itself is hurting from it. Politically and psychologically. The damage is hard to quantify, yet I can't fathom the opportunity cost not being well into the billions.
We'd be better off without that.
I tested Google Search, Google Gemini and Claude with "Taylor Swift showgirl". Gemini and Claude gave me a description plus some links. Both were organized better than the Google search page. If I didn't like the description that Claude or Gemini gave me, I could click on the Wikipedia link. Claude gave me a link to Spotify to listen, while Gemini gave me a link to YouTube to watch and listen.
The complaint about the OpenAI browser seems to be it didn't show any links. I agree, that is a big problem. If you are just getting error prone AI output then it's pretty worthless.
Google search is an ad platform. Wait until the honeymoon days of the "AI" LLMs are over for the enshitification to ensue.
Ain't that the truth... I hope that "AI" LLMs will have their Linux, their local private versions that are on par.
It sounds like a great opportunity to poison the well. Create a bot farm that points the browser to every dank and musty corner of the web ad nauseam: old Yahoo, Myspace and Geocities pages, 4chan, 8chan, etc.
Let's flood the system with junk and watch it implode.
The thing about command lines is off base, but overall the article is right that the ickiness of this thing is exceeded only by its evil.
Yes, if I type Taylor Swift Showgirls I get some helpful information and a lot of links, but not her website. It isn't very different than what Google Gemini displays at the top of Google search results.
Is it a webpage? Well... it displays in a browser...
But if I type Taylor Swift, I get links to her website, instagram, facebook, etc.
Is it a webpage? Well... it displays in a browser...
This isn't Web 2.0, Anil. Things change.
I don't think the CLI one is a good analogy.
> We left command-line interfaces behind 40 years ago for a reason
Man I still love command-line so much
> And it would go on like this for hours while you tried in vain to guess what the hell it wanted you to type, or you discovered the outdoors, whichever came first. [...] guess what secret spell they had to type into their computer to get actual work done
Well... docs and the "-h" do a pretty good job.
The games he is talking about deliberately didn't have docs or help because that WAS the game, to guess.
I think the same applies here: while there are docs ("please show me the correct way to do X"), the surface area is so large that the analogy still holds up, in that you might as well just be guessing for the right command.
Am I missing something here? I used it a few days ago and it does actually act like a web browser and give me the link. This seems to be a UI expectation issue rather than a "real philosophy".
It's bad news if one company owns the search bar.
Just like it's bad news if one company owns the roads or the telecom infrastructure.
Governments need to prepare some strong regulation here.
(Of course this is still true if it's not strictly a monopoly)
Reposting on Bluesky.
I like the "anti-web" phrase. I think it will be the next phase after all those web 2.0 and web x.0 things.
https://bsky.app/profile/kkarpieszuk.bsky.social/post/3m4cxf...
> There were a tiny handful of incredible nerds who thought this was fun, mostly because 3D graphics and the physical touch of another human being hadn't been invented yet.
:skull:
Tangent, but related: if only Google search would make a serious comeback instead of not finding anything anymore, we would have a tool to compare AI to. Sure, Gemini integration might still be a thing, but with actual working search results.
You should try Kagi for this experience.
One deal breaker for me - TTS (select and speak) is broken. It does not read the selected text.
1.0 - algorithmic ranking of real content, with direct links
2.0 - algorithmic feeds of real content with no outbound links - stay in the wall
3.0 - slop infects rankings and feeds, real content gets sublimated
4.0 - algorithmic feeds become only slop
5.0 - no more feeds or rankings, but on demand generative streams of slop within different walled slop gardens
6.0 - 4D slop that feeds itself, continuously turning in on itself and regenerating
>Sorry, I can't do that.
'Modern' Z-Machine games (v5, compared to the original v3 ones from Infocom) will allow you to do that and far more. By 'modern' I mean from the early 90s and up. Even more with v8 games.
>This was also why people hated operating systems like MS-DOS, and why even all the Linux users reading this right now are doing so in a graphical user interface.
The original v3 Z-Machine parser (the raw one) was pretty limited compared to the v5 one. Even more so against games made with Inform 6 and the Inform 6 English library targeting the v5 version.
Go try yourself. Original Zork from MIT (Dungeon) converted into a v5 ZMachine game:
https://iplayif.com/?story=https%3A%2F%2Fifarchive.org%2Fif-...
Spot the differences. For instance, you could both say 'take the rock' and, later, say 'drop it'.
Now, v5 games are from the late 80s/early 90s. There's Curses, Jigsaw, Spider and Web... v8 games are like Anchorhead, pretty advanced for its time: https://ifdb.org/viewgame?id=op0uw1gn1tjqmjt7
You can either download the Z8 file and play it with Frotz (Linux/BSD), WinFrotz, or Lectrote under Android and anything else, or play online with the web interpreter.
Now, the "excuses" about the terseness of the original Z3 parser are nearly void, because with PunyInform a lot of Z3-targeting games (for 8086 DOS PCs, C64s, Spectrums, MSX...) have a slightly improved parser compared to the original Zork game.
They gotta do this.
If they don't put AI in every tool, they won't get new training data.
>I had typed "Taylor Swift" in a browser, and the response had literally zero links to Taylor Swift's actual website. If you stayed within what Atlas generated, you would have no way of knowing that Taylor Swift has a website at all.
Sounds like the browser did you a favor. Wonder if she'll be suing.
It's really crazy that there is an entire AI-generated internet. I have zero clue what the benefit of using this would be to me. Even if we argue that it has fewer ads and such, that would only last until they garner enough users to start pushing charges. Probably through even more obtrusive ads.
I also have to laugh. Wasn't OpenAI just crying about people copying them not so long ago?
The purpose is total control. You never leave their platform, there are no links out. You get all of your information and entertainment from their platform.
It’s also a classic tactic of emotional abuse:
https://www.womenslaw.org/about-abuse/forms-abuse/emotional-...
Atlas feels more like a task tool than a browser. It’s fast, but we might lose the open web experience for convenience.
The SV playbook is to create a product, make it indispensable and monopolise it. Microsoft did it with office software. Payment companies want to be monopolies. Social media are of course the ultimate monopolies - network effects mean there is only one winner.
So I guess the only logical next step for Big AI is to destroy the web, once they have squeezed every last bit out of it. Or at least make it dependent on them. Who needs news sites when OpenAI can do it? Why blog - just prompt your BlogLLM with an idea. Why comment on blogs - your agent will do it for you. All while avoiding child porn with 97% accuracy - something human-curated content surely cannot be trusted to do.
So I am 0% surprised.
I’m not sure. I think we’ll live through a few years of AI slop before human created content becomes very popular again.
I imagine a future where websites (like news outlets or blogs) will have something like a “100% human created” label on it. It will be a mark of pride for them to show off and they’ll attract users because of it
I normally don't waste a lot of energy on politics.
But this feels truly dystopian. We here on HN are all in our bubble; we know that AI responses are very prone to error and just great at mimicking. We can differentiate when to use them and when not to (more or less), but when I talk to non-tech people in a normal city not close to a tech hub, most of them treat ChatGPT as the all-knowing factual instance.
They have no idea of the conscious and unconscious bias in the responses, based on how we ask the questions.
Unfortunately I think these are the majority of the people.
If you combine all that with a shady Silicon Valley CEO under historic pressure to make OpenAI profitable after $64 billion in funding, regularly flirting with the US president, it seems entirely logical to me that exactly what the author described is the goal. No matter the cost.
As we all feel AI progress is stagnating and mainly the production cost of AI responses is going down, this almost seems like the only way out for OpenAI to win.
People didn’t hate DOS; it just was what it was.
how is this different from just training on your history
It really lost me at
>There were a tiny handful of incredible nerds who thought this was fun, mostly because 3D graphics and the physical touch of another human being hadn't been invented yet.
I can barely stomach it when John Oliver does it, but reading this sort of snark without hearing a British voice is too much for me.
Also, re: "a tiny handful of incredible nerds" - page 20 of this [0] document lists the sales figures for Infocom titles from 1981 to 1986: it sums up to over 2 million shipped units.
Granted, that number does not equal the number "nerds" who played the games because the same player will probably have bought multiple titles if they enjoyed interactive fiction.
However, also keep in mind that some of the games in that table were only available after 1981, i.e., at a later point during the 1981-1986 time frame. Also, the 80s were a prime decade for pirating games, so more people will have played Infocom titles than the sales figures suggest - the document itself mentions this because they sold hint books for some titles separately.
[0] https://ia601302.us.archive.org/1/items/InfocomCabinetMiscSa...
> The fake web page had no information newer than two or three weeks old.
What irks me the most about LLMs is when they lie about having followed your instructions to browse a site. And they keep lying, over and over again. For whatever reason, the ONE model that consistently does this is Gemini.
I think the idea of "we're returning to the command line" is astute, tbh. I've felt that subconsciously, and I think the author put it into words for me.
The article does taste a bit "conspiracy theory" for me though
I think we're returning to CLIs mostly because typing remains one of the fastest ways we can communicate with our computers. The traditional limitation was that CLIs required users to know exactly what they wanted the computer to do. This meant learning all commands, flags etc.
GUIs emerged to make things easier for users to tell their computers what to do. You could just look at the screen and know that File > Save would save the file instead of remembering :w or :wq. They minimized friction and were polished to no end by companies like MSFT and AAPL.
Now that technology has got to a point where our computers now can bridge the gap between what we said and what we meant reasonably well, we can go back to CLIs. We keep the speed and expressiveness of typing but without the old rigidity. I honestly can't wait for the future where we evolve interfaces to things we previously only dreamt of before.
It’s less rigid than a command line but much less predictable than either a CLI or a GUI, with the slightest variation in phrasing sometimes producing very different results even on the same model.
Particularly when you throw in agentic capabilities where it can feel like a roll of the dice if the LLM decides to use a special purpose tool or just wings it and spits out its probabilistic best guess.
True, the unpredictability sucks right now. We're in a transition stage where the models can understand intent but cannot reliably constrain the output within some executable space.
The bridge would come from layering natural-language interfaces on top of deterministic backends that actually do the tool calling. We already have models fine-tuned to generate JSON schemas. MCP is a good example of this kind of stuff: it discovers tools and how to use them.
Of course, the real bottleneck would be running a model capable of this locally. I can't run any of the models actually capable of this on a typical machine. Till then, we're effectively digital serfs.
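To make the "deterministic backend" idea concrete, here is a minimal sketch (all names hypothetical; not a real MCP implementation): the model is constrained to emit JSON naming a tool and its arguments, and a deterministic dispatcher validates that JSON against the tool's declared parameter schema before anything executes.

```python
import json

# Hypothetical tool registry: each tool advertises a JSON-schema-like
# description of its parameters, as an MCP server would.
TOOLS = {
    "get_weather": {
        "description": "Return weather for a city",
        "parameters": {"city": {"type": "string"}},
        "fn": lambda city: f"Sunny in {city}",
    },
}

def dispatch(tool_call_json: str) -> str:
    """Deterministic backend: validate the model's structured output
    against the registered schema before executing anything."""
    call = json.loads(tool_call_json)
    tool = TOOLS[call["name"]]
    args = call["arguments"]
    for param, spec in tool["parameters"].items():
        if param not in args:
            raise ValueError(f"missing required parameter: {param}")
        if spec["type"] == "string" and not isinstance(args[param], str):
            raise TypeError(f"{param} must be a string")
    return tool["fn"](**args)

# The model only produces the JSON; the backend, not the model,
# performs the actual call.
print(dispatch('{"name": "get_weather", "arguments": {"city": "Lisbon"}}'))
```

The point of the split is that the fuzzy part (understanding intent) stays in the model, while everything with side effects runs through code whose behavior is fixed and checkable.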
That being said, asking ChatGPT to do research in 30 seconds that might otherwise require me to set aside an hour or two is causing me to make decisions about where to tinker and which ideas to chase down much faster.
can never go back
It’s not so much a conspiracy theory as it is a perfect alignment of market forces. Which is to say, you don’t need a cackling evil mastermind to get conspiracy-like outcomes, just the proper set of deleterious incentives.
Atlas confuses me. Firefox already puts Claude or ChatGPT in my sidebar and has integrations so I can have it analyze or summarize content or help me with something on the page. Atlas looks like yet another Chromium fork that should have been a browser extension, not a revolutionary product that will secure OpenAI's market dominance.
Yep. I was playing around with both Atlas and Comet and, security and privacy issues aside, I can’t figure out what they’re for or what the point is.
Except one: it gives them the default search engine and doesn’t let you change it.
I asked Atlas about this and it told me that’s true, the AI features are just a hook, this is about lock in.
Make of that what you will.
This article is deep, important, and easily misinterpreted. The TL;DR is that a plausible business model for AI companies is centered around surveillance advertising and content gating like Google or Meta, but in a much more insidious and invasive form.
Worth reading to the end.
I found the article to be no more than ranting about something they are just projecting. The browser may not be for everyone, but I think there's a lot of value in an AI tool that helps you find what you're looking for without shoving as many ads as possible down your throat, while summarizing content to your needs. Supposing OpenAI is not the monster trying to kill the web and lock you up, can't you see how that may be a useful tool?
this website is mentioned best on Atlas Browser 25A362 in 1920x1080 resolution
What now remains is, after hearing glowing feedback, Satya making this the default browser in Windows as part of Microsoft and OpenAI's next chapter.
Eh I use ChatGPT for so many things I realize how many projects I used to just let go by.
Me too, and as the number and maturity of my projects have grown, improving and maintaining them all together has become harder by a factor I haven’t encountered before
Another edition of “if it’s free you are their product”…
It’s not entirely free though, agent mode and a few other features are paid. I’m paying OpenAI $200/mo for my subscription
1 - nobody cares about being "pro-web" or "anti-web"
2 - we didn't leave command-line interfaces behind 40 years ago
I want to hear more garbage like this. Do you have a website?
1 - I care
2 - That's an entirely different situation and you know it.
At this point, my adoption of AI tools is motivated by fear of missing out or being left behind. I’m a self-taught programmer running my own SaaS.
I have memory and training enabled. What I can objectively say about Atlas is that I’ve been using it and I’m hooked. It’s made me roughly twice as productive — I solved a particular problem in half the time because Atlas made it easy to discover relevant information and make it actionable. That said, affording so much control to a single company does make me uneasy.
Not sure why this got downvoted, but to clarify what I meant:
With my repo connected via the GitHub app, I asked Atlas about a problem I was facing. After a few back-and-forth messages, it pointed me to a fork I might eventually have found myself — but only after a lot more time and trial-and-error. Maybe it was luck, but being able to attach files, link context from connected apps, and search across notes and docs in one place has cut a lot of friction for me.
Every professional involved in SaaS, the web, and online content creation thinks the web is a beautiful thing.
In reality, the dominance of social media means the web failed a long time ago; it now only fills a void not yet taken by mobile apps, and now by LLM agents.
Why do I need to read everything about Taylor Swift on her website if I don't know a single one of her songs? (I actually do.)
I don't want a screaming website telling me about her best new album ever and her tours if the LLM knows I don't like pop music. And the other way around: if you do like her, you'd want a different set of information. A website can't do that for you.
OpenAI should be 100% required to rev share with content creators (just like radio stations pay via compulsory licenses for the music they play), but this is a weird complaint:
> “sometimes this tool completely fabricates content, gives me a box that looks like a search box, and shows me the fabricated content in a display that looks like a web page when I type in the fake search box”
If a human wrote that same article about Taylor Swift, would you say it completely fabricates content? Most “articles” on the web are just rewrites of someone else’s articles anyway and nobody goes after them as bad actors (they should).
Dash's entire identity is bound up with narrating technology, translating its cultural shifts into moral parables. His record at actually building things is, at best, spotty. Now the LLM takes that role: absorbing, summarising, editorialising. And like Winer, he often reads like a guy who has never really made peace with the modern era, and who isn't content to let the final draft of history be declared unless it has his fingerprints on it.
The machine is suddenly the narrator, and it doesn’t cite him. When he calls Atlas “anti-web,” he’s really saying it is “anti-author”.
In a way though, how much do we need people to narrate these shifts for us? Isn't the point of these technologies that we are able to do things for ourselves rather than rely on mediators to interpret it for us? If he can be outcompeted by LLMs, does that not just show how shallow his shtick is?
I think you’re on to something. In a way this is just the postmodern ethos of the web collapsing in on itself.