Steve already gave away the secret [1] (must watch) a long time ago:
"You have to work backwards from the customer experience."
AI was never going to be on Apple's roadmap in a significant way because it's in their DNA to differentiate technology from products.
[1] https://youtu.be/oeqPrUmVz-o?si=ndUU1H5D3pNifWss
"Working backwards" is also, famously, Amazon's philosophy. It's one of my most cherished takeaways from working there.
Agreed.
The ideal implementation of AI for Apple is probably to finally make Siri work. This isn't necessarily fancy: just let me set calendar events without knowing the magic words, or tell it to open Overcast and play the new Gastropod episode. Better yet, for power users, let me set up reusable shortcuts using natural language.
The most important part of this is that it doesn't necessarily feel like AI. The user doesn't like AI for its own sake, or the weirdos who ramble about consigning them to a permanent underclass. The user likes messaging their friends and playing music.
Too much of this hype cycle has no user in mind.
Absolutely agreed. It feels like tech companies have forgotten that they're supposed to add value to users. They've been shoving random AI use cases down users' throats with no regard for whether they fit the user's flow, when there's so much value to be had from AI in normal products. Claude Code is the best at this right now, probably because the engineers building it are themselves users.
This isn't unprecedented; it's what happened in the dot-com bubble as well. But then that tech started getting used properly too, so I think it's only a matter of time before Claude Code levels of value are available to normal users.
I have a grander vision for an ideal Apple “AI”: anti-AI.
I'm picturing a combination of on-device facilities and online services from the Apple cloud that Apple product owners could use to flag and filter LLM slop. As a value-added proposition, iPhone users who read HN or used TikTok would see clear UI-level indications of when they're interacting with slop, with options to kill it.
In my estimation it would provide platform benefits without losing capabilities, leverage Apple's hardware-based (not advertising-based) positioning, fix critical problems of spam and scams, and let them market a higher calibre of online experience. Also, they could un-eff Siri: "play album X starting at track Y", come on, it's 2026.
> ideal implementation of AI for Apple is probably to finally make Siri work
Wouldn't the simplest solution be to auction off Siri's back end the way Apple does Safari's search bar in iOS?
The thing that kills me is that a lot of this was working back in the Newton days.
This is a similar argument to "Dropbox is a feature, not a product," and it rings true in this instance too. I remember the litany of applications that only supported sync through Dropbox. It had no ecosystem; its saving grace was that no one else was yet operating a similar service at that scale.
All the major AI companies are trying to manufacture their own ecosystems to become less disposable. They'll get away with it for a while, but only insofar as hardware limitations prevent advanced use. Once we get that hardware [1], there will be only two types of AI companies: hardware manufacturers and labs. Just as sync became trivial and ancillary, so will AI inference.
[1] https://taalas.com/the-path-to-ubiquitous-ai/
And the differentiating factor in hardware will be the seamlessness of the interface, in software: the combination of voice, eye tracking, swiping, capture of intent, being able to mumble to myself at a volume only my device can hear. The hardware needs to be little more than something that gets out of the way and acts as an input device with a battery.
The answer as always in these situations is to zoom out.
We are in the midst of a paradigm shift, and the perspective in the daring fireball post aligns exactly with this author’s perspective:
https://rebecca-powell.com/posts/return-on-intelligence-01-e...
I totally agree - the phone as a form factor is not going away. People are always going to want to have a mobile communicator/computer, and want one with a screen and all-day battery life. The phone is not going to be replaced by smart glasses or some other wearable or screen-less pocket device.
It may well be that the user interface of your "phone", and how you use it, changes over time as we progress toward AGI. But as long as Apple keeps to the Jobs aesthetic of making well-designed products that get out of the way and just "do the thing", they should be fine. Of course Apple will eventually fall, as all companies do, but I don't think the reason will be that the "phone" market was rendered obsolete by AI.
Perhaps if phones become more of a "pocket assistant" than a device for running discrete apps, they will become harder to differentiate based on software, and more of a generic item than a status/luxury one ... who knows? Anyone else have theories about how Apple may eventually fall?
There is one potential AI risk to Apple: being at a disadvantage from not having their own frontier models, or the datacenters to run them on. But I think there will always be someone willing to sell them API access, and they will adapt as needed. Good-enough AI is only going to get cheaper to train and serve, and Apple not trying to compete in this area may well turn out to have been a great decision, just as Microsoft seems to be doing fine letting OpenAI take all the risk.
> the phone as a form factor is not going away
It's not going away in the next few years. Which means Apple doesn't have to rush to release an AI product for the sake of it à la Giannandrea.
GPT-3.5 is nearly four years old. What's a non-coding use case enabled by LLMs that materially improves the average person's life? For the sake of conversation, let's say the average person is some random person in middle America.
To me there are cool things, but nothing so great that I'd cry if LLMs were deleted. By contrast, mRNA vaccines, gene therapy, and CRISPR seem more impactful in reality, just to mention things from 2020.
Apple's problem might be that they were right too early, which is sometimes worse than being wrong. The original vision of Siri was substantively correct about how AI would supercharge our phones, but huge parts of that vision were forgotten when Siri was acquired by Apple and the original founders left. The original technical choices around Siri constrained it from evolving into something useful.
A funny story from the other day: a friend knew he had to be at a dinner across town, but he'd forgotten why. While we were waiting for his rideshare, he was flipping through every kind of app trying to reconstruct the original context for his appointment.
In theory, this is where AI should shine. He should have been able to say "Hey Siri, pull up all of the info that references tonight's dinner appointment" and AI should be the unified interface into a bunch of app-specific data pools.
But of course, he never in a million years would have thought to use Siri for that, because of how bad Siri is.
Access to a rational, imperfect yet functional expert in lots of everyday subjects: personal finance, making decisions and plans, relationships, taboo questions, the first steps of a medical/legal opinion, general problem solving and breakdown...
Even considering that it's sometimes wrong or hallucinating, it's doing an important job by beginning to eliminate gatekeeping, whether centered on cost or access.
I'm unconvinced. How do you weigh this against the misinformation and scams that will be coming at unprecedented scale? In any case, isn't the value there really human expertise and search? At least with GPT-5, using it without search will almost certainly give you wrong information on a variety of topics, so the value seems to lie in search, which is old tech.
> What’s a non coding use case that’s enabled with LLMs that materially improves the average person’s life?
Coding adjacent, but my small town's small businesses have all dramatically improved their websites with LLMs. Folks who didn't have them before can now build them. Folks who had to rely on a web designer no longer have to.
Was it really that difficult to build a generic website with a template before? Using an LLM instead of a template seems like ridiculous overkill, imho, but thanks for the anecdote.
> Was it really that difficult to build a generic website with a template before?
Yes. Code looks intimidating if you aren't used to it (and don't have an IDE). And there are lots of steps between having a file of code and having a hosted website.
I don't see how an LLM solves this. It's not like an LLM hosts the website. Sites like Squarespace and WordPress let you modify your site without ever seeing code; they have graphical editors that you can stay in if you wish. I agree LLMs help, though, if you use such a product.
Anything is a product if you can sell it.
AI is a political ideology masquerading as technology https://tante.cc/2026/04/21/ai-as-a-fascist-artifact/
I was honestly a bit intrigued to read that article, but it's built on a stack of weak arguments. For example:
>>technologies have built-in politics that stem from the political views and goals of the people building the technology.
First, it's not just technology that has built-in politics; it's everything. Think of the t-shirts, cups, and hats sold at political rallies. Second, how does this even hold up in the context of AI? Who do you credit with building "AI"? Is it just the bunch of founders listed in the article? What about Geoffrey Hinton? What about Turing, or Shannon, or Leibniz?
Yea, in itself AI is just AI.
The practical implementation is what leads to the autocratic and/or fascist-like tendencies. LLMs in their current state take massive amounts of money, compute, and energy to make, and those resources in large amounts are typically managed by corporations or governments. Corporations are not democracies. They also have liability considerations to work around, and they have to do all this without pissing off the government they operate under too much. So yes, this is almost always going to lead to a situation that is not individual-friendly. The implementation ends up opinionated because it must. There are only a small number of implementations, and the companies have much less freedom in what their models output than the average 'open all the freedom gates' idiot thinks.
Really the only solution here, if it's possible, is hoping that we can train LLMs/AI with far fewer resources in the future. If so, that could lead to a proliferation of different models optimized for different purposes. But we must remember that all models are biased, and that includes human brains. At the end of the day, both AI and brains are a map, not the territory. We are defined by what we filter out.
Another "AI is inherently evil" take, coming from the "AI is inherently evil" blog.
I agree that specific implementations of a technology (Claude, Gemini, Qwen) are never neutral, but the tech itself (LLMs as a concept) is neutral: you can implement it any way you want. You could make an LLM trained on diverse data, tuned for anti-fascist opinions, using solar power and recycled hardware to be carbon-neutral. The reason nobody is really doing it is just good old wealth inequality. As long as only big corporations can afford to use and develop LLMs, or any other tech, it will be biased to benefit them; that's why it's so important to democratize it.
And as for the open-source part, the fact that it started as a libertarian movement doesn't mean it can't also be socialist. It goes against the capitalist norms of exclusive property rights (including IP) and profit at all costs. Sharing the product of your labor with everyone for free is one of the biggest things you can do to help; it's like the online equivalent of putting food in the community fridge.
Open LLMs let you fine-tune them to add missing, underrepresented perspectives. You can run them locally with minimal climate impact, and analyze them in depth to reveal biases the devs never noticed or don't want you to see. None of that is possible with closed source. The right thing to do is not to avoid using AI at all costs, but to do everything you can to make it good. Your skills and hardware access are a privilege. Use it.