Dell admits consumers don't care about AI PCs

(pcgamer.com)

252 points | by mossTechnician a day ago

169 comments

  • pseudosavant an hour ago

    I don't know how many others here have a CoPilot+ PC but the NPU on it is basically useless. There isn't any meaningful feature I get by having that NPU. They are far too limited to ever do any meaningful local LLM inference, image processing, or generation. It handles stuff like video chat background blurring, but users' PCs have been doing that for years now without an NPU.

    • kenjackson an hour ago

      I'd love to see a thorough breakdown of what these local NPUs can really do. I've had friends ask me about this (as the resident computer expert) and I really have no idea. Everything I see advertised for (blurring, speech to text, etc...) are all things that I never felt like my non-NPU machine struggled with. Is there a single remotely killer application for local client NPUs?

      • martinald 38 minutes ago

        The problem is essentially memory bandwidth, AFAIK. Simplifying a lot in my reply, but most NPUs (all?) do not have faster memory bandwidth than the GPU. They were originally designed when ML models were megabytes, not gigabytes. They have a small amount of very fast SRAM (4MB, I want to say?). LLM models _do not_ fit into 4MB of SRAM :).

        And LLM inference is heavily memory bandwidth bound (reading input tokens isn't though - so it _could_ be useful for this in theory, but usually on device prompts are very short).

        So if you are memory bandwidth bound anyway and the NPU doesn't provide any speedup on that front, it's going to be no faster. And NPUs have loads of other gotchas, with no real standard "SDK"/format for them either.

        Note the idea isn't bad per se, it has real efficiencies when you do start getting compute bound (eg doing multiple parallel batches of inference at once), this is basically what TPUs do (but with far higher memory bandwidth).
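        A rough back-of-the-envelope sketch of that bandwidth argument (illustrative numbers, not taken from the comment: assume ~4 GB of quantized weights and ~100 GB/s of shared system memory):

        ```typescript
        // LLM generation is memory-bandwidth bound because every generated token
        // streams (roughly) the full set of weights from RAM.
        // Illustrative numbers only: ~4 GB of quantized weights, ~100 GB/s LPDDR5.
        const modelBytes = 4e9;              // bytes of weights read per token
        const bandwidthBytesPerSec = 100e9;  // shared system memory bandwidth

        // Upper bound on generation speed, regardless of NPU/GPU compute throughput:
        const tokensPerSec = bandwidthBytesPerSec / modelBytes;
        console.log(`~${tokensPerSec.toFixed(0)} tokens/sec ceiling`); // ~25
        ```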

        • zozbot234 27 minutes ago

          NPUs are still useful for LLM pre-processing and other compute-bound tasks. They will waste memory bandwidth during LLM generation phase (even in the best-case scenario where they aren't physically bottlenecked on bandwidth to begin with, compared to the iGPU) because they generally have to read padded/dequantized data from main memory as they compute directly on that, as opposed to being able to unpack it in local registers like iGPUs can.

          > usually on device prompts are very short

          Sure, but that might change with better NPU support, making time-to-first-token quicker with larger prompts.

      • Someone 26 minutes ago

        > Everything I see advertised for (blurring, speech to text, etc...) are all things that I never felt like my non-NPU machine struggled with.

        I don’t know how good these neural engines are, but transistors are dead-cheap nowadays. That makes adding specialized hardware a valuable option, even if it doesn’t speed up things but ‘only’ decreases latency or power usage.

    • skrebbel an hour ago

      I have one as well and I simply don’t get it. I lucked into being able to do somewhat acceptable local LLM’ing by virtue of the Intel integrated “GPU” sharing VRAM and RAM, which I’m pretty sure wasn’t meant to be the awesome feature it turned out to be. Sure, it’s dead slow, but I can run mid size models and that’s pretty cool for an office-marketed HP convertible.

      (it’s still amazing to me that I can download a 15GB blob of bytes and then that blob of bytes can be made to answer questions and write prose)

      But the NPU, the thing actually marketed for doing local AI just sits there doing nothing.

    • zozbot234 30 minutes ago

      NPUs overall need better support from local AI frameworks. They're not "useless" for what they can do (low-precision bulk compute, which is potentially relevant for many of the newer models) and they could help address thermal limits due to their higher power efficiency compared to CPU/iGPU. But that all requires specialized support that hasn't been coming.

    • GrantMoyer 34 minutes ago

      The idea is that NPUs are more power efficient for convolutional neural network operations. I don't know whether they actually are more power efficient, but it'd be wrong to dismiss them just because they don't unlock new capabilities or perform well for very large models. For smaller ML applications like blurring backgrounds, object detection, or OCR, they could be beneficial for battery life.

      • margalabargala 32 minutes ago

        Not sure about all NPUs, but TPUs like Google's Coral accelerator are absolutely, massively more efficient per watt than a GPU, at least for things like image processing.

  • rsynnott a few seconds ago

    Well, yes, Dell, everyone knows that, but it is _most_ improper to actually _say_ it. What would the basilisk think?!

  • disfictional an hour ago

    As someone who spent a year writing an SDK specifically for AI PCs, it always felt like a solution in search of a problem. Like watching dancers in bunny suits sell CPUs: if the consumer doesn't know the pain point you're fixing, they won't buy your product.

    • martinald an hour ago

      Tbh it's been the same with Windows PCs since forever. Like MMX in the Pentium 1 days - it was marketed as basically essential for anything "multimedia" but provided little to no speedup (very little software was compiled for it).

      It's quite similar with Apple's Neural Engine, which AFAIK is used very little for LLMs, even via Core ML. I don't think I ever saw it being used in asitop. And I'm sure whatever was using it (facial recognition?) could easily have run on the GPU with no real efficiency loss.

  • FfejL a day ago

    > It's not that Dell doesn't care about AI or AI PCs anymore, it's just that over the past year or so it's come to realise that the consumer doesn't.

    I wish every consumer product leader would figure this out.

    • ericmcer a day ago

      People will want what LLMs can do they just don't want "AI". I think having it pervade products in a much more subtle way is the future though.

      For example, if you close a youtube browser tab with a comment half written it will pop up an `alert("You will lose your comment if you close this window")`. It does this if the comment is a 2 page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort. The end result is I only have to deal with that annoying popup when I really am glad it is there.

      That is a trivial example but you can imagine how a locally run LLM that was just part of the SDK/API developers could leverage would lead to better UI/UX. For now everyone is making the LLM the product, but once we start building products with an LLM as a background tool it will be great.
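      A hedged sketch of what that background use could look like in a browser, assuming a hypothetical on-device model handle (`localModel.classify` here is invented for illustration; it is not a real browser API):

      ```typescript
      // Hypothetical: only nag about closing the tab when a local model judges
      // the draft worth keeping. `localModel` is a placeholder for an on-device
      // model an SDK might expose; no such standard API exists today.
      declare const localModel: {
        classify(text: string): Promise<"keep" | "discard">;
      };

      const commentBox = document.querySelector<HTMLTextAreaElement>("#comment")!;
      let worthKeeping = false;

      // Re-evaluate the draft as the user types (a real version would debounce
      // this), so the unload handler itself can stay synchronous.
      commentBox.addEventListener("input", async () => {
        worthKeeping = (await localModel.classify(commentBox.value)) === "keep";
      });

      window.addEventListener("beforeunload", (event) => {
        if (worthKeeping) event.preventDefault(); // browser shows its own prompt
      });
      ```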

      It is actually a really weird time: my whole career we wanted to obfuscate implementation and present a clean UI to end users; we want them peeking behind the curtain as little as possible. Now everything is like "This is built with AI! This uses AI!".

      • wrl an hour ago

        > Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort.

        I read this post yesterday and this specific example kept coming back to me because something about it just didn't sit right. And I finally figured it out: Glancing at the alert box (or the browser-provided "do you want to navigate away from this page" modal) and considering the text that I had entered takes... less than 5 seconds.

        Sure, 5 seconds here and there adds up over the course of a day, but I really feel like this example is grasping at straws.

        • 9rx an hour ago

          The problem isn't so much the five seconds, it is the muscle memory. You become accustomed to blindly hitting "Yes" every time you've accidentally typed something into the text box, and then that time when you actually put a lot of effort into something... Boom. It's gone. I have been bitten before. Something like the parent described would be a huge improvement.

          Granted, it seems the even better UX is to save what the user inputs and let them recover if they lost something important. That would also help for other things, like crashes, which have also burned me in the past. But tradeoffs, as always.

          • officeplant 21 minutes ago

            >You become accustomed to blindly hitting "Yes" every time you've accidentally typed something into the text box, and then that time when you actually put a lot of effort into something... Boom. It's gone.

            I'm not sure we need even local AI's reading everything we do for what amounts to a skill issue.

            • 9rx 8 minutes ago

              You're quite right that those with skills have no need for computers, but for the rest of us there is no need for them to not have a good user experience.

          • pavel_lishin 21 minutes ago

            I have the exact opposite muscle memory.

      • mossTechnician a day ago

        > if you close a youtube browser tab with a comment half written it will pop up an `alert("You will lose your comment if you close this window")`. It does this if the comment is a 2 page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort.

        I don't think that's a great example, because you can evaluate the length of the content of a text box with a one-line "if" statement. You could even expand it to check for how long you've been writing, and cache the contents of the box with a couple more lines of code.

        An LLM, by contrast, requires a significant amount of disk space and processing power for this task, and it would be unpredictable and difficult to debug, even if we could define a threshold for "important"!
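        For instance, a minimal sketch of that length-plus-caching approach (names and threshold are illustrative):

        ```typescript
        // Warn only when the draft is long enough to plausibly matter, and cache
        // it so nothing is lost even when no warning fires.
        const commentBox = document.querySelector<HTMLTextAreaElement>("#comment")!;
        const MIN_IMPORTANT_LENGTH = 100; // crude proxy for "worth warning about"

        commentBox.addEventListener("input", () => {
          // Cache the draft; it survives accidental closes and crashes.
          localStorage.setItem("comment-draft", commentBox.value);
        });

        window.addEventListener("beforeunload", (event) => {
          if (commentBox.value.trim().length > MIN_IMPORTANT_LENGTH) {
            event.preventDefault(); // browser shows its "leave site?" prompt
          }
        });
        ```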

        • mort96 a day ago

          I think it's an excellent example to be honest. Most of the time whenever someone proposes some use case for a large language model that's not just being a chat bot, it's either a bad idea, or a decent idea that you'd do much better with something much less fancy (like this, where you'd obviously prefer some length threshold) than with a large language model. It's wild how often I've heard people say "we should have an AI do X" when X is something that's very obviously either a terrible idea or best suited for traditional algorithms.

          Sort of like how most of the time when people proposed a non-cryptocurrency use for "blockchain", they had either re-invented Git or re-invented the database. The similarity to how people treat "AI" is uncanny.

          • QuantumNomad_ a day ago

            > It's wild how often I've heard people say "we should have an AI do X" when X is something that's very obviously either a terrible idea or best suited for traditional algorithms.

            Likewise when smartphones were new, everyone and their mother was certain that their random niche thing that made no sense as an app would be a perfect app, and that if they could just get someone to make the app they'd be rich. (And of course ideally, the haver of the misguided idea would get the lion's share of the riches, and the programmer would get a slice of pizza and perhaps a percentage or two of ownership if the idea haver was extra generous.)

            • fragmede a day ago

              With Claude Code doing the implementing now, we'll have to see who gets which slice of pizza!

              • reactordev an hour ago

                The difference now is that the person with an idea doesn't need a programmer or anyone to share the pizza with. They are free to gorge on all 18" of it.

        • mandevil a day ago

          The difference between "AI" and "linear regression" is whether you are talking to a VC or an engineer.

      • publicdebates an hour ago

        > readily discard short or nonsensical input

        When "asdfasdf" is actually a package name, and it's in reply to a request for an NPM package, and the question is formulated in a way that makes it hard for LLMs to make that connection, you will get a false positive.

        I imagine this will happen more than not.

      • wavemode a day ago

        > Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input.

        That doesn't sound ideal at all. And in fact highlights what's wrong with AI product development nowadays.

        AI as a tool is wildly popular. Almost everyone in the world uses ChatGPT or knows someone who does. Here's the thing about tools - you use them in a predictable way and they give you a predictable result. I ask a question, I get an answer. The thing doesn't randomly interject when I'm doing other things and I asked it nothing. I swing a hammer, it drives a nail. The hammer doesn't decide that the thing it's swinging at is vaguely thumb-shaped and self-destruct.

        Too many product managers nowadays want AI to not just be a tool, they want it to be magic. But magic is distracting, and unpredictable, and frequently gets things wrong because it doesn't understand the human's intent. That's why people mostly find AI integrations confusing and aggravating, despite the popularity of AI-as-a-tool.

        • ericmcer 3 hours ago

          But... A lot of stuff you rely on now was probably once distracting and unpredictable. There are a ton of subtle UX behaviors a modern computer is doing that you don't notice, but if they all disappeared and you had to use Windows 95 for a week, you would miss them.

          That is more what I am advocating for: subtle background UX improvements based on an LLM's ability to interpret a user's intent. We used to have only limited ability to look at an application's state and try to determine a user's intent, but it is easier to do that with an LLM. Yeah, like you point out, some users don't want you to try to predict their intent, but if you can do it accurately a high percentage of the time it is "magic".

          • mjfisher an hour ago

            Serious question: what are those things from windows 95/98 I might miss?

            Rose tinted glasses perhaps, but I remember it as a very straightforward and consistent UI that provided great feedback, was snappy and did everything I needed. Up to and including little hints for power users like underlining shortcut letters for the & key.

          • marcosdumay 31 minutes ago

            > But... A lot of stuff you rely on now was probably once distracting and unpredictable.

            And nobody relied on them when they were distracting and unpredictable. People only rely on them now because they are not.

            LLMs won't ever be predictable. They are designed not to be. A predictable AI is something different from an LLM.

          • nottorp 5 minutes ago

            > There are a ton of subtle UX behaviors a modern computer is doing that you don't notice, but if they all disappeared and you had to use Windows 95 for a week, you would miss them.

            Like what? All those popups screaming that my PC is unprotected because I turned off windows firewall?

        • wredcoll a day ago

          > The hammer doesn't decide that the thing it's swinging at is vaguely thumb-shaped and self-destruct.

          Sawstop literally patented this and made millions and seems to have genuinely improved the world.

          I personally am a big fan of tools that make it hard to mangle my body parts.

          • wavemode a day ago

            sawstop is not AI

            • ChoGGi 2 hours ago

              I mean, I wouldn't want sawstop to hallucinate my finger is a piece of wood.

            • wredcoll a day ago

              Sure, where's the line?

              If you want to tell me that LLMs are inherently non-deterministic, then sure, but from the point of view of a user, a SawStop activating because the wood is wet is really not expected either.

              • GuinansEyebrows an hour ago

                Also from the point of view of a user: in this example, while frustrating/possibly costly, a false positive is infinitely preferable to a false negative.

              • wavemode 21 hours ago

                Mm yeah, I see the point you're making.

                (Though, of course, there certainly are people who dislike sawstop for that sort of reason, as well.)

        • bluGill a day ago

          I want magic that works. Sometimes I want a tool to interrupt me! I know my route to work so I'm not going to ask how I should get there today - but 1% of the time there is something wrong with my plan (accident, construction...) and I want the tool to say something. I know I need to turn right to get someplace, but sometimes as a human I'll say left instead, confusing both me and the driver when they don't turn right; an AI that realizes who made the mistake would help.

          The hard part is the AI needs to be correct when it does something unexpected. I don't know if this is a solvable problem, but it is what I want.

          • yndoendo 19 hours ago

            Magic in real life never works 100% of the time. It is all an illusion where some observers understand the trick and others do not. Those who understand it have the potential to break the magic. Even the magician can botch the trick.

            I want reproducibility not magic.

            • bluGill 8 hours ago

              It is magic that I can touch a switch on the wall and lights come on. It is magic that I have a warm house despite the outside temperature being near freezing. We have plenty of other magic that works. I want more.

              • nottorp 4 minutes ago

                If your light switch doesn't turn on the lights any more it's probably broken.

                If your "AI" light switch doesn't turn on the lights, you have to rephrase the prompt.

              • Telemakhos an hour ago

                Electricity, light, and heat aren't magic: they're science. Science is something well understood. Something that seems magical is something poorly understood. When I ask AI a question, I don't know whether it will tell me something truthful, mendacious in a verisimilitudinous way, or blatantly wrong, and I can only tell when it's blatantly wrong. That's magic, and I hate magic. I want more science in my life.

      • slg a day ago

        >For example, if you close a youtube browser tab with a comment half written it will pop up an `alert("You will lose your comment if you close this window")`. It does this if the comment is a 2 page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort. The end result is I only have to deal with that annoying popup when I really am glad it is there.

        The funny thing is that this exact example could also be used by AI skeptics. It's forcing an LLM into a product with questionable utility, causing it to cost more to develop, be more resource intensive to run, and behave in a manner that isn't consistent or reliable. Meanwhile, if there was an incentive to tweak that alert based off likelihood of its usefulness, there could have always just been a check on the length of the text. Suggesting this should be done with an LLM as your specific example is evidence that LLMs are solutions looking for problems.

        • fragmede 21 hours ago

          I've been totally AI-pilled because I don't see why that's of questionable utility. How is a regexp going to tell the difference between "asdffghjjk" and "So, she cheated on me"? A mere byte count isn't going to do it either.

          If the computer can tell the difference and be less annoying, it seems useful to me?

          • slg 20 hours ago

            Who said anything about regexp? I was literally talking about something as simple as "if(text.length > 100)". Also the example provided was distinguishing "a 2 page essay or 'asdfasdf'" which clearly can be accomplished with length much easier than either an LLM or even regexp.

            We should keep in mind that we're trying to optimize for the user's time. "So, she cheated on me" takes less than a second to type. It would probably take the user longer to respond to whatever pop-up warning you give than to just retype that text again. So what actual value do you think the LLM is contributing here that justifies the added complexity and overhead?

            Plus that benefit needs to overcome the other undesired behavior that an LLM would introduce such as it will now present an unnecessary popup if people enter a little real data and intentionally navigate away from the page (and it should be noted, users will almost certainly be much more likely to intentionally navigate away than accidentally navigate away). LLMs also aren't deterministic. If 90% of the time you navigate away from the page with text entered, the LLM warns you, then 10% of the time it doesn't, those 10% times are going to be a lot more frustrating than if the length check just warned you every single time. And from a user satisfaction perspective, it seems like a mistake to swap frustration caused by user mistakes (accidentally navigating away) with frustration caused by your design decisions (inconsistent behavior). Even if all those numbers end up falling exactly the right way to slightly make the users less frustrated overall, you're still trading users who were previously frustrated at themselves for users being frustrated at you. That seems like a bad business decision.

            Like I said, this all just seems like a solution in search of a problem.

          • ChoGGi 2 hours ago

            What about counting words based on the user's current language, and prompting off that?

            Close enough for the issue to me and can't be more expensive than asking an LLM?

          • MichaelRo 19 hours ago

            We went from the bullshit "internet of things" to "LLM of things", or as Sheldon from Big Bang Theory put it "everything is better with Bluetooth".

            Literally "T-shirt with Bluetooth", that's what 99.98% of "AI" stickers today advertise.

      • thwarted a day ago

        YouTube could use AI to not recommend videos I've already watched, which is apparently a really hard problem.

        • Ekaros 10 hours ago

          It just might be that a lot of users watch the same videos multiple times. They must have some data on this and see that recommending the same videos gets more views than recommending new ones.

          • i386 an hour ago

            I work for YouTube. You’re hired.

      • everdrive a day ago

        Using such an expensive technology to prevent someone from making a stupid mistake on a meaningless endeavor seems like a complete waste of time. Users should just be allowed to fail.

        • plasticsoprano a day ago

          Amen! This is part of the overall societal decline of no failing for anyone. You gotta feel the pain to get the growth.

        • anthonypasq a day ago

          If someone from 1960 saw the quadrillions of CPU cycles we are wasting on absolutely nothing every second, they would have an aneurysm.

          • robrain a few seconds ago

            As someone from 1969, but with an excellent circulatory system, I just roll my eyes and look forward to the sound of bubbles bursting whilst billionaires weep.

        • AuryGlenz a day ago

          Expensive now is super cheap 10 years from now though.

      • ryukoposting 26 minutes ago

        Bingo. Nobody uses ChatGPT because it's AI. They use it because it does their homework, or it helps them write emails, or whatever else. The story can't just be "AI PC." It has to be "hey look, it's ChatGPT but you don't have to pay a subscription fee."

      • Wowfunhappy 18 hours ago

        > For example, if you close a youtube browser tab with a comment half written it will pop up an `alert("You will lose your comment if you close this window")`. It does this if the comment is a 2 page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort.

        I agree this would be a great use of LLMs! However, it would have to be really low latency, like on the order of milliseconds. I don't think the tech is there yet, although maybe it will be soon-ish.

      • ambicapter a day ago

        So, like, machine learning. Remember when people used to call it AI/ML? Definitely wasn't as much money being spent on it back then.

      • nottorp a day ago

        > The end result is I only have to deal with that annoying popup when I really am glad it is there.

        Are you sure about that? It will trigger only for what the LLM declares important, not what you care about.

        Is anyone delivering local LLMs that can actually be trained on your data? Or just pre-made models for the lowest common denominator?

      • gt0 14 hours ago

        Would that be ideal though? Adding enormous complexity to solve a trivial problem which would work I'm sure 99.999% of the time, but not 100% of the time.

        The ideal, in my view, is that the browser asks you if you are sure regardless of content.

        I use LLMs, but that browser "are you sure" type of integration is adding a massive amount of work to do something that ultimately isn't useful in any real way.

      • thombles a day ago

        > you can imagine how a locally run LLM that was just part of the SDK/API developers could leverage would lead to better UI/UX

        It’s already there for Apple developers: https://developer.apple.com/documentation/foundationmodels

        I saw some presentations about it last year. It’s extremely easy to use.

      • nkrisc a day ago

        It’s because “AI” isn’t a feature. “AI” without context is meaningless.

        Google isn’t running ads on TV for Google Docs touting that it uses conflict-free replicated data types, or whatever, because (almost entirely) no one cares. Most people care the same amount about “AI” too.

      • expedition32 a day ago

        Honestly some of the recommendations to watch next I get on Netflix are pretty good.

        No idea if they are AI Netflix doesn't tell and I don't ask.

        AI is just a toxic brand at this point IMO.

    • notatoad a day ago

      I think they will eventually. It’s always been a very incoherent sales pitch that your expensive PCs are packed full of expensive hardware that’s supposed to do AI things, but your cheap PCs that have none of that are still capable of doing 100% of the AI tasks that customers actually care about: accessing chatGPT.

      • voidfunc a day ago

        Also, what kind of AI tasks is the average person doing? The people thinking about this stuff are detached from reality. For most people a computer is a gateway to talking to friends and family, sharing pictures, browsing social media, and looking up recipes and how-to guides. Maybe they do some tracking of things as well in something like Excel or Google Sheets.

        Consumer AI has never really made any sense. It's going to end up in the same category of things as 3D TV's, smart appliances, etc.

        • ryandrake a day ago

          I don't remember any other time in the tech industry's history when "what companies and CEOs want to push" was less connected to "what customers want." Nobody transformed their business around 3D TVs like current companies are transforming themselves to deliver "AI-everything".

          • jimbokun 15 minutes ago

            The customers are CEOs dreaming of a human-free work force.

          • Tanoc 13 hours ago

            I think it does make sense if you're at a certain level of user hardware. If you make local computing infeasible because of the computational or hardware cost, it becomes much easier to sell compute as a service. Since about 2014 almost every single change to paid software has been to make it a recurring fee rather than a single payment, and now they can do that with hardware as well. To the financially illiterate, paying a $15-a-month subscription to two LLMs from a phone they have a $40 monthly payment on for two years seems like a better deal than paying $1,200 for a desktop computer with free software that they'll use a tenth as much as the phone. This is why Nvidia is offering GeForce Now the same way in one-hundred-hour increments: they get $20 a month that goes directly to them, with the chance of up to an additional $42 maximum if the person buys additional extensions of equal amount (another one hundred hours). That ends up with $744 a year going directly to Nvidia without any board partners getting a cut, while a mid-grade GPU with better performance and no network latency would cost about that much and last the user five entire years. Most people won't realize that, long before they reach the end of the useful lifetime of the service, they'll have paid three to four times as much as if they had just bought the hardware outright.
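            For what it's worth, the arithmetic above works out roughly as described (a small sketch using the comment's own numbers):

            ```typescript
            // Subscription vs. one-time hardware purchase, using the figures above:
            // $20/month base plus up to ~$42/month in extra 100-hour blocks,
            // versus a ~$744 mid-range GPU that lasts about five years.
            const monthlySpend = 20 + 42;           // dollars per month to the service
            const yearlySpend = monthlySpend * 12;  // = $744 per year
            const gpuPrice = 744;
            const gpuLifetimeYears = 5;

            for (let year = 1; year <= gpuLifetimeYears; year++) {
              console.log(`Year ${year}: $${yearlySpend * year} streamed vs $${gpuPrice} once`);
            }
            // By years 3-4 the subscription has cost roughly 3-4x the hardware.
            ```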

            With more of the compute being pushed off of local hardware they can cheapen out on said hardware with smaller batteries, fewer ports and features, and weaker CPUs. This lessens the pressure they feel from consumers who were taught by corporations in the 20th century that improvements will always come year over year. They can sell less complex hardware and make up for it with software.

            For the hardware companies it's all rent seeking from the top down. And the push to put "AI" into everything is a blitz offensive to make this impossible to escape. They just need to normalize non-local computing and have it succeed this time, unlike when they tried it with the "cloud" craze a few years ago. But the companies didn't learn the intended lesson last time when users straight up said that they don't like others gatekeeping the devices they're holding right in their hands. Instead the companies learned they have to deny all other options so users are forced to acquiesce to the gatekeeping.

          • walterbell a day ago

            If memory shortages make existing products non-viable (e.g. 50% price increases on mini PCs, https://news.ycombinator.com/item?id=46514794), will consumers flock to new/AI products like OpenAI "pen" or reject those outright?

        • jimbokun 15 minutes ago

          “Do my homework assignment for me.”

        • tjr a day ago

          Just off the top of my head of some "consumer" areas that I personally encounter...

          I don't want AI involved in my laundry machines. The only possible exception I could see would be some sort of emergency-off system, but I don't think that even needs to be "AI". But I don't want AI determining when my laundry is adequately washed or dried; I know what I'm doing, and I neither need nor want help from AI.

          I don't want AI involved in my cooking. Admittedly, I have asked ChatGPT for some cooking information (sometimes easier than finding it on slop-and-ad-ridden Google), but I don't want AI in the oven or in the refrigerator or in the stove.

          I don't want AI controlling my thermostat. I don't want AI controlling my water heater. I don't want AI controlling my garage door. I don't want AI balancing my checkbook.

          I am totally fine with involving computers and technology in these things, but I don't want it to be "AI". I have way less trust in nondeterministic neural network systems than I do in basic well-tested sensors, microcontrollers, and tiny low-level C programs.

          • the_snooze a day ago

            A lot of consumer tech needs have been met for decades. The problem is that companies aren't able to extract rent from all that value.

        • PunchyHamster a day ago

          I do think it makes some sense in limited capacity.

          Have some half-decent model integrated with the OS's built-in image editing app so the average user can do basic fixing of their vacation photos with some prompts.

          Have some local model with access to files automatically tag your photos, maybe even ask some questions and add tags based on that, and then use that for search ("give me a photo of that person from last year's vacation").

          Similarly with chat records

          But once you start throwing it in the cloud... people get anxious about their data getting lost, or might not exactly see the value in a subscription.

        • chpatrick 19 hours ago

          Consumer local AI? Maybe.

          On the other hand everyone non-technical I know under 40 uses LLMs and my 74 year old dad just started using ChatGPT.

          You could use a search engine and hope someone answered a close enough question (and wade through the SEO slop), or just get an AI to actually help you.

        • fragmede a day ago

          You and I live in different bubbles. ChatGPT is the go-to for my non-techie friends to ask for advice on basically everything: from women asking it for relationship advice and medical questions, to guys with business ideas and lawsuit stuff.

    • extraduder_ire a day ago

      Dell is less beholden to shareholder pressure than others; Michael Dell has owned 50% of the company since it went public again.

    • pmdr a day ago

      Meanwhile we got Copilot in Notepad.

    • ivanjermakov 20 hours ago

      Treating consumers as customers, good.

    • nikanj a day ago

      Companies don’t really exist to make products for consumers, they live to create stock value for investors. And the stock market loves AI

      • bluGill a day ago

        The stock market has always been about whatever is the fad in the short term, and whatever produces value in the long term. Today AI is the fad, but investors who care about fundamentals have always cared about pleasing customers because that is where the real value has always come from. (Though be careful - not all customers are worth having; some wannabe customers should not be pleased.)

      • ehnto 17 hours ago

        As someone pointed out, Dell is 50% owned by Michael Dell. So it's less influenced by this paradigm.

    • PunchyHamster a day ago

      There is place for it but it is insanely overrated. AI overlords are trying to sell incremental (if in places pretty big) improvement in tools as revolution.

  • yalogin a day ago

    They nailed it. Consumers don't care about AI; they care about functionality they can use, and don't much care whether it uses AI or not. It's on the OS and apps to figure out the AI part. This is why, even though people think Apple is far behind in AI, they are doing it at their own pace. The immediate hardware sales for them did not get impacted by the lack of flashy AI announcements. They will slowly get there, but they have time. The current froth is all about AI infrastructure, not consumer devices.

    • jorvi a day ago

      The only thing Apple is behind on in the AI race is LLMs.

      They've been vastly ahead of everyone else with things like text OCR, image element recognition / extraction, microphone noise suppression, etc.

      iPhones have had these features 2-5 years before Android did.

      • giancarlostoro a day ago

        TTS is absolutely horrible on iOS. I have nearly driven into a wall when trying to use it whilst driving and it goofs up what I've said terribly. For the love of all things holy, will someone at Apple finally fix text to speech? It feels like they last touched it in 2016. My phone can run offline LLMs and generate images but it can't understand my words.

        • galleywest200 21 hours ago

          > I have nearly driven into a wall when trying to use it whilst driving and it goofs up what I've said terribly.

          People should not be using their phones while driving anyways. My iPhone disables all notifications, except for Find My notifications, while driving. Bluetooth speaker calls are an exception.

        • wolvoleo 11 hours ago

          It sounds like you mean STT not TTS there?

          • giancarlostoro 4 hours ago

            You're right, in my rage I typoed. It's really frustrating; even friends will text me and their text makes no sense, and 2 minutes later: "STUPID VOICE TO TEXT". I have a few friends who drive trucks, so they need to be able to use their voice to communicate.

            • delecti 28 minutes ago

              Better speech transcription is cool, but that feels kinda contrived. Phone calls exist, so do voice messages sent via texting apps, and professional drivers can also just wait a bit to send messages if they really must be text; they're on the job, but if it's really that urgent they can pull over.

              • jimbokun 9 minutes ago

                They can also use paper maps instead of GPS.

            • wolvoleo 2 hours ago

              I have to say that OpenAI's Whisper model is excellent. If you could leverage that somehow I think it would really improve. I run it locally myself on an old PC with 3060 card. This way I can run whisper large which is still speedy on a GPU especially with faster-whisper. Added bonus is the language autodetection which is great because I speak 3 languages regularly.

              I think there's even better models now but Whisper still works fine for me. And there's a big ecosystem around it.

      • fragmede a day ago

        Kind of a big "only" though. Siri is still shit and it's been 15 years since initial release.

        • 0x38B 12 hours ago

          When I'm driving and tell Siri, "Call <family member name>", sometimes instead of calling, it says, "To who?", and I can't get it to call no matter what I do.

    • nerdjon a day ago

      All of the reporting about Apple being behind on AI is driving me insane and I hope that what Dell is doing is finally going to be the reversal of this pattern.

      The only thing that Apple is really behind on is shoving the word (word?) "AI" in your face at every moment when ML has been silently running in many parts of their platforms well before ChatGPT.

      Sure we can argue about Siri all day long and some of that is warranted but even the more advanced voice assistants are still largely used for the basics.

      I am just hoping that this bubble pops or the marketing turns around before Apple feels "forced" to do a copilot or recall like disaster.

      LLM tech isn't going away and it shouldn't, it has its valid use cases. But we will be much better when it finally goes back into the background like ML always was.

      • yalogin a day ago

        Right! Also I don’t think Siri is that important to the overall user experience on the ecosystem. Sure it’s one of the most visible use cases but how many people really care about that? I don’t want to talk out loud to do tasks usually, it’s helpful in some specific scenarios but not the primary use case. The text counterpart of understanding user context on the phone is more important even in the context of llms, and that what plays into the success of their stack going forward

        • SoftTalker 21 minutes ago

          I've never used Siri. Never even tried it. It's disabled on my phone as much as I've been able to work out how to do.

        • lurking_swe 12 hours ago

          are you really asking why someone would like a much better siri?

          - truck drivers that are driving for hours.

          - commuters driving to work

          - ANYONE with a homepod at home that likes to do things hands free (cooking, dishes, etc).

          - ANYONE with airpods in their ears that is not in an awkward social setting (bicycle, walking alone on the sidewalk, on a trail, etc)

          every one of these interaction modes benefits from a smart siri.

          That’s just the tip of the iceberg. Why can’t I have a siri that can intelligently do multi step actions for me? “siri please add milk and eggs to my Target order. Also let my wife know that i’ll pick up the order on my way home from work. Lastly, we’re hosting some friends for dinner this weekend. I’m thinking Italian. Can you suggest 5 recipes i might like? [siri sends me the recipes ASYNC after a web search]”

          All of this is TECHNICALLY possible. There's no reason Apple couldn't build out, or work with, various retailers to create useful MCP-like integrations into siri. Just omit dangerous or destructive actions and require the user to manually confirm or perform those actions. Having an LLM add/remove items in my cart is not dangerous. Importantly, siri should be able to do some tasks for me in the background. Like on my Mac: I'm able to launch Cursor and have it work in agent mode to implement some small feature in my project, while I do something else on my computer. Why must I stare at my phone while siri "thinks" and replies with something stupid lol. Similarly, why can't my phone draft a reply to an email ASYNC and let me review it later at my leisure? Everything about siri is so synchronous. It sucks.

          It’s just soooo sooo bad when you consider how good it could be. I think we’re just conditioned to expect it to suck. It doesn’t need to.

          • nerdjon 7 hours ago

            I doubt that anyone is actually suggesting that Siri should not be better, but to me the issues with it are very much overblown: it does what I actually ask it to do the vast majority of the time, since most of the time what I actually want to ask it to do are basic things.

            I have a several homepods, and it does what I ask it to do. This includes being the hub of all of my home automation.

            Yes, there are areas it can improve, but I think the important question is how much use those things would actually get, versus making for a cool announcement and a fun party trick that is never used again.

            We have also seen the failures that come from trying to treat an LLM as a magic box that can just do things for you, so while these things are "technically" possible, they are far from being reliable.

    • tecoholic 10 hours ago

      Nailed it? Maybe close. They still have a keyboard button dedicated to Copilot. That thing can't be reconfigured easily.

      • angulardragon03 a few seconds ago

        Required for Windows certification nowadays iirc

      • xgkickt an hour ago

        Can PowerToys remap it?

    • bluGill a day ago

      Even customers who care about AI (or perhaps should...) have other concerns. With the RAM shortage coming up many customers may choose to do without AI features to save money even though they want it at a lower price.

  • mirekrusin 14 minutes ago

    Isn't the only AI PC a Mac Studio?

  • Traster a day ago

    Fundamentally when you think about it, what people know today as AI are things like ChatGPT and all of those products run on cloud infrastructure mainly via the browser or an app. So it makes perfect sense that customers just get confused when you say "This is an AI PC". Like, what a weird thing to say - my smartphone can do ChatGPT why would I buy a PC to do that. It's just a totally confusing selling point. So you ask the question why is it an AI PC and then you have to talk about NPUs, which apart from anything else are confusing (Neural what?) but bring you back to this conversation:

    What is an NPU? Oh it's a special bit of hardware to do AI. Oh ok, does it run ChatGPT? Well no, that still happens in the cloud. Ok, so why would I buy this?

  • pier25 a day ago

    Consumers are not idiots. We know all this AI PC crap is mostly a useless gimmick.

    One day it will be very cool to run something like ChatGPT, Claude, or Gemini locally in our phones but we're still very, very far away from that.

    • MBCook a day ago

      It’s today’s 3D TVs. It’s something investors got all hyped up about that everybody “has to have“.

      There is useful functionality there. Apple has had it for years, so have others. But at the time they weren’t calling it “AI“ because that wasn’t the cool word.

      I also think most people associate AI with ChatGPT or other conversational things. And I’m not entirely sure I want that on my computer.

      But some of the things Apple and others have done that aren't conversational are very useful. Pervasive OCR on Windows and Mac is fantastic, for example. You could brand that as AI, but you don't really need to; no one cares whether you do or not.

      • pier25 a day ago

        > Pervasive OCR on Windows and Mac is fantastic, for example.

        I agree. Definitely useful features, but still a far cry from LLMs, which are what the average consumer identifies as AI.

    • cco 12 hours ago

      Not that far away, you can run a useful model on flagship phones today, something around GPT 3.5's level.

      So we're probably only a few years out from today's SOTA models on our phones.

  • weird_trousers a day ago

    Finally companies understand that consumers do not want AI products, but just better, stronger, and cheaper products.

    Unfortunately investors are not ready to hear that yet...

    • ashleyn a day ago

      If the AI-based product is suitable for purpose (whatever "for purpose" may mean), then it doesn't need to be marketed first and foremost as "AI". This strikes me as pandering more to investors than consumers, and even signaling that you don't value the consumers you sell to, or that you regard the company's stock as more of the product than the actual product.

      I can see a trend of companies continuing to use AI, but instead portraying it to consumers as "advanced search", "nondeterministic analysis", "context-aware completion", etc - the things you'd actually find useful that AI does very well.

      • PunchyHamster a day ago

        It's basically being used as "see, we keep up with the times" label, as there is plenty of propaganda that basically goes "move entirely to using AI for everything or you're obsolete"

    • m000 a day ago

      The problem is that there are virtually no off-the-shelf local AI applications. So they're trying to sell us expensive hardware with no software that takes advantage of it.

      • ehnto 17 hours ago

        Yes it's a surprising marketing angle. What are they expecting people to run on these machines? Do they expect your average joe to pop into the terminal and boot up ollama?

        Anyone technical enough to jump into local AI usage can probably see through the hardware fluff, and will just get whatever laptop has the right amount of VRAM.

        They are just hoping to catch the trend chasers out, selling them hardware they won't use, confusing it as a requirement for using ChatGPT in the browser.

    • neilsimp1 a day ago

      I agree with you, and I don't want anything related to the current AI craze in my life, at all.

      But when I come on HN and see people posting about AI IDEs and vibe coding and everything, I'm led to believe that there are developers that like this sort of thing.

      I cannot explain this.

      • jimbokun 2 minutes ago

        If you develop software you can’t be as productive without an LLM as a competitor or coworker can be with one.

      • afavour a day ago

        I see using AI for coding as a little different. I'm producing something that is designed for a machine to consume and react to. Code is the means by which I express my aims to the machine. With AI there's an extra layer of machine that transforms my written aims into a language any machine can understand. I'm still ambivalent about it, I'm proud of my code. I like to know it inside out. Surrendering all that feels alien to me. But it's also undeniable that AI has sped up a bunch of the boring grunt work I have to do in projects. You can write, say, an OpenAPI spec, some tests and tell the AI to do the rest. It's very, very far from perfect but it remains very useful.

        But the fact remains that I'm producing something for a machine to consume. When I see people using AI to e.g. write e-mails for them that's where I object: that's communication intended for humans. When you fob that off onto a machine something important is lost.

        • nottorp a day ago

          > I like to know it inside out. Surrendering all that feels alien to me.

          It's okay, you'll just forget you were ever able to know your code :)

          • CamperBob2 2 hours ago

            I've already forgotten most assembly languages I ever used. I look forward to forgetting C++.

            • nottorp 29 minutes ago

              Last part is very common, but what's wrong with assembly languages?

              But I wasn't talking about forgetting one language or another, I was talking about forgetting how to program completely.

      • RevEng 16 hours ago

        Even as a principal software developer and someone who is skeptical and exhausted with the AI hype, AI IDEs can be useful. The rule I give to my coworkers is: use it where you know what to write but want to save time doing it. Unit tests are great for this. Quick demos and test benches are great. Boilerplate and glue are great for this. There are lots of places where trivial, mind-numbing work can be done quickly and effortlessly with an AI. These are cases where it's actually making life better for the developer, not replacing their expertise.

        I've also had luck with it helping with debugging. It has the knowledge of the entire Internet and it can quickly add tracing and run debugging. It has helped me find some nasty interactions that I had no idea were a thing.

        AI certainly has some advantages in certain use cases; that's why we have been using AI/ML for decades. The latest wave of models brings even more possibilities. But of course, it also brings a lot of potential for abuse and a lot of hype. I, too, am quite sick of it all and can't wait for the bubble to burst so we can get back to building effective tools instead of making wild claims for investors.

        • brailsafe an hour ago

          I think you've captured how I feel about it too. If I try to go beyond the scopes you've described, with Cursor in my case and a variety of models, I often end up wasting time unless it's a purely exploratory request.

          "This package has been removed, grep for string X and update every reference in the entire codebase" is a great conservative task; easy to review the results, and I basically know what it should be doing and definitely don't want to do it.

          "Here's an ambiguous error, what could be the cause?" sometimes comes up with nonsense, but sometimes actually works.

      • add-sub-mul-div a day ago

        Partly it's these people all trying to make money selling AI tools to each other, and partly there's a lot of people who want to take shortcuts to learning and productivity without thinking or caring about long term consequences, and AI offers that.

      • CamperBob2 a day ago

        > I cannot explain this.

        That usually means you're missing something, not that everyone else is.

        • kevinh 21 hours ago

          Sometimes, but I didn't get sucked into the crypto/blockchain/NFT hype and feel like that was the right call in hindsight.

      • blibble a day ago

        > I'm led to believe that there are developers that like this sort of thing.

        this is their aim, along with rabbiting on about "inevitability"

        once you drop out of the SF/tech-oligarch bubble the advocacy drops off

  • m348e912 a day ago

    Protip: if you are considering a Dell XPS laptop, consider the Dell Precision laptop workstation instead, which is the business version of the consumer-level XPS.

    It also looks like names are being changed, and the business laptops are going with a Dell Pro (essential/premium/plus/max) naming convention.

    • ishtanbul a day ago

      I have the Precision 5690 (the 16-inch model) with an Ultra 7 processor and a 4K touchscreen (2025 model). It is very heavy, but it's very powerful. My main gripe is that the battery life is very bad, and it has a 165-watt charger, which won't work on most planes. So if you fly a lot for work, this laptop will die on you unless you bring a lower-wattage charger. It also doesn't sleep properly: I often find it in my bag hours after closing it with the fans going at full blast. It should have a 4th USB port (like the smaller version!). Otherwise I have no complaints (other than about Windows 11!).

      • microflash a day ago

        After using several Precisions at work, I now firmly believe that Dell does not know how to cool their workstations properly. They are all heavy, pretty bad at energy efficiency and run extremely hot (I use my work machine laid belly up in summer since fans are always on). I’d take a ThinkPad or Mac any day over any Dell.

        • hallmason17 a day ago

          Power-hungry Intel chips and graphics cards are inconvenient in laptops when it comes to battery life and cooling. It is especially noticeable if you spend any time using an M-series MacBook Pro, where performance is the same or better, but you get 16 hours of battery life. I prefer to use ThinkPads, but Apple just has a big technological advantage here that stands out in the UX department. I really hope advances are made quickly by competitors to get similar UX in a more affordable package.

  • tpurves a day ago

    Dell is cooked this year for reasons entirely outside their control. DRAM and storage/drive shortages are causing the costs of those to go to the moon. And Dell's inventory-light supply chain and narrow margins put them in a perfect storm of trouble.

    • soupfordummies a day ago

      So it was RAM a couple months ago and now storage/drives are going to the moon also?

      • stonogo a day ago

        It was RAM a couple months ago, and it continues to be RAM. Major RAM manufacturers like SK Hynix are dismantling NAND production to increase RAM manufacturing, which is leading to sharp price increases for solid-state storage.

    • dude250711 a day ago

      Anything but admitting that the AI king is naked, here on HN...

      • cogman10 a day ago

        What? No, this is a pretty relevant comment; the situation it describes is being directly caused by AI.

        Consumer PCs and hardware are going to be expensive in 2026 and AI is primarily to blame. You can find examples of CEOs talking about buying up hardware for AI without having a datacenter to run it in. This run on hardware will ultimately drive hardware prices up everywhere.

        The knock on effect is that hardware manufacturers are likely going to spend less money doing R&D for consumer level hardware. Why make a CPU for a laptop when you can spend the same research dollars making a 700 core beast for AI workloads in a datacenter? And you can get a nice premium for that product because every AI company is fighting to get any hardware right now.

        • bluGill a day ago

          > Why make a CPU for a laptop when you can spend the same research dollars

          You might be right, but I suspect not. While the hardware companies might be willing to do without laptop sales, data centers need the power efficiency as well.

          Facebook has (well had - this was ~10 years ago when I heard it) a team of engineers making their core code faster because in some places a 0.1% speed improvement across all their servers results in saving hundreds of thousands of dollars per month (sources won't give real numbers but reading between the lines this seems about right) on the power bill. Hardware that can do more with less power thus pays for itself very fast in the data center.

          Also cooling chips internally is often a limit of speed, so if you can make your chip just a little more efficient it can do more. Many CPUs will disable parts of the CPU not in use just to save that heat, if you can use more of the CPU that translates to more work done and in turn makes you better than the competition.

          Of course the work must be done, so data centers will sometimes have to settle for whatever they can get. Still they are always looking for faster chips that use less power because that will show up on the bottom line very fast.

        • flyinghamster a day ago

          See also Crucial exiting the consumer market. That one hit me out of left field, since they've been my go-to for RAM for decades. Though I also see it as a bit of the recurring story of American business: "It's too much trouble to make consumer products. Let's just make components, sell raw materials, or be middlemen instead. No one will notice."

  • GuB-42 an hour ago

    WTF is an "AI PC"? Most "AI" happens on the internet, in big datacenters; your PC has nothing to do with that. If anything, it confuses users who don't understand why they need a special PC when any PC can access chatgpt.com.

    As for the few who actually want to do AI locally, they aren't going to look for "AI PCs". They're going to look for specific hardware: lots of RAM, big GPUs, etc. And it's not a very common use case anyway.

    I have an "AI laptop", and even I, who run a local model from time to time and bought that PC with my own money, don't know what it means: probably some matrix-multiplication hardware that I have no idea how to take advantage of. It was a good deal for the specs it had; that's the only thing I cared about. The "AI" part was just noise.

    At least a "gaming PC" means something. I expect high power, a good GPU, a CPU with good single-core performance, usually 16 to 32 GB of RAM, high refresh rate monitor, RGB lighting. But "AI PC", no idea.

  • walterbell a day ago

    > What we've learned over the course of this year, especially from a consumer perspective, is they're not buying based on AI .. In fact I think AI probably confuses them more than it helps them understand a specific outcome.

    Do consumers understand that OEM device price increases are due to an AI-induced memory price spike of over 100%?

  • d--b 18 minutes ago

    People don't want AI PCs, because they don't want to spend 5000 bucks for something that's half as good as the free version of ChatGPT.

    But we've been there before. Computers are going to get faster for cheaper, and LLMs are going to get more optimized, because right now they do a ton of useless calculations for sure.

    There's a market, just not right now.

  • Galanwe 35 minutes ago

    I have a "Copilot" button on my new ThinkPad. I have yet to understand what it does that necessitates a dedicated button.

    On Linux it does nothing, on Windows it tells me I need an Office 365 plan to use it.

    Like... what the hell... They literally placed a paywalled, Windows-only physical button on my laptop.

    What next, an always-on screen for ads next to the trackpad?

    • rzzzt 33 minutes ago

      It's equivalent to Win + Shift + F23 so you can map it to some useful action if you have a suitable utility at hand.
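
      A minimal sketch of remapping it, assuming the third-party Python `keyboard` module and assuming it recognises the "windows+shift+f23" combo (both assumptions on my part, and the launched program is just an example):

          import subprocess

          import keyboard  # third-party: pip install keyboard (may need elevated privileges)

          # Hypothetical action: launch Windows Terminal instead of Copilot.
          def open_terminal():
              subprocess.Popen(["wt.exe"])

          # The Copilot key reportedly sends Win+Shift+F23; whether this library
          # accepts "f23" as a key name on your machine is worth testing first.
          keyboard.add_hotkey("windows+shift+f23", open_terminal, suppress=True)
          keyboard.wait()  # keep the script alive so the hook stays registered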

  • mjbale116 18 hours ago

    On the same note, what's going on with Dell's marketing lately?

    Dell, Dell Pro, Dell Premium, Dell _Pro_ Premium, Dell Max, Dell _Pro_ Max... And they went and added capacitive keys on the XPS? Why would you do this...

    A lot of decisions that do not make sense to me.

    • rationalist 16 hours ago

      It's a lot easier for people to spend more money when they are confused about their choices.

  • helsinkiandrew a day ago

    They’ve just realised that AI won’t be in the PC but on a server, which is where Dell is selling heavily: the “AI datacenter” accounted for about 40% of their infrastructure revenue.

  • beloch a day ago

    "We're very focused on delivering upon the AI capabilities of a device—in fact everything that we're announcing has an NPU in it—but what we've learned over the course of this year, especially from a consumer perspective, is they're not buying based on AI," Terwilliger says bluntly. "In fact I think AI probably confuses them more than it helps them understand a specific outcome."

    --------------

    What we're seeing here is that "AI" lacks appeal as a marketing buzzword. This probably shouldn't be surprising. It's a term that's been in the public consciousness for a very long time thanks to fiction, but more frequently with negative connotations. To most, AI is Skynet, not the thing that helps you write a cover letter.

    If a buzzword carries no weight, then drop it. People don't care if a computer has an NPU for AI any more than they care if a microwave has a low-loss waveguide. They just care that it will do the things they want it to do. For typical users, AI is just another algorithm under the hood and out of mind.

    What Dell is doing is focusing on what their computers can do for people rather than the latest "under the hood" thing that lets them do it. This is probably going to work out well for them.

    • JohnFen a day ago

      > People don't care if a computer has a NPU

      I actually do care, on a narrow point. I have no use for an NPU and if I see that a machine includes one, I immediately think that machine is overpriced for my needs.

      • Tsiklon a day ago

        Alas, NPUs are in essentially all modern CPUs from Intel and AMD. They’re not a separate bit of silicon; they’re on the same package as the CPU.

        • JohnFen a day ago

          True. But if a company is specifically calling out that their machine has an NPU, I assume they're also adding a surcharge for it above what they would charge if they didn't mention it. I'm not claiming that this is a rational stance, only that I take "NPU" as a signal for "overpriced".

          • Tsiklon 21 minutes ago

            Ahh, I hear you. That’s a fair observation.

  • GeekyBear a day ago

    There is one feature that I do care about.

    Local speech recognition is genuinely useful and much more private than server-based options.
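
    As a rough illustration, here's a minimal sketch assuming the open-source openai-whisper package; the model size and the "meeting.wav" path are just placeholders:

        # Runs entirely on the local machine once the model weights have been
        # downloaded; no audio is sent to a server.
        import whisper

        model = whisper.load_model("base")        # smallish model, CPU or GPU
        result = model.transcribe("meeting.wav")  # placeholder audio file
        print(result["text"])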

  • kingstnap a day ago

    NPUs are just kind of weird and difficult to develop for and integration is usually done poorly.

    Some useful applications do exist, particularly grammar checkers, and I think Windows Recall could be useful. But none of these are currently designed well enough for the NPU to make sense.

    • criddell 41 minutes ago

      A while ago I tried to figure out which APIs use the NPU and it was confusing to say the least.

      They have something called the Windows Copilot Runtime but that seems to be a blanket label and from their announcement I couldn't really figure out how the NPU ties into it. It seems like the NPU is used if it's there but isn't necessary for most things.
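
      The closest thing I found to a concrete check is asking ONNX Runtime which execution providers it can see; a minimal sketch, assuming the onnxruntime Python package (and, for NPU support, the vendor-specific build) is installed:

          import onnxruntime as ort

          # List the execution providers this onnxruntime build exposes.
          # NPU-backed providers such as "QNNExecutionProvider" (Qualcomm) or
          # "OpenVINOExecutionProvider" (Intel) only show up if the matching
          # package and drivers are present.
          providers = ort.get_available_providers()
          print(providers)  # a plain install typically reports ['CPUExecutionProvider']

          if any("QNN" in p or "OpenVINO" in p for p in providers):
              print("An NPU-capable provider is visible; models can target it.")
          else:
              print("No NPU provider visible; inference falls back to CPU/GPU.")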

  • tedmcory77 19 hours ago

    People don't want feature X (AI). They want problems solved.

  • metalman 11 hours ago

    I already have experience with intermittent wipers: they are impossible to use reliably. A newer car I have made the intermittent wipers fully automatic, and impossible to disable. Now they've figured out how to make intermittent wipers talk, and want to put them in everything. I foresee a future where humanity has total power and fine control over reality, where finally, after hundreds of years, there is weather control good enough to make it rain exactly the right amount for intermittent wipers to work properly. But we are not there yet.

  • almosthere 15 hours ago

    The typical consumer doesn't care about any checkbox feature. They just care whether they can play the games they care about and do word processing/email/Netflix.

    That being said, Netflix would be an impossible app without graphics acceleration APIs that are enabled by specific CPU and/or GPU instruction sets. The typical consumer doesn't care about those instruction sets; at least, they don't care to know about them. However, they would care if they didn't exist and Netflix took 1 second per frame to render.

    Similar to AI - they don't care about AI until some killer app that they DO care about needs local AI.

    There is no such killer app yet, but they're coming. However, as we turn the corner into 2026, it's becoming extremely clear that local AI is never going to be enough for the coming wave of AI requirements. AI is going to require 10-15 simultaneous LLM calls or GenAI requests, and those are things that will never run well locally.

    • rasz 10 hours ago

      Even an i3 CPU is perfectly fine software-decoding 2160p H.264; the only consequence is about 2x higher power draw compared to an NVIDIA hardware decoder.

  • 4d4m 11 hours ago

    Happy that Dell takes user feedback to heart.

  • alexb_ a day ago

    Unfortunately, their common sense has been rewarded by the stock tanking 15% in the past month, including 4% just today alone. Dell shows why companies don't dare talk poorly of AI, or say anything even mildly negative about it. It doesn't matter that it's correct; investors hate this, and that's what a ton of companies are mainly focused on.

    • vitaflo a day ago

      Should have stayed private. Then they wouldn’t have to care what investors think.

    • bilbo0s a day ago

      To be fair, Dell has bigger, more fundamental threats out on the horizon right now than consumers not wanting AI.

      Making consumers want things is fixable in any number of ways.

      Tariffs?..

      Supply chain issues in a fracturing global order?..

      .. not so much. Only a couple ways to fix those things, and they all involve nontrivial investments.

      Even longer term threats are starting to look more plausible these days.

      Lot of unpredictability out there at the moment.

  • scblock a day ago

    This should have been obvious to anyone paying any attention whatsoever, long before any one of these computers launched as a product. But we can't make decisions on product or marketing based on reality or market fit. No, we have to make decisions on the investor buzzword faith market.

    Hence the large percentage of YouTube ads I saw along the lines of "with a Dell AI PC, powered by Intel..." followed by a bunch of lies.

  • whalesalad a day ago

    I'm kind of excited about the revival of the XPS. The new hardware sounds pretty compelling. I've been longing for a MacBook-quality device that I can run Linux on... so I'm eagerly awaiting this.

    • rationalist 16 hours ago

      Sweet, TIL!

      I love my 2020 XPS.

      The keyboard keys on mine do not rattle, but I have seen newer XPS keyboard keys that do rattle. I hope they fixed that.

    • thesh4d0w a day ago

      I owned a couple of XPS 13 laptops in a row and liked them a lot, until I got one with a touch bar. I returned it after a couple of weeks and swapped over to the X1 Carbon.

      The return back to physical buttons makes the XPS look pretty appealing again.

      • xcjs a day ago

        This is exactly what I was hoping to see. I also returned one I ordered with the feedback that I needed physical function keys and the touchbar just wasn't cutting it for me.

  • xnx a day ago

    > It's not that Dell doesn't care about AI or AI PCs anymore, it's just that over the past year or so it's come to realise that the consumer doesn't.

    This seems like a cop out for saving cost by putting Intel GPUs in laptops instead of Nvidia.

    • recursive 40 minutes ago

      How is saving costs a cop out? That's a genuine goal of most businesses.