>This to me is the big risk here. A worm hidden in a game mod or something.
Game mods are already barely sandboxed to begin with. Unless proven otherwise (i.e. by manually inspecting the mod package), you should treat game mods the same as random .exes you got off the internet, not as harmless apps you install on a whim.
The attack surface from a browser is tiny. All you can do is call into ANGLE or Dawn through documented, well-defined and well-tested APIs. Or use things like canvas and CSS animations, I suppose. Browser vendors care a lot about security, so they try to make sure these libraries are solid and you can't interact with the GPU in any other way.
Native applications talk directly to the GPU's kernel-mode driver. The intended flow is that you call into the vendor's user-mode drivers - which are dynamic libraries in your application's address space - and they generate commands and execute ioctls on your application's behalf. The interface between these libraries and the KMD is usually undocumented, ill-defined and poorly tested. GPU vendors don't tend to care about security, so if the KMD doesn't properly validate some inputs, well, that issue can persist a long time. And if there's any bit of control stream that lets you tell the GPU to copy stuff between memory you own and memory you don't... I guess you get a very long security bulletin.
The point is, webpages have access to a much smaller attack surface than native applications. It's unlikely anything in this bulletin is exploitable through a browser.
This is why Qubes OS, which runs everything in isolated VMs, doesn't give those VMs access to the GPU. It's my daily driver; I can't recommend it enough if you care about security.
Numerous vulnerabilities are regularly found in all browsers, as well as in root isolation on Linux and other OSes. The article under discussion is one example.
In addition, Qubes is not so restrictive, if you don't play games or run LLMs.
I asked about your threat model; I'm aware that numerous vulnerabilities are regularly found in all browsers. I just personally don't have a reason to care about that. It's like driving on the highway: every time you do it you create a period of vastly increased mortality in your life, but that's often still very worthwhile. imo using Qubes is like taking back roads only because your odds of dying at highway speeds are so much higher.
If you consider specific listed threats as not a real threat model, then what else would you like to know? The threats are real and I value my data and privacy a lot. Also, I want to support a great OS by using it and spreading the word. Personally, using Qubes for me is not as hard and limiting as people think. It's the opposite: It improves my data workflow by separating different things I do on my computer.
Data being stolen (or getting ransomwared or whatever) from my personal machine is something I expect to happen maybe once or twice a lifetime as a baseline if I have like a bare veneer of security (a decent firewall on the edge, not clicking phishing links). I silo financial information (and banks also have security) so such a breach is extremely unlikely to be catastrophic. In general I don't find this to be worth caring about basically at all. The expectation is that it will cost me a couple weeks of my life as like an absolute worst case.
That is roughly equivalent to dealing with a security-related roadblock to my workflow for 1 minute every day (or 10 security-related popups that I have to click that cost me 6 seconds each, or one 30-minute inconvenience a month). I think that even having the UAC popups enabled on Windows is too steep a price to pay.
I think security like this matters in places where the amount of financial gain for a breach is much much higher (concentrated stores of PII at a company with thousands of users for example) because your threat model has to consider you being specifically targeted for exploitation. As an individual worried about internet background hacking radiation it doesn't make sense for me to waste my time.
> I silo financial information (and banks also have security) so such a breach is extremely unlikely to be catastrophic
So you are doing manually what Qubes OS does automatically: security through compartmentalization.
> The expectation is that it will cost me a couple weeks of my life as like an absolute worst case.
This sounds quite reasonable but ignores privacy issues and issues with computer ownership with Windows; I guess you also don't care about that.
I do agree that using Qubes wastes more of my time than your estimates; however it also, e.g., encourages 100% safe tinkering for those who like it, prevents potential upgrade downtime, enables easy backup and restore process and more.
> I think security like this matters in places where the amount of financial gain for a breach is much much higher (concentrated stores of PII at a company with thousands of users for example)
A cursory search suggests such plugins aren't sandboxed and run with the same privileges as the main program itself, so I'd definitely be suspicious of any plugin.
I can't find anything on that page or https://nvidia.custhelp.com/app/answers/detail/a_id/5586#sec... that describes whether Windows Update will update the drivers automatically. I just did a Windows Update and it's still got an NVIDIA display driver dated 3/9/2023.
No, Windows Update's Nvidia driver set is usually years out of date and rarely gets updated. It exists as an emergency fallback and doesn't push out regular updates.
Which raises the question: is the NVIDIA driver that Windows Update ships even affected by this recent flaw?
Because I don't use my PC to play games and thus don't need anything more than run-of-the-mill graphics acceleration, I'm loath to download NVIDIA's enormous drivers, which I assume contain extraneous features and utilities that are useless to me.
There's a third-party utility program you can use; it notifies you of new versions and lets you skip installing a lot of the bloatware like GeForce Experience. I think it's called NV Install.
NVCleanInstall, maybe? I couldn't find anything called "NV Install".
Personally I'm still running with the drivers that came with the box when I bought it in 2020. GeForce Experience is an abomination; besides the mind-boggling bloat, demanding that I create an account just to download a driver update really made me determined never to buy NVidia ever again.
Yes, apologies, I was on my phone earlier and didn't find it with a quick search. But I just checked my laptop and that's the one I'm using. It allows stripping out some telemetry and a few other things besides GeForce Experience.
I can't fathom why people want to abstract something as simple as downloading the drivers straight from Nvidia and installing it, but then again people (perhaps rightfully) don't understand WTF a computer is.
"I can't fathom why people want to abstract something as simple as downloading the drivers straight from Nvidia and installing it, but then again people"
I think I do understand WTF a computer is, yet at some point I also had a tool installed on Windows that automatically downloaded ALL of the drivers for all devices.
Convenient, but the main reason I installed the thing was that it could install drivers I did not even find on official websites.
But just out of curiosity, if you understand what a computer is, why do you prefer manual labour and look down on people who automate things?
Because driver updates I didn't strictly need have historically ruined my day more often than not.
No, I'm not grabbing this driver update either. My Nvidia drivers are years old but they work fine, and I have better things to do than troubleshoot borkage stemming from drivers I didn't need to fix.
Remember: If it ain't broke, don't fix it.
>and look down on people who automate things?
The specific audience here should know better than to delegate updates (let alone updates for system components) to some nebulous automated and/or all-in-one construct provided by a third party rather than the hardware/driver vendor.
I download drivers straight from nvidia.com and it takes many steps: go to drivers page, choose product series, choose product (choices don’t seem to matter but who knows if that’s gonna change at some point), click start search, click on the search result to go to driver page, click download, run installer, click click click click click. It’s a hassle compared to updating just about anything else on my machines.
I only do it because (1) GeForce Experience requires logging in with Nvidia account and seems to log me out every time; (2) when GeForce Experience updates the driver it seems to pause forever doing god knows what between finishing the download and starting to install.
In the past, GeForce Experience had game streaming functionality. Similar to VNC, but using hardware-accelerated video codecs, and supports joysticks and sound.
GeForce Experience removed the game streaming feature in 2023, but the protocol was reverse-engineered, and there are compatible third-party tools for game streaming.
Sunshine is the server, and Moonlight is the client.
Wow, if you go to the Nvidia drivers website, they are still pushing the vulnerable 565.90 version as the "stable version" for creatives. It's only the gamer version that has been updated to 566.03 with the fixes. Incredible.
It is widely believed that the two versions are completely identical, just on a different release cadence. The "Studio" version is also supposed to pass a wider set of tests, but it's still the same driver. The "Game" driver is really only important for newly released games needing fixes/patches/optimizations that just weren't available yet for the previous "Studio" driver release.
At least, that seems to be the consensus among people who've tried to figure this out. There's no official word.
The fine print on the security notice page says that there is a 565.92 version that some OEMs are pushing out themselves.
I get that there is a different release cadence, but it's simply not acceptable to do business as normal around security releases. The driver page should either have that 565.92 available or disable the download link for the stable channel with a note on when they expect it to be available again.
At the very least, some product manager should be fired. This is a legal liability, and no amount of click-wrap disclaimers will protect them if someone gets owned because of this negligence.
This just reminded me that GeForce Experience (which is recommended for the recording functions and for keeping drivers updated) requires an account, and the software is unable to keep your account logged in. It's outrageous to pay so much for a graphics card and then have this experience.
Most of them are our old out-of-bounds memory-corruption friends, which everyone keeps telling us are not an issue in their C and C++ code; that only happens to other people.
Unlikely. Security isn't really a competitive feature for a consumer product (security theater, on the other hand…). Even if an average user had ever cared about security, they would never be able to evaluate the claims until it's too late to matter.
And as JBiserkov points out in a sibling comment, GPUs really prioritize raw performance above everything else; it's not like either OpenAI or a gamer would be happy about reducing their TFLOPS/FPS for the sake of security.
NVIDIA's website lists 566.03 as the newest "Game Ready" driver, and 565.90 as the newest "Studio" driver. Does the Studio driver contain the security fix as well?
No. The hotfix is in v566 or later. The "Game" and "Studio" drivers are widely believed to be from one codebase, just released on different schedules and with a larger test suite for the "Studio" driver. So you have your choice of hotfix-without-testing or tested-without-hotfix!
https://nvidia.custhelp.com/app/answers/detail/a_id/5586
>NVIDIA GPU Display Driver for Windows and Linux contains a vulnerability which could allow a privileged attacker to escalate permissions. A successful exploit of this vulnerability might lead to code execution, denial of service, escalation of privileges, information disclosure, and data tampering.
What does “privileged attacker” mean on Linux? In my mind, “privileged” would mean they already have root, but in that case there’s nothing to escalate, right?
Wondering the same. The same column lists a bunch of Windows-only CVEs where an unprivileged user can do stuff, so there has to be some difference between those (CVE‑2024‑0117 - CVE‑2024‑0121) and the headliner CVE‑2024‑0126
They mention hypervisor breaches further below, so could CVE-2024-0126 imply that a local root user on a shared GPU machine of some sort can break out of the virtualization?
That one is probably only an issue for folks who care about the root/kernel distinction, but there are a bunch of buffer issues in the user-mode component (which runs inside your process when you use graphics APIs). Not enough details, but those could potentially be exploitable from e.g. WebGL/WebGPU
a member of the 'video' group
Presumably it needs to be run by someone who already has access to the computer and gives them (or a program executed by them) root/escalated privileges. Although that might include running code from a webpage etc.
Including an advertiser on a webpage.
Do you have proof of concept? If it would be exploitable from the browser, it would be huge.
If I'm reading the bulletin right, all the issues can only be exploited from code already running on your machine. So if you have a single-user machine and aren't already owned, then this is a non-issue, and the verbiage in the title and the PC World article is not warranted.
The people that actually need to update are:
* Multi-user systems with some untrusted users.
* Users with malware on their system already (which could privilege escalate)
* virtualization hosts of untrusted guests.
If you use a web browser or play multiplayer video games, then there will be code running on your system, interacting with GPU drivers, that you haven't explicitly chosen to download and that could potentially exploit certain vulnerabilities.
This highlights why we shouldn't let browsers (Google) keep expanding their reach outside of the historical sandbox. It's almost like all the in-browser Java and Flash problems are being repeated. They're creating security problems more than helping legitimate developers. WebGL was fine. WebSockets were fine. WebGPU and the recently proposed arbitrary socket API are jumping the shark. Raw GPU access and TCP/UDP access are simply bad ideas from inexperienced people and need to be shut down. If you truly need that stuff, I think the solution is to step up your game and make native applications.
I'm not sure why WebGPU is a step too far but WebGL isn't? Every other API for using a GPU went the same direction; why should HTML be stuck with a JS projection of OpenGL ES while native developers get Vulkan? The security properties of both are very similar, Vulkan/Metal/DX12 just lets you skip varying levels of compatibility nonsense inherent in old graphics APIs.
> why should HTML be stuck with a JS projection of OpenGL ES while native developers get Vulkan?
Because web browsers are supposed to be locked down and able to run untrusted code, not an operating system that reinvents all the same failings of actual operating systems. They should be functionally impaired in favor of safety as much as possible. For the same reason you don't get access to high-precision timing in the browser (a lesson that took a while to learn!), you shouldn't have arbitrary capabilities piled onto it.
Those are all historical remnants. Modern web browsers serve a radically different purpose than they did in the 90s. It doesn't make sense to even keep calling them "web browsers" since most people don't know what a "web" is, let alone "browse" it.
Modern browsers are application runtimes with a very flexible delivery mechanism. It's really up to web developers to decide what features this system should have to enable rich experiences for their users. Declaring that they should be functionally impaired or what they "should be" without taking into account the user experience we want to deliver is the wrong way of approaching this.
To be clear: I do think we should take security very seriously, especially in the one program people use the most. I also think reinventing operating systems to run within other operating systems is silly. But the web browser has become the primary application runtime and is how most people experience computing, so enabling it to deliver rich user experiences is inevitable. Doing this without compromising security or privacy is a very difficult problem, which should be addressed. It's not like the web is not a security and privacy nightmare without this already. So the solution is not to restrict functionality in order to safeguard security, but to find a way to implement these features securely and safely.
> Modern browsers are application runtimes with a very flexible delivery mechanism.
Clearly this is true. But as someone with an old-school preference for native applications over webapps (mostly for performance/ux/privacy reasons) it irritates me that I need to use an everything app just to browse HN or Wikipedia. I don't want to go all hairshirt and start using Lynx, I just want something with decent ux and a smaller vulnerability surface.
> it irritates me that I need to use an everything app just to browse HN or Wikipedia
But why?
That feels like saying it irritates someone they need to run Windows in order to run Notepad, when they don't need the capabilities of Photoshop at the moment.
An everything app is for everything. Including the simple things.
The last thing I'd want is to have to use one browser for simpler sites and another for more complex sites and webapps and constantly have to remember which one was for which.
Some of us don't use the web for anything other than websites. I'm honestly not even sure what people are talking about with some proliferation of "apps". There's discord/slack, and...? And chat was on the road to being an open protocol until Google/Facebook saw the potential for lockin and both dropped XMPP.
I already have an operating system. It's like saying I don't need notepad to be able to execute arbitrary programs with 3D capabilities and listen sockets because it's a text editor.
You also wouldn't need to remember what your generic sandbox app runtime is. Use your browser, and if you click on a link to an app, you'll be prompted to open the link using your default handler for that mime type.
> I'm honestly not even sure what people are talking about with some proliferation of "apps". There's discord/slack, and...?
Are you not familiar with Gmail or Google Maps or YouTube?
> I already have an operating system.
But Gmail and Google Maps and YouTube don't run on the OS. And this is a feature -- I can log into my Gmail on any browser without having to install anything. Life is so much easier when you don't have to install software, but just open a link.
> Use your browser, and if you click on a link to an app, you'll be prompted to open the link using your default handler for that mime type.
But I like having news links in Gmail open in a new tab in the same window. The last thing I want is to be juggling windows between different applications when tabs in the same app are such a superior UX.
Imagine how annoying it would be if my "app" browser had tabs for Gmail and Maps and YouTube and my "docs" browser had tabs for the NYT and WaPo and CNN, and I couldn't mix them?
Or if the NYT only worked in my "docs" browser, but opening a link to its crossword puzzle opened in my "apps" browser instead?
That's a terrible user experience for zero benefit at all.
(And I still would have to remember which is which, even if there's a MIME type, for when I want to go back to a tab I already opened!)
Calling gmail or youtube apps is already kind of a stretch. Gmail splits everything into separate web pages with the associated loading times and need to navigate back and forth. Exacerbating this is that it paginates things, which is something you only ever see in web pages. It lacks basic features you'd expect out of an application like ability to resize UI panes. Youtube has a custom, worse version of a <video> tag to prevent you from saving the videos (even CC licensed ones, which is probably a license violation), but is otherwise a bunch of minimally interactive web pages.
Maps is legitimately an interactive application, though I'd be surprised if most people don't use a dedicated app for it.
The point is you wouldn't have an "apps browser" with tabs. If something is nontrivial, launch it as an actual application, and let the browser be about browsing websites with minimal scripting like the crossword puzzle. Honestly there probably should be friction with launching apps because it's a horrible idea to randomly run code from every page you browse to, and expanding the scope of what that code is allowed to do is just piling on more bad ideas.
> it irritates me that I need to use an everything app just to browse HN or Wikipedia.
...this is possibly missing the point, but it occurs to me that you don't have to. Hacker News and Wikipedia are two websites I'd expect to work perfectly well in e.g. Links.
It's a bigger problem if you want to read the New York Times. I don't know whether the raw html is compatible, but if nothing else you have to log in to get past their paywall.
> Modern web browsers serve a radically different purpose than they did in the 90s
And it is a bad thing that it was pushed this far! That is exactly the argument here!
I don't necessarily disagree. But there's no going back now. There's a demand for rich user experiences that are not as easy to implement or deliver via legacy operating systems. So there's no point in arguing to keep functionality out of web browsers, since there is no practical alternative for it.
If rich ux can be delivered in a web browser then it can be delivered in a native app. I'd assert that the reason this is uncommon now (with the exception of games) is economic not technological.
It is partly economic, but I would say that it's more of a matter of convenience. Developing a web application is more approachable than a native app, and the pool of web developers is larger. Users also don't want the burden of installing and upgrading apps, they just want them available. Even traditional app stores that mobile devices popularized are antiquated now. Requesting a specific app by its unique identifier, which is what web URLs are, is much more user friendly than navigating an app store, let alone downloading an app on a traditional operating system and dealing with a hundred different "package managers", and all the associated issues that come with that.
Some app stores and package managers automate a lot of this complexity to simplify the UX, and all of them use the web in the background anyway, but the experience is far from just loading a web URL in a browser.
And native apps on most platforms are also a security nightmare, which is why there is a lot of momentum to replicate the mobile and web sandboxing model on traditional OSs, which is something that web browsers have had for a long time.
The answer is somewhere in the middle. We need better and more secure operating systems that replicate some of the web model, and we need more capable and featureful "web browsers" that deliver the same experience as native apps. There have been numerous attempts at both approaches over the past decade+ with varying degrees of success, but there is still a lot of work to be done.
Every package manager I know of lets you install a package directly without any kind of Internet connection (I haven't tried much, but I've run into CORS errors with file URIs that suggest browser authors don't want those to work). They also--critically--allow you to not update your software.
The web today is mostly a media consumption platform. Applications for people who want to use their computer as a tool rather than a toy don't fit the model of "connect to some URL and hope your tools are still there".
> And native apps on most platforms are also a security nightmare
You make it sound like a web browser is not a native app.
The difference is in the learning curve. On Windows, making a native app usually requires you to install a bunch of things - a compiler, a specific code editor, etc - in order to even be able to start learning.
Meanwhile, while that's also true for web apps, you can get started with learning HTML and basic JavaScript in Notepad, with no extra software needed. (Of course, you might then progress to actually using compilers like TypeScript, frameworks like React, and so on, but you don't need them to start learning.)
There's always been a much higher perceived barrier to be able to make native apps in Windows, whereas it's easier to get started with web development.
Not to mention it was (is) a constantly moving target. WinUI, WPF, Silverlight, UWP, RT, Forms, MFC, maybe more!
Browsers should be aggressively pro-user, and developers can innovate within the limitations they're given.
That settles it then. Let's remove all the innovations of the past 30 years that have allowed the web to deliver rich user experiences, and leave developers to innovate with static HTML alone. Who needs JavaScript and CSS anyway?
Seriously, don't you see the incongruity of your statement?
Exactly.
Putting everything, and I mean everything, into the browser, and arguing for it, is stupid. It stops being a browser then and becomes a native system, with the problems of native systems accessing the open wild all over again. And then? Will there be a sandbox inside the browser/new-OS for the sake of security? A sandbox inside what's no longer really a sandbox?
Modern operating systems are bad and they are not going to be fixed. So the browser is another attempt at creating a better operating system.
Why modern operating systems are bad:
1. Desktop OSes allow installation of unrestricted applications, and most applications actually are unrestricted. While there are attempts at creating containerised applications, those attempts are weak and not popular. When I install World of Warcraft, its installer silently adds a trusted root certificate to my computer.
2. Mobile OSes are walled gardens. You can't just run anything; you need to jump through many hoops at best, or live in certain countries at worst.
3. There's no common ground across operating systems. Every operating system is different and has completely different APIs. While there are frameworks which try to abstract those things away, those frameworks add their own pile of issues.
The browser just fixes all of this. It provides a secure sandbox which is trusted by billions of users. It does not restrict the user in any way: there's no "Website Store" or anything like that, you can open anything, and you can bring your app online within a few minutes. It provides a uniform API which is enough to create many kinds of applications, and it'll run everywhere: iPhone, Pixel, MacBook, Surface, ThinkPad.
Unrestricted app installation is not bad. It's a trade-off. It's freedom to use your own hardware how you want versus 'safety' and restriction imposed by some central authority which aims to profit. Fuck app stores, generally speaking. I prefer to determine what source to trust myself and not be charged (directly or indirectly) to put software on my own system.
An overwhelming majority of apps do not need full device access. All they need is to draw to a window and talk to the network.
Yes, there are apps which might need full filesystem access, for example to measure directory sizes or to search the filesystem. There are apps that check neighbouring WiFi networks for security, which need full access to the WiFi adapter, and that's fine. But those apps could use another way of installation, like entering a password 3 times and dancing for 1 minute, to ensure that the user understands the full implications of granting such access.
My point is that on a typical desktop operating system today, a typical application has too much access, and many applications actually use that access for bad things, like spying on the user, installing their own startup launchers, updaters and whatnot. The web does this better. You can't make your webapp open when the browser starts unless you ask the user to perform a complicated sequence of actions. Your webapp can't access my ssh key unless you ask me to drag it into a webpage.
That's exactly what ChromeOS is/was. Users hated it.
This guy gets it 100%.
I agree. I'm not knowledgeable enough to say for sure, but my intuition is that the total complexity of WebGPU (browser implementation + driver) is usually less than the total complexity of WebGL.
WebGL is like letting your browser use your GPU with a condom on, and WebGPU is doing the same without one. The indirection is useful for safety assuming people maintain the standard and think about it. Opening up capability in the browser needs to be a careful process. It has not been recently.
It's my understanding that the browsers use a translation layer (such as ANGLE) between both WebGL and WebGPU and a preferred lower level native API (Vulkan or Metal). In this regard I don't believe WebGL has any more or less protection than WebGPU. It's not right to confuse abstraction with a layer of security.
The translation layer is the safety layer. In principle, it's like running Java bytecode instead of machine code.
My analogy was bad, and I'd probably be wrong, as you (and your sibling post) say, to expect WebGPU to have any lurking dangers compared to WebGL. I was mainly trying to express concern with new APIs and capabilities being regularly added, and the danger inherent in growing these surfaces.
It's clear that you know nothing about how WebGL or WebGPU are implemented. WebGPU is not more "raw" than WebGL. You should stop speaking confidently on these topics and misleading people who don't realize that you are not an expert.
I'd dispute that I know nothing. I'm not an expert but have worked with both, mostly WebGL. Anyways, sorry, it was a bad analogy and you're right, I don't know enough, particularly to say that WebGPU has any unique flaws or exposes any problems not in WebGL. I'm merely suspicious that it could, and maybe that is just from ignorance in this case.
That's incorrect: WebGPU has the exact same security guarantees as WebGL; if anything, the specification is even stricter, to completely eliminate UB (which native 3D APIs are surprisingly full of). But no data or shader code makes it to the GPU driver without thorough validation in both WebGL and WebGPU (WebGPU *may* suffer from implementation bugs just the same as WebGL, of course).
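As a rough illustration of that validation layer (a minimal sketch, assuming a WebGPU-enabled browser and a module/console context; the deliberately invalid bind group below is just an arbitrary example): invalid usage surfaces as a GPUValidationError inside the browser's own implementation, and the bad command never reaches the native API or the kernel driver.

```ts
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter!.requestDevice();

device.pushErrorScope("validation");

// Deliberately invalid: the layout expects a storage buffer, but the buffer
// was created without STORAGE usage.
const buffer = device.createBuffer({ size: 16, usage: GPUBufferUsage.COPY_SRC });
const layout = device.createBindGroupLayout({
  entries: [{ binding: 0, visibility: GPUShaderStage.COMPUTE, buffer: { type: "storage" } }],
});
device.createBindGroup({ layout, entries: [{ binding: 0, resource: { buffer } }] });

const error = await device.popErrorScope();
if (error) {
  // Caught entirely inside the browser's WebGPU implementation.
  console.log("validation error:", error.message);
}
```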
> Opening up capability in the browser needs to be a careful process. It has not been recently.
That's what about 95% of the WebGPU design process is about and why it takes so long (the design process started in 2017). Creating a cross-platform 3D API is trivial, doing this with web security requirements is not.
Both WebGL and WebGPU should be locked behind a permission, because they allow fingerprinting the user's hardware (they even provide the name of the user's graphics card), and because they expose your GPU drivers to the whole world.
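For reference, this is roughly the fingerprinting surface being described (a sketch; it assumes a browser that exposes the WEBGL_debug_renderer_info extension and WebGPU adapter info, and the strings in the comments are hypothetical examples):

```ts
const gl = document.createElement("canvas").getContext("webgl");
const dbg = gl?.getExtension("WEBGL_debug_renderer_info");
if (gl && dbg) {
  console.log(gl.getParameter(dbg.UNMASKED_VENDOR_WEBGL));   // e.g. "NVIDIA Corporation"
  console.log(gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL)); // e.g. "NVIDIA GeForce RTX 3080/PCIe/SSE2"
}

// WebGPU exposes a (coarser) adapter description as well; older
// implementations used adapter.requestAdapterInfo() instead of adapter.info.
const adapter = await navigator.gpu?.requestAdapter();
if (adapter) {
  const { vendor, architecture, device, description } = adapter.info;
  console.log(vendor, architecture, device, description);
}
```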
> why should HTML be stuck with a JS projection of OpenGL ES while native developers get Vulkan?
Same reason kids should be stuck with Nerf guns while grownups have firearms.
Agree wholeheartedly (and I used to work on Safari/WebKit).
Cross-platform app frameworks have never been a panacea, but I think there may be a middle ground to be found between the web and truly native apps. Something with a shallower learning curve, batteries-included updating and distribution, etc. that isn’t the web or Electron.
That said, I worry that it’s too late. Even if such a framework were to magically appear, the momentum of the complex beast that is the web platform will probably not slow.
> I think the solution is to step up your game and make native applications
Say goodbye to anyone supporting Linux at all in that case. These rare security issues are a small price to pay for having software that works everywhere.
Rephrasing that: malware that works everywhere is a small price for software that works everywhere.
It isn't.
And there is no basis for your assertion these security issues are rare.
> malware that works everywhere is a small price for software that works everywhere
Yes.
Although the malware we're talking about doesn't actually work everywhere, only on one brand of GPU. But I would take it working everywhere over my computer not being useful.
Isn't WebGPU supposed to be containerized? So that it only accesses its own processes, i.e. the computations it is running for rendering? I honestly don't know much, but I had heard it was sandboxed.
It's not uncommon that I go to Shadertoy and see strange visual artifacts in some shaders, including window contents of other applications running on my system, potentially including sensitive information.
It's difficult to make GPU access secure because GPU vendors never really prioritized security, so there are countless ways to do something wonky and accidentally leak memory the app isn't supposed to have access to. You can containerize the CPU and have strict guarantees that host memory will never map into the container, but AFAIK this isn't a thing on GPUs except in some enterprise cards.
> ... including window contents of other applications running on my system, potentially including sensitive information.
If this is actually the case (which I doubt very much - no offense) then please definitely write a ticket to your browser vendor, because that would be a massive security problem and would be more news-worthy than this NVIDIA CVE (leaking image data into WebGL textures was actually a bug I remember right around the time when WebGL was in development, but that was fixed quickly).
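If you want to sanity-check it yourself, here's a minimal sketch. My understanding is that the WebGL spec requires new textures and buffers to be zero-initialized, so this should print true; stale window contents showing up here would be exactly the kind of bug worth reporting to the browser vendor.

    // Sketch: allocate a WebGL2 texture without ever writing to it, then read it back.
    const canvas = document.createElement("canvas");
    const gl = canvas.getContext("webgl2");
    if (!gl) throw new Error("WebGL2 not available");

    const tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA8, 64, 64);

    const fb = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);

    const pixels = new Uint8Array(64 * 64 * 4);
    gl.readPixels(0, 0, 64, 64, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
    console.log(pixels.every(v => v === 0)); // should be true per spec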
Yeah, that sounds like a basic garbage-collection issue, and isn't that the very basics of sandboxing? Isn't the rule that you never hand memory to a sandbox unless it has already been overwritten with zeros or random data? This sounds analogous to the old C lack of bounds checking, where you could steal passwords and such just by reading out-of-bounds memory. Is this not low-hanging fruit?
TCP/UDP access is behind an explicit prompt and it's basically the same as executing a downloaded application, so I don't think it's anything bad. Basically, you either install software on your local system, which has no restrictions at all, or you use a web application, which is still pretty restricted and contained.
> TCP/UDP access is behind explicit prompt
... that is satisfied by a single click from malware or social engineering. Insane.
Would you be happy with two clicks? Three clicks? Like, what's the principal difference? As I said, you can download and run an arbitrary application with one click today, and maybe a second click to confirm to the operating system (not sure that's always necessary).
The insane thing is that an arbitrary application instantly gets full access to your computer, while a web application is still heavily constrained and has to ask for almost every permission.
I would accept zero clicks on a browser that I've installed without this dangerous feature and /with a promise no autoupdate will sneak it in/.
The reason your web page has to be imprisoned behind permissions is that it is a web page from just about anyone, using whatever access the browser has given it without telling the user.
> step up your game and make native applications.
Each to their own but I consider native applications a step down from web apps.
"Screw it, I'm giving up my web app and will now pay Apple/Google the protection money and margin they demand to shelter within their ad-ridden ecosystem lock-in." ... yeh that's definitely a step down.
You're talking like android and ios are the only platforms. The downsides of those platforms don't justify a web browser (which should be safe to use) granting excessive capability to untrusted code.
If you’re targeting a mass market audience they often are the only platforms. For many people their phone is their only computing device.
Neither Apple nor Google is in scope for this issue. This is about NVIDIA GPUs.
Are you saying that WebGPU should only be supported on Android and iOS, because Android and iOS have more secure GPUs? Desktop browsers shouldn't support WebGPU (but should continue supporting WebGL)?
Close your eyes before you see WebUSB. Reckless and irresponsible in the extreme.
WebMIDI is cool, though! I updated my Novation Launchpad firmware with that.
They just keep promising more access to my machine but don't worry it is all totally secure! We promise! Yeah sure, when has that ever worked out?
This to me is the big risk here. A worm hidden in a game mod or something.
I can see it staying in the wild for a long time too. How many of the people that are playing on these cards, or crypto mining, or doing LLM work, are really going to even find out about these vulnerabilities and update the drivers?
>This to me is the big risk here. A worm hidden in a game mod or something.
Game mods are already barely sandboxed to begin with. Unless proven otherwise (ie. by manually inspecting the mod package), you should treat game mods the same as random exes you got off the internet, not harmless apps you install on a whim.
The attack surface from a browser is tiny. All you can do is call into ANGLE or Dawn through documented, well-defined and well-tested APIs. Or use things like canvas and CSS animations, I suppose. Browser vendors care a lot about security, so they try to make sure these libraries are solid and you can't interact with the GPU in any other way.
Native applications talk directly to the GPU's kernel-mode driver. The intended flow is that you call into the vendor's user-mode drivers - which are dynamic libraries in your application's address space - and they generate commands and execute ioctls on your application's behalf. The interface between these libraries and the KMD is usually undocumented, ill-defined and poorly tested. GPU vendors don't tend to care about security, so if the KMD doesn't properly validate some inputs, well, that issue can persist a long time. And if there's any bit of control stream that lets you tell the GPU to copy stuff between memory you own and memory you don't... I guess you get a very long security bulletin.
The point is, webpages have access to a much smaller attack surface than native applications. It's unlikely anything in this bulletin is exploitable through a browser.
This is why Qubes OS, which runs everything in isolated VMs, doesn't allow them to use the GPU. My daily driver, can't recommend it enough if you care about security.
What is your threat model that you chose to daily something so restrictive?
Numerous vulnerabilities are found in all browsers regularly, as well as in the root isolation in Linux. Similar with other OSes. The discussed article is one example.
In addition, Qubes is not so restrictive, if you don't play games or run LLMs.
See also: https://forum.qubes-os.org/t/how-to-pitch-qubes-os/4499/15
I asked about your threat model; I'm aware that there are numerous vulnerabilities found in all browsers regularly. I just personally don't have a reason to care about that. It's like driving on the highway: every time you do it you create a period of vastly increased mortality in your life, but that's often still very worthwhile. IMO, using Qubes is like taking back roads only, because your odds of dying at highway speeds are so much higher.
If you consider specific listed threats as not a real threat model, then what else would you like to know? The threats are real and I value my data and privacy a lot. Also, I want to support a great OS by using it and spreading the word. Personally, using Qubes for me is not as hard and limiting as people think. It's the opposite: It improves my data workflow by separating different things I do on my computer.
Data being stolen (or getting ransomwared or whatever) from my personal machine is something I expect to happen maybe once or twice a lifetime as a baseline if I have like a bare veneer of security (a decent firewall on the edge, not clicking phishing links). I silo financial information (and banks also have security) so such a breach is extremely unlikely to be catastrophic. In general I don't find this to be worth caring about basically at all. The expectation is that it will cost me a couple weeks of my life as like an absolute worst case.
That is roughly equivalent to dealing with a security-related roadblock to my workflow for 1 minute every day (or 10 security-related popups that I have to click, costing 6 seconds each, or one 30-minute inconvenience a month). I think even having the UAC popups enabled on Windows is too steep a price to pay.
I think security like this matters in places where the amount of financial gain for a breach is much much higher (concentrated stores of PII at a company with thousands of users for example) because your threat model has to consider you being specifically targeted for exploitation. As an individual worried about internet background hacking radiation it doesn't make sense for me to waste my time.
Thank you for the interesting arguments.
> I silo financial information (and banks also have security) so such a breach is extremely unlikely to be catastrophic
So you are doing manually what Qubes OS does automatically: security through compartmentalization.
> The expectation is that it will cost me a couple weeks of my life as like an absolute worst case.
This sounds quite reasonable but ignores privacy issues and issues with computer ownership with Windows; I guess you also don't care about that.
I do agree that using Qubes wastes more of my time than your estimates; however it also, e.g., encourages 100% safe tinkering for those who like it, prevents potential upgrade downtime, enables easy backup and restore process and more.
> I think security like this matters in places where the amount of financial gain for a breach is much much higher (concentrated stores of PII at a company with thousands of users for example)
How about owning crypto?
GPU support for Qubes is coming.
Opt-in for chosen, trusted VMs is coming.
I wonder if it could be exploited via WebGL/WebGPU?
>if you have a single user machine and aren't already owned then this is a non-issue
if you have a single user machine and ARE already owned then this is REALLY a non-issue for you.
I wonder if single-user systems running something like a1111 should be concerned. Could the plugin system be an attack vector?
A cursory search suggests such plugins aren't sandboxed and run with the same privileges as the main program itself, so I'd definitely be suspicious of any plugin.
Does that mean it cannot be exploited through WebGL?
I wonder if this affects Geforce Now
The actual security bulletin is here - https://nvidia.custhelp.com/app/answers/detail/a_id/5586
As it points out, this is an issue with the driver rather than the physical GPU.
I can't find anything on that page or https://nvidia.custhelp.com/app/answers/detail/a_id/5586#sec... that describes whether Windows Update will update the drivers automatically. I just did a Windows Update and it's still got an NVIDIA display driver dated 3/9/2023.
No, Windows Update's Nvidia driver set is usually years out of date and rarely gets updated. It exists as an emergency fallback and doesn't push out regular updates.
It’s also likely nvidia keeps submitting the driver and it can’t pass the 'is this not crap' step of the process. Wouldn’t surprise me in the least.
And there's the nub.
Perhaps Nvidia should request access to a special 'this is less crap' process and help MS understand how dangerous the previous crap it approved is.
Which begs the question: is Windows' NVIDIA driver even affected by this recent flaw?
Because I don't use my PC to play games and thus don't need anything more than run-of-the-mill graphics acceleration, I'm loath to download NVIDIA's enormous drivers, which I assume contain extraneous features and utilities that are useless to me.
How is this latest one not an emergency, I wonder.
On Windows you have to either download the driver updates manually, or install that GeForce Experience thing that keeps them up to date for you.
There's a third party utility program you can use, it notifies you of new versions and lets you skip installing a lot of the bloatware like GeForce. I think it's called NV Install.
NVCleanInstall, maybe? I couldn't find anything called "NV Install".
Personally I'm still running with the drivers that came with the box when I bought it in 2020. GeForce Experience is an abomination; besides the mind-boggling bloat, demanding that I create an account just to download a driver update really made me determined never to buy NVidia ever again.
Yes, apologies, I was on my phone earlier and didn't find it with a quick search. But I just checked my laptop and that's the one I'm using. It allows stripping out some telemetry and a few other things besides GeForce Experience.
GeForce Experience is optional.
I can't fathom why people want to abstract something as simple as downloading the drivers straight from Nvidia and installing it, but then again people (perhaps rightfully) don't understand WTF a computer is.
"I can't fathom why people want to abstract something as simple as downloading the drivers straight from Nvidia and installing it, but then again people"
I think I do understand WTF a computer is, yet at some point I also had a tool installed on Windows that automatically downloaded ALL of the drivers for all devices.
Convenient, but the main reason I installed the thing was that it could install drivers I did not even find on official websites.
But just out of curiosity, if you understand what a computer is, why do you prefer manual labour and look down on people who automate things?
>why do you prefer manual labour
Because driver updates I didn't strictly need have historically ruined my day more often than not.
No, I'm not grabbing this driver update either. My Nvidia drivers are years old but they work fine, and I have better things to do than troubleshoot borkage stemming from drivers I didn't need to fix.
Remember: If it ain't broke, don't fix it.
>and look down on people who automate things?
The specific audience here should know better than to delegate updates (let alone updates for system components) to some nebulous automated and/or all-in-one construct provided by third-parties to the hardware/driver vendor.
I download drivers straight from nvidia.com and it takes many steps: go to drivers page, choose product series, choose product (choices don’t seem to matter but who knows if that’s gonna change at some point), click start search, click on the search result to go to driver page, click download, run installer, click click click click click. It’s a hassle compared to updating just about anything else on my machines.
I only do it because (1) GeForce Experience requires logging in with Nvidia account and seems to log me out every time; (2) when GeForce Experience updates the driver it seems to pause forever doing god knows what between finishing the download and starting to install.
>(choices don’t seem to matter but who knows if that’s gonna change at some point)
They do matter; you're just lucky(?) to always be using hardware and software environments covered by the first thing on the list.
If you don't want to specify details and run an installer, that's squarely a You problem.
That you “can’t fathom” why people want to automate a multi-minute chore is a “You problem”, really.
I read that as “no matter what I choose I get the same file but I still make the correct choice because Nvidia knows better than I do.”
In the past, GeForce Experience had game streaming functionality. Similar to VNC, but using hardware-accelerated video codecs, with support for joysticks and sound.
GeForce Experience removed the game streaming feature in 2023, but the protocol was reverse-engineered, and there are compatible third-party tools for game streaming.
Sunshine is the server, and Moonlight is the client.
It works a lot better than Miracast.
It also worked on devices like the Shield.
Umm no it didn't. The game streaming is still there and I use it literally every day to access my windows PCs from my macs.
There’s some weird desire for product managers working on the Windows platform to build “Dashboards” in front of the simplest things.
It is advertising.
Perhaps people understand that having to accept so-called fixes from the source of such a serious problem is highly stressful.
A driver, to be even more correct.
Wow, if you go to the Nvidia drivers website, they are still pushing the vulnerable 565.90 version as the "stable version" for creatives. It's only the gamer version that has been updated to 566.03 with the fixes. Incredible.
It is widely believed that the two versions are completely identical, just on a different release cadence. The "Studio" version is also supposed to pass a wider set of tests, but it's still the same driver. The "Game" driver is really only important for newly released games needing fixes/patches/optimizations that just weren't available yet for the previous "Studio" driver release.
At least, that seems to be the consensus among people who've tried to figure this out. There's no official word.
The fine print on the security notice page says that there is a 565.92 version that some OEMs are pushing out themselves.
I get that there is a different release cadence, but it’s simply not acceptable to do business as usual around security releases. The driver page should either have that 565.92 available or disable the download link for the stable channel, with a note on when they expect it to be available again.
At the very least, some product manager should be fired. This is a legal liability, and no amount of click-wrap disclaimers will protect them if someone gets owned because of this negligence.
The studio driver also opens up some functionality that the game driver does not.
Like what? I did a fairly deep search on this recently and couldn't find any mention of anything like that.
Let's see if they recommend the fixed version by Monday or Tuesday at the latest
Relax buddy
No update is currently available for Debian¹. This vulnerability is marked as "low priority"² for Bookworm.
1. https://security-tracker.debian.org/tracker/source-package/n... 2. https://tracker.debian.org/pkg/nvidia-graphics-drivers
Worth noting that the non-free repos are not a part of Debian and don't get security support. The free Nvidia driver is not affected.
As is always the case all throughout every year: https://www.nvidia.com/en-us/security/
(enter "GPU Display" in the search filter box)
Times like this really bolster my distrust and hatred for those horrible dkms installers. Thank you, nvidia, for living up to my worst expectations.
This just reminded me that GeForce Experience (which is recommended for the recording functions and for keeping the drivers updated) requires an account. And the software is unable to keep your account logged in. It's outrageous to pay so much for a graphics card only to have this experience later.
The NVIDIA App (beta), which, as I understand it, combines GeForce Experience and the Control Panel, doesn't require the login (for now?): https://www.nvidia.com/en-us/software/nvidia-app/
https://arstechnica.com/gaming/2024/02/nvidias-new-app-doesn...
Most of them are our out-of-bounds memory corruption friends, which everyone keeps telling us are not an issue in their C and C++ code and only happen to others.
GPU monoculture showed its downside.
"Take that, normies!"
- Nouveau users tomorrow, after their framebuffer finishes loading
s/after/if ;)
Because gpu drivers were so much better when there were six vendors?
No, because we'd get some competition. On features like security.
Unlikely. Security isn't really a competitive feature for a consumer product (security theater, on the other hand…). Even if an average user had ever cared about security, they would never be able to evaluate the claims until it's too late to matter. And as JBiserkov points out in a sibling comment, GPUs really prioritize power above everything else; it's not like either OpenAI or a gamer would be happy about reducing their TFLOPs/FPS for the sake of security.
Maybe, but GPUs are also used in virtualized environments (datacenters), where you can't ignore these security issues.
The 'S' in FPS stands for "security".
And where is the competition?
Famously Poor Security? :)
Other than the $2000 graphics cards?
Looks like their manual driver search model dropdown only goes back to the Turing generation (20 series).
Edit: found the Windows driver[1] directly via an online search which covers older models, too.
[1] https://www.nvidia.com/download/driverResults.aspx/235774/
I already have kernel level root kits installed for multiplayer games. This privilege escalation exploit is child’s play.
Even if I still liked playing multiplayer games, I would never (willingly) install a rootkit.
NVIDIA's website lists 566.03 as the newest "Game Ready" driver, and 565.90 as the newest "Studio" driver. Does the Studio driver contain the security fix as well?
No. The hotfix is in v566 or later. The "Game" and "Studio" drivers are widely believed to be from one codebase, just released on different schedules and with a larger test suite for the "Studio" driver. So you have your choice of hotfix-without-testing or tested-without-hotfix!
Why does their versioning scheme look like a stock price and not normie numbers?
How's that webgpu or webassembly working out?
To save a click, the release date of the updated driver (566.03) is 22/10/2024.
You could have added that the fixes for Linux are in versions 565.57.01, 550.127.05, and 535.216.01. Not everyone runs Windows.
Oh good, historically Nvidia drivers on Linux are just… chef’s kiss. Hopefully this will improve things /s.
That's going to affect a lot of computers. This family of GPUs is incredibly popular. Many of my own graphics cards are GeForce :/
Ok, so update the drivers. It sounds like you think you have to throw the GPU away.
Update the drivers and enjoy all the app breakage that delivers.