"Microsoft shipped a broken “Update and shut down” toggle with Windows 10, and it never acknowledged it until now."
I guess they know what's best for the user base, and this was obviously deemed not important. But boy did they get Copilot integrated in everything post haste.
However, my guess is that this email got nowhere, because the experience of using Windows isn't so different decades later.
What this means is that 1) Microsoft is first and foremost a business oriented company, and what matters to them most is feature set, compatibility, support etc. As long as things mostly work, it's fine. Usability is at the bottom of the list. 2) Windows is just not important to Microsoft any more.
I bet that Satya Nadella has grumbles about bugs and ads in Windows 11, and likely has run into this specific bug first hand. But when he decides that "ads revenue trumps everything" and "these are just small bugs that don't really matter", he immediately forgets about it all.
> Until I read the story about how Steve Jobs was mad about the fact that Mac was slow to start and asked teams to fix it. Surprise, they fixed it.
What was different then was that Steve Jobs actually loved computers and used them. That is not the case for our modern computing behemoths (Microsoft or Apple).
Dogfooding is a thing, and having a person in power who can say "no" is important.
"Someone decided to trash the one part of Windows that was usable? The file system is no longer usable. The registry is not usable. This program listing was one sane place but now it is all crapped up."
I just assume the entire Windows team uses OSX on their own time but have some kind of neural defect that prevents them from taking any lessons from it.
New macbooks with a notch hide icons underneath of the notch and those icons are completely inaccessible without installing 3rd party software to manage your status bar, or turning off a bunch of other software with visible icons on your bar.
IMO that's a far worse UX than update and shutdown turning the computer back on at the end.
you can finally set a screen resolution that just stays below the notch! I'm not sure when that became available, but I just used it a couple weeks ago.
Sounds like a hard life. So much time spent on buggy, unintuitive, jumbled, and half-assed OS, then the only time they get away from it, they have to use Windows.
> Microsoft is first and foremost a business oriented company, and what matters to them most is feature set, compatibility, support etc. As long as things mostly work, it's fine. Usability is at the bottom of the list.
Blame their customers. Those people accepted random reboots for decades.
I think he's talking about that story about the MacBook Air Presentation to Steve Jobs where he threw the prototype on the floor when he saw how slow it booted so they decided to switch to SSD only storage to mitigate this.
I hadn't heard that one, and I can't find anything online. Considering that the base model MacBook Air had spinning rust for the first two and a half years, I'm skeptical.
The "Saving Lives" story I'm referring to is unverified but it does at least come from directly someone who was there.
It is easy to have such hubris, when the competition at the shopping mall where most folks buy hardware is either crimpled Chromebooks and Android tablets, or overpriced Apple laptops, at least in what concerns most tier 2 and 3 countries.
It would be nice to have somethig like Asus or Dell XPS, with Ubuntu LTS fully working laptop hardware at Dixons, FNAC, Publico, Worten, Cool Blue, Saturn, Media Markt,..... but it ain't happening.
However after the netbook phase, that is yet to happen again.
That's actually a key point to make. To generalize, people don't install operating systems. They buy a device with some sort of operating system on it.
That hubris combined with a whole bunch of decisions I resent/actively dislike and the hassle to opt out of things I never asked for is why for the first time since the late 80's I don't have any Microsoft OS's on any of my PC's.
I only used windows 11 for gaming and I don't really do that much anymore - I may have a look at steam/proton but not really in any hurry either.
90-95% of my computing life was spent inside Linux anyway.
These kinds of comments make me question how many folk here are actually in tech or if my experience has been uncharacteristically grim.
Does your company not have hundreds to thousands of backlogged tickets and bugs? Are there not different teams for different parts of the system? No triage policy for prioritizing work?
I had often reboots followed by more update installation and then a shutdown, so I assumed this was working as intended (i.e. finish installing the updates, which might require a reboot, and then power off).
My laptop did something like it last night but not exactly : it booted from HIBERNATION to apply updates and reboot. Auto updates have been turned off for a while and yet this happens. You can't trust micro$oft for even the smallest thing.
Edit: And I'm fairly sure this is a wanted malicious behavior. The thing was hibernating for quite a time before doing it, like it would wait for me to leave the office/for the computer to be in a bag on the way back home and I wouldn't notice what happened.
Full-disk encryption, as useful as it is, also makes this a royal pain. Updates can't be performed unattended, because each restart done during the updates requires providing the password before continuing.
I honestly thought the same and kind of just gave up the idea of hitting that at night and being done.
Figured having 4 OS installations was already fairly niche that it was largely a self imposed issue. Looking forward to confirming that this fixes the issue in my use case.
I wonder if they'll backport this fix to Windows 10 (Very much doubt).
I also wonder if they'll ever fix the menu entry delay bug. At the moment neither of the "Update and ..." options is in the menu when you first open it. Opening the shutdown menu then checks if there are updates available to install and will then add those options, shifting the existing menu entries. Which makes it incredibly easy to quickly click on an option you didn't want.
I’ve always assumed it’s either just something my corporate laptops like to do (my older HP would often switch itself back on even when you told it to shut down, forgetting about any updates), or that I had just clicked the wrong button.
Well, guess that’s my mentally stability so slightly restored!
Windows won't let you overwrite files "in use" and "file" is determined by the full pathname.
Linux will let you overwrite files "in use" (though the program(s) using them may not notice) and "file" is determined by a magic number, the inode - you can delete a file from a directory, really it's removing _that inode_, and put a new file in place with the same name, it's a _new_ inode. Programs that still have the file open are referring to the _old_ inode, which only goes away once everyone stops using it.
So actually you need to go round restarting your programs/services on Linux to get them to pick up changes (most package managers do that automatically), but at least it's _possible_ to make those changes without a reboot. Windows has to go into a special mode where nothing else runs, to be sure that it can update files.
This is why I love OpenSUSE, when you update your system it will let you know when updated files that certain processes are using were touched and you can then decide if you want to restart them.
Suse systems in general are just so much nicer to administer than RedHat or Debian/Ubuntu ones (imo of course).
htop(1) can also highlight running processes that have had their on-disk executable replaced (highlights in red) or one of its shared libraries (highlights in yellow). I find this very useful.
KDE Neon used to do this. Almost always caused issues after update with stuff crashing due to mismatch of versions talking to each other over D-Bus and such.
So they moved to something more like the Windows style, where it downloads, reboots to apply and then reboots again freshly updated.
Note that just replacing files on disk is not sufficient because all the running software would still have the old version.
In the first place it means the security issue could still be present in currently running software, in the second place exciting things can happen when two (or more?!) different versions try to talk to each other. Oh, and who's to say the whole file was fully loaded into memory (or wasn't partially paged out) - imagine the fun that would happen if you later page in data from a different version of the binary!
So you need to hot patch the running binaries. I don't really remember why it's not done in practice even though it's technically possible, I seem to remember the conclusion was that clustering (in whatever form) was the solution for high availability, rather than trying to keep a single machine running.
There is no such partial or mixed exe problem from paging.
It doesn't matter if it was paged out, virtual memory is still just memory.
Paging out & restoring some memory doesn't know or care where the contents originally came from. It doesn't have an optimization that goes "Oh this chunk of memory is an executable file. I can skip writing this out to the swap file, and later when I need to restore it I can just read the original file instead of swap."
For files that a program opens, an open handle is an open handle. The entire file is available in whatever state it was at the time the handle was opened, modulo whatever changes this specific handle has made.
If a program closes and re-opens handles, then it always knew that the entire world could have changed between those 2 opens. Same if it opens non-exclusive. If it opens without exclusive or closes & reopens, then it's ok for the data to change between each access.
There are problems during updates, but they are much higher level and safer than that. Open file handles are open file handles, and currenly loaded exes are consistent and sane until they close. All the problems are in the higher level domains of processes interacting with each other.
> So you need to hot patch the running binaries. I don't really remember why it's not done in practice even though it's technically possible, I seem to remember the conclusion was that clustering (in whatever form) was the solution for high availability, rather than trying to keep a single machine running.
Most systems are technically capable of hot patching (if your exe file is mmaped, and you change the backing file, Bob's your uncle, unless your OS is no fun; which is why unix install pattern is unlink and replace rather than in-place updares). But most executables are not built to be hot patched, especially not without coordination.
Hot patching lets you make changes to your live environment with tremendous speed, but it also has risk of changing your live environment to an offline environment with tremendous speed. I'm a proponent of hot patching, and would love to be able to hot load all the things, but it has requirements and tradeoffs and most software isn't built for it, and that's probably the right decision for most things.
Yep. In fact rename/replace is conceptually the same as unlink/replace, but another potential issue is in-process dll hell. If a patch replaces multiple libraries, and they're not all loaded into a process yet, even if each is atomic, you might load version 1 of the first library but version 2 of the second
Windows locks files when they're in use, so that you cannot overwrite them.
Linux doesn't do this.
So if you want to overwrite a running service then you can either stop it, update it, and restart it (tricky to manage if it has dependencies, or is necessary for using the PC), or to shut down everything, update the files while the OS isn't (or is barely) running, and then restart the OS.
> Windows locks files when they're in use, so that you cannot overwrite them. Linux doesn't do this.
Linux does do this (try overwriting or truncating a binary executable while it's running and you'll get -ETXTBSY).
The difference is that Linux allows you to delete (unlink) a running executable. This will not free the on-disk space occupied by that executable (so anything you write to disk in the immediate future will not overwrite the executable, and it can continue executing even if not all of the executable has been paged in) until all references to its inode are freed (e.g. the program exits and there are no other hardlinks to it).
Then you can install a new version of the executable with the same name (since a file by that name no longer exists). This is what install(1) does.
MOVEFILE_DELAY_UNTIL_REBOOT is sort of the real trick, because it's processed by the Windows equivalent of PID 0 (ssms), which processes these pending operations before actually starting any other userspace stuff (which would invariably load things like kernel32.dll etc.)
Windows NT was always capable of what Microsoft calls “POSIX delete semantics” (the POSIX compatibility layer was in the design doc since before the name change from “NT OS/2”), but some years ago the default for the Win32 call DeleteFile actually changed to take advantage of that (causing some breakage in applications, but apparently not a lot of it).
it'd be nice if Microsoft paid just a bit of attention to the immutable/atomic Linux ecosystem a bit and if they could finally ship an OS that wasn't always a dearly loved "pet".
"Microsoft shipped a broken “Update and shut down” toggle with Windows 10, and it never acknowledged it until now."
I guess they know what's best for the user base, and this was obviously deemed not important. But boy did they get Copilot integrated in everything post haste.
Typical Microsoft hubris.
Different teams, perhaps. No idea. A monster org like this must be massively disconnected internally, especially for non-critical bugs and such.
I used to always think like that and try to come up with these excuses.
Until I read the story about how Steve Jobs was mad about the fact that Mac was slow to start and asked teams to fix it. Surprise, they fixed it.
And it's not like nobody could say anything at Microsoft. Someone on HN posted this email (originally from a different website):
https://www.techemails.com/p/bill-gates-tries-to-install-mov...
However, my guess is that this email got nowhere, because the experience of using Windows isn't so different decades later.
What this means is that 1) Microsoft is first and foremost a business oriented company, and what matters to them most is feature set, compatibility, support etc. As long as things mostly work, it's fine. Usability is at the bottom of the list. 2) Windows is just not important to Microsoft any more.
I bet that Satya Nadella has grumbles about bugs and ads in Windows 11, and likely has run into this specific bug first hand. But when he decides that "ads revenue trumps everything" and "these are just small bugs that don't really matter", he immediately forgets about it all.
> Until I read the story about how Steve Jobs was mad about the fact that Mac was slow to start and asked teams to fix it. Surprise, they fixed it.
What was different then was that Steve Jobs actually loved computers and used them. That is not the case for our modern computing behemoths (Microsoft or Apple).
Dogfooding is a thing, and having a person in power who can say "no" is important.
"Someone decided to trash the one part of Windows that was usable? The file system is no longer usable. The registry is not usable. This program listing was one sane place but now it is all crapped up."
In 2003 already. Amazing!
I just assume the entire Windows team uses OSX on their own time but have some kind of neural defect that prevents them from taking any lessons from it.
New MacBooks with a notch hide menu bar icons underneath the notch, and those icons are completely inaccessible without installing third-party software to manage your menu bar, or turning off a bunch of other software that puts visible icons on the bar.
IMO that's a far worse UX than update and shutdown turning the computer back on at the end.
You can finally set a screen resolution that just stays below the notch! I'm not sure when that became available, but I just used it a couple of weeks ago.
In a pinch you can reduce the spacing between menu bar items [1]. The macOS default is ridiculously large.
[1] https://apple.stackexchange.com/questions/406316/can-the-spa...
OSX is not the paragon of UX, as evidenced by the long list of software I need to install to make it behave in a non-broken fashion.
That said, I don’t think I disagree with your diagnosis. I’m just afraid they’re lifting more bad parts than good.
Sounds like a hard life. So much time spent on buggy, unintuitive, jumbled, and half-assed OS, then the only time they get away from it, they have to use Windows.
> Microsoft is first and foremost a business oriented company, and what matters to them most is feature set, compatibility, support etc. As long as things mostly work, it's fine. Usability is at the bottom of the list.
Blame their customers. Those people accepted random reboots for decades.
That Steve Jobs story is from 1983 and the entire Mac team could probably have fit into a reasonably large conference room.
I think he's talking about the story of the MacBook Air presentation to Steve Jobs, where he threw the prototype on the floor when he saw how slow it booted, so they decided to switch to SSD-only storage to mitigate it.
It's however difficult to verify these stories.
I hadn't heard that one, and I can't find anything online. Considering that the base model MacBook Air had spinning rust for the first two and a half years, I'm skeptical.
The "Saving Lives" story I'm referring to is unverified, but it does at least come directly from someone who was there.
It’s frankly wild how many weird problems and UX pitfalls I experienced with my first PC in roughly 2005 are STILL issues.
What the fuck is Microsoft having all these engineers work on all day?
I've been supporting MS as a career for 30+ years, so trust me, I understand that.. and it's a common excuse. But I don't accept it.
New priorities get funding and promotions, so everyone abandons unglamorous but critical work.
Conway's law would say so.
It is easy to have such hubris when the competition at the shopping mall where most folks buy hardware is either crippled Chromebooks and Android tablets, or overpriced Apple laptops, at least in most tier 2 and 3 countries.
It would be nice to have something like an Asus or Dell XPS with fully working Ubuntu LTS laptop hardware at Dixons, FNAC, Publico, Worten, Cool Blue, Saturn, Media Markt, and so on, but it ain't happening.
However, after the netbook phase, that has yet to happen again.
That's actually a key point to make. To generalize, people don't install operating systems. They buy a device with some sort of operating system on it.
No, the truth of the matter is they needed Copilot in there to analyse and identify the bug for them. And then write the code to fix it...
> Typical Microsoft hubris.
That hubris, combined with a whole bunch of decisions I resent/actively dislike and the hassle of opting out of things I never asked for, is why, for the first time since the late 80s, I don't have any Microsoft OS on any of my PCs.
I only used Windows 11 for gaming, and I don't really do that much anymore. I may have a look at Steam/Proton, but I'm not in any real hurry either.
90-95% of my computing life was spent inside Linux anyway.
These kinds of comments make me question how many folk here are actually in tech or if my experience has been uncharacteristically grim.
Does your company not have hundreds to thousands of backlogged tickets and bugs? Are there not different teams for different parts of the system? No triage policy for prioritizing work?
Well, as it happened, when I was part of a company that released software, we prioritized high-visibility bugs that users complained about often.
This is a high-visibility bug that users complain about often.
"Update and shutdown" always worked for me in Windows 10 :shrug:
Probably race conditions galore that were hard to repro.
I often had reboots followed by more update installation and then a shutdown, so I assumed this was working as intended (i.e. finish installing the updates, which might require a reboot, and then power off).
For a long time, windows had two options:
1. Update and restart and prompt for bitlocker password and update and restart and prompt for bitlocker password and restart
2. Update and restart and prompt for bitlocker password and update and restart and prompt for bitlocker password and shut down (and restart)
Finally, they fixed the last bit of option 2
My laptop did something like it last night, but not exactly: it woke from HIBERNATION to apply updates and reboot. Auto updates have been turned off for a while, and yet this happens. You can't trust micro$oft with even the smallest thing.

Edit: And I'm fairly sure this is deliberate, malicious behavior. The thing had been hibernating for quite a while before doing it, as if it were waiting for me to leave the office / for the computer to be in a bag on the way home, so I wouldn't notice what happened.
That was a bug? I always thought it was because of my dual boot configuration.
Yeah, even with this fixed it's going to be annoying, because it first restarts, so you have to stay at the PC to select Windows in GRUB.
Full-disk encryption, as useful as it is, also makes this a royal pain. Updates can't be performed unattended, because each restart done during the updates requires providing the password before continuing.
You can set GRUB to default to the last selected menu entry (GRUB_DEFAULT=saved plus GRUB_SAVEDEFAULT=true in /etc/default/grub, then regenerate the GRUB config).
I honestly thought the same and kind of just gave up the idea of hitting that at night and being done.
I figured having 4 OS installations was already niche enough that it was largely a self-imposed issue. Looking forward to confirming that this fixes the issue in my use case.
I wonder if they'll backport this fix to Windows 10 (I very much doubt it).
I also wonder if they'll ever fix the menu entry delay bug. At the moment, neither of the "Update and ..." options is in the menu when you first open it. Opening the shutdown menu then checks whether there are updates available to install and only then adds those options, shifting the existing menu entries. Which makes it incredibly easy to quickly click an option you didn't want.
Looking at the rest of their recent update history, this likely just broke...
Task failed successfully.
I noticed Windows's start menu only updates certain tracked files (e.g., removing deleted files) on a normal shutdown + start, and not on a restart.
Which is odd, because I was under the impression a restart is the only "true" shutdown due to Fast Startup behavior.
I didn't look too much into it and chalked it up to a quirk of Windows start menu behavior in tracking recent files.
I can't believe I'm alive to witness this.
I’ve always assumed it’s either just something my corporate laptops like to do (my older HP would often switch itself back on even when you told it to shut down, forgetting about any updates), or that I had just clicked the wrong button.
Well, guess that’s my mental stability ever so slightly restored!
This happened to me last night! I was going to bed and I clicked Update and Shut Down, then I went in to the other room.
After a few minutes I could see the blue glow of my Windows background shining on the wall.
Glad it is fixed!
I have always blamed my Dell desktop for this.
Incredible.
Microsoft denied a bug for a decade that harmed 100% of users every month in an obvious way.
It is the denial that is so very Microsoft.
About time... I guess
Oh. My God.
Finally.
Seriously, on a Linux system, I update everything except the kernel without a reboot. Why can’t Windows do this?
Windows won't let you overwrite files "in use" and "file" is determined by the full pathname.
Linux will let you overwrite files "in use" (though the program(s) using them may not notice), and "file" is determined by a magic number, the inode. You can delete a file from a directory (really you're removing the directory entry that points to _that inode_) and put a new file in place with the same name; it's a _new_ inode. Programs that still have the file open are referring to the _old_ inode, which only goes away once everyone stops using it.
So actually you need to go round restarting your programs/services on Linux to get them to pick up changes (most package managers do that automatically), but at least it's _possible_ to make those changes without a reboot. Windows has to go into a special mode where nothing else runs, to be sure that it can update files.
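To make the inode point concrete, here's a minimal C sketch (hypothetical file name, plain POSIX calls, error handling omitted); the open descriptor keeps the old inode readable even after the name has been unlinked and reused for a brand-new file, which is exactly why a package manager can swap files under a live system:

```c
/* Minimal sketch of the inode behaviour described above. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* "Old" version of the file. */
    int old_fd = open("demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);
    write(old_fd, "version 1\n", 10);

    /* Remove the *name*, then create a brand-new file under the same name.
     * The old inode stays alive because old_fd still references it. */
    unlink("demo.txt");
    int new_fd = open("demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);
    write(new_fd, "version 2\n", 10);

    char old_buf[32] = {0}, new_buf[32] = {0};
    lseek(old_fd, 0, SEEK_SET);
    read(old_fd, old_buf, sizeof old_buf - 1);
    lseek(new_fd, 0, SEEK_SET);
    read(new_fd, new_buf, sizeof new_buf - 1);

    printf("old handle sees: %s", old_buf);   /* prints "version 1" */
    printf("new handle sees: %s", new_buf);   /* prints "version 2" */

    close(old_fd);   /* the old inode's storage is freed once the last reference drops */
    close(new_fd);
    return 0;
}
```

Running programs behave like the old descriptor: they keep using the old version until they're restarted, at which point the new name (and therefore the new inode) gets picked up.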
This is why I love openSUSE: when you update your system, it tells you which running processes are still using files the update replaced, and you can then decide whether to restart them.
SUSE systems in general are just so much nicer to administer than Red Hat or Debian/Ubuntu ones (IMO, of course).
htop(1) can also highlight running processes whose on-disk executable has been replaced (highlighted in red) or which have had one of their shared libraries replaced (highlighted in yellow). I find this very useful.
btop as well
Debian does much the same: at least for a libc upgrade and a few others, you get asked whether you wish to restart the affected services now or later.
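For the curious, the signal these tools key off is easy to check by hand: on Linux, a process whose binary was replaced or deleted after it started has a /proc/<pid>/exe link ending in " (deleted)". A minimal sketch (assumes /proc; not the actual implementation of htop or any of the distro tools):

```c
/* List processes still running a deleted/replaced executable. */
#include <ctype.h>
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    DIR *proc = opendir("/proc");
    if (!proc) { perror("opendir /proc"); return 1; }

    struct dirent *entry;
    while ((entry = readdir(proc)) != NULL) {
        if (!isdigit((unsigned char)entry->d_name[0]))
            continue;                      /* only numeric entries are PIDs */

        char link[PATH_MAX], target[PATH_MAX];
        snprintf(link, sizeof link, "/proc/%s/exe", entry->d_name);

        ssize_t n = readlink(link, target, sizeof target - 1);
        if (n < 0)
            continue;                      /* kernel thread, permission denied, ... */
        target[n] = '\0';

        const char *marker = " (deleted)";
        size_t len = strlen(target), mlen = strlen(marker);
        if (len >= mlen && strcmp(target + len - mlen, marker) == 0)
            printf("PID %s runs a deleted/replaced binary: %s\n",
                   entry->d_name, target);
    }
    closedir(proc);
    return 0;
}
```

(Checking replaced shared libraries takes a bit more work, e.g. scanning /proc/<pid>/maps for mappings marked deleted.)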
KDE Neon used to do this. It almost always caused issues after an update, with stuff crashing due to mismatched versions talking to each other over D-Bus and such.
So they moved to something more like the Windows style, where it downloads, reboots to apply and then reboots again freshly updated.
KDE sometimes borks after big Qt or KDE updates. I just log out and log back in.
This is what it has said on the tin since forever for Linux systems, and it doesn't hurt.
Right, but a reboot is just as quick and then you get to load everything from scratch so I just ended up doing that.
My systems run a couple of services, too. So, I don’t prefer to reboot unless I’m upgrading the kernel or something in close vicinity.
Also, it surfaces long-running bugs so I can report them.
Briefly: it can (see e.g https://devblogs.microsoft.com/oldnewthing/20130102-00/?p=56...)
Note that just replacing files on disk is not sufficient because all the running software would still have the old version.
For one, it means the security issue could still be present in currently running software; for another, exciting things can happen when two (or more?!) different versions try to talk to each other. Oh, and who's to say the whole file was fully loaded into memory (or wasn't partially paged out)? Imagine the fun if you later paged in data from a different version of the binary!
So you need to hot patch the running binaries. I don't really remember why it's not done in practice even though it's technically possible; I seem to remember the conclusion was that clustering (in whatever form) was the solution for high availability, rather than trying to keep a single machine running.
There is no partial or mixed exe problem from paging, though.
It's not that paging doesn't care where the contents came from; executable pages are actually backed by the image file itself rather than the swap file, so clean pages just get dropped and re-read from the binary on demand.
The reason that can't produce a mixed-version executable on Windows is that the image file is locked while it's mapped, so the backing file can't change underneath a running process in the first place.
For files that a program opens, an open handle is an open handle. The entire file is available in whatever state it was at the time the handle was opened, modulo whatever changes this specific handle has made.
If a program closes and re-opens handles, then it always knew that the entire world could have changed between those two opens; same if it opens the file non-exclusively. In either case it has to be okay with the data changing between accesses.
There are problems during updates, but they are much higher level and safer than that. Open file handles are open file handles, and currently loaded exes are consistent and sane until they close. All the problems are in the higher-level domain of processes interacting with each other.
> So you need to hot patch the running binaries. I don't really remember why it's not done in practice even though it's technically possible; I seem to remember the conclusion was that clustering (in whatever form) was the solution for high availability, rather than trying to keep a single machine running.
Most systems are technically capable of hot patching (if your exe file is mmapped and you change the backing file, Bob's your uncle, unless your OS is no fun; which is why the Unix install pattern is unlink-and-replace rather than in-place updates). But most executables are not built to be hot patched, especially not without coordination.
Hot patching lets you make changes to your live environment with tremendous speed, but it also has risk of changing your live environment to an offline environment with tremendous speed. I'm a proponent of hot patching, and would love to be able to hot load all the things, but it has requirements and tradeoffs and most software isn't built for it, and that's probably the right decision for most things.
Yep. In fact rename/replace is conceptually the same as unlink/replace, but another potential issue is in-process DLL hell. If a patch replaces multiple libraries and they're not all loaded into a process yet, then even if each replacement is atomic, you might load version 1 of the first library but version 2 of the second.
Windows locks files when they're in use, so that you cannot overwrite them. Linux doesn't do this.
So if you want to overwrite a running service, you can either stop it, update it, and restart it (tricky to manage if it has dependencies or is necessary for using the PC), or shut down everything, update the files while the OS isn't (or is barely) running, and then restart the OS.
> Windows locks files when they're in use, so that you cannot overwrite them. Linux doesn't do this.
Linux does do this (try overwriting or truncating a binary executable while it's running and you'll get -ETXTBSY).
The difference is that Linux allows you to delete (unlink) a running executable. This will not free the on-disk space occupied by that executable (so anything you write to disk in the immediate future will not overwrite the executable, and it can continue executing even if not all of the executable has been paged in) until all references to its inode are freed (e.g. the program exits and there are no other hardlinks to it).
Then you can install a new version of the executable with the same name (since a file by that name no longer exists). This is what install(1) does.
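If you want to see the -ETXTBSY half first-hand, here's a minimal, Linux-specific C sketch; it leans on /proc/self/exe to get hold of a binary that is guaranteed to be running, and the closing comment only describes the unlink-and-replace step (running it for real would delete the demo's own binary):

```c
/* Demonstrate that a running executable can't be opened for writing. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* Try to open our own running image for writing. */
    int fd = open("/proc/self/exe", O_WRONLY);
    if (fd < 0)
        printf("overwrite refused: %s\n", strerror(errno));   /* "Text file busy" */
    else
        close(fd);

    /* install(1) sidesteps this: unlink("/path/to/prog") removes only the
     * *name*, then a new file is created at that path.  Already-running
     * copies keep executing the old, now-anonymous inode until they exit. */
    return 0;
}
```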
MOVEFILE_DELAY_UNTIL_REBOOT is sort of the real trick on the Windows side: the pending rename is recorded in the registry and processed by the Session Manager (smss.exe) very early in the next boot, before any other user-space stuff starts (which would invariably load things like kernel32.dll etc.)
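A minimal Win32 sketch of what that looks like from an installer's point of view; the paths are made up, it has to run with administrator rights, and it's only an illustration of the flag, not Microsoft's actual servicing code:

```c
/* Schedule file operations to be performed at the next boot. */
#include <stdio.h>
#include <windows.h>

int main(void) {
    /* Replace an in-use DLL at the next boot.  Nothing is touched now; the
     * operation is recorded under PendingFileRenameOperations and replayed
     * by the Session Manager before the target can be locked again. */
    if (!MoveFileExW(L"C:\\staging\\mylib_new.dll",
                     L"C:\\Program Files\\MyApp\\mylib.dll",
                     MOVEFILE_DELAY_UNTIL_REBOOT | MOVEFILE_REPLACE_EXISTING))
        fprintf(stderr, "scheduling replace failed: %lu\n", GetLastError());

    /* Passing NULL as the target schedules the file for deletion instead. */
    if (!MoveFileExW(L"C:\\staging\\leftover.tmp", NULL,
                     MOVEFILE_DELAY_UNTIL_REBOOT))
        fprintf(stderr, "scheduling delete failed: %lu\n", GetLastError());

    return 0;
}
```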
That’s not strictly true: https://askubuntu.com/a/731993
I semi-regularly have to reboot my Linux system despite the kernel remaining unchanged.
With things like kpatch you can even update the kernel without a reboot
The file system won’t allow you to overwrite an open file in Windows.
thinking it has something to do with this https://unix.stackexchange.com/a/49306
Windows NT was always capable of what Microsoft calls “POSIX delete semantics” (the POSIX compatibility layer was in the design doc since before the name change from “NT OS/2”), but some years ago the default for the Win32 call DeleteFile actually changed to take advantage of that (causing some breakage in applications, but apparently not a lot of it).
For me 25H2 was the first update that didn't update and shut down. All the previous ones did (on multiple computers).
It'd be nice if Microsoft paid just a bit of attention to the immutable/atomic Linux ecosystem, and if they could finally ship an OS that isn't always a dearly loved "pet".
Pity those who have to work on/with this jank.
How dare you shut down such a great operating system?
What a joke of an OS and company.