Writing this from a passively cooled (Streacom FC8 Evo) Linux PC with a Russian keyboard.
    # dmidecode 3.6
    Getting SMBIOS data from sysfs.
    SMBIOS 2.8 present.

    Handle 0x002C, DMI type 27, 15 bytes
    Cooling Device
            Temperature Probe Handle: 0x0029
            Type: <OUT OF SPEC>
            Status: <OUT OF SPEC>
            Cooling Unit Group: 1
            OEM-specific Information: 0x00000000
            Nominal Speed: Unknown Or Non-rotating
            Description: Cooling Dev 1

    Handle 0x002F, DMI type 27, 15 bytes
    Cooling Device
            Temperature Probe Handle: 0x0029
            Type: <OUT OF SPEC>
            Status: <OUT OF SPEC>
            Cooling Unit Group: 1
            OEM-specific Information: 0x00000000
            Nominal Speed: Unknown Or Non-rotating
            Description: Not Specified

    Handle 0x0037, DMI type 27, 15 bytes
    Cooling Device
            Temperature Probe Handle: 0x0036
            Type: Power Supply Fan
            Status: OK
            Cooling Unit Group: 1
            OEM-specific Information: 0x00000000
            Nominal Speed: Unknown Or Non-rotating
            Description: Cooling Dev 1
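A VM-detection check along these lines (a sketch, not taken from any specific malware sample) just counts DMI type 27 records in `dmidecode` output and treats zero cooling devices as a hint that it may be running in a VM:

```python
import re

def count_cooling_devices(dmidecode_text: str) -> int:
    """Count DMI type 27 (Cooling Device) records in `dmidecode` output."""
    return len(re.findall(r"DMI type 27\b", dmidecode_text))

def looks_like_bare_metal(dmidecode_text: str) -> bool:
    # Heuristic only: physical boards usually report at least one cooling
    # device, while stock VMs usually report none.
    return count_cooling_devices(dmidecode_text) > 0
```

You would feed it the output of `sudo dmidecode -t 27`; as the output above shows, even then the records may be largely `<OUT OF SPEC>` on real hardware.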
Huh so new antimalware tactic: Buy passively cooled PC :)
And also set up a Russian keyboard: https://krebsonsecurity.com/2021/05/try-this-one-weird-trick...
So a cooling device is still present (see the dmidecode sensor data above).
> Streacom FC8 Evo
I normally think PC cases are gaudy and boring even when trying to evoke some style. The stuff on Streacom's website, however, makes me want to build something with it.
Passively cooled PC probably won't work because the board will still have fan headers even if nothing is connected to them.
So we just need to implement the opposite of what OP has on our PCs, i.e. make the OS think there are no fans.
Yes and another method of controlling them.
External cooling device?
The computer knows there's a fan because it sees tacho output. If it doesn't see tacho, shrug. You can get an external temperature-controlled PWM controller for a few units of your local currency on AliExpress, steal 12V from somewhere (Molex header or whatever) and run the fans off that. Figure out where to put the temp sensor to get the desired effect.
There are far better ways to do this, but they require software engineering, not €3 and 15 minutes.
I have yet to see _any_ consumer-oriented motherboard where SMBIOS descriptions have even a passing relationship to the actual hardware. I would not be surprised if this malware also failed on 50% of the real hardware out there. But I also guess malware can afford this failure rate; as long as it guarantees it also fails on 100% of VMs/debuggers, it is worth it.
But if these assumptions are true then I'd presume malware authors would do timing checks rather than the trivially "emulable" SMBIOS.
> I have yet to see _any_ consumer-oriented motherboard where SMBIOS descriptions have even a passing relationship to the actual hardware.
This seems to be especially true for cheap Chinese boxes. If I had a dollar for every time I saw "to be filled in by OEM" strings in live/production BIOS images, I'd be retired :).
Bonus points for a non-unique UEFI UUID that is already enrolled in some random company's Microsoft Intune / Windows Autopilot instance so when you fire it up off a fresh Windows install it begs you to sign into $RANDOM_COMPANY_WITH_BAD_IT_CONTROLS.
Triple-points if the vendor includes a sticker telling you to complete Windows OOBE without connecting it to the Internet to avoid this.
I still can't believe that Microsoft allows companies to essentially brick machines they don't even own like that. Seems criminal to me.
More criminal than hard-coding a UUID for some other device?
You can do whatever you want with your device. Microsoft is also doing whatever they want with your device.
If the OEM hadn't messed up and reused UUIDs, it would be "Microsoft letting companies do whatever they want with their device", which is not unreasonable. OEMs reusing UUIDs for some ridiculous reason is breaking down the chain of "whose device is it".
I’m fairly sure my expensive ASUS ROG motherboard (ergo: not even their budget line) also had a “to be filled in by OEM” string that I couldn’t even override. (ASUS have a utility but it’s not publicly available, probably just for computer shops)
But that's exactly the point. Computer shops that sell complete systems are supposed to put their name in the "system manufacturer" field. If you bought the mainboard yourself and built your own system, then who do you think should have replaced that string?
If you buy a motherboard to build your own (or any, even if it is for someone else) PC, you are the OEM.
That's basically my experience for 2 other "gaming" motherboard brands that aren't ASUS as well. My guess is that people who build their own PCs probably don't care about SMBIOS serial numbers being properly populated, so why bother?
But this is correct: if the mainboard was bought as-is and was not part of a complete system, the system manufacturer is obviously not filled out, as there is none.
Malware has bugs. In fact some viruses have done far more damage than the author intended due to bugs.
There was a substantially effective virus years ago that made it around the world in 90 minutes, and it turns out a bug in its networking code caused it to spread half as fast as it should have. Meaning it should have been everywhere in 45 minutes. You can still do a lot of damage without hitting every machine in existence.
How does Linux find the fans these days? Is it an ACPI/EFI thing now? Nearly all my machines seem to have correct fans/sensors.
Through a bazillion practically motherboard-model-specific hacks:
https://lxr.linux.no/#linux+v6.7.1/drivers/hwmon/
Yes, ACPI is far more reliable.
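The hwmon sysfs tree those drivers populate can be enumerated in a few lines. A sketch (chip and file names vary by board; the `sysfs_root` parameter is there only so it can be pointed at a test directory):

```python
from pathlib import Path

def list_fans(sysfs_root: str = "/sys/class/hwmon") -> list[tuple[str, int]]:
    """Return (chip_name, rpm) pairs for every fan*_input under an hwmon tree."""
    fans: list[tuple[str, int]] = []
    root = Path(sysfs_root)
    if not root.is_dir():
        return fans
    for chip in sorted(root.iterdir()):
        if not chip.is_dir():
            continue
        # Each hwmon device exposes a chip name and zero or more fan*_input files.
        name_file = chip / "name"
        name = name_file.read_text().strip() if name_file.is_file() else chip.name
        for fan in sorted(chip.glob("fan*_input")):
            fans.append((name, int(fan.read_text().strip())))
    return fans
```

On a typical desktop this returns entries from a Super I/O chip like an nct67xx; in a stock VM it usually returns nothing, which is exactly the signal this thread is about.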
Is it the actual malware checking this or some researcher-created malware samples?
Using such tricks might seem like a cute way for malware to make analysis difficult, but oftentimes calls to these obscure system APIs can be detected statically, and you can bet they will be flagged as suspicious by AV software. If the malware binary is not obfuscated to hide such calls, I'd even call them "counterproductive" for the malware authors!
The legitimate programs interested in these APIs are almost always binaries signed by well-known (and trusted) CAs, making it sensible for the analysis to report suspicious behavior.
I worked as a junior in this field, and one of my tasks was to implement regex pattern matching to detect usages of similar APIs. Surprisingly effective at catching low hanging fruit distributed en masse.
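A toy version of that kind of regex sweep might look like this. The pattern list is illustrative, not from any real product, though `GetSystemFirmwareTable` and the `Win32_Fan`/`Win32_TemperatureProbe` WMI classes are real Windows APIs that SMBIOS-probing code tends to touch:

```python
import re

# Illustrative patterns only; real AV engines maintain far larger curated lists.
SUSPICIOUS = {
    "GetSystemFirmwareTable": rb"GetSystemFirmwareTable",
    "WMI Win32_Fan query": rb"SELECT\s+\*\s+FROM\s+Win32_Fan",
    "WMI Win32_TemperatureProbe": rb"Win32_TemperatureProbe",
}

def scan_binary(data: bytes) -> list[str]:
    """Return the names of suspicious patterns found in a binary blob."""
    return [name for name, pat in SUSPICIOUS.items()
            if re.search(pat, data, re.IGNORECASE)]
```

As the comment says, this only catches the low-hanging fruit: any packing or string obfuscation defeats it, but unobfuscated mass-distributed samples expose these strings verbatim.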
Malware is signed surprisingly often these days, you can't rely on malware companies not to sign their binaries anymore. Hacked code signing certificates seem to be all over the place and Microsoft seems very reluctant to revoke trust out of fear of actually breaking their original customers' software.
Same goes for the common vulnerable drivers that malware likes to load so they can get into the kernel. A weird tiny binary making WMI calls may stand out, but a five year old overclocking utility full of vulnerabilities doing the same queries wouldn't.
From the research I've read, this doesn't seem to be about avoiding detection as much as it's about not detonating the real payload on a malware analyst's machine. If the AV flags the binary or the detection trips, the second stage isn't downloaded and the malware that does stuff that makes the news doesn't execute (yet).
>Hacked code signing certificates seem to be all over the place and Microsoft seems very reluctant to revoke trust out of fear of actually breaking their original customers' software.
AFAIK most (all?) code signing CAs are cracking down on this (or maybe Microsoft is pushing them) by mandating that signing keys be on physical or cloud hosted HSMs. For instance if you try to buy a digicert code signing certificate, all the delivery options are either cloud or physical HSMs.
https://www.digicert.com/signing/code-signing-certificates
It's a change to the CA rules that was passed in https://cabforum.org/2022/04/06/ballot-csc-13-update-to-subs... to align OV certificate requirements with the EV ones (that enforces the use of HSMs/hardware tokens/etc) that was meant to go into effect for new certificates issued after November 2022, but was delayed and eventually implemented on June 1 2023.
So, from a security perspective, maybe we should run all software inside a VM then?
You'd lose things like hardware acceleration.
That said, plenty of malware will stop downloading additional modules or even erase itself when it detects things that could indicate it's being analysed, like VirtualBox drivers, VMWare hardware IDs, and in the case of some Russian malware relying on the "as long as we don't hack Russians the government won't care" tactic, a Russian keyboard layout.
It won't stop less sophisticated malware, but running stuff inside of a VM can definitely have viruses kill themselves out of fear of being analysed.
> You'd lose things like hardware acceleration.
This is increasingly less true. SR-IOV and S-IOV are becoming increasingly common even in consumer hardware and OS manufacturers are increasingly leaning on virtualisation as a means to protect users or provide conveniences.
WSL has helped with virtualisation support quite a bit as a means of getting hardware manufacturers to finally play nice with consumer virtualisation.
And Microsoft now even provides full ephemeral Windows VM "sandboxes". The feature that surprised me was that they support enabling proper GPU virtualisation as well.
But then you have your "VMs" accessing the real hardware, so the benefits of the VM shrink if not disappear. You can't have your cake and eat it too.
Sounds like having a virtual Russian keyboard, and installing VMware Tools or VirtualBox Guest Additions on the host without using them, is the new low-overhead antivirus.
That leaves you vulnerable to side channel attacks. From a security perspective, we shouldn’t run software at all, but if you have to, just use AWS Lambda.
My response is in the queue, please be patient.
What kind of side-channel attacks? You mean caching-related?
We wouldn't need to if we used capability-based operating systems.
Every app would have a long permissions dialog. Every app would want to read your CPU fan for no good reason (just as another piece of the fingerprint), so you'd get used to clicking accept just to be able to use any apps at all. The malware would still get through. This already happened on mobile.
That’s how the Xbox works too
IIRC the xbox one onwards (switching from PowerPC to AMD x86) gave them synergy with AMD's efforts to push hard into servers with virtualization, as well as MS pushing Azure
Qubes OS exists
The trick is to become a company like "CrowdStrike", get your crappy software that runs at kernel level signed, then you can run all of the "suspicious" calls to sys apis all you want. Forget determining if it’s a VM or not.
Just push untested code/releases on production machines across all of your customers. Then watch the world burn, flights get delayed, critical infrastructure gets hammered, _real_ people get impacted.
_Legitimate_ companies have done more damage to American companies than black hat hackers or state actors can ever dream of.
The folks behind the xz-utils backdoor in liblzma aspire to cause the amount of damage companies like ClownStrike and SolarWinds have caused.
Antivirus software just guessing what is and isn't malware by analysing static calls is actually really annoying. If you're doing that, then why not just make an allow list of trusted software and mark any software not on that list as malware? It'll work just about the same.
That's pretty much exactly how it works now. We instead analyze programs and guess that they're safe.
Well, after we send a copy of the program to Microsoft, of course
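The allow-list idea above reduces to default-deny hash matching. A toy sketch (real products such as AppLocker or WDAC also handle signers, paths, and updates, which is where the hard part actually lives):

```python
import hashlib

# Hypothetical allow list: SHA-256 digests of binaries an admin has vetted.
ALLOWLIST: set[str] = set()

def is_trusted(path: str) -> bool:
    """Default-deny: a binary is trusted only if its exact hash is on the list."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() in ALLOWLIST
```

The obvious weakness is also the obvious strength: any single-byte change to a binary drops it off the list, so every legitimate software update needs a corresponding list update.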
This reminds me of how having the right SMBIOS was necessary to create a working Hackintosh. There are so many of these relatively obscure APIs which have been added to the PC over the years, which are often overlooked by those writing virtualisation software, and malware and other VM detection software often tries to poke at them to see how real they look.
A next step to making the VM look real is having simulated temperature sensors that actually change in response to CPU load.
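A load-responsive fake sensor can be as simple as a first-order thermal model. A sketch (the ambient, rise, and time-constant values are made-up illustrative defaults, not measurements):

```python
import math

def step_temperature(temp_c: float, load: float, dt: float,
                     ambient: float = 35.0, max_rise: float = 45.0,
                     tau: float = 60.0) -> float:
    """Advance a first-order thermal model by dt seconds: the reported
    temperature relaxes toward ambient + max_rise * load with time constant
    tau, so the faked sensor heats up and cools down plausibly instead of
    sitting at a suspicious constant value."""
    target = ambient + max_rise * min(max(load, 0.0), 1.0)
    return target + (temp_c - target) * math.exp(-dt / tau)
```

The hypervisor side would sample the guest's actual CPU load every few seconds, step this model, and publish the result wherever the emulated sensor is read from.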
I wonder if making a user endpoint actually look like a VM could help? Maybe adding some VM like flags to throw off some malware? I feel that bad actors would catch on, but it might offer some protection for some low hanging vulnerabilities?
Fascinating article. It prompted two questions for me:
1) With that level of expertise, would it be as easy, or easier, to modify the check in the malware itself?
2) How much work would it be for something like KVM to fake absolutely everything about a PC so it was impossible to tell it was a VM?
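For question 2, QEMU already exposes knobs for the easy parts. A sketch (the flags are real QEMU options; the vendor/product strings and disk path are made up for illustration):

```shell
# Hide the CPUID "hypervisor" bit and override a couple of SMBIOS tables.
qemu-system-x86_64 -enable-kvm \
  -cpu host,-hypervisor \
  -smbios type=0,vendor="American Megatrends",version="F.1" \
  -smbios type=1,manufacturer="Micro-Star International",product="MS-7D25" \
  -drive file=disk.qcow2
```

This covers the CPUID bit and a handful of SMBIOS tables, but not timing behaviour, PCI/USB device IDs, MAC address OUIs, or less common tables like the type 27 cooling devices discussed here; faking "absolutely everything" means chasing every one of those, which is why nobody quite manages it.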
Mitre ATT&CK's T1497.001 (VM Detection) lists SMBIOS checks as a known vector, which means it's open for injection anyway.
I did one little experiment on faking a VM's power supply. I did it with 'HotReplaceable=Yes' and 'Status=OK', and suddenly you look like a $5k bare-metal server.
cmd used:

    pip install dmigen
    dmigen -o smbios.bin \
        --type0 vendor="American Megatrends",version="F.1" \
        --type1 manufacturer="Dell Inc.",product="PowerEdge T630" \
        --type39 name="PSU1",location="Bay 1",status=3,hotreplaceable=1
FYI: You need two line breaks to force an actual break on HN, or you need to indent each line by two to force code mode.
That’s nothing. I make my VMs think they have dust.
I haven't bought a computer cooled by a fan in over 13 years.
Misread the title as "I made my VM think it WAS a CPU fan" and was a bit disappointed to find the actual article was not about a VM with an identity crisis.
What's up with the body shaming in this article?
> But that’s smol pp way of thinking
Because they think it's funny. Personally, I just found it off-putting and stopped reading.
Hang on, does this mean the MacBook Air is less vulnerable to some malware?
What an arcane piece of tech. Why not use EFI?
Pretty funny that a blog post talking about complex and innovative ways to help investigate malware has a block of the lowest quality, scummiest ads that probably lead to malware.