I haven't listened to this podcast yet so I don't know if this comes up, but a particularly scary part of running a custom OS on Apple Silicon machines is that the temperature of the internal speakers is regulated in software. The Asahi devs have had to painstakingly reverse engineer and reimplement the safety DSP that macOS uses on each device, and add some safety margin, because if they get it wrong they could literally destroy the speakers (and IIRC at least one of their own MacBooks did have its speakers sacrificed along the way).
I wonder if there will be a similar issue with the displays when Asahi gets around to supporting HDR on machines equipped with FALD mini-LED backlights (or XDR, as Apple calls it). HDR displays usually regulate their brightness to keep the panel from getting too hot, and if Apple does that in software too then Asahi will need to replicate it.
As someone who builds speakers commercially: there aren't any devices that will actually monitor a speaker's temperature. I think what you mean is perhaps wattage? The speaker's heat arises from the voice coil, which is what moves to produce sound. In large speakers I've seen manufacturers attach sensors pointed at the moving coil to produce an estimate, but the speakers used in the MacBooks are so thin that this would be a near-impossible feat. Maybe Apple uses an estimate from the DSP based on the watts supplied over time. But you certainly cannot attach any sort of device to the coil itself without making it distort or non-functional.
I know that Apple likes to supply the speakers with slightly more power (watts) than they can handle to get them loud, but I haven't seen nor heard of this temperature monitoring so far. I couldn't find anything related to it either. Please share citations if you have any.
The temperature is modelled based on how much power is put through the speaker; power, not temperature, is what gets measured.
https://github.com/AsahiLinux/speakersafetyd has a readme and the code
I don't know what Apple is doing, but in theory it should be possible to measure the ohmic resistance of the coil and infer temperature via the positive temperature coefficient of the coil material. The effect isn't very big at the temperature differences you could tolerate in a loudspeaker coil, though (~0.5%/K).
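For what it's worth, the inference is simple enough to sketch. Purely illustrative (nothing here is claimed to be what Apple actually ships), assuming a copper coil with a roughly linear temperature coefficient of resistance:

    # Hedged sketch: estimate voice-coil temperature from its DC resistance,
    # using the temperature coefficient of resistance of copper (~0.39%/K).
    ALPHA_CU = 0.0039   # 1/K, approximate TCR of copper near room temperature
    T_REF = 25.0        # deg C at which the reference resistance was measured

    def coil_temperature(r_measured: float, r_ref: float) -> float:
        # R(T) ~= R_ref * (1 + ALPHA_CU * (T - T_REF))  =>  solve for T
        return T_REF + (r_measured / r_ref - 1.0) / ALPHA_CU

    # Example: a nominally 4.0-ohm coil reading 4.6 ohms is ~38 K above
    # reference, i.e. roughly 63 deg C.
    print(coil_temperature(4.6, 4.0))

The catch, as noted above, is that the resistance change is small, so the measurement has to be quite precise.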
You're probably right. I looked at Neumann ("Independent soft clip, peak and thermo limiters for woofer and tweeter; Woofer excursion limiter; thermo limiter for the electronics and amplifiers") and Genelec (https://www.genelec.com/key-technologies/protection-circuitr...) for info - as they're the reference in the monitoring world - and my guess is that it's implemented like Apple, via some simulation function from time-integrated wattage and frequency to bool (limit on/off).
Apple isn't the first to use advanced limiters to get the most of loudspeakers, if I remember well, this trick is what allowed Devialet to make the Phantom what it is.
Which is why loudspeaker measurements should _always_ include something like this: http://0x0.st/XG1y.png
The fact that speaker temperature even needs to be considered is something that boggles the mind. I can't find any other references to it elsewhere on the Internet. Are you sure it's not DC blocking that they've cheaped out on somehow? It somewhat reminds me of this: https://news.ycombinator.com/item?id=7205759
https://github.com/AsahiLinux/speakersafetyd#some-background...
They do it because it lets them drive the speakers much louder than they could do safely with a simple power limiter. Apparently there are amplifier chips which can do that smart regulation internally, but Apple decided to do it in software for whatever reason. Maybe they can squeeze out more sound quality with a more complex DSP that way; I doubt it's cost-cutting given their margins.
Certainly I can see the appeal of implementing control in the silicon I own rather than buying a vendor's chip and having to deal with their supply chain, their sales team, their space claim on my board, etc.
It's probably a combo of cost-cutting and control. If you use a hardware chip, the chip itself only costs cents, but space is at a premium, and there are assembly costs and reduced flexibility (you have to lock in your design well in advance).
Cost-cutting, control, malicious compliance, and more importantly, plausible deniability. Apple has been caught sabotaging third-party repair multiple times and this sort of design is totally in line with that strategy. Making the design overly complex and fragile (and spending extra $$$ in the process), just to make it harder to correctly use, seems to be a common theme. They will of course come up with plausible arguments why their design is "better", but in reality it's only better for them.
https://news.ycombinator.com/item?id=36926276
https://news.ycombinator.com/item?id=24955071
> Apparently there are amplifier chips which can do that smart regulation internally but Apple decided to do it in software for whatever reason.
> Apple has been caught sabotaging third-party repair multiple times and this sort of design is totally in line with that strategy.
Using fewer specialized hardware chips to dynamically regulate speaker temperatures and implementing it in software seems like it would be more repair-friendly, not less. Unless you meant this in some more general sense that doesn't include repair, per se?
I've worked with PC laptop manufacturers before, and they do exactly the same thing. In every case, it's either "how can we take really cheap speakers with limited placement options and make them sound decent, to save costs", or "how can we take slightly better speakers and slightly better placement options and make them sound like surround sound so our audio can be a selling point".
It doesn't require malice or conspiracy to wind up with a closed proprietary design that makes up for cheap, flawed hardware with software workarounds. It's the default if you don't have the opposite as a key value and make a deliberate effort to achieve it.
Yeah - the same thing has been happening for well over a decade with phone cameras. I bought a fancy full-frame DSLR camera recently and the image quality is incredible. But the sensor on my camera is 30x larger than my phone's sensor (or something like that). And good lenses are massive - my arms get a workout from shooting.
It really puts into relief how weird it is that we get such good quality out of phone cameras. They’re almost as much generative AI images as they are actual photos.
Of all of those reasons, malicious compliance and plausible deniability seem the least likely to me. With and from what, exactly?
I get being upset at Apple for e.g. 30% take on in-app purchases. But what exactly would they be trying to do by making it complex to control their speakers?
> But what exactly would they be trying to do by making it complex to control their speakers?
Making it harder for alternative OSes to use their hardware without accidentally damaging it, further cementing macOS' position as the only OS to be trusted.
It's similar to their war against third-party repair shops by deliberately making it difficult to replace parts, even with genuine ones --- see my second link about the iPhone 12 camera.
This is the automotive equivalent of adding sensors and ECU firmware that detects if the engine is using the manufacturer's proprietary oil, and subtly changing the parameters to decrease power and fuel economy, increase emissions, and/or shorten the life of the engine if it isn't. Then blaming third-party repair shops when customers complain.
> Making it harder for alternative OSes to use their hardware without accidentally damaging it, further cementing macOS' position as the only OS to be trusted.
If Apple wanted to kill Asahi Linux, they wouldn't have had to lift a finger. It's the opposite: Apple engineers have needed to do several small things to keep the door open to easily running 3rd party operating systems on their computers. Remember, a modern Mac has essentially identical hardware to an iPad. Apple has locked down the entire boot process and uses digital signatures all through the boot chain. They actively changed how Macs boot to make sure things like Asahi Linux can run on their computers at all.
I don't think they deserve special praise for that. But it does make it a stretch to claim their speakers were intentionally designed to hurt Linux-on-Mac. If they wanted to stop Asahi Linux, they had plenty of much easier, much more effective ways to do so.
Sounds like "be glad they gave you some bones from the table" instead, you know, the company providing the actual proper means for users to reliably run whatever on the hardware they bought, not just the manufacturers blessed OS.
Sometimes I wonder if it really makes sense to spend so much time to do the work Apple should have done in the first place & with no guarantee it will even work after a firmware upgrade or on the next model.
Spending the same effort on more open platforms or open hardware efforts might be wiser in the long term.
On one hand I agree with you, on the other hand I'm happy they are taking the effort to do this because it will reduce e-waste. When those MacBooks no longer receive updates, they can get a second life thanks to this work.
Yes, I did say in my comment that I don’t think they deserve special praise for making a computer a computer. I have complicated feelings about it all too. On one hand, I wish my Apple devices were more open: I think the App Store tax is anticompetitive abuse of a monopoly. I have an expectation of actually owning the things I buy, and I don’t feel that way about the software ecosystem on my iPhone.
On the other hand, I adore the security and cross-application sandboxing that iOS brings. I wish Linux had more ways to sandbox applications from one another - since it’s clear that “trust all computer programs” is an insanely idiotic policy in 2024. And “implicitly trust all code” is de facto how Linux, npm, cargo, etc all run today.
My comment was in response to the idea that the speakers are maliciously designed to hurt asahi Linux, which just seems obviously wrong given how much power Apple has over their devices. They could completely kill asahi Linux with a flick of the wrist if they ever want to. There’s no way that’s the reason for their complex speaker setup.
> Apple has been caught sabotaging third-party repair multiple times and this sort of design is totally in line with that strategy. Making the design overly complex and fragile (and spending extra $$$ in the process), just to make it harder to correctly use, seems to be a common theme.
This, and the nannying nature of their OS, is why I could never have a Mac as a primary machine. I'm always slightly mind-boggled at the number of technical people that do. But then I guess many just live in an IDE, which works fine.
Apple has a very long history of implementing hardware drivers "in software". Steve Wozniak famously did this for a floppy disk drive for one of Apple's early computers.
I think it's just a different, integrated approach to hardware and software development. If you're doing things custom anyway, then why add an extra chip?
Because:
1. Software is more likely to fail at protection, with worse consequences when it does (fire, damaged goods, warranty claims). Not just now, but also after future updates.
2. It eats away at the resources that are intended for the user. In other words: it makes the machine slightly slower for the user, for no good reason to the user.
3. You can do things that are impossible in laptop OS software. A separate chip gives redundancy: even if the OS freezes, you can still make sure the speaker doesn't overheat. There is also real-time capability, etc.
4. It makes the OS and drivers much much simpler, which is important if you want to support other OSes on the same laptop.
Advantages, for Apple, to do it in software:
1. Software upgrades are easier and cheaper (assuming the updates themselves never fail).
2. Cheaper.
3. You can keep competing OSes off of your hardware, because it's too hard for others to replicate your secret-sauce closed-source drivers that include functionality like "preventing parts from frying themselves".
The control loop is handled by the OS? What if the (relatively complex and therefore much more likely to crash) OS crashes? Why would there not at least be some kind of basic thermal throttling implemented in hardware as a fallback? Oh wait, it's Apple, never mind.
It's managed by the Linux kernel communicating with a userspace daemon (speakersafetyd). If userspace crashes or the daemon is too slow, the kernel falls back to a ridiculously low limit that will not damage the speakers for any audio. If the kernel crashes, well, you get no audio in that case. IIRC the reason they couldn't do it completely in the kernel was that the temperature model uses floating point, which is not allowed in the kernel.
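To make the shape of that concrete, here is a hedged, purely illustrative sketch of the idea (not the actual speakersafetyd code, which is written in Rust and driven by per-device calibration data): userspace integrates the power sent to the speaker into a simple first-order thermal model and keeps handing the kernel a volume cap, and the kernel side is assumed to clamp to a very low fixed limit if those updates stop arriving.

    # Illustrative toy model of power-based speaker thermal limiting.
    # All constants are made up; read_power() and set_volume_cap() are
    # hypothetical hooks standing in for the real kernel interfaces.
    import time

    THERMAL_RESISTANCE = 30.0  # K per watt (invented figure)
    TIME_CONSTANT = 10.0       # seconds (invented figure)
    AMBIENT = 35.0             # assumed temperature inside the chassis, deg C
    MAX_COIL_TEMP = 100.0      # start limiting well below the damage point

    def step_temperature(temp, power, dt):
        # First-order model: temperature decays toward
        # ambient + power * thermal_resistance.
        target = AMBIENT + power * THERMAL_RESISTANCE
        return temp + (target - temp) * (dt / TIME_CONSTANT)

    def control_loop(read_power, set_volume_cap, dt=0.1):
        temp = AMBIENT
        while True:
            temp = step_temperature(temp, read_power(), dt)
            # Back the volume off smoothly as the modelled temperature
            # approaches the limit. The kernel side would additionally
            # fall back to a conservative fixed cap if this loop stalled.
            headroom = max(0.0, min(1.0, (MAX_COIL_TEMP - temp) / 20.0))
            set_volume_cap(headroom)
            time.sleep(dt)

The real model also has to account for frequency content and per-speaker calibration, but the basic "integrate power into an estimated temperature, then limit" structure is the same.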
In the late 2000s I remember hearing of PC laptops which if left in the BIOS setup would overheat and shut down due to the fan control being exclusively done by OS drivers, so this sort of "planned fragility" isn't exclusive to Apple.
I had one such laptop. It was always a crapshoot whether Windows updates (the big ones where it rebooted one or more times) would succeed or not before the laptop overheated and shut down. The reason was that the BIOS didn't do any power management and would always run the CPU at maximum power with the fans blowing like a jet engine. It was only once Windows (or Linux) was fully booted that they would take over and do proper power management. But the Windows updates were so slow that they would often spend too much time in that pre-boot environment leading to overheating and shutdown.
No one thinks "let the fans stay off and overheat" is a good idea unless they were convinced to do that against their common sense. Other manufacturers' models either had the expected automatic fan control via EC firmware, or defaulted to the fans being on some intermediate speed.
Reverse engineering Apple's flagship M1 SoC, whose lineage runs back through the ARM chips Apple has shipped in iPhones since the late 2000s, through ARM-based architectural analysis, and integrating support for it into the Linux kernel is an extreme undertaking: duplicating the original dump, checking its hex values, then altering it to see if the application is still functional; doing x, y, and z, then seeing if it works inside hypervisor space.
What would be possible if everyone interested in Linux-on-macOS chipped in $5 each month? I wonder if this could fund a big enough team to make this super-polished.
I definitely respect them for everything they're doing, and I really want to like this project, but my very first exposure to it was when someone linked to it on HN, I clicked the link, and the website told me that since it detected I was coming from HN I was not welcome and it refused to show me the article... it just left a really weird taste in my mouth.
I don't know if they still do that but that's the first thing I think of every time I see Asahi mentioned or I think about giving it a try.
That was the result of a previous thread on Asahi. It was the only time I've been embarrassed by the HN community and its treatment of people contributing to open source.
Anyone who actually cares about Asahi Linux knows who Asahi Lina is. If it is actually intended to be a secret it is one of the worst kept secrets I can think of.
That discussion, and similar discussions that seem to come up regularly, was probably the reason they don't want HN traffic, and it's unfortunate that the moderators here haven't worked harder to shut that stuff down. The gender of their developers just isn't relevant to the quality of their work.
That's really surprising. I've been on HN a while now and I've never noticed that type of comment. The reason I'm on HN at all is that it reminds me of the "old internet" where people posted links and discussed stuff. You're always gonna get someone saying stuff you don't like, but banning all of HN considering how decent the 99.9% is... is odd. It's not 4chan.
Well, you just didn't look hard enough; this is from literally the last thread about Asahi before this one. Pretty much every single post about Asahi Linux on HN has transphobic comments.
Can you find me a single HN comment on this topic that states (or even implies) that the gender of a developer is relevant to the quality of their work?
Because I haven't seen anything like that, so you seem to be attacking a strawthem.
Maybe it has changed since the project was launched, and moderation has been applied. I remember lots and lots of "what's the point?" and "it's a shame your efforts are wasted" sentiments, and "apple is just going to snap their fingers and shut this down one day".
They've already thought about these things and it's still what they chose to work on, so I can understand them blocking a contributing source of those comments.
That being said, I agree the more recent threads have more positivity than negativity so that's good.
It's insane to me that people are working so hard on reverse engineering Apple silicon. Like, the diagrams are right there in Cupertino. It just seems like such a waste. It's like during some kind of economic depression there are people starving and struggling while a bunch of food and resources are just sitting around not being used. Existential grid-lock.
This definitely sucks. I feel similarly about e.g. the jailbreaking community: I appreciate the work they do and at the same time I very much wish it wasn't necessary.
If Apple and other companies like them were a little less greedy we could have far more nice things for free and Alyssa and other brilliant engineers could go work on other great projects. Also if regulators were a little more vigilant and a little less corrupt.
To me what makes it suck even more is the fact that Apple has no qualms exploiting FOSS themselves. BSD, WebKit, their new “game porting toolkit”. And look what they provide in return. It’s so gross.
Agree, at least WebKit can be used outside of Apple. They still did KHTML dirty though.
Your point about working for free is right on the money. I get that asahi is probably intellectually stimulating to work on, but I couldn’t do it knowing I am essentially enriching a company that doesn’t have public benefit in mind.
Clang can definitely be used outside of Apple, and can even compile Linux. Swift technically can be used anywhere, though it is largely driven by Apple and laughs at backward compatibility.
The people I know at Apple actually do have public benefit in mind. They believe in things like usability, privacy, accessibility, sustainability, etc. They don't want to replace you with intrusive AI (yet). And personally I like Apple's products and am using one right now. Unfortunately all large companies tend to turn into monsters that do their best to grind up resources - natural or human - in order to turn them into money. And the designer of the iPhone regrets turning us into zombies - that was not an intended effect.
I have to agree with this take, as much as I appreciate the indelible hacker spirit of jailbreaking closed hardware and making it available to freedom loving individuals... A huge, vocal part of me also feels like the entire effort is truly just empowering Apple to sell more closed down hardware and even the freedom loving nerds will buy it because it's sexy.
There's no getting around the sexiness of Apple hardware. But there's also no getting around how closed it is. You implicitly support Apple's business model when you buy their hardware, regardless of whether you run Asahi or Mac OS.
I think it only fuels the possibility that Apple would open up the architecture documentation where it otherwise wouldn't if you didn't have people diligently reverse engineering it.
Something similar to this happened in the early days of the iPhone, with the iPhone Dev Team. Initially, iPhone "apps" were going to be web pages, but then these reverse engineers came along and developed their own toolchain. Apple realized they had to take action and their "webpages as an app" strategy wasn't going to work.
I was there, part of a small community writing apps pre-SDK.
Neither I, nor anyone else, can promise you it wasn't just a simple $ calculation.
That being said, literally every signal, inside, outside, or leaked, was that apps / public SDK, if it existed meaningfully before release, had to be accelerated due to a poor reaction to the infamous "sweet solution", web apps.
I agree it's logically possible, but I'd like to note for the historical record that this does not jibe with what happened at the time. Even setting that aside, it doesn't sound right in the context of that management team. That version of Apple wasn't proud of selling complements to their goods; they weren't huge on maximizing revenue from selling music or bragging about it. But they were huge on bragging about selling iPods.
I was there, writing apps with the DevTeam's toolchain before Apple ever released theirs. Were you?
Also, I assume you haven't read the Steve Jobs biography, which discusses this and contradicts your point.
One positive outcome of your comment is that it reminded me I still have the 2008 book, "iPhone Open Application Development" by Jonathan Zdziarski. That was a nice walk down memory lane.
It was not. But you got contradicted by people who actually remember what happened. It is fairly well documented, and was common knowledge even at the time. Jobs was initially sold on Web Apps for several reasons, and the state of iPhoneOS 1 and its internal APIs was very precarious and not stable enough for serious third-party development. Again, this was known at the time thanks to the jailbreak community, and it has been explained in details over the years by people who left Apple since then, and Jobs himself in Isaacson’s biography.
When they pivoted towards the AppStore model, there was no predicting how big it would become, or even if it would be successful at all. The iPhone was exceeding expectations, but those were initially quite modest. It was far from world-record revenue.
Apple literally designed a new boot loader for Macs that allows the computer's owner to install and run an unsigned OS without having it degrade the security of the system when booted into the first party OS.
My guess would be that it was personally advocated for by someone who has enough influence within Apple to make it happen. Possibly someone on the hardware team, as I hear that the people developing the Apple Silicon processors run linux on them while they're in development.
This used to be one of the best things about Apple when Steve Jobs was still running the company: you'd get a bunch of features that a purely profit-focussed "rational business" would never approve, just because Steve wanted them. And I suspect Apple still has some of that culture.
On the internet it seems antitrust law can be used to explain everything. Antitrust actually has a pretty strict legal definition, and not a lot of things fall under it. And if antitrust did apply, it would apply far more to the iPhone.
It would take an outright legal revolution in the definition of antitrust for this to be even a remote possibility, and frankly that is not happening.
This is tiresome. They cannot lock down the Mac without losing one of its biggest markets, software development. It was mentioned at a WWDC 5 or 6 years ago I think that software developers were the largest of the professional groups using Macs. You can’t have a development environment that doesn’t allow for the downloading, writing, and installation of arbitrary code.
As long as Apple wants people to develop software for its platforms and/or sell to web developers, Android developers, scientific computing, etc. they will not lock down the Mac.
Especially because Apple seems to not care much about the project even after the current progress.
M3 support is still not there (let alone M4) because things broke. Which is expected from Apple; they are just doing their thing and improving their products.
If they cared they would have at least hired these people by now. It wouldn't make a dent in their budget.
You are misreading the comment. It is indicting Apple, not the Asahi team, for not caring. If Apple cared and hired the Asahi folks and provided them with help, they would probably be able to churn out drivers faster.
My wife's 2017 MBP has gotten so dog-slow since Apple dropped support for it that it can't handle more than 3 Chrome tabs at a time now. The reality of Apple products is that the manufacturer sees them as eminently disposable. As early ARM macbooks age, the ability to run something, anything that isn't MacOS will be an asset. Otherwise, they're all landfill fodder.
I have an old google nexus 7 tablet that I recently installed uboot and postmarketos on. I can ssh to it, run X apps over ssh, print directly from it. It's pretty cool.
I also have a really old iPad 2. It works perfectly HW wise, screen, wifi etc. But is effectively a paper weight due to software.
I am logged into it from my old Apple account, that was only ever used for this tablet.
I have the username and password but cannot log in as I don't know the security questions, so I can't reset the device or try to install apps. I even phoned apple but they said there's nothing they can do.
It pains me to just dump a perfectly good piece of hardware.
More than half of "recycled" e-waste just gets exported to developing countries without environmental regulations where it either gets burned or buried.
The only sustainable thing you can do with a bad laptop is fix it or part it out, but for all my years taking apart fragile electronics, is it really worth the effort to take apart a device that was intentionally designed to be difficult or impossible for the owner to repair?
The last few macOS updates have really been killing performance on Intel Macs. Your 2014 is probably safe because it'll still be running an older macOS.
For many people, the Apple Silicon GPU is an interesting problem to solve given that the firmware is loaded by the bootloader and all, and it's actually generally easier to interact with than, say, NVIDIA, while having decent perf. Also, GPUs are really complex beasts involving IP from tons of companies. Would not be surprised if even Apple doesn't have the full schematics...
> and its actually generally easier to interact with than say NVIDIA while having decent perf
I’m pretty sure that Turing and newer work the same way. The drivers basically do nothing but load the firmware & do some basic memory management if I recall correctly.
You're just stating the problem the parent comment was upset about. It's all well and good to state the facts and say "face reality", but in this case we all apparently know that it's a fragile state of affairs.
It doesn't have to be 100% libre. This is about booting any OS you want in the first place.
If you take some random windows laptop off the shelf, it will boot linux (and continue to do so in the future) because they have to support UEFI. If you take a "linux" friendly vendor off the shelf, you may even have coreboot or something on-board.
But with this apple situation there is no guarantee the next generation of hardware won't lock you out, or they might push out an OTA firmware update that locks you out. It's like porting linux to the playstation or something.
I’ll go back a little further: at one point before Apple purchased NeXT, Apple had its own port of Linux, running on the Mach microkernel, called MkLinux (https://en.m.wikipedia.org/wiki/MkLinux).
Oh please. OpenDarwin lasted what, 2 years? The people running it realized their efforts were merely going towards enriching OSX, it was not a truly open source effort.
If people wonder why some of us don't like Apple, this is the fundamental philosophy why. It's not about the M series, it's been their modus operandi since time immemorial. It's like if Microsoft owned x86 and nobody could run anything on it but Windows. And people would like it because it's a "cohesive ecosystem" or whatever.
I'm not sure that's really the same thing. Apple doesn't own ARM and the main issue here seems to be the GPU no? Is this much different from how things work with Nvidia? I guess the difference is that Nvidia provides drivers for Linux while Apple does not. As far as I know Nvidia Linux drivers aren't open source either though.
The point is that apple acts as both the source of hardware and software. Your analogy is not applicable because you can't run apple's OS on generic third-party ARM hardware.
But isn’t this whole thread about running Linux on Apple hardware? I haven’t seen anyone in this thread complaining that they can’t run macOS on non Apple hardware.
Nvidia is not much better, but they do only make one component and generally ensure compatibility. If Nvidia went full Apple, their cards would have a special power connector for the Nvidia PSU and a custom PCIe slot that only works with Nvidia motherboards, which also require Nvidia RAM sticks and only boot NvidiaOS. And most of the software that would run on it would be blocked from running on other OSes, because fuck you, that's why. Also, if you tried running NvidiaOS in a VM, they would sue you.
It's still profoundly weird to me that nobody can run Safari outside macOS, even for testing. At least the EU has strong-armed them into dropping Lightning in favour of USB-C now, so we have that minor interoperability going for us, which is nice.
You left out that they would also cost ~double per unit of performance. And that when Nvidia claims to be better for graphics and video, they can back those claims (albeit unfairly, some might say), whereas Apple marketing appears to avoid any price/value comparisons. So, I guess, even when you're dressing Nvidia up to sound ugly for a hypothetical, they still sound better than Apple.
Are we living in the same world? Nvidia only recently started caring about Linux (due to profit obviously, it turns out servers don't run anything else nowadays).
May I remind you of the famous `--my-next-gpu-wont-be-nvidia` flag in a Linux compositor? Meanwhile, Apple literally went out of their way to make secure boot for third-party OSes possible.
Conversely, Nvidia provides first-party Linux support for most of the hardware they sell, and Apple goes out of their way to make booting third-party OSes on the majority of hardware they sell (read: all non-Mac devices) all but impossible.
You think that's bad? Imagine how much churn there is because NVIDIA doesn't have open source drivers. I'll actually do you one better: part of my PhD was working around issues in Vivado. If it were open source I could've just patched it and moved on to real "science" but that's not the world we live in. And I'm by far not the only "scientist" doing this kind of "research".
Yeah, I agree. I do respect the effort itself, but it always feels like a huge waste of talent. Imagine what could have been created with all that time and energy.
I believe the best solution to proprietary hardware is not to buy it in the first place. Buy from vendors who are more open/supportive, like Framework and ThinkPad. This helps those vendors keep supporting open source.
A recurring theme you'll encounter across most of Apple's products is that any feature that forces first-party Apple software to compete on fair terms with other products is conspicuously missing.
I'm not familiar with this. Suppose Apple released docs under an "as is" type disclaimer like is so common in the open source community: would doing so potentially come back to bite them?
I get what you mean. I'm glad that they're doing this; it's great that the best laptop hardware is going to run Linux before long; it's a fun endeavor -- but when you zoom way out and take the philosophical view, yeah, it seems silly that it should be necessary, in the same that way it feels absurd that impressive and courageous feats in battle should have actually needed to happen.
Is it really a fair comparison? Apple has a proper bootloader capable of secure-booting 3rd party OSes. What part of the open-source ecosystem was built differently?
It just so happened that, after possibly even more painstaking reverse engineering, the responsible hardware vendor later realized that servers de facto run Linux, and they'd better support it properly. Like, Intel chips working that well was not achieved any differently; we just like to wear rose-tinted glasses.
1. Even if one loves macOS, Apple doesn't support its hardware forever. Being able to run an alternative operating system on unsupported hardware helps extend that device's useful life. My 2013 Mac Pro is now unsupported, but I could install Linux on it and thus run up-to-date software.
2. Some people want to use Apple's impressive ARM hardware, but their needs require an alternative operating system.
if you want an arm laptop with incredible specs, incredible build quality, incredible battery life and incredible performance that runs linux what other option is there?
Yeah, the M4 is apparently the fastest CPU on single-core benchmarks. If you want a fast laptop, you have to get it. Not being forced to use Mac OS would be nice.
This is not a comparable user experience to running Linux natively for a variety of reasons, but the most obvious one is the relatively limited graphics acceleration. That's pretty important for a variety of use cases!
It is an inspirational demonstration of the hacker spirit and a way for the individuals involved to both expand their technical abilities and demonstrate them to prospective employers.
I personally consider it very inspirational though I recognize that I will probably never be able to undertake such a difficult task. I can imagine that it is very inspirational to the next generation of extremely proficient and dedicated teens who want to master software development and explore leading edge hardware.
The negative comments in this thread are frankly disappointing, especially for a place called "Hacker News". Like Linux doesn't have roots in reverse engineering and continued reverse engineering, and like people here aren't constantly "advocating" for open source drivers from the likes of Nvidia instead of the closed-source binary blobs.
Yet here someone makes a great effort and most comments are negative Nancys asking why it's being done, or bringing up support issues with newer hardware revisions, aimed at a 1-3 person outfit doing something everyone said would be impossible.
Godspeed to the Asahi team, but as much as I envy the performance and efficiency of Apple silicon, I could never depend on a small group of hackers to reverse engineer every part of a closed system and to maintain it in perpetuity so that I can run free software on it. As brilliant as this team is, and as much progress as they've made, fighting against a trillion-dollar corporation that can decide to shut it down at any moment is a sisyphean endeavor. Spending thousands of dollars on that bet is a hard sell, even for tech nerds.
Not to mention that you'd be supporting a corporation that has this hostile stance towards their customers to begin with.
Meanwhile, other x86 and ARM manufacturers are making substantial improvements that are shortening Apple's lead. You're not losing much by buying a new CPU from them in 2024 or 2025, but you gain much more in return. Most importantly, the freedom to run any software you choose.
Aren't tons of Linux drivers for x86 laptops based entirely on reverse engineering? Maybe even most of them? I haven't used Linux seriously in almost two decades, but that's my memory.
Most of the x86 platform (ACPI) is well defined and openly accessible (not free but open).
There are still some offenders (Surface, HP, Broadcom) that introduce quirks that break sleep and some HID accessories, but most of it works out of the box.
ARM has been the Wild West for a while but they’re going in the right direction with device trees et al. Apple however doesn’t have to care about the “wider” ecosystem since they left x86 for their own silicon and tighter integration from bottom up allows for some really nice end user experience.
I still think it’s much better to use the VM infrastructure and just run Linux as a guest. Mac as a platform is very end user friendly as-is unlike Windows.
Device Trees are becoming an old thing now. With ARM SystemReady, most devices need to have UEFI, SMBIOS and ACPI. Only the SystemReady IR (IoT) variant is defined as using Device Trees.
Microsoft is the only one pushing for ACPI on ARM, but as you said they have hefty weight in that area. I don't think it is right for the platform, but if it works who am I to say.
You've been out of the game for too long; almost every major hardware vendor has at least one or two people that are regularly submitting patches to the Linux kernel. My ThinkPad work computer running Linux is a major thing of joy; in many ways it performs more reliably on Linux than it does on Windows.
Even so, there is no lack of issues that simply aren't cared about. E.g. thermal throttling on ThinkPads is a very annoying problem whose best solution is simply a Python script that periodically overwrites a memory location.
I'm not sure if "tons" is accurate, but some of them are, yes. And most of them are not great IME. Not discrediting the talented programmers who work on them, it's just the nature of supporting hardware in the dark, without any support from the manufacturer. Though this was more common during the early days of Linux, and nowadays many manufacturers offer some kind of support.
OpenChrome far exceeded Unichrome Windows drivers in performance. But things have changed. Modern engineers prefer “official” software. I understand why. Systems are more complex now.
While I don't feel I have enough information to comment about the likelihood that Apple would try to stop the Asahi project, those who are knowledgeable are of the opinion that they would not.
However, as a Mac Studio M1 owner that has used Asahi as a daily driver for software development since the first release (originally Arch, later Fedora), I can confidently say that I couldn't care less. By running the software I want to run far faster than macOS could on the same hardware, Asahi has saved me countless hours and made me far more productive. And I'm incredibly grateful for this tangible benefit, regardless of what happens in the future.
You claim that your apps are faster on Linux on an M1 than macOS on the M1; can you add more detail? Which apps, and did you run benchmarks? I find it hard to believe that Apple hasn't optimized apps to be faster on their own OS and hardware.
I suspect this is primarily due to Linux being a more performance-optimized OS compared to macOS, which seems to have introduced a great deal of bloat over the years.
They won't stop Asahi with a frontal assault, they'll stop it by churning out new chips every year until the work to support them all is unsustainable.
Not sure why you are being down-voted. We're already seeing this with the team saying they won't work on M3 yet because they aren't even close to done with M2...
How is that Apple's fault, or any form of "deliberate attack"? Like, come on, neither of the parties is malicious, especially not for the sake of it.
> As brilliant as this team is, and as much progress as they've made, fighting against a trillion-dollar corporation that can decide to shut it down at any moment is a sisyphean endeavor
Apple historically cares very little about Linux on Mac whereas it seems like you’re talking about the non-Mac product lines. Indeed, they go out of their way, if I recall correctly, to make it possible and easier in the first place.
I wouldn't describe leaving the bootloader unlocked as "going out of their way" to make all of this possible. Clearly, if just booting another kernel would be sufficient, running Linux on their machines should be easy. Yet none of this is. "Going out of their way" would at the very least be providing documentation and support so that reverse engineering their hardware wouldn't be necessary.
Also, what's to stop them from deciding to lock the bootloader just as they do on all their other devices? What does Apple practically gain by leaving it unlocked? If they're doing it as a gesture of good faith to their users, they're doing an awful job at it. Doing it so they can sell a negligible number of machines to a niche group of hackers also doesn't make business sense.
Depending on the good will of a corporation that historically hasn't shown much of it to the hacker community is not a great idea.
The changes made were not as simple as not-setting a lock fuse bit. Making the bootloader unlockable in a way that didn't compromise their existing security model did require going out of their way. The status-quo for previous "apple silicon" bootchains (iphone, ipad, etc.) was not built this way.
Even T2 macs had no way to boot custom firmware on the T2 chip, without exploits.
Sure, they could've done way more, but evidently they'd rather not lock down the platform completely.
If they don't release that code to the public, what good does it do? (Also, if they are only doing a temporary in-house version for initial hardware work, they can do all kinds of ugly hacks that wouldn't really be good for upstream use any anyway.)
Don't let worries about future hardware get in the way of using gear that works in the present. If future Macs aren't supported for some reason, that doesn't break your current hardware, and you can buy different hardware next time.
There are people running Linux on abandoned hardware from companies that went out of business, and that's okay.
I don't see any difference. This "vote" is a rounding error unless lots of people do it, which is not going to happen without organization. The effect on you matters and the effect on the vendor is ignorable.
Indeed, the Apple Silicon hype stemmed from the terrible Intel gen at the time (unlike AMD), and because Apple monopolized the TSMC N5 node for months.
Most of that lead is gone, x86 is cheaper, open, and as battery friendly now.
It's also not just the CPU: the laptops themselves are simply phenomenal. I have never had a x86 laptop that's this much of a genuine pleasure to use (ignoring macOS, which is my least-favourite operating system). Some of the newer Windows laptops are ahead, but they're still worse in at least one metric.
My AMD Ryzen Framework Laptop 13 running Ubuntu 22.04 is a joy to use. Huge difference between that laptop and the first generation Intel Framework Laptop 13.
Sadly, this is the exact reason why I hold back trying Asahi and run the chance of liking any of it :-(
Recently I saw someone wondering why no one has tried building a laptop with as much quality as Apple's. A special version of Linux to run on such a laptop would offer more long-term commitment and maybe pull in more adoption.
I am genuinely looking forward to a Dell XPS 13 or Lenovo X1 Carbon which is fanless and has the battery life and performance of the Apple MacBook Air.
What is the difference in battery life between Linux and macOS on the Apple M1?
That is, I would be surprised if Linux on the M1 had close to macOS levels of battery life. My theory being that the better battery life on the M1 is due more to the tight integration between the OS and the hardware power profiles than to the power profiles themselves.
I’m sure Apple has some unique tricks when it comes to energy efficiency, but I haven’t seen the same level of optimization in other operating systems. Apple’s energy management is just another competitive advantage, offering a level of sophistication that sets it apart technically and strategically. Just add the Mx chips to the equation.
Honestly, I don't think this is true. I had identical efficiency on my Intel MacBook Air running Linux, compared to OS X. Both ran out of battery +/- 10 minutes of each other on the same hardware.
The only distinct advantage is Safari, which is heavily optimized for efficiency. But a lightweight Linux desktop, with fewer active services, can compensate for that.
I'd be surprised if MacOS could match the efficiency of Linux. MacOS relies on a hybrid kernel architecture that emulates a variety of different APIs that aren't used or integrated fully. The simple act of running software on MacOS is deliberately somewhat inefficient, which is a perfectly fine tradeoff for a desktop OS that's not intended for server or edge applications.
The fundamental hardware of Apple Silicon is very efficient, but I don't think macOS is inherently optimized any better than the others. My experience installing Linux on Intel and PowerPC MacBooks tended to increase their battery life quite noticeably.
Well, if in 2021 you took your MacBook Air M1 (8GB) out on a Friday, downloaded movies, watched them, browsed the internet, did some casual development, and came back late Sunday without needing to charge it, I’d be impressed.
It's the hardware that's the problem, not Linux support. Simply, the hardware manufacturers don't make fanless, thin, light, performant, power efficient laptops.
Yup, that's a good start! It proves that a company other than Apple can do something fanless. Probably they're plastic-y, but they are thin, light, and fanless. Power efficiency and performance are likely not good, but at least Google doesn't deliberately obfuscate their hardware like Apple does. Instead, they just let everything that's not ChromeOS fester, since they're trying to make money. But anyone who wants to start a business selling Alpine on Chromebooks can ;)
Exactly this. All that work, for reverse-engineering a handful of laptops made by a company whose only desire is to lock its users into its ecosystem and ergonomics. Even more demotivating is that the M1 and the M2 are already superseded.
Similarly, I completely do not understand the popularity of Apple's laptops in this community. Endemic mindless fanboyism.
I’m with you in spirit, but most of the work has already been done. Purchased hardware won’t change. Firmware updates could be held if needed as well. Another team could take a crack at it.
Worst case you restore macOS and possibly sell at a medium loss. That said, I'm still waiting.
> other x86 and ARM manufacturers are making substantial improvements that are shortening Apple's lead
x86 has fundamental issues that I believe prevent it from ever achieving the MIPS per watt efficiency of anything from ARM. I mean... the newest M4 laptop will have a 24 hour battery life. That exceeds anything remotely possible in the same laptop form factor but with x86 by nearly an order of magnitude.
So now you're talking just ARM. Linux has been compilable on ARM for a while now, so where are the competing ARM laptops that are anywhere close to the power of Apple's version of ARM?
I do get what you're saying though (I'm definitely a Linux fan and have a Linux Framework laptop), but I wish it wasn't an x86 laptop because its battery life is crap (and that is sometimes important).
> the newest M4 laptop will have a 24 hour battery life. That exceeds anything remotely possible in the same laptop form factor but with x86 by nearly an order of magnitude.
Honestly, why is this such an appealing feature? Are you often away from an outlet for 20+ hours?
I use 6+ year old laptops that last 4 hours at most on a single charge, even after a battery replacement. Plugging them in every few hours is not a big inconvenience for me. If I'm traveling, I can usually find an outlet anywhere. It's not like I'm working in remote places where this could even be an issue.
Then there's the concern about fan noise and the appeal of completely silent computers. Sure, it's a bit annoying when the fan ramps up, but again, this is not something that makes my computers unbearable to use.
And finally, the performance gap is closing. Qualcomm's, AMD's and Intel's latest chips might not be "M4 killers" (or M3, for that matter), but they're certainly competitive. This will only keep improving in 2025.
It's not that these are must-haves: it's that it removes any such anxiety about these to begin with. I can take the laptop with me wherever I'm going, and not have to worry about charging it, or that the heat will make it unbearable to use on my lap, or that the fan will be noticeable. It means I can work at a cafe, in a car, on a bus, on a train, on a flight without power, and not have to worry.
And these things compound, as the other poster mentioned: 24 hours of light use means longer heavy use, which actually does matter to me. I often move around while using the MacBook because it's good to change up my (physical) perspective - and it means I can do that without the charger while hammering every core with rustc.
Once you see that better things are possible, it's very hard to go back to the comparatively terrible performance-per-watt and heat generation of equally powerful x86 laptops.
> Once you see that better things are possible, it’s very hard to go back
Yeah, there’s something a bit freeing about being able to go all day or more without charging. Just not needing to think about power or charging when you’re busy focusing on other things.
I’m glad other manufacturers got a bit of pressure to catch up as well. Now people come to expect laptops to just run for days at a time without charging.
With all due respect I think this is a "640k is enough for everyone" problem, in the sense that you don't realize what something enables because you're simply so used to not having it:
1) Internet cafes that removed outlets to encourage laptop people not to squat :)
2) Airports where you can sit anywhere you want instead of just near an outlet
3) Airplanes when the power doesn't work (has happened more than once in my experience)
4) Cars, trains, subways, buses
5) My boat, sometimes I like to work from it for a change of pace
6) Don't have to hunt for an outlet when my fam is at the grandparents' house
7) I can work on my deck without dragging a power cord out to the table
8) I can go to a pretty overlook and watch the sunset while working
9) Conference rooms, don't have to deal with the hassle
10) Libraries, same thing. I can sit outside or in the stacks (quieter there) instead of only in the reading room (those tables are the only ones with power, in my local library)
11) Power outs and just other situations where you lose power for a time
12) It's extra juice to power your phone off of (or anything else)
Over years, I've worn laptops (with sealed-in batteries) down to three-ish reliable hours. There are never enough power outlets (or AC vents or even seats at the table) for a big meeting. That's a problem for a very long meeting format like a war room or a promo committee.
Tech companies wire rows of desks for laptops and big monitors, but I think it'd be hard to find a meeting room where you could safely plug in more than a dozen 140 W chargers.
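Rough math, assuming a standard 15 A / 120 V office circuit with the usual 80% continuous-load derating:

    12 chargers x 140 W  = 1,680 W
    15 A x 120 V         = 1,800 W peak, ~1,440 W continuous

So a dozen full-tilt chargers would already overload a single circuit on their own, before counting the monitors and everything else plugged into the room.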
> so where are the competing ARM laptops that are anywhere close to the power of Apple's version of ARM?
Better question: where are the incentives for them to make it? Apple is pretty much the only company with an outstanding architectural license to design ARM cores, and the best off-the-shelf ARM core designs don't even compete with 5-year-old x86 ones. If you're a company that has Apple-level capital and Apple-tier core design chops, you might as well embrace RISC-V and save yourself the ARM licensing fee. That's what Nvidia does for many of their GPU SOCs.
If SoftBank offered ARM licenses under more attractive terms, there would be genuine competition for good ARM CPUs. Given that Apple has a controlling stake in SoftBank, I wouldn't hold out hope.
> Apple is pretty much the only company with an outstanding architectural license to design ARM cores
Many other companies have done this to great effect in the past, but in recent years it has become more common to just license one of ARM's own stock cores, instead of designing your own from scratch.
This follows a period where companies like Qualcomm and Samsung were still trying to roll their own core designs from scratch, but ended up with designs that were slower and less power efficient than the cheaper stock cores you could license from ARM.
Apparently Qualcomm's new ARM laptops are pretty close.
However I think ARM platforms tend to be way less open source friendly than x86, at least on mobile. Maybe the laptops are better because they have to run Windows and Microsoft probably wouldn't put up with the shit vendors pull on Android. I don't know.
What they've been able to accomplish in such a short time is nothing short of amazing, and I applaud them for their efforts.
That said, I've been using Asahi for a month, and I'm ditching it. Maybe in a year or two it'll be stable, but for now it's got too many bugs and unsupported features. A lot of the problems come down to Wayland and KDE/Gnome, as you literally have to use Wayland. But there's plenty of other buggy or challenging parts that all add up to a very difficult working experience.
One of the biggest challenges I see is support for hardware and 3rd party apps. Not only do all the apps need to support this slightly odd Arm system, but so do hardware driver developers. I never realized before just how much of a Linux system works because most people had an incredibly common platform (x86_64). Even if Linux on Mac became incredibly popular, it would actually take away development focus on x86_64, and we'd see less getting done.
(This kind of problem is totally common among Linux laptops, btw; there's a ton of hardware out there and Linux bugs may exist in each one. Adding a new model doesn't add to the number of developers supporting them all. If anything, the Mac is probably benefited by the fact that it has so few models compared to the x86_64 world. But it's still only got so many devs, and 3rd party devs aren't all going to join the party overnight)
Yeah, I can definitely see this being an issue going forward for quite some time. The existence of non-Apple ARM devices should hopefully lead to general interest in addressing these issues, but there's so much hardware and software out there, and only so many devs with the time, interest and access to fix them.
On the other hand, I suspect people will start making choices for their hardware/software that maximise compatibility, as they already do for Linux x86. ("Don't buy NVIDIA if you want functioning Wayland", etc.) It'll be tough, but things will hopefully get better over time.
I don't get why you were downvoted to oblivion. A perspective from someone who actually used Asahi is very valuable, so thanks for sharing.
You're definitely right that having a usable system is not just about supporting first-party hardware. Linux on its own is a huge mess of different components that all somehow need to work together, and it's a miracle of engineering that it works as well as it does, even on well-supported hardware. I can't imagine how difficult it must be getting all of this to work on hardware that requires reverse engineering. It seems practically impossible to me.
On HN downvotes are usually because of disagreement. OP's experience doesn't match mine: I have used Asahi for quite a bit longer than OP and I have experienced no serious bugs.
But then again, I only use software that's available in the distribution or that I can compile myself. So naturally I don't deal with incompatible third-party software.
That's great. Is your experience somehow more valid then?
Downvoting because of disagreement is asinine to begin with. Burying opinions that contribute to the discussion does nothing but perpetuate the hive mind.
I haven't listened to this podcast yet so I don't know if this comes up, but a particularly scary part of running a custom OS on Apple Silicon machines is that the internal speakers temperature is regulated in software. The Asahi devs have had to painstakingly reverse engineer and reimplement the safety DSP that macOS uses on each device, and add some safety margin, because if they get it wrong they could literally destroy the speakers (and IIRC at least one of their own MacBooks did have its speakers sacrificed along the way).
I wonder if there will be a similar issue with the displays when Asahi gets around to supporting HDR on machines equipped with FALD mini-LED backlights (or XDR, as Apple calls it). HDR displays usually regulate their brightness to keep the panel from getting too hot, and if Apple does that in software too then Asahi will need to replicate it.
As someone who builds speakers commercially, there aren't any devices that will actually monitor a speaker's temperature. I think what you mean is perhaps, wattage? The speaker's temperature arises from the coil, which is what moves to produce sound. In large speakers, I've seen manufacturers attach some sort of sensors pointing at the moving coil to produce an estimate, but for the speakers used in the Macbooks, they are so thin that this would be a near impossible feat. Maybe Apple uses an estimation from the DSP based on the watts supplied vs time. But, certainly, you cannot attach any sort of device to the coil without making it distort or non-functional.
I know that Apple like to supply the speakers with slightly more power (Watts) than they can deal with to get them loud, but I haven't seen nor heard of this temperature monitoring so far. I couldn't find anything related to it either. Please share citations, if you have.
The temperature is modelled based on how much power is put through it. Power, not temperature is measured.
https://github.com/AsahiLinux/speakersafetyd has a readme and the code
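To make the power-based approach concrete, here is a minimal, made-up sketch of the general idea (a first-order thermal model driven by input power, plus a limiter). It is not speakersafetyd's actual model, and every constant is an illustrative placeholder:

```python
import math

# All constants here are assumptions for illustration; real values come from
# per-model calibration data shipped with the driver.
AMBIENT_C = 35.0      # assumed chassis temperature
TAU_S = 10.0          # assumed thermal time constant of the voice coil
DEG_PER_WATT = 40.0   # assumed steady-state heating per watt of input
LIMIT_C = 100.0       # assumed safe coil temperature

def step(temp_c, power_w, dt_s):
    """Advance the coil temperature estimate by dt_s seconds at power_w."""
    target = AMBIENT_C + DEG_PER_WATT * power_w
    return temp_c + (target - temp_c) * (1.0 - math.exp(-dt_s / TAU_S))

def allowed_gain(temp_c):
    """Back the gain off linearly once the estimate gets within 20 C of the limit."""
    return min(1.0, max(0.0, LIMIT_C - temp_c) / 20.0)

temp = AMBIENT_C
for requested_power in [0.5, 2.0, 4.0, 4.0, 4.0, 1.0]:   # watts, per 100 ms block
    g = allowed_gain(temp)
    temp = step(temp, requested_power * g * g, 0.1)       # power scales with gain squared
    print(f"coil ~{temp:.1f} C, gain {g:.2f}")
```

The point is just that the temperature is inferred from power over time rather than measured; the real daemon models more thermal stages per speaker and is calibrated per machine.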
I don't know what apple is doing, but in theory, it should be possible to measure ohmic resistance of the coil, and infer temperature via PTC coefficient of coil material. The effect isn't very big at temperature differences you could tolerate in a loudspeaker coil though (~0,5%/K).
You're probably right. I looked at Neumann ("Independent soft clip, peak and thermo limiters for woofer and tweeter; Woofer excursion limiter; thermo limiter for the electronics and amplifiers") and Genelec (https://www.genelec.com/key-technologies/protection-circuitr...) for info - as they're the reference in the monitoring world - and my guess is that it's implemented like Apple, via some simulation function from time-integrated wattage and frequency to bool (limit on/off).
Apple isn't the first to use advanced limiters to get the most of loudspeakers, if I remember well, this trick is what allowed Devialet to make the Phantom what it is.
Which is why loudspeaker measurements should _always_ include something like this: http://0x0.st/XG1y.png
The fact that speaker temperature even needs to be considered is something that boggles the mind. I can't find any other references elsewhere on the Internet. Are you sure it's not DC blocking that they've cheaped out on somehow? It somewhat reminds me of this: https://news.ycombinator.com/item?id=7205759
https://github.com/AsahiLinux/speakersafetyd#some-background...
They do it because it lets them drive the speakers much louder than they could do safely with a simple power limiter. Apparently there are amplifier chips which can do that smart regulation internally but Apple decided to do it in software for whatever reason. Maybe they can squeeze out more sound quality with a more complex DSP that way, I doubt it's cost-cutting given their margins.
Certainly I can see the appeal of implementing control in the silicon I own rather than buying a vendor's chip and having to deal with their supply chain, their sales team, their space claim on my board, etc.
It's probably a combo of cost-cutting and control. If you use a hardware chip, the chip itself only costs cents, but space is at a premium, and there are assembly costs and reduced flexibility (you have to lock in your design well in advance).
Cost-cutting, control, malicious compliance, and more importantly, plausible deniability. Apple has been caught sabotaging third-party repair multiple times and this sort of design is totally in line with that strategy. Making the design overly complex and fragile (and spending extra $$$ in the process), just to make it harder to correctly use, seems to be a common theme. They will of course come up with plausible arguments why their design is "better", but in reality it's only better for them.
https://news.ycombinator.com/item?id=36926276
https://news.ycombinator.com/item?id=24955071
> Apparently there are amplifier chips which can do that smart regulation internally but Apple decided to do it in software for whatever reason.
> Apple has been caught sabotaging third-party repair multiple times and this sort of design is totally in line with that strategy.
Using fewer specialized hardware chips to dynamically regulate speaker temperatures and implementing it in software seems like it would be more repair-friendly, not less. Unless you meant this in some more general sense that doesn't include repair, per se?
I've worked with PC laptop manufacturers before, and they do exactly the same thing. In every case, it's either "how can we take really cheap speakers with limited placement options and make them sound decent, to save costs", or "how can we take slightly better speakers and slightly better placement options and make them sound like surround sound so our audio can be a selling point".
It doesn't require malice or conspiracy to wind up with a closed proprietary design that makes up for cheap, flawed hardware with software workarounds. It's the default unless you hold the opposite as a key value and make a deliberate effort to achieve it.
Yeah - the same thing has been happening for well over a decade with phone cameras. I bought a fancy full-frame DSLR recently and the image quality is incredible. But the sensor on my camera is 30x larger than my phone's sensor (or something like that). And good lenses are massive - my arms get a workout from shooting.
It really puts into relief how weird it is that we get such good quality out of phone cameras. They’re almost as much generative AI images as they are actual photos.
Of all of those reasons, malicious compliance and plausible deniability seem the least likely to me. With and from what, exactly?
I get being upset at Apple for e.g. 30% take on in-app purchases. But what exactly would they be trying to do by making it complex to control their speakers?
> But what exactly would they be trying to do by making it complex to control their speakers?
Making it harder for alternative OSes to use their hardware without accidentally damaging it, further cementing macOS' position as the only OS to be trusted.
It's similar to their war against third-party repair shops by deliberately making it difficult to replace parts, even with genuine ones --- see my second link about the iPhone 12 camera.
This is the automotive equivalent of adding sensors and ECU firmware that detects if the engine is using the manufacturer's proprietary oil, and subtly changing the parameters to decrease power and fuel economy, increase emissions, and/or shorten the life of the engine if it isn't. Then blaming third-party repair shops when customers complain.
> Making it harder for alternative OSes to use their hardware without accidentally damaging it, further cementing macOS' position as the only OS to be trusted.
If Apple wanted to kill Asahi Linux, they wouldn't have had to lift a finger. It's the opposite - Apple engineers have needed to do several small things to keep the door open to easily run 3rd party operating systems on their computers. Remember, a modern Mac has essentially identical hardware to an iPad. Apple has locked down the entire boot process and uses digital signatures all through the boot chain. They actively changed how Macs boot to make sure things like Asahi Linux can run on their computers at all.
I don't think they deserve special praise for that. But it does make it a stretch to claim their speakers were intentionally designed to hurt Linux-on-Mac. If they wanted to stop Asahi Linux, they had plenty of much easier, much more effective ways to do so.
Sounds like "be glad they gave you some bones from the table" instead, you know, the company providing the actual proper means for users to reliably run whatever on the hardware they bought, not just the manufacturers blessed OS.
Sometimes I wonder if it really makes sense to spend so much time doing the work Apple should have done in the first place, with no guarantee it will even work after a firmware upgrade or on the next model.
Spending the same effort on more open platforms or open hardware efforts might be wiser in the long term.
On one hand I agree with you, on the other hand I'm happy they are taking the effort to do this because it will reduce e-waste. When those MacBooks no longer receive updates, they can get a second life thanks to this work.
Yes, I did say in my comment that I don’t think they deserve special praise for making a computer a computer. I have complicated feelings about it all too. On one hand, I wish my Apple devices were more open: I think the App Store tax is anticompetitive abuse of a monopoly. I have an expectation of actually owning the things I buy, and I don’t feel that way about the software ecosystem on my iPhone.
On the other hand, I adore the security and cross-application sandboxing that iOS brings. I wish Linux had more ways to sandbox applications from one another - since it’s clear that “trust all computer programs” is an insanely idiotic policy in 2024. And “implicitly trust all code” is de facto how Linux, npm, cargo, etc all run today.
My comment was in response to the idea that the speakers are maliciously designed to hurt asahi Linux, which just seems obviously wrong given how much power Apple has over their devices. They could completely kill asahi Linux with a flick of the wrist if they ever want to. There’s no way that’s the reason for their complex speaker setup.
Considering how well Apple speakers perform for their size, I would say you’re making a huge leap not supported by evidence.
They make and sell more speakers than any other company on earth and are routinely praised for their quality to size ratio.
> Apple has been caught sabotaging third-party repair multiple times and this sort of design is totally in line with that strategy. Making the design overly complex and fragile (and spending extra $$$ in the process), just to make it harder to correctly use, seems to be a common theme.
This, and the nannying nature of their OS, is why I could never have a Mac as a primary machine. I'm always slightly mind-boggled at the number of technical people who do. But then I guess many just live in an IDE, which works fine.
Apple has a very long history of implementing hardware drivers "in software". Steve Wozniak famously did this for a floppy disk drive for one of Apple's early computers.
I think it's just a different, integrated approach to hardware and software development. If you're doing things custom anyway, then why add an extra chip?
> Why add an extra chip?
Because:
1. Software is more likely to fail at protection, with worse consequences when it does (fire, damaged goods, warranty claims). Not just now, but also in future updates.
2. It eats away at the resources that are intended for the user. In other words: it makes the machine slightly slower, for no good reason from the user's point of view.
3. A separate chip can do things that are impossible in laptop OS software: it gives redundancy (even if the OS freezes you can still make sure the speaker doesn't overheat), real-time behaviour, etc.
4. It makes the OS and drivers much, much simpler, which is important if you want to support other OSes on the same laptop.
Advantages, for Apple, of doing it in software:
1. A software upgrade is easier and cheaper (assuming it never, ever fails).
2. It's cheaper overall.
3. You can keep competing OSes off your hardware, because it's too hard to write drivers for your secret-sauce, closed-source functionality that includes "preventing parts from frying themselves".
I wish they had a hardware power limiter, because I am pretty sure a buggy Boot Camp audio driver damaged my MacBook Pro speakers by overdriving them.
Also I note that it took them more than a decade to fix the bug where left-right balance would drift.
This was indeed a known problem with the 2016 MacBook Pro: the driver initially liked to destroy the speakers.
The control loop is handled by the OS? What if the (relatively complex and therefore much more likely to crash) OS crashes? Why would there not at least be some kind of basic thermal throttling implemented in hardware as a fallback? Oh wait, it's Apple, never mind.
It's managed by the Linux kernel communicating with a userspace daemon (speakersafetyd). If userspace crashes or the daemon is too slow, the kernel can still fall back to a ridiculously low limit that will not damage the speakers for any audio. If the kernel crashes, well, you get no audio in that case. IIRC the reason they couldn't do it completely in the kernel is that the temperature model uses floating point, which is not allowed in the kernel.
> If the kernel crashes, well, you get no audio in that case.
More likely to be repeating audio, whatever was last in the buffer.
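As a rough illustration of the split described a couple of comments up (userspace computes the allowed level, the kernel enforces a conservative fallback if the daemon goes quiet), here is a hand-wavy sketch of the daemon side only. The sysfs path, timing, and values are invented for illustration and are not the real interface, which lives in the kernel audio driver plus speakersafetyd:

```python
import time

CEILING_FILE = "/sys/kernel/illustrative/speaker_ceiling"  # hypothetical path
PERIOD_S = 0.25   # the daemon must re-assert the ceiling at least this often;
                  # on each write the kernel side would (re)arm a timeout and,
                  # if it expires, clamp to a conservative built-in default.

def compute_ceiling() -> float:
    """Placeholder for the thermal model: return the gain the model allows."""
    return 0.8

while True:
    try:
        with open(CEILING_FILE, "w") as f:
            f.write(f"{compute_ceiling():.3f}\n")
    except OSError:
        pass  # if we can't write, the kernel's timeout restores the safe default
    time.sleep(PERIOD_S)
```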
A floating to fixed conversion is possible but would take some painstaking numerical analysis.
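For anyone curious what "fixed point instead of floating point" looks like in practice, here is a toy Q16.16 example. The mechanics are easy; the painstaking part the comment refers to is proving that rounding error stays bounded across the whole model:

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS            # Q16.16: 16 integer bits, 16 fractional bits

def to_fix(x: float) -> int:
    return int(round(x * ONE))

def fix_mul(a: int, b: int) -> int:
    return (a * b) >> FRAC_BITS  # renormalise after multiplying two Q16.16 values

# One step of a first-order low-pass (the same shape as a thermal model):
#   y += alpha * (target - y), done entirely in integers
alpha = to_fix(0.01)
y, target = to_fix(35.0), to_fix(80.0)
for _ in range(1000):
    y += fix_mul(alpha, target - y)
print(y / ONE)  # converges towards 80.0, give or take rounding error
```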
In the late 2000s I remember hearing of PC laptops which if left in the BIOS setup would overheat and shut down due to the fan control being exclusively done by OS drivers, so this sort of "planned fragility" isn't exclusive to Apple.
I had one such laptop. It was always a crapshoot whether Windows updates (the big ones where it rebooted one or more times) would succeed or not before the laptop overheated and shut down. The reason was that the BIOS didn't do any power management and would always run the CPU at maximum power with the fans blowing like a jet engine. It was only once Windows (or Linux) was fully booted that they would take over and do proper power management. But the Windows updates were so slow that they would often spend too much time in that pre-boot environment leading to overheating and shutdown.
How do you know it was intentional?
No one thinks "let the fans stay off and overheat" is a good idea unless they were convinced to do that against their common sense. Other manufacturers' models either had the expected automatic fan control via EC firmware, or defaulted to the fans being on some intermediate speed.
Reverse engineering Apple's flagship M1 silicon - a design lineage that goes back to Apple's early iPhone chips - through ARM-based architectural analysis, and integrating the results into the Linux kernel, is an extreme undertaking. Duplicating the original firmware dump, checking its hex values, then altering it to see if the application is still functional; doing x, y, and z, then seeing if it works inside hypervisor/VM space.
https://softwareengineeringdaily.com/wp-content/uploads/2024...
What would be possible if everyone interested in Linux-on-Mac chipped in $5 each month? I wonder if this could fund a big enough team to make this super-polished.
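A quick back-of-the-envelope, where every number below is a guess rather than anything sourced:

```python
subscribers = 20_000       # assumed number of people willing to chip in
monthly_fee = 5            # dollars per month
cost_per_dev = 12_000      # assumed fully loaded monthly cost per developer

budget = subscribers * monthly_fee
print(f"${budget:,}/month -> roughly {budget // cost_per_dev} full-time developers")
```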
I definitely respect them for everything they're doing, and I really want to like this project, but my very first exposure to it was when someone linked to it on HN. I clicked the link, and the website told me that since it detected I was coming from HN I was not welcome, and it refused to show me the article... it just left a really weird taste in my mouth.
I don't know if they still do that but that's the first thing I think of every time I see Asahi mentioned or I think about giving it a try.
That was the result of a previous thread on Asahi. It was the only time I've been embarrassed by the HN community and its treatment of people contributing to open source.
Link to back story? This is the first I've heard of that.
Just based on the quality of the comments that comes from HN towards the Asahi project, I don't blame them for doing that.
I dunno. To me, the comments seem 99% positive every time this project is discussed here. What are you referring to?
I think there are a small minority of people who consistently want to guess at the identity of Asahi Lina and make comments about her gender.
Anyone who actually cares about Asahi Linux knows who Asahi Lina is. If it is actually intended to be a secret it is one of the worst kept secrets I can think of.
I don't think it's that hard to find out, given the discussion that was already held here on the topic
That discussion, and similar discussions that seem to come up regularly, was probably the reason they don't want HN traffic, and it's unfortunate that the moderators here haven't worked harder to shut that stuff down. The gender of their developers just isn't relevant to the quality of their work.
That's really surprising. I've been on HN a while now and I've never noticed that type of comment. The reason I'm on HN at all is that it reminds me of the "old internet" where people posted links and discussed stuff. You're always going to get someone saying stuff you don't like, but banning all of HN, considering how decent the 99.9% is, is odd. It's not 4chan.
https://news.ycombinator.com/item?id=42014188
Well, you just didn't look hard enough, this is from literally the last thread about Asahi before this one. Pretty much every single post about Asahi Linux on HN has transphobic comments.
Can you find me a single HN comment on this topic that states (or even implies) that the gender of a developer is relevant to the quality of their work?
Because I haven't seen anything like that, so you seem to be attacking a strawthem.
Maybe it has changed since the project was launched, and moderation has been applied. I remember lots and lots of "what's the point?" and "it's a shame your efforts are wasted" sentiments, and "apple is just going to snap their fingers and shut this down one day".
They've already thought about these things and it's still what they chose to work on, so I can understand them blocking a contributing source of those comments.
That being said, I agree the more recent threads have more positivity than negativity so that's good.
Honestly, I'd want to donate to them more for that
It's insane to me that people are working so hard on reverse engineering Apple Silicon. Like, the diagrams are right there in Cupertino. It just seems like such a waste. It's like during some kind of economic depression there are people starving and struggling while a bunch of food and resources are just sitting around not being used. Existential gridlock.
This definitely sucks. I feel similarly about e.g. the jailbreaking community: I appreciate the work they do and at the same time I very much wish it wasn't necessary.
If Apple and other companies like them were a little less greedy we could have far more nice things for free and Alyssa and other brilliant engineers could go work on other great projects. Also if regulators were a little more vigilant and a little less corrupt.
Someday.
To me what makes it suck even more is the fact that Apple has no qualms exploiting FOSS themselves. BSD, WebKit, their new “game porting toolkit”. And look what they provide in return. It’s so gross.
Apple does develop WebKit, Swift, and clang/llvm, as well as other things like open source ML models, but I understand what you're saying.
One important lesson I've learned regarding open source is that companies absolutely love it when you work for them for free.
Something I've learned about Apple is that one of their primary businesses is selling hardware with proprietary software running on it.
Agree, at least WebKit can be used outside of Apple. They still did KHTML dirty though.
Your point about working for free is right on the money. I get that asahi is probably intellectually stimulating to work on, but I couldn’t do it knowing I am essentially enriching a company that doesn’t have public benefit in mind.
Clang can definitely be used outside of Apple, and can even compile Linux. Swift technically can be used anywhere, though it is largely driven by Apple and laughs at backward compatibility.
The people I know at Apple actually do have public benefit in mind. They believe in things like usability, privacy, accessibility, sustainability, etc. They don't want to replace you with intrusive AI (yet). And personally I like Apple's products and am using one right now. Unfortunately all large companies tend to turn into monsters that do their best to grind up resources - natural or human - in order to turn them into money. And the designer of the iPhone regrets turning us into zombies - that was not an intended effect.
> And the designer of the iPhone regrets turning us into zombies - that was not an intended effect.
People were already zombies. They just swapped out television for smart phones.
> companies absolutely love it when you work for them for free
Which is why everyone should AGPLv3 their code.
I have to agree with this take, as much as I appreciate the indelible hacker spirit of jailbreaking closed hardware and making it available to freedom loving individuals... A huge, vocal part of me also feels like the entire effort is truly just empowering Apple to sell more closed down hardware and even the freedom loving nerds will buy it because it's sexy.
There's no getting around the sexiness of Apple hardware. But there's also no getting around how closed it is. You implicitly support Apple's business model when you buy their hardware, regardless of whether you run Asahi or Mac OS.
I think it only fuels the possibility that Apple would open up the architecture documentation where it otherwise wouldn't if you didn't have people diligently reverse engineering it.
Something similar to this happened in the early days of the iPhone, with the iPhone Dev Team. Initially, iPhone "apps" were going to be web pages, but then these reverse engineers came along and developed their own toolchain. Apple realized they had to take action and their "webpages as an app" strategy wasn't going to work.
That's a rather incomplete, revisionist and rose tinted glasses view of the history of native vs web apps in iPhone.
A much more plausible theory is that Apple likes their 30% app store commission from big players.
> https://www.apple.com/newsroom/2023/05/developers-generated-...
"App Store developers generated $1.1 trillion in total billings and sales in the App Store ecosystem in 2022"
People forget the only thing fueling big corps is profit.
Sure, but at the time that Apple made the decision, they had $0.0 trillion in billings and sales.
A decision which changed once, you know, they saw the income potential.
I was there, part of a small community writing apps pre-SDK.
Neither I nor anyone else can promise you it wasn't just a simple $ calculation.
That being said, literally every signal, inside, outside, or leaked, was that apps / public SDK, if it existed meaningfully before release, had to be accelerated due to a poor reaction to the infamous "sweet solution", web apps.
I agree it's logically possible, but I'd like to note for the historical record that this does not jibe with what happened at the time. Even setting that aside, it doesn't sound right in the context of that management team. That version of Apple wasn't proud of selling complements to their goods; they weren't huge on maximizing revenue from selling music or bragging about it. But they were huge on bragging about selling iPods.
Thanks. I appreciate your information. Always nice to know how things started.
I was there, writing apps with the DevTeam's toolchain before Apple ever released theirs. Were you?
Also, I assume you haven't read the Steve Jobs biography, which discusses this and contradicts your point.
One positive outcome of your comment is that it reminded me I still have the 2008 book, "iPhone Open Application Development" by Jonathan Zdziarski. That was a nice walk down memory lane.
https://www.amazon.com/iPhone-Open-Application-Development-A...
> That's a rather incomplete, revisionist and rose tinted glasses view of the history of native vs web apps in iPhone.
As someone who built one of the first web apps featured by Apple, I can say that your view, too, is incomplete and revisionist.
> A much more plausible theory
Theories are not necessary. Apple was very up-front about its trajectory with the iPhone at launch.
Being up-front at launch doesn't prevent changing their minds when looking at world-record revenue.
what makes you think it was set in stone?
> what makes you think it was set in stone?
It was not. But you got contradicted by people who actually remember what happened. It is fairly well documented, and was common knowledge even at the time. Jobs was initially sold on web apps for several reasons, and the state of iPhoneOS 1 and its internal APIs was very precarious and not stable enough for serious third-party development. Again, this was known at the time thanks to the jailbreak community, and it has been explained in detail over the years by people who have left Apple since then, and by Jobs himself in Isaacson's biography.
When they pivoted towards the AppStore model, there was no predicting how big it would become, or even if it would be successful at all. The iPhone was exceeding expectations, but those were initially quite modest. It was far from world-record revenue.
Apple would lock down macOS just like iOS if they could get away with it, so they could take their 30 percent on installed apps.
However since their moat is now filling with European soil this is not something they will attempt at this point IMO.
Apple literally designed a new boot loader for Macs that allows the computer's owner to install and run an unsigned OS without having it degrade the security of the system when booted into the first party OS.
That didn't happen accidentally.
But perhaps it happened not out of the goodness of their hearts, but for unfathomable reasons like warding off antitrust lawsuits.
My guess would be that it was personally advocated for by someone who has enough influence within Apple to make it happen. Possibly someone on the hardware team, as I hear that the people developing the Apple Silicon processors run linux on them while they're in development.
This used to be one of the best things about Apple when Steve Jobs was still running the company: you'd get a bunch of features that a purely profit-focussed "rational business" would never approve, just because Steve wanted them. And I suspect Apple still has some of that culture.
Given that the boot loader design predates any antitrust action against Apple, you'll have to find a different conspiracy theory.
On the internet it seems antitrust law can be used to explain everything. Antitrust actually has a pretty strict legal definition, and not a lot of things fall under it. And if antitrust did apply, it would apply far more to the iPhone.
It would take an outright legal revolution in the definition of antitrust for this to be even a remote possibility, and frankly that is not happening.
Not all lawsuits need to be legitimate. They just need to be plausible and expensive to influence corporate decision-making.
This is tiresome. They cannot lock down the Mac without losing one of its biggest markets, software development. It was mentioned at a WWDC 5 or 6 years ago I think that software developers were the largest of the professional groups using Macs. You can’t have a development environment that doesn’t allow for the downloading, writing, and installation of arbitrary code.
As long as Apple wants people to develop software for its platforms and/or sell to web developers, Android developers, scientific computing, etc. they will not lock down the Mac.
Exactly! It’s like, Apple never budges—until someone reverse engineers it. Maybe Asahi can finally give them a nudge
> It’s like, Apple never budges—until someone reverse engineers it.
Could you give 3 examples of this? Because I cannot think of many.
Especially because Apple seems not to care much about the project, even given the current progress.
M3 support is still not there (let alone M4) because things broke. Which is expected from Apple; they are just doing their thing and improving their products.
If they cared they would have at least hired these people by now. It wouldn't make a dent in their budget.
M3 and M4 support is not there because the Asahi team has a roadmap and sticks to it.
They don't want to leave M1/M2 half-baked before moving on to the next gen that will ultimately support more features.
If you are not happy with the pace go on and contribute, but don't invent false issues.
You are misreading the comment. It is indicting Apple, not the Asahi team, for not caring. If Apple cared and hired the Asahi folks and provided them with help, they would probably be able to churn out drivers faster.
Apple does not want it.
My wife's 2017 MBP has gotten so dog-slow since Apple dropped support for it that it can't handle more than 3 Chrome tabs at a time now. The reality of Apple products is that the manufacturer sees them as eminently disposable. As early ARM MacBooks age, the ability to run something, anything, that isn't macOS will be an asset. Otherwise, they're all landfill fodder.
I have an old Google Nexus 7 tablet that I recently installed U-Boot and postmarketOS on. I can ssh to it, run X apps over ssh, print directly from it. It's pretty cool.
I also have a really old iPad 2. It works perfectly HW wise, screen, wifi etc. But is effectively a paper weight due to software.
I am logged into it with my old Apple account, which was only ever used for this tablet. I have the username and password but cannot log in because I don't know the security questions, so I can't reset the device or try to install apps. I even phoned Apple but they said there's nothing they can do.
It pains me to just dump a perfectly good piece of hardware.
My 2014 MBP is still going strong and can handle more than 3 browser tabs, not sure what's up with your wife's machine but that doesn't sound normal.
You can also trade it in to get it recycled; no one should be straight up throwing computer hardware in the trash.
More than half of "recycled" e-waste just gets exported to developing countries without environmental regulations where it either gets burned or buried.
The only sustainable thing you can do with a bad laptop is fix it or part it out, but for all my years taking apart fragile electronics, is it really worth the effort to take apart a device that was intentionally designed to be difficult or impossible for the owner to repair?
The last few macOS updates have really been killing performance on Intel Macs. Your 2014 is probably safe because it'll still be running an older macOS.
For many people, the Apple Silicon GPU is an interesting problem to solve, given that the firmware is loaded by the bootloader and all, and it's actually generally easier to interact with than, say, NVIDIA's while having decent perf. Also, GPUs are really complex beasts involving IP from tons of companies. I would not be surprised if even Apple doesn't have the full schematics...
> and its actually generally easier to interact with than say NVIDIA while having decent perf
I’m pretty sure that Turing and newer work the same way. The drivers basically do nothing but load the firmware & do some basic memory management if I recall correctly.
Honestly I can only imagine it's because the team enjoys the challenge. Once they get bored/fed up, goodbye Linux on Macs.
If you think you want to run Linux, don't buy hardware from a company that views it as a threat to their business model, simple as that.
You're just stating the problem the parent content was upset about. It's all well and good to state the facts and say "face reality" but in this case we all apparently know that it's a fragile state of affairs.
> If you think you want to run Linux, don't buy hardware from a company that views it as a threat to their business model, simple as that.
Show me any hardware that is 100% "libre"? Even the pinephone itself has plenty of closed source blobs running as firmware.
It doesn't have to be 100% libre. This is about booting any OS you want in the first place.
If you take some random windows laptop off the shelf, it will boot linux (and continue to do so in the future) because they have to support UEFI. If you take a "linux" friendly vendor off the shelf, you may even have coreboot or something on-board.
But with this Apple situation there is no guarantee the next generation of hardware won't lock you out, or they might push out an OTA firmware update that locks you out. It's like porting Linux to the PlayStation or something.
False dichotomy. You know very well there is a spectrum on which Apple resides on nearly the opposite side of pine phone.
Does it?
Seriously, Macs would be so much more attractive if Apple just straight up supported Linux
In a roundabout way, Apple tried this in the x86 era with OpenDarwin, and there was no interest in an Apple-led open source operating system.
Apple does not have a reason to support any other operating system.
Apple engineers do however both officially and unofficially support Asahi, so there's that.
I'll go back a little further: at one point before Apple purchased NeXT, Apple had its own version of the Linux kernel called MkLinux (https://en.m.wikipedia.org/wiki/MkLinux).
Oh please. OpenDarwin lasted what, 2 years? The people running it realized their efforts were merely going towards enriching OS X; it was not a truly open source effort.
If people wonder why some of us don't like Apple, this is the fundamental philosophy why. It's not about the M series, it's been their modus operandi since time immemorial. It's like if Microsoft owned x86 and nobody could run anything on it but Windows. And people would like it because it's a "cohesive ecosystem" or whatever.
I'm not sure that's really the same thing. Apple doesn't own ARM and the main issue here seems to be the GPU no? Is this much different from how things work with Nvidia? I guess the difference is that Nvidia provides drivers for Linux while Apple does not. As far as I know Nvidia Linux drivers aren't open source either though.
The point is that apple acts as both the source of hardware and software. Your analogy is not applicable because you can't run apple's OS on generic third-party ARM hardware.
But isn’t this whole thread about running Linux on Apple hardware? I haven’t seen anyone in this thread complaining that they can’t run macOS on non Apple hardware.
Nvidia is not much better, but they do only make one component and generally ensure compatibility. If Nvidia went full Apple, their cards would have a special power connector for the Nvidia PSU, a custom PCIe express lane that only works with Nvidia motherboards, which also requires Nvidia RAM sticks and only boots NvidiaOS. And also most of the software that would run on it would be blocked from running on other OSes because fuck you that's why. Also if you tried running NvidiaOS in a VM, they would sue you.
It's still profoundly weird to me that nobody can run Safari outside macOS, even for testing. At least the EU has strong-armed them into dropping Lightning ports now, so we have that minor interoperability going for us, which is nice.
You left out that they would also cost ~double per unit of performance. And that when Nvidia claims to be better for graphics and video, they can back those claims (albeit unfairly, some might say), whereas Apple marketing appears to avoid any price/value comparisons. So, I guess, even when you're dressing Nvidia up to sound ugly for a hypothetical, they still sound better than Apple.
Are we living in the same world? Nvidia only recently started caring about Linux (due to profit obviously, it turns out servers don't run anything else nowadays).
May I remind you of the famous `--my-next-gpu-wont-be-nvidia` flag in a Linux compositor? Meanwhile, Apple literally went out of their way to make secure boot for third-party OSes possible.
Conversely, Nvidia provides first-party Linux support for most of the hardware they sell, and Apple goes out of their way to make booting third-party OSes on the majority of hardware they sell (read: all non-Mac devices) all but impossible.
Except for the M-line, where they went out of their way to make it possible in a secure way.
You think that's bad? Imagine how much churn there is because NVIDIA doesn't have open source drivers. I'll actually do you one better: part of my PhD was working around issues in Vivado. If it were open source I could've just patched it and moved on to real "science" but that's not the world we live in. And I'm by far not the only "scientist" doing this kind of "research".
Yeah, I agree. I do respect the effort itself, but it always feels like a huge waste of talent. Imagine what could have been created with all that time and energy.
I believe the best solution to proprietary hardware is not to buy them in the first place. Buy from vendors who are more open/supportive, like Framework and Thinkpad. This helps those vendors keep supporting open source.
A recurring theme you'll encounter across most of Apple's products is that any feature that forces first-party Apple software to compete on fair terms with other products is conspicuously missing.
Blame the lawyers. Any effort to share specs would be an implicit license.
I'm not familiar with this. Suppose Apple released docs under an "as is" type disclaimer like is so common in the open source community: would doing so potentially come back to bite them?
I get what you mean. I'm glad that they're doing this; it's great that the best laptop hardware is going to run Linux before long; it's a fun endeavor -- but when you zoom way out and take the philosophical view, yeah, it seems silly that it should be necessary, in the same way that it feels absurd that impressive and courageous feats in battle should have actually needed to happen.
Is it really a fair comparison? Apple has a proper bootloader capable of secure-booting 3rd party OSes. What part of the open-source ecosystem was built differently?
It just so happened that, after possibly even more painstaking reverse engineering, the responsible hardware vendor later realized that server machines are de facto Linux, and that they had better support it properly. Like, that Intel chip working that well was not achieved differently; we just like to wear rose-tinted glasses.
What's the point of reverse engineering this again?
1. Even if one loves macOS, Apple doesn't support its hardware forever. Being able to run an alternative operating system on unsupported hardware helps extend that device's useful life. My 2013 Mac Pro is now unsupported, but I could install Linux on it and thus run up-to-date software.
2. Some people want to use Apple's impressive ARM hardware, but their needs require an alternative operating system.
If you want an ARM laptop with incredible specs, incredible build quality, incredible battery life and incredible performance that runs Linux, what other option is there?
You could wait a decade
Yeah, the M4 is apparently the fastest CPU on single-core benchmarks. If you want a fast laptop, you have to get it. Not being forced to use Mac OS would be nice.
Just run Linux inside a VM. Problem solved.
This is not a comparable user experience to running Linux natively for a variety of reasons, but the most obvious one is the relatively limited graphics acceleration. That's pretty important for a variety of use cases!
It is an inspirational demonstration of the hacker spirit and a way for the individuals involved to both expand their technical abilities and demonstrate them to prospective employers.
I personally consider it very inspirational though I recognize that I will probably never be able to undertake such a difficult task. I can imagine that it is very inspirational to the next generation of extremely proficient and dedicated teens who want to master software development and explore leading edge hardware.
Some people really like the hardware but can't stand the software, and have the skills to do something about it.
macOS sucks. It does a disservice to the greatest laptop hardware package ever made.
To run Linux on MBPs
It's fun :)
The negative comments in this thread are frankly disappointing, especially for a place called "Hacker News". As if Linux doesn't have roots in reverse engineering and continued reverse engineering, and as if people here aren't constantly "advocating" for open source drivers from the likes of Nvidia instead of the closed-source binary blobs.
Yet here someone makes a great effort, and most comments are negative Nancies asking why it's being done, or bringing up support issues with newer hardware revisions, aimed at a 1-3 person outfit doing what everyone said would be impossible.
This effort is fantastic! I just ignore the drama and appreciate being able to learn something about the intense Apple environment.
Godspeed to the Asahi team, but as much as I envy the performance and efficiency of Apple silicon, I could never depend on a small group of hackers to reverse engineer every part of a closed system and to maintain it in perpetuity so that I can run free software on it. As brilliant as this team is, and as much progress as they've made, fighting against a trillion-dollar corporation that can decide to shut it down at any moment is a sisyphean endeavor. Spending thousands of dollars on that bet is a hard sell, even for tech nerds.
Not to mention that you'd be supporting a corporation that has this hostile stance towards their customers to begin with.
Meanwhile, other x86 and ARM manufacturers are making substantial improvements that are shortening Apple's lead. You're not losing much by buying a new CPU from them in 2024 or 2025, but you gain much more in return. Most importantly, the freedom to run any software you choose.
Aren't tons of Linux drivers for x86 laptops based entirely on reverse engineering? Maybe even most of them? I haven't used Linux seriously in almost two decades, but that's my memory.
Most of the x86 platform (ACPI) is well defined and openly accessible (not free but open).
There are still some offenders (Surface, HP, Broadcom) that introduce quirks that break sleep and some HID accessories, but most of it works out of the box.
ARM has been the Wild West for a while but they’re going in the right direction with device trees et al. Apple however doesn’t have to care about the “wider” ecosystem since they left x86 for their own silicon and tighter integration from bottom up allows for some really nice end user experience.
I still think it’s much better to use the VM infrastructure and just run Linux as a guest. Mac as a platform is very end user friendly as-is unlike Windows.
Device Trees are becoming an old thing now. With ARM SystemReady, most devices need to have UEFI, SMBIOS and ACPI. Only the SystemReady IR (IoT) variant is defined as using Device Trees.
https://github.com/ArmDeveloperEcosystem/systemready-guides?...
Microsoft is the only one pushing for ACPI on ARM, but as you said they have hefty weight in that area. I don't think it is right for the platform, but if it works who am I to say.
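For what it's worth, you can see which description mechanism an ARM Linux machine actually booted with by checking the standard sysfs locations; a small sketch:

```python
import os

# ACPI-described systems expose their tables here...
print("ACPI:", os.path.isdir("/sys/firmware/acpi/tables"))
# ...while device-tree systems expose the flattened tree here.
print("Device tree:", os.path.isdir("/sys/firmware/devicetree/base"))
```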
You've been out of the game for too long; almost every major hardware vendor has at least one or two people that are regularly submitting patches to the Linux kernel. My ThinkPad work computer running Linux is a major thing of joy; in many ways it performs more reliably on Linux than it does on Windows.
Even so, there is no lack of issues that are simply not cared about. E.g. thermal throttling on ThinkPads is a very annoying problem whose best solution is simply a Python script that periodically overwrites a memory location.
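A heavily simplified sketch of the pattern such workaround scripts use (periodically re-asserting a limit register after the firmware resets it). The MSR address and the overall approach are assumptions for illustration, not a copy of any particular tool, and you should not run it blindly:

```python
import struct
import time

MSR_PATH = "/dev/cpu/0/msr"   # requires root and the 'msr' kernel module
MSR_ADDR = 0x610              # assumed package power-limit MSR; verify for your CPU

def read_msr(addr: int) -> int:
    with open(MSR_PATH, "rb") as f:
        f.seek(addr)
        return struct.unpack("<Q", f.read(8))[0]

def write_msr(addr: int, value: int) -> None:
    with open(MSR_PATH, "wb") as f:
        f.seek(addr)
        f.write(struct.pack("<Q", value))

# Capture a value you have already verified to be sane for your machine,
# then keep restoring it whenever the embedded controller clobbers it.
desired = read_msr(MSR_ADDR)
while True:
    if read_msr(MSR_ADDR) != desired:
        write_msr(MSR_ADDR, desired)
    time.sleep(5)
```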
It isn't nearly so dire. The modern solution for this is https://gitlab.freedesktop.org/upower/power-profiles-daemon
It's using D-Bus, how modern can it be? :-)
We are well beyond the days of NDISWrapper, most of the kernel contributions come from hardware manufacturers or integrators
I'm not sure if "tons" is accurate, but some of them are, yes. And most of them are not great IME. Not discrediting the talented programmers who work on them, it's just the nature of supporting hardware in the dark, without any support from the manufacturer. Though this was more common during the early days of Linux, and nowadays many manufacturers offer some kind of support.
OpenChrome far exceeded Unichrome Windows drivers in performance. But things have changed. Modern engineers prefer “official” software. I understand why. Systems are more complex now.
While I don't feel I have enough information to comment about the likelihood that Apple would try to stop the Asahi project, those who are knowledgeable are of the opinion that they would not.
However, as a Mac Studio M1 owner who has used Asahi as a daily driver for software development since the first release (originally Arch, later Fedora), I can confidently say that I couldn't care less. By running the software I want to run far faster than macOS could on the same hardware, Asahi has saved me countless hours and made me far more productive. And I'm incredibly grateful for this tangible benefit, regardless of what happens in the future.
You claim that your apps are faster on Linux on an M1 than macOS on the M1; can you add more detail? Which apps, and did you run benchmarks? I find it hard to believe that Apple hasn't optimized apps to be faster on their own OS and hardware.
In short, all apps run faster - I have some more detail in this blog post (which also includes a link to my original blog post with additional performance details): https://jasoneckert.github.io/myblog/fedora-asahi-remix/
I suspect this is primarily due to Linux being a more performance-optimized OS compared to macOS, which seems to have introduced a great deal of bloat over the years.
They won't stop Asahi with a frontal assault, they'll stop it by churning out new chips every year until the work to support them all is unsustainable.
Not sure why you are being down-voted. We're already seeing this with the team saying they won't work on M3 yet because they aren't even close to done with M2...
Actually, they're saying they're close to getting M3 support, and the big thing that has prevented work on it is the lack of M3 Mac Minis.
How is that Apple's fault, or any form of "deliberate attack"? Like, come on, neither of the parties is malicious, especially not for the sake of it.
I didn’t say it was anyone’s fault. It’s the reality of a closed system.
> As brilliant as this team is, and as much progress as they've made, fighting against a trillion-dollar corporation that can decide to shut it down at any moment is a sisyphean endeavor
Apple historically cares very little about Linux on Mac whereas it seems like you’re talking about the non-Mac product lines. Indeed, they go out of their way, if I recall correctly, to make it possible and easier in the first place.
I wouldn't describe leaving the bootloader unlocked as "going out of their way" to make all of this possible. Clearly, if just booting another kernel would be sufficient, running Linux on their machines should be easy. Yet none of this is. "Going out of their way" would at the very least be providing documentation and support so that reverse engineering their hardware wouldn't be necessary.
Also, what's to stop them from deciding to lock the bootloader just as they do on all their other devices? What does Apple practically gain by leaving it unlocked? If they're doing it as a gesture of good faith to their users, they're doing an awful job at it. Doing it so they can sell a negligible number of machines to a niche group of hackers also doesn't make business sense.
Depending on the good will of a corporation that historically hasn't shown much of it to the hacker community is not a great idea.
The changes made were not as simple as not-setting a lock fuse bit. Making the bootloader unlockable in a way that didn't compromise their existing security model did require going out of their way. The status-quo for previous "apple silicon" bootchains (iphone, ipad, etc.) was not built this way.
Even T2 macs had no way to boot custom firmware on the T2 chip, without exploits.
Sure, they could've done way more, but evidently they'd rather not lock down the platform completely.
Apple does new hardware bring up using the Linux kernel.
If they don't release that code to the public, what good does it do? (Also, if they are only doing a temporary in-house version for initial hardware work, they can do all kinds of ugly hacks that wouldn't really be good for upstream use any anyway.)
Don't let worries about future hardware get in the way of using gear that works in the present. If future Macs aren't supported for some reason, that doesn't break your current hardware, and you can buy different hardware next time.
There are people running Linux on abandoned hardware from companies that went out of business, and that's okay.
No, because from your perspective you are supporting the wrong company. Competition is good for consumers, especially if you're a minority group.
I don't think boycotting Apple because they don't support open source enough is likely to have much effect?
Maybe there are other companies you'd prefer to support, but it's still only going to work if lots of other people buy stuff from them, too.
It's not boycotting, just voting with your wallet ...
I don't see any difference. This "vote" is a rounding error unless lots of people do it, which is not going to happen without organization. The effect on you matters and the effect on the vendor is ignorable.
> Not to mention that you'd be supporting a corporation that has this hostile stance towards their customers to begin with.
Is the "this" in that sentence your previous paragraph of concern that Apple will purposefully break AsahiLinux?
> You're not losing much by buying a new CPU from them in 2024 or 2025, but you gain much more in return
That's just false. Performance-per-watt, AMD, Intel, Nvidia and Qualcomm still get smoked by comparable M-series chips.
On desktops the equation is different because CPUs and GPUs can go ham with the wattage.
Indeed, the Apple Silicon hype stemmed from the terrible Intel gen at the time (unlike AMD), and because Apple monopolized the TSMC N5 node for months.
Most of that lead is gone, x86 is cheaper, open, and as battery friendly now.
From https://news.ycombinator.com/item?id=42016931, it looks like M4 is very much still in the lead.
It's also not just the CPU: the laptops themselves are simply phenomenal. I have never had an x86 laptop that's this much of a genuine pleasure to use (ignoring macOS, which is my least-favourite operating system). Some of the newer Windows laptops are ahead, but they're still worse in at least one metric.
Not to mention Mac laptops are some of the only laptops with a better-than-1080p resolution and a non-16:9 aspect ratio.
Microsoft also has forced makers to drop the old S3 sleep modes which set x86 laptops back decades regarding sleep and power management.
My AMD Ryzen Framework Laptop 13 running Ubuntu 22.04 is a joy to use. Huge difference between that laptop and the first generation Intel Framework Laptop 13.
I don’t notice the fan running on it much at all.
That's an unconfirmed leak. I only compare chips that are available
Sadly, this is the exact reason why I hold back from trying Asahi and running the risk of liking it :-(
Recently I saw someone wondering why no one has tried building a laptop with as much quality as an Apple one. A special version of Linux to run on such a laptop would offer more long-term commitment and maybe pull in more adoption.
I am genuinely looking forward to a Dell XPS 13 or Lenovo X1 Carbon which is fanless and has the battery life and performance of the Apple MacBook Air.
What is the difference in battery life between Linux and macOS on the Apple M1?
That is, I would be surprised if Linux on the M1 had close to macOS levels of battery life. My theory is that the better battery life on the M1 is due more to the tight integration between the OS and the hardware power profiles than to the power profiles themselves.
I’m sure Apple has some unique tricks when it comes to energy efficiency, but I haven’t seen the same level of optimization in other operating systems. Apple’s energy management is just another competitive advantage, offering a level of sophistication that sets it apart technically and strategically. Just add the Mx chips to the equation.
Honestly, I don't think this is true. I had identical efficiency on my Intel MacBook Air running Linux, compared to OS X. Both ran out of battery +/- 10 minutes of each other on the same hardware.
The only distinct advantage is Safari, which is heavily optimized for efficiency. But a lightweight Linux desktop, with fewer active services, can compensate for that.
I'd be surprised if macOS could match the efficiency of Linux. macOS relies on a hybrid kernel architecture that emulates a variety of different APIs that aren't used or integrated fully. The simple act of running software on macOS is deliberately somewhat inefficient, which is a perfectly fine tradeoff for a desktop OS that's not intended for server or edge applications.
The fundamental hardware of Apple Silicon is very efficient, but I don't think macOS is inherently optimized any better than the others. In my experience, installing Linux on Intel and PowerPC MacBooks tended to increase their battery life quite noticeably.
Well, if in 2021 you took your MacBook Air M1 (8GB) out on a Friday, downloaded movies, watched them, browsed the internet, did some casual development, and came back late Sunday without needing to charge it, I’d be impressed.
Is the Linux support not there yet? How close is it?
It's the hardware that's the problem, not Linux support. Simply, the hardware manufacturers don't make fanless, thin, light, performant, power efficient laptops.
What about fanless chromebooks?
Yup that's a good start! It proves that a company other than apple can do something fanless. Probably they're plastic-y, but they are thin, light, and fanless. Power efficiency and performance are likely not good, but, at least google doesn't deliberately obfuscate their hardware like Apple does. Instead, they just let everything that's not ChromeOS fester, since they're trying to make money. But anyone who wants to start a business selling Alpine on Chromebooks can ;)
Exactly this. All that work, for reverse-engineering a handful of laptops made by a company whose only desire is to lock its users into its ecosystem and ergonomics. Even more demotivating is that the M1 and the M2 are already superseded.
Similarly, I completely do not understand the popularity of Apple's laptops in this community. Endemic mindless fanboyism.
I’m with you in spirit, but most of the work has already been done. Purchased hardware won’t change. Firmware updates could be held if needed as well. Another team could take a crack at it.
Worst case, you restore macOS and possibly sell at a moderate loss. That said, I'm still waiting.
> other x86 and ARM manufacturers are making substantial improvements that are shortening Apple's lead
x86 has fundamental issues that I believe prevent it from ever achieving the MIPS per watt efficiency of anything from ARM. I mean... the newest M4 laptop will have a 24 hour battery life. That exceeds anything remotely possible in the same laptop form factor but with x86 by nearly an order of magnitude.
So now you're talking just ARM. Linux has been compilable on ARM for a while now, so where are the competing ARM laptops that are anywhere close to the power of Apple's version of ARM?
I do get what you're saying though (I'm definitely a Linux fan and have a Linux Framework laptop), but I wish it wasn't an x86 laptop because its battery life is crap (and that is sometimes important).
> the newest M4 laptop will have a 24 hour battery life. That exceeds anything remotely possible in the same laptop form factor but with x86 by nearly an order of magnitude.
Honestly, why is this such an appealing feature? Are you often away from an outlet for 20+ hours?
I use 6+ year old laptops that last 4 hours at most on a single charge, even after a battery replacement. Plugging them in every few hours is not a big inconvenience for me. If I'm traveling, I can usually find an outlet anywhere. It's not like I'm working in remote places where this could even be an issue.
Then there's the concern about fan noise and the appeal of completely silent computers. Sure, it's a bit annoying when the fan ramps up, but again, this is not something that makes my computers unbearable to use.
And finally, the performance gap is closing. Qualcomm's, AMD's and Intel's latest chips might not be "M4 killers" (or M3, for that matter), but they're certainly competitive. This will only keep improving in 2025.
It's not that these are must-haves: it's that it removes any such anxiety about these to begin with. I can take the laptop with me wherever I'm going, and not have to worry about charging it, or that the heat will make it unbearable to use on my lap, or that the fan will be noticeable. It means I can work at a cafe, in a car, on a bus, on a train, on a flight without power, and not have to worry.
And these things compound, as the other poster mentioned: 24 hours of light use means longer heavy use, which actually does matter to me. I often move around while using the MacBook because it's good to change up my (physical) perspective - and it means I can do that without the charger while hammering every core with rustc.
Once you see that better things are possible, it's very hard to go back to the comparatively terrible performance-per-watt and heat generation of equally powerful x86 laptops.
> Once you see that better things are possible, it’s very hard to go back
Yeah, there’s something a bit freeing about being able to go all day or more without charging. Just not needing to think about power or charging when you’re busy focusing on other things.
I’m glad other manufacturers got a bit of pressure to catch up as well. Now people come to expect laptops to just run for days at a time without charging.
With all due respect, I think this is a "640k is enough for everyone" problem, in the sense that you don't realize what something enables because you're simply so used to not having it:
1) Internet cafes that removed outlets to encourage laptop people not to squat :)
2) Airports where you can sit anywhere you want instead of just near an outlet
3) Airplanes when the power doesn't work (has happened more than once in my experience)
4) Cars, trains, subways, buses
5) My boat, sometimes I like to work from it for a change of pace
6) Don't have to hunt for an outlet when my fam is at the grandparents' house
7) I can work on my deck without dragging a power cord out to the table
8) I can go to a pretty overlook and watch the sunset while working
9) Conference rooms, don't have to deal with the hassle
10) Libraries, same thing. I can sit outside or in the stacks (quieter there) instead of only in the reading room (those tables are the only ones with power, in my local library)
11) Power outs and just other situations where you lose power for a time
12) It's extra juice to power your phone off of (or anything else)
I'm certainly forgetting more examples.
Over the years, I've worn laptops (with sealed-in batteries) down to three-ish reliable hours. There are never enough power outlets (or AC vents, or even seats at the table) for a big meeting. That's a problem for a very long meeting format like a war room or a promo committee.
That's why you throw a couple of power strips in a drawer or cabinet in that room.
Tech companies wire rows of desks for laptops and big monitors, but I think it'd be hard to find a meeting room where you could safely plug in more than a dozen 140 W chargers.
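To put a rough number on that (assuming a standard North American 15 A / 120 V branch circuit and the usual 80% continuous-load rule; both are my assumptions, not details from the comment above):

    # Hypothetical worked example: how many 140 W chargers fit on one circuit?
    charger_w = 140                        # charger rating mentioned above
    circuit_w = 15 * 120                   # 15 A breaker at 120 V = 1800 W
    continuous_limit_w = circuit_w * 0.8   # common 80% rule for continuous loads

    print(int(continuous_limit_w // charger_w))  # ~10 chargers at full draw

In practice laptops rarely pull the full rating of their charger for long, but a room full of people fast-charging at once really can approach what a single circuit handles.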
24 hours of web/office use is a full work day of something more intense, or a few hours of something really demanding.
> so where are the competing ARM laptops that are anywhere close to the power of Apple's version of ARM?
Better question: where are the incentives for them to make it? Apple is pretty much the only company with an outstanding architectural license to design ARM cores, and the best off-the-shelf ARM core designs don't even compete with 5-year-old x86 ones. If you're a company with Apple-level capital and Apple-tier core design chops, you might as well embrace RISC-V and save yourself the ARM licensing fee. That's what Nvidia does for many of their GPU SoCs.
If SoftBank offered ARM licenses under more attractive terms, there would be genuine competition for good ARM CPUs. Given how much sway Apple holds over Arm (and, by extension, SoftBank), I wouldn't hold out hope.
> Apple is pretty much the only company with an outstanding architectural license to design ARM cores
Many other companies have done this to great effect in the past, but in recent years it has become more common to just license one of ARM's own stock cores, instead of designing your own from scratch.
This follows a period where companies like Qualcomm and Samsung were still trying to roll their own core designs from scratch, but ended up with designs that were slower and less power-efficient than the cheaper stock cores you could license from ARM.
Apparently Qualcomm's new ARM laptops are pretty close.
However, I think ARM platforms tend to be way less open-source-friendly than x86, at least on mobile. Maybe the laptops are better because they have to run Windows, and Microsoft probably wouldn't put up with the shit vendors pull on Android. I don't know.
Except that ARM just cancelled Qualcomm's chip design license, so... oopsy
We might not know if it's a valid cancellation for another few months and many millions of lawyer dollars.
I wouldn't be surprised if Arm is strongarmed into playing hardball with Qualcomm by Apple itself.
What they've been able to accomplish in such a short time is nothing short of amazing, and I applaud them for their efforts.
That said, I've been using Asahi for a month, and I'm ditching it. Maybe in a year or two it'll be stable, but for now it has too many bugs and unsupported features. A lot of the problems come down to Wayland and KDE/Gnome, as you literally have to use Wayland. But there are plenty of other buggy or challenging parts that all add up to a very difficult working experience.
One of the biggest challenges I see is support for hardware and 3rd-party apps. Not only do all the apps need to support this slightly odd Arm system, but so do hardware driver developers. I never realized before just how much of a Linux system works because most people had an incredibly common platform (x86_64). Even if Linux on Mac became incredibly popular, it would actually pull development focus away from x86_64, and we'd see less getting done.
(This kind of problem is totally common among Linux laptops, btw; there's a ton of hardware out there and Linux bugs may exist in each one. Adding a new model doesn't add to the number of developers supporting them all. If anything, the Mac probably benefits from having so few models compared to the x86_64 world. But it still only has so many devs, and 3rd-party devs aren't all going to join the party overnight.)
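As a tiny illustration of the kind of check that suddenly matters once you can't assume x86_64 everywhere (the project and its architecture list below are made up for the example):

    import platform

    machine = platform.machine()  # "x86_64" on most Linux laptops, "aarch64" on Asahi
    print(f"Running on {machine}")

    # A hypothetical project that only ships prebuilt x86_64 artifacts has nothing
    # to offer here; the user has to build from source or wait for an aarch64 build.
    PREBUILT_ARCHES = {"x86_64"}
    if machine not in PREBUILT_ARCHES:
        print("No prebuilt binary for this architecture; falling back to a source build")

Drivers are the same story one level down: a vendor blob compiled only for x86_64 simply doesn't exist for this platform, no matter how good the rest of the port is.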
Yeah, I can definitely see this being an issue going forward for quite some time. The existence of non-Apple ARM devices should hopefully lead to general interest in addressing these issues, but there's so much hardware and software out there, and only so many devs with the time, interest and access to fix them.
On the other hand, I suspect people will start making choices for their hardware/software that maximise compatibility, as they already do for Linux x86. ("Don't buy NVIDIA if you want functioning Wayland", etc.) It'll be tough, but things will hopefully get better over time.
'Non-Apple ARM' is still a bit of a second-class citizen, at least on Arch. But at least there's the option to compile things yourself.
I don't get why you were downvoted to oblivion. A perspective from someone who actually used Asahi is very valuable, so thanks for sharing.
You're definitely right that having a usable system is not just about supporting first-party hardware. Linux on its own is a huge mess of different components that all somehow need to work together, and it's a miracle of engineering that it works as well as it does, even on well-supported hardware. I can't imagine how difficult it must be getting all of this to work on hardware that requires reverse engineering. It seems practically impossible to me.
On HN downvotes are usually because of disagreement. OP's experience doesn't match mine: I have used Asahi for quite a bit longer than OP and I have experienced no serious bugs.
But then again, I only use software that's available in the distribution or that I can compile myself. So naturally I don't deal with incompatible third-party software.
> OP's experience doesn't match mine
That's great. Is your experience somehow more valid then?
Downvoting because of disagreement is asinine to begin with. Burying opinions that contribute to the discussion does nothing but perpetuate the hive mind.
For those who didn't get my joke, she commonly dresses up as a witch at XDC :)
https://lwn.net/SubscriberLink/995383/34dc5950cab5e739/
Do you dress as kkk for Halloween?