Tiny Core Linux has a version for Raspberry Pis called piCore [0] that I wish more people would look at, because it loads itself entirely into RAM and does not touch the SD card at all after that until and unless you explicitly tell it to.
Phenomenal for those low powered servers you just want to leave on and running some tiny batch of cronjobs [1] or something for months or years at a time without worrying too much about wear on the SD card itself rendering the whole installation moot.
This is actually how I have powered the backend data collection and processing for [2], as I wrote about in [3]. The end result is a static site built in Hugo, but I was careful to pick parts I could safely leave ticking along on their own for a long time.
[1]: https://til.andrew-quinn.me/posts/consider-the-cronslave/
[2]: https://hiandrewquinn.github.io/selkouutiset-archive/
[3]: https://til.andrew-quinn.me/posts/lessons-learned-from-2-yea...
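To give a concrete flavor of the kind of cronjob I mean: a minimal sketch, assuming /tmp is RAM-backed (as on a stock piCore setup, where the whole root fs lives in RAM) and that the SD card partition is explicitly mounted somewhere like /mnt/mmcblk0p2 - the URL and paths below are placeholders, not anything piCore itself dictates.

    # Sketch of a RAM-friendly batch job: do all the work under /tmp (RAM-backed
    # here) and touch persistent storage exactly once, at the end.
    # SOURCE_URL and PERSIST_DIR are illustrative placeholders.
    import json
    import shutil
    import tempfile
    import urllib.request
    from pathlib import Path

    SOURCE_URL = "https://example.com/feed.json"   # hypothetical data source
    PERSIST_DIR = Path("/mnt/mmcblk0p2/archive")   # assumed SD-card mount point

    def run() -> None:
        # 1. Fetch and process entirely in RAM-backed temporary storage.
        with tempfile.TemporaryDirectory(dir="/tmp") as work:
            raw = Path(work) / "feed.json"
            with urllib.request.urlopen(SOURCE_URL, timeout=30) as resp:
                raw.write_bytes(resp.read())

            data = json.loads(raw.read_text())
            summary = Path(work) / "summary.json"
            summary.write_text(json.dumps({"items": len(data)}))

            # 2. One explicit write to the SD card per run - the only flash wear.
            PERSIST_DIR.mkdir(parents=True, exist_ok=True)
            shutil.copy2(summary, PERSIST_DIR / "summary.json")

    if __name__ == "__main__":
        run()

The cron side is just an ordinary crontab entry pointing at a script like that, so the only writes the card ever sees are that final copy plus whatever you explicitly choose to back up.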
"Phenomenal for those low powered servers you just want to leave on and running some tiny batch of cronjobs [1] or something for months or years at a time without worrying too much about wear on the SD card itself rendering the whole installation moot."
Yes, this is exactly what I want, except I need some simple Node servers running, which is not so ultra light. Would you happen to know if this all still works within RAM out of the box, or does it require extra work?
I've used many of these small Linux distros. I used to have Tiny Core in a VM for different things.
I also like SliTaz: http://slitaz.org/en, and Slax too: https://www.slax.org/
Oh and puppy Linux, which I could never get into but was good for live CDs: https://puppylinux-woof-ce.github.io/
And there's also Alpine too.
I tried a handful of small distros in order to give new life to an old laptop with an AMD C-50 and 2GB of RAM.
The most responsive one, unexpectedly, was Raspberry Pi OS.
Puppy was the first Linux distro I ever tried since it was such a small download (250ish MB) and I had limited bandwidth. Good memories.
Wondering if it would be a good idea to set up a VM with this. Set up a remote connection, and IntelliJ. Just have a script to clone it for a new project and connect from anywhere using a remote app.
It will increase the size of the VM, but the template would be smaller than a full-blown OS.
Aside from dev containers, what are other options? I'm not able to run IntelliJ on my laptop, so that's not an option.
I ssh into my computer and work in Nvim, which is fine. But I really miss the full capabilities of IntelliJ.
I've experimented with several small distros for this when doing cross-platform development.
In my experience, by the time you’re compiling and running code and installing dev dependencies on the remote machine, the size of the base OS isn’t a concern. I gained nothing from using smaller distros but lost a lot of time dealing with little issues and incompatibilities.
This won’t win me any hacker points, but now if I need a remote graphical Linux VM I go straight for the latest Ubuntu and call it a day. Then I can get to work on my code instead of chasing my tail with all of the little quirks that appear from using less popular distros.
The small distros have their place for specific use cases, especially automation, testing, or other things that need to scale. For one-offs where you’re already going to be installing a lot of other things and doing resource intensive work, it’s a safer bet to go with a popular full-size distro so you can focus on what matters.
To really hammer this home: Alpine uses musl instead of glibc for the C standard library. This has caused me all types of trouble in unexpected places.
I'm all for suggestions for a better base OS in small docker containers, mostly to run nginx, php, postgres, mysql, redis, and python.
Isn’t this what GitHub remote envs are (or whatever they call it)?
Never really got what it’s for.
JetBrains has Gateway, which allows connecting to a remote instance and working on it.
Yes, but it requires JetBrains running on the client too.
moonlight / sunshine might work if you can't run it locally.
It'd be best with hardwired network though.
> I also like SliTaz
thank you for this reminder! I had completely forgotten about SliTaz, looks like I need to check it out again!
> puppy Linux, which I could never get into
In what way? Do you mean you didn't get the chance to use it much, or something about it you couldn't abide?
No I tried to use it but it didn't click with me. I had it on cd but I'd normally reach for something else.
Wow, Slax is still around and supports Debian now too? Thanks for sharing.
I used to use it during the netbook era, was great for that.
wondering what's your typical usage for those small distros?
I like using old hardware, and Tiny Core was my daily driver for 5+ years on a Thinkpad T42 (died recently) and Dell Mini 9 (still working). I tried other distros on those machines, but eventually always came back to TC. RAM-booting makes the system fast and quiet on that 15+ year old iron, and I loved how easy it was to hand-tailor the OS - e.g. the packages loaded during boot are simply listed in a single flat file (onboot.lst).
I used both the FLTK desktop (including my all-time favorite web browser, Dillo, which was fine for most sites up to about 2018 or so) and the text-only mode. TC repos are not bad at all, but building your own TC/squashfs packages will probably become second nature over time.
I can also confirm that a handful of lengthy, long-form radio programs (a somewhat "landmark" show) for my Tiny Country's public broadcasting are produced -- and, in some cases, even recorded -- on either a Dell Mini 9 or a Thinkpad T42 and Tiny Core Linux, using the (now obsolete?) Non DAW or Reaper via Wine. It was always fun to think about this: here I am, producing/recording audio for Public Broadcasting on a 13+ year old T42 or a 10 year old Dell Mini netbook bought for 20€ and 5€ (!) respectively, whereas other folks accomplish the exact same thing with a 2000€ MacBook Pro.
It's a nice distro for weirdos and fringe "because I can" people, I guess. Well thought out. Not very far from "a Linux that fits inside a single person's head". Full respect to the devs for their quiet consistency - no "revolutionary" updates or paradigm shifts, just keeping the system working, year after year. (FLTK in 2025? Why not? It does have its charm!) This looks to be quite similar to the maintenance philosophy of the BSDs. And, next to TC, even NetBSD feels "bloated" :) -- even though it would obviously be nice to have BSD Handbook level documentation for TC; then again, the scope/goal of the two projects is maybe too different, so no big deal. The Corebook [1] is still a good overview of the system -- no idea how up-to-date it is, though.
All in all, an interesting distro that may "grow on you".
1: http://www.tinycorelinux.net/book.html
All sorts. Having a full bootable OS on a CD or USB was always cool. When I left the military and was working security, I used to use them to boot computers in the buildings I worked in so I could browse the internet.
Before encryption by default, I used them to get files off Windows machines for family when they messed up their computers, or to change the passwords.
Before browser profiles and containers, I used them in VMs for different things like banking, shopping, etc.
Down to your imagination really.
Not to mention just playing around with them too.
I use one of them to make an old EEE laptop a dedicated Pico-8 machine for my kids. [https://www.lexaloffle.com/pico-8.php]
In college I used a Slax (version 6 IIRC) SD card for schoolwork. I did my work across various junk laptops, a gaming PC, and lab computers, so it gave me consistency across all of those.
Booting a dedicated, tiny OS with no distractions helped me focus. Plus since the home directory was a FAT32 partition, I could access all my files on any machine without having to boot. A feature I used a lot when printing assignments at the library.
I used DSL for the control of a homebrew 8' x 4' CNC plasma cutter.
I was just thinking today how I miss my DSL (Damn Small Linux) setup. A Pentium 2 Dell laptop, booted from mini-CD, usb drive for persistence. It ran a decent "dumb" terminal, X3270, and stripped down browser (dillo I believe). Was fine for a good chunk of my work day.
I ran it on a Via single board computer, a tiny board that sipped power and was still more than beefy enough to do real-time control of 3-axis stepper motors and maintain a connection to the outside world. I cheated a bit by disabling interrupts during time-critical sections and re-enabling the devices afterwards; that took some figuring out, but overall the system was extremely reliable. I used it to cut up to 1/4" steel sheet for the windmill (it would cut up to 1" but then the kerf would be quite ugly), as well as much thinner sheet for the laminations. The latter was quite problematic because it tended to warp up towards the cutter nozzle while cutting, and that would short out the arc. In the end we measured the voltage across the arc and automatically had the nozzle back off in case of warping, which worked quite well; the resulting inaccuracies were very minor.
https://jacquesmattheij.com/dscn3995.jpg
They can be nice for running low footprint VMs (e.g. in LXD / Incus) where you don't want to use a container. Alpine in particular is popular for this. The downside is there are sometimes compatibility issues where packages expect certain dependencies that Alpine doesn't provide.
Not to disrespect this, but it used to be entirely normal to have a GUI environment on a machine with 2MB of RAM and a 40MB disk.
Or 128K of ram and 400 kb disk for that matter.
A single 1920x1080 framebuffer (which is a low resolution monitor in 2025 IMO) is 2MB even at 8 bits per pixel. Add any compositing into the mix for multi-window displays and it literally doesn’t fit in memory.
I had a 386 PC with 4MB of RAM when I was a kid, and it ran Windows 3.1 with a GUI, but that also had a VGA display at 640x480, and only 16-color graphics (4 bits per pixel). So 153,600 bytes for the frame buffer.
640 * 480 / 2 = 150KB for a classic 16-color VGA screen.
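For anyone who wants to sanity-check the arithmetic in this subthread, here is a quick Python sketch (raw, packed, single-buffered framebuffers only - no compositing):

    # Raw framebuffer sizes for the modes mentioned above (packed pixels,
    # single buffer, no compositing).
    def framebuffer_bytes(width: int, height: int, bits_per_pixel: int) -> int:
        return width * height * bits_per_pixel // 8

    modes = [
        ("VGA 16-color, 640x480 (Windows 3.1 era)", 640, 480, 4),
        ("DOS mode 13h, 320x200",                   320, 200, 8),
        ("1920x1080 at 8 bpp",                      1920, 1080, 8),
        ("1920x1080 at 32 bpp (typical today)",     1920, 1080, 32),
    ]

    for name, w, h, bpp in modes:
        size = framebuffer_bytes(w, h, bpp)
        print(f"{name:42s} {size:>10,} bytes (~{size / 1024:,.0f} KiB)")

That gives 153,600 bytes for the 16-color VGA case and about 2MB for 1080p at 8 bpp; at a modern 32 bpp the same 1080p buffer is already over 8MB before any double buffering or compositing.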
The Amiga 500 had high res graphics (or high color graphics … but not on the same scanline), multitasking, 15 bit sound (with a lot of work - the hardware had 4 channels of 8 bit DACs but a 6-bit volume, so …)
In 1985, and with 512K of RAM. It was very usable for work.
A 320x200 6-bit color depth wasn't exactly a pleasure to use. I think games could double the res in a certain mode (was it called 13h?)
For OCS/ECS hardware, 2-bit HiRes - 640x256 or 640x200 depending on region - was the default resolution for the OS, and you could add interlacing or up the color depth to 3 or 4 bit at the cost of response lag; starting with OS 2.0 the resolution setting was basically limited by chip memory and what your output device could actually display. I got my 1200 to display crisp 1440x550 on my LCD by just sliding the screen parameters to max on the default display driver.
Games used either 320h or 640h resolutions, in 4 bit or a fake 5 bit known as HalfBrite, which was basically 4 bit with the other 16 colors being the same but at half brightness. The fabled 12-bit HAM mode was also used, even in some games, even for interactive content, but not too often.
You might be thinking of DOS mode 13h, which was VGA 320x200, 8 bits per pixel.
It's so much fun working with systems with more pixels than ram though. Manually interleaving interrupts. What joy.
Do you really need the framebuffer in RAM? Wouldn't that be entirely in the GPU RAM?
To put it in GPU RAM, you need GPU drivers.
For example, NVIDIA GPU drivers are typically around 800M-1.5G.
That math actually goes wildly in the opposite direction for an optimization argument.
Doesn't the UEFI firmware map a GPU framebuffer into the main address space "for free" so you can easily poke raw pixels over the bus? Then again the UEFI FB is only single-buffered, so if you rely on that in lieu of full-fat GPU drivers then you'd probably want to layer some CPU framebuffers on top anyway.
Yes if you have UEFI.
well, if you poke framebuffer pixels directly you might as well do scanline racing.
Alas, I don't think UEFI exposes vblank/hblank interrupts so you'd just have to YOLO the timing.
> NVIDIA GPU drivers are typically around 800M-1.5G.
They also pack in a lot of game-specific optimizations for whatever reason. Could likely be a lot smaller without those.
Even the open source drivers without those hacks are massive. On Nvidia, each type of card has its own firmware of almost 100MB that runs on the card.
That's 100MB of RISC-V code, believe it or not, despite Nvidia's ARM fixation.
Someone last winter was asking for help with large docker images and it came about that it was for AI pipelines. The vast majority of the image was Nvidia binaries. That was wild. Horrifying, really. WTF is going on over there?
You’re assuming a discrete GPU with separate VRAM, and only supporting hardware accelerated rendering. If you have that you almost certainly have more than 2MB of ram
Aren’t you cheating by having additional ram dedicated for gpu use exclusively? :)
VGA standard supports up to 256k
Computers didn't use to have GPUs back then, when 150kB was a significant amount of graphics memory.
The IBM PGC (1984) was a discrete GPU with 320kB of RAM and slightly over 64kB of ROM.
The EGA (1984) and VGA (1987) could conceivably be considered GPUs, although not Turing complete. EGA had 64, 128, 192, or 256K and VGA 256K.
The 8514/A (1987) was Turing complete although it had 512kB. The Image Adapter/A (1989) was far more powerful, pretty much the first modern GPU as we know them and came with 1MB expandable to 3MB.
The Acorn Archimedes had the whole OS on a 512KB ROM.
That said, OSs came with a lot less stuff then.
That's only RISC OS 2 though. RISC OS 3 was 2MB, and even 3.7 didn't have everything in ROM as Acorn had introduced the !Boot directory for softloading a large amount of 'stuff' at boot time.
If what's left out is a lot of things not needed for the specific use case, that is still a big plus.
It was a GUI defined manually by pixel coordinates; having more flexible GUIs that could autoscale and do other snazzy things made things really "slow" back then.
Sure, we could go back... Maybe we should. But there is a lot of stuff we take for granted today that was not available back then.
RISC OS has the concept of "OS units" which don't map directly onto pixels 1:1, and it was possible to fiddle with the ratio on the RiscPC from 1994 onwards, giving reasonably-scaled windows and icons in high-resolution modes such as 1080p.
It's hinted at in this tutorial, but you'd have to go through the Programmer's Reference Manual for the full details: https://www.stevefryatt.org.uk/risc-os/wimp-prog/window-theo...
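Roughly, each screen mode carries per-axis "eigen factor" shifts and window geometry is expressed in OS units, so the same window covers more pixels in a finer mode. A toy illustration of the idea in Python - the names are made up for illustration, not RISC OS SWI calls, and the eigen values are just plausible examples:

    # Toy illustration of resolution-independent "OS units": each mode says how
    # many OS units one pixel spans (a power-of-two shift per axis), so the
    # same OS-unit geometry maps to more pixels in finer modes.
    from dataclasses import dataclass

    @dataclass
    class Mode:
        x_eig: int   # 1 pixel spans 2**x_eig OS units horizontally
        y_eig: int   # 1 pixel spans 2**y_eig OS units vertically

    def os_units_to_pixels(x_os: int, y_os: int, mode: Mode) -> tuple[int, int]:
        return x_os >> mode.x_eig, y_os >> mode.y_eig

    def pixels_to_os_units(x_px: int, y_px: int, mode: Mode) -> tuple[int, int]:
        return x_px << mode.x_eig, y_px << mode.y_eig

    window_os = (800, 600)   # one window, described once, in OS units
    for name, mode in [("coarse mode", Mode(2, 2)), ("finer mode", Mode(1, 1))]:
        print(name, os_units_to_pixels(*window_os, mode))
    # -> coarse mode (200, 150), finer mode (400, 300): the window keeps roughly
    #    the same physical size as the pixel grid gets denser.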
RISC OS 3.5 (1994) was still 2MB in size, supplied on ROM.
The OS did ship with bezier vector font support. AFAIK it was the first GUI to do so.
P.S. I should probably mention that there wasn't room in the ROM for the vector fonts; these needed to be loaded from some other medium.
Yea, but those platforms were not 64-bit.
Switch to an ILP32 ABI and you get a lot of that space back
64-bit generally adds about 20% to the size of executables and programs, at least on x86, so it's not that big of a change.
When I first started using QNX back in 1987/88 it was distributed on a couple of 1.4MB floppy diskettes! And you could install a graphical desktop that was a 40KB distribution!
I would like to have this again
I prefer to use additional RAM and disk for data not code
There’s an installation option to run apps off disk. It’s called “The Mount Mode of Operation: TCE/Install”.
To think that the entire distro would fit in a reasonable LLC (last level cache)..
I've been wondering if I could pull the DIMM from a running machine if everything was cached.
Probably not due to DMA buffers. Maybe a headless machine.
But would be funny to see.
Like the k language!
With 320x240 pixels and 256 colors
640x480 with 16 colours was standard in offices in the late 80s.
If you were someone special, you got 1024x768.
"640k ought to be enough for everyone!"
> Or 128K of ram and 400 kb disk for that matter.
Or 32K of RAM and 64KB disk for that matter.
What's your point? That the industry and what's commonly available gets bigger?
I love lightweight distros. QNX had a "free as in beer" distro that fit on a floppy, with Xwindows and modem drivers. After years of wrangling with Slackware CDs, it was pretty wild to boot into a fully functional system from a floppy.
> QNX had a "free as in beer" distro that fit on a floppy, with Xwindows and modem drivers.
I don’t think that had the X Windows system. https://web.archive.org/web/19991128112050/http://www.qnx.co... and https://marc.info/?l=freebsd-chat&m=103030933111004 confirm that. It ran the Photon microGUI Windowing System (https://www.qnx.com/developers/docs/6.5.0SP1.update/com.qnx....)
Somebody has built it: https://membarrier.wordpress.com/2017/04/12/qnx-7-desktop/
I never understood how that QNX desktop didn't catch on instantly, it was amazing!
Licensing, and QNX missed a consumer launch window by around 17 years.
Some businesses stick with markets they know, as non-retail customer revenue is less volatile. If you enter the consumer markets, there are always 30k irrational competitors (likely with 1000X the capital) that will go bankrupt trying to undercut the market.
It is a decision all CEOs must make eventually. Best of luck =3
"The Rules for Rulers: How All Leaders Stay in Power"
https://www.youtube.com/watch?v=rStL7niR7gs
This also underscores my explanation for the “worse is better” phenomenon: worse is free.
Stuff that is better designed and implemented usually costs money and comes with more restrictive licenses. It’s written by serious professionals later in their careers working full time on the project, and these are people who need to earn a living. Their employers also have to win them in a competitive market for talent. So the result is not and cannot be free (as in beer).
But free stuff spreads faster. It’s low friction. People adopt it because of license concerns, cost, avoiding lock in, etc., and so it wins long term.
Yes I’m kinda dissing the whole free Unix thing here. Unix is actually a minimal lowest common denominator OS with a lot of serious warts that we barely even see anymore because it’s so ubiquitous. We’ve stopped even imagining anything else. There were whole directions in systems research that were abandoned, though aspects live on usually in languages and runtimes like Java, Go, WASM, and the CLR.
Also note that the inverse is not true. I’m not saying that paid is always better. What I’m saying is that worse is free, better was usually paid, but some crap was also paid. But very little better stuff was free.
There is also the option taken by well-written professional software where the strategy is to grab as much market share as they can by allowing the proliferation of their product to lock up market/mindshare, and to relegate the $ enforcement for later - successfully used by MS Windows for the longest time, and by Photoshop.
Conversely, I remember Maya or Autodesk used to have a bounty program for whoever would turn in people using unlicensed/cracked versions of their product. Meanwhile Blender (which came from a commercial past) kept its free nature and has consistently grown in popularity and quality without any such overtures.
Of course nowadays with SaaS everything gets segmented into weird verticals, and revenue upsells are across the board, with the first hit usually also being free.
As a business, dealing with Microsoft and Oracle is not a clean transactional sale.
They turned into legal-service-firms along the way, and stopped real software development/risk at some point in 2004.
These firms have been selling the same product for decades. Yet once they get their hooks into a business, few survive the incurred variable costs of the 3000lb mosquito. =3
The only reason FOSS sometimes works is because the replication cost is almost $0.
In *nix, most users had a rational self-interest to improve the platform. "All software is terrible, but some of it is useful." =3
Because it's not free, and their aim was at developers and the embedded space. How many people have even heard of QNX?
That famous QNX boot disk was the first thing I thought of when reading the title as well.
Me too! And the GUI was only a 40KB distribution and was waaaaaay better than Windows 3.0!
And incredibly responsive compared to the operating systems of even today. Imagine that: 30 years of progress to end up behind where we were. Human input should always run at the highest priority in the system, not the lowest.
yeah but what can you do with free QNX? With tinycore, you can install many packages. What packages exist for QNX?
This is cool. My first intro to a practical application of Linux in the early 2000s was using Damn Small Linux to recover files off of cooked Windows machines. I looked up the project the other day while reminiscing and thought it would be interesting if someone took a real shot at reviving the spirit of the project.
and who is using this?
As I updated my thinkpad to 32 GB of RAM this morning (£150) I remembered my £2k (corporate) thinkpad in 1999, running Windows 98, had 32 MB of RAM. And it ran full Office and Lotus notes just fine :)
In around 2002, I got my hands on an old 386 which I was planning to use for teaching myself things. I was able to breathe life into it using MicroLinux. Two superformatted 1.44" floppy disks and the thing booted. Basic kernel, 16 colour X display, C compiler and Editor.
I don't know if there are any other options for older machines other than stripped down Linux distros.
https://freedos.org/
I mean - DOS or its equivalents still exist, and for older computers you will probably be able to find drivers.
I have an older laptop with a 32-bit processor and found that TinyCoreLinux runs well on it. It has its own package manager that was easy to learn. This distro can be handy in these niche situations.
Similar situation here. I have some old 32-bit machines that I'm turning into writer decks. Most Linux distros have left 32-bit behind, so you can't just use Debian or Ubuntu, and a lot of distros that aim to run on lower-end hardware are Ubuntu derivatives.
Same situation but I'm using NetBSD instead. I'm betting it'll still be supporting 32-bit x86 long after the linux kernel drops it.
Personally, I think that dropping 32 bit support for Linux is a mistake. There is a vast number of people in developing countries on 32 bit platforms as well as many low cost embedded platforms and this move feels more than a little insensitive.
That’s even smaller than these!
https://en.wikipedia.org/wiki/Bootable_business_card
I've used it around early 2010s as a live cd to fix partitions etc. Definitely recommend as a lightweight distro.
It was a little tricky to install on disk, and even on disk it behaved mostly like a live CD - file changes had to be committed to disk, IIRC.
Hope they improved the experience now.
The site doesn't have HTTPS and there doesn't seem to be any mention of signatures on the downloads page. Any way to check it hasn't been MITM'd?
https://github.com/tinycorelinux
Ideas to decrease risk of MITM:
Download from at least one more location (like some AWS/GCP instance) and checksum.
Download from the Internet Archive and checksum:
https://web.archive.org/web/20250000000000*/http://www.tinyc...
Not foolproof. You could also compute the MD5 or SHA256 yourself after downloading (quick sketch below).
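The "compute it yourself" step is only a few lines of Python - this sketch assumes you've already saved the ISO locally and copied the expected hash string from a second, independent location (the file name in the usage comment is a placeholder):

    # Hash a locally saved ISO and compare it against a checksum obtained from a
    # second, independent source.
    import hashlib
    import sys

    def file_digest(path: str, algo: str = "sha256", chunk: int = 1 << 20) -> str:
        h = hashlib.new(algo)
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    if __name__ == "__main__":
        # usage: python3 verify.py ./Core-current.iso <expected-hash> [md5|sha256]
        iso_path, expected = sys.argv[1], sys.argv[2].strip().lower()
        algo = sys.argv[3] if len(sys.argv) > 3 else "sha256"
        actual = file_digest(iso_path, algo)
        print("MATCH" if actual == expected else f"MISMATCH: got {actual}")

Use md5 if the text file on the mirror is an MD5 sum; of course this only helps if the expected hash really does come from somewhere other than the same HTTP page you downloaded from.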
And compare it against what?
EDIT: nevermind, I see that it has the md5 in a text file here: http://www.tinycorelinux.net/16.x/x86/release/
Which is served from the same insecure domain. If the download is compromised you should assume the hash from here is too.
An integrity check is better than nothing, but yes it says nothing about its authenticity.
You can use this site
https://distro.ibiblio.org/tinycorelinux/downloads.html
And all the files are here
https://distro.ibiblio.org/tinycorelinux/16.x/x86/release/
Under an HTTPS connection. I am not at a terminal to check the cert with OpenSSL.
I don’t see any way to check the hash OOB
Also this same thing came up a few years ago
https://www.linuxquestions.org/questions/linux-newbie-8/reli...
Is that actually tiny core? It’s _likely_ it is, but that’s not good enough.
> this same thing came up a few years ago
Honestly, that makes this inexcusable. There are numerous SSL providers available for free, and if that’s antithetical to them, they can use a self signed certificate and provide an alternative method of verification (e.g. via mailing list). The fact they don’t take this seriously means there is 0 chance I would install it!
Honestly, this is a great use for a blockchain…
I usually only install these toy distros on something like a Raspberry Pi or a VM.
Are any distros using block chain for this ?
I am used to using code signing with HSMs
I’d install it as a VM maybe,
> are any distros using blockchain
I don’t think so, but it’s always struck me as a good idea - it’s actual decentralised verification of a value that can be confirmed by multiple people independently, without trusting anyone beyond trusting that the signing key is secure.
> I am used to code signing with HSMs
Me too, but that requires distributing the public key securely which… is exactly where we started this!
An integrity check where both what you're checking and the hash you're checking against come from the same place is literally no better than nothing if you're trying to prevent downloading compromised software. It'd flag corrupted downloads at least, so that's cool, but for security purposes the hash for an artifact has to be served OOB.
It is better than nothing if you note it down. You can compare it later, if somebody (or you) was compromised, to see whether you had the same download as everyone else.
Sorry, but this is nonsense. It’s better than nothing if you proactively log the hashes before you need them, but it’s actively harmful for anyone who downloads it after it’s compromised.
"It is better than nothing" is literally what I said. But thinking about it more, I actually think is quite useful. Any kind of signature or out-of-band hash is also only good if the source is not compromised, but knowing after the fact whether you are affected or not is extremely valuable.
It’s not better than nothing - it’s arguably worse.
There is a secure domain to download from as a mirror. For extra-high security, the hash should be delivered OOB, like on a mailing list, but it isn’t.
Where is that mirror linked from? If it's from the HTTP site, that’s no better than downloading it from the website in the first place.
> for extra high security,
No, sending the hash on a mailing list and delivering downloads over https is the _bare minimum_ of security in this day and age.
You can use this site https://distro.ibiblio.org/tinycorelinux/downloads.html
And all the files are here https://distro.ibiblio.org/tinycorelinux/16.x/x86/release/
I posted that above in this thread.
I will add that most places, forums, and sites don’t deliver the hash OOB. Unless you mean like GPG, but that would have come from the same site. For example, if you download a Packer plugin from GitHub, the files and hash all come from the same site.
Because there's big demand to MITM users of an extremely small and limited distribution from 2008?
I swear to god. Won't somebody think of the yaks.
I used to run Puppy Linux and then TCL (and its predecessor DSL) on a super old Pentium 3 laptop with like 700mb of RAM or something. Made it actually usable!
That's a ton of ram for a pIII
I love Tiny Core Linux for use cases where I need fast boot times or have few resources. Testing old PCs, Pi Zero and Pi Zero 2W are great use cases.
Thank you for that comment, I did not realize Pi Zero and Pi Zero 2W worked with TCL. I am brewing an application for that environment right now so this may just save the day and make my life a lot easier. Have you tried video support for the Pi specific cams under TCL?
Another small one is the xwoaf (X Windows On A Floppy) rebuild project 4.0 https://web.archive.org/web/20240901115514/https://pupngo.dk...
Showcase video https://www.youtube.com/watch?v=8or3ehc5YDo
iso https://web.archive.org/web/20240901115514/https://pupngo.dk...
2.1mb, 2.2.26 kernel
>The forth version of xwoaf-rebuild is containing a lot of applications contained in only two binaries: busybox and mcb_xawplus. You get xcalc, xcalendar, xfilemanager, xminesweep, chimera, xed, xsetroot, xcmd, xinit, menu, jwm, desklaunch, rxvt, xtet42, torsmo, djpeg, xban2, text2pdf, Xvesa, xsnap, xmessage, xvl, xtmix, pupslock, xautolock and minimp3 via mcb_xawplus. And you get ash, basename, bunzip2, busybox, bzcat, cat, chgrp, chmod, chown, chroot, clear, cp, cut, date, dd, df, dirname, dmesg, du, echo, env, extlinux, false, fdisk, fgrep, find, free, getty, grep, gunzip, gzip, halt, head, hostname, id, ifconfig, init, insmod, kill, killall, klogd, ln, loadkmap, logger, login, losetup, ls, lsmod, lzmacat, mesg, mkdir, mke2fs, mkfs.ext2, mkfs.ext3, mknod, mkswap, mount, mv, nslookup, openvt, passwd, ping, poweroff, pr, ps, pwd, readlink, reboot, reset, rm, rmdir, rmmod, route, sed, sh, sleep, sort, swapoff, swapon, sync, syslogd, tail, tar, test, top, touch, tr, true, tty, udhcpc, umount, uname, uncompress, unlzma, unzip, uptime, wc, which, whoami, yes, zcat via busybox. On top you get extensive help system, install scripts, mount scripts, configure scripts etc.
It is so tiny that it is http only because https was too big...
This would be perfect if it had an old Mac OS 7 Platinum-like look and window shading.
Looks really nice, I like the idea.
But can they please empower a user interface designer to simply improve the margins and paddings of their interface? With a bunch of small improvements it would look significantly better. Just fix the spacing between buttons and borders and other UI elements.
Modern UX trends are a scourge of excessive whitespace and low information density that get in the way of actually accomplishing tasks.
Any project that rejects those trends gets bonus points in my book.
I sympathize, but I feel compelled to point out that the parent didn’t say that the interface had to look like a contemporary desktop.
In my opinion, I believe the Tiny Core Linux GUI could use some more refinement. It seems inspired by 90s interfaces, but when compared to the interfaces of the classic Mac OS, Windows 95, OS/2 Warp, and BeOS, there’s more work to be done regarding the fit-and-finish of the UI, judging by the screenshots.
To be fair, I assume this is a hobbyist open source project where the contributors spend time as they see fit. I don’t want to be too harsh. Fit-and-finish is challenging; not even Steve Jobs-era Apple with all of its resources got Aqua right the first time when it unveiled the Mac OS X Public Beta in 2000. Massive changes were made between the beta and Mac OS X 10.0, and Aqua kept getting refined with each successive version, with the most refined version, in my opinion, being Mac OS X 10.4 Tiger, nearly five years after the public beta.
With CorePlus, you have the choice of some 10 GUI environments. I prefer openbox or jwm.
If you look at the screenshots it immediately jumps out that it is unpolished: the spacings are all over the place, the window maximize/minimize/close buttons have different widths and weird margins.
I thought that would be immediately clear to the HN crowd but I might have overestimated your aesthetic senses.
Look at screenshots -> wallpaper window. The spacing between elements is all over the place and it simply looks like shit. Seeing this I'm having doubts if the team who did this is competent at all
Exactly.
I know that not everybody spent 10 years fiddling with CSS so I can understand why a project might have a skill gap with regards to aesthetics. I'm not trying to judge their overall competence, just wanted to say that there are so many quick wins in the design it hurts me a bit to see it. And due to nature of open source projects I was talking about "empowering" a designer to improve it because oftentimes you submit a PR for aesthetic improvements and then notice that the project leaders don't care about these things, which is sad.
There is a balance.
Too much information density is also disorienting, if not stressing. The biggest problem is finding that balance between multiple kinds of users and even individuals.
One could argue that visible borders are a feature, not a bug.
If you are trying to maximize for accessibility, that is.
It's not about the damn borders it is about the spacing between the buttons and other UI elements as you can see in the screenshot. I don't want them to introduce some shitty modern design, just fix the spacing so it doesn't immediately jump out as odd and unpolished.
Pretty sure it was not about the presence of visible borders, but about missing spacing between borders and buttons. That's on some screenshots, but not others. It's not like this UI has some high-density philosophy; it's just very inconsistent.
This just looks like a standard _old_ *nix project. I've used Tiny, a couple of decades ago IIRC, from a magazine cover CD.
I'm going by the sign-off date of 2008, the lack of very simple to apply mobile CSS, and the absence of HTTPS to secure the downloads (if it had it, it would probably be SSL).
This speaks to me of a project that's 'good enough', or abandoned, for/by those who made it. Left out to pasture as 'community dev submissions accepted'.
I've not bothered to look, but wouldn't surprise me if the UI is hardcoded in assembly and a complete ballache to try and change.
for a moment I thought about a Corel Linux revamp :)
/* On the website, body { font-size: 70%; } — why? To drive home the idea that it's tiny? The default font size is normally set to the value comfortable for the user, would be great to respect it. */
Tiny Core also runs from ramdisk, uses a packaging system based on tarballs mounted in a fusefs, and can be installed on a DOS-formatted USB key. It also has a sub-distro named dCore [1] which uses Debian packages (which it unpacks and mounts in the fusefs), so you get access to the ~70K packages of Debian.
Its documentation is a free book: http://www.tinycorelinux.net/book.html
[1] https://wiki.tinycorelinux.net/doku.php?id=dcore:welcome
“Tiny” :)
I remember booting Linux off a 1.44Mb floppy
What gui were you running?
For unknown reasons, tinycorelinux's website is geoblocked in Japan.
Does it run docker?
With some modifications, yes. Boot2docker and boot2podman were based on tinycorelinux.
https://luxferre.top http://t3x.org
All of the minilanguages exposed there will run on TC even with 32MB of RAM.
On TC, set IceWM as the default WM with no opaque moving/resizing, and get rid of that horrible dock.