The downside of this is keeping customer details in the basement rather than in a secure facility. Then again, how many developers have huge customer databases casually lying around on USB sticks and whatnot? It happens.
The core density is really low. You can run a 96 core Epyc from the previous generation at 700 W and that’s a lot of compute. It makes sense for a home server (and I have an old Mac playing that role at home) but otherwise I don’t think it makes sense unless you’re taking off the display and racking them super tight.
Even then, you’re probably better off with Cloudflare tunnel and using it as a home server.
Just gonna point this out, since I noticed it a few weeks ago and the notice is still there: Hetzner has paused selling new colocation service: https://www.hetzner.com/colocation/
I work for IPinfo and we operate a distributed network consisting of around 1,400 servers. I think we have reached a point where it is extremely hard for us to purchase VPSes from interesting ASNs.
To support lots of ISPs, universities, and other organizations, we have been asking them if they have an old laptop lying around that they can host our software on. The goal is to reach 70,000 probes within the next couple of years.
It is simple probe software, and we share some data or can pay 20-30 bucks a month for it. We have a couple of NUCs in remote regions but no laptops yet. Basically, we are happy even if an ISP (or anyone) hosts our software on a laptop dangling by a charging cable from a socket in some random corner.
We can send over a RPI or NUC, but with remote hands, and setup and all that it can get quite expensive. So, we always first ask if they have an old laptop lying around and can install our software there.
For us, at least, we are not interested in the hardware aspect. We are interested in the network. The old laptop approach only acts as a last resort. We will be more than happy to go with the predictability of a traditional VPS hosted in a traditional data center. Colocation, no matter what form it takes, involves a lot of moving parts.
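As an illustration of how simple such a probe can be, here's a toy TCP round-trip measurement. This is a sketch, not IPinfo's actual software; it spins up a listener on localhost so it's self-contained, whereas a real probe would measure many remote hosts and ship results to a collector.

```python
import socket, threading, time

# Toy network probe: measure TCP connect round-trip time to a target.
# For a self-contained demo we probe a listener on localhost; a real
# probe would target remote hosts across many networks.

def tcp_connect_rtt(host, port):
    """Time a full TCP connect to (host, port), in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

# Local listener standing in for a remote measurement target.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=lambda: server.accept(), daemon=True).start()

rtt_ms = tcp_connect_rtt("127.0.0.1", port)
print(f"connect RTT: {rtt_ms:.2f} ms")
server.close()
```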
Interesting challenge! My first thought: 70k probes is a lot, and setting that up is quite a task. Why not develop a phone app with exit-node capabilities (similar to Tailscale) so you can use that for probing? The real win is that people move around, getting you even more data points from other networks.
We actually have app-based data collection capabilities and initiatives. Our goal, or more appropriately, vision, is to map the internet in real time. This involves SSH access to devices to run different forms of measurements at a very high frequency and have control over those devices.
Managing 70k probes is not going to be super hard.
Managing 1,400 servers is just a normal business operation, not a technical challenge. Each probe has a standard OS-level configuration. Automation and configuration are deployed from a central system. Each probe is actively monitored and troubleshot. Data is dumped to a data warehouse. We make incremental improvements to our network. When servers go down, we talk to vendors.
We do a lot of novel engineering across the infrastructure, data, and research teams. Having a near-identical set of servers really allows us to focus on product and performance engineering, not troubleshooting engineering. With application-based probing, I assume things would get quite a bit more complicated, as there are different operating systems, different devices, etc.
For us, lately the challenge is not technical. It has been exclusively procurement. This quarter (https://ipinfo.io/blog/probenet-q1-2026-expansion), we exclusively focused on regional diversity which involved outreach to national ISPs or telecoms. Securing servers from telecoms is an extremely bureaucratic and expensive process. So, we are hoping to partner up with eyeball networks and the larger NOG community.
Any recommendations on inexpensive colo for personal projects/servers? A few years ago ran across a few links for places to host a box and I didn't save them, and have regretted it.
ISTR one was basically just industrial office space running a lower-tier colo, and another was some guys in a metro area who got a rack in a data center and were spreading the cost around with other like-minded folks. At my work I have machines in an Iron Mountain facility, but for personal projects I don't need anything like that; I'd just like something more capable than the couple of AWS VMs I'm paying $80/mo for.
There is one scenario it would be good for: people running stock trading programs often need better network connectivity and a more reliable always-on environment than they can get at home.
Not sure if this is legit... I could see it working well enough if they require the laptop to support at least, say, Thunderbolt 3/USB4; then they can use a single connection to a management/dock interface that includes a network connection (1Gb/2.5Gb).
The trouble is that a lot of laptops won't power on with the screen closed, and they have aggressive sleep/suspend behaviors in general. There's also the question of airflow in whatever shelving system is used, assuming 2-4 laptops per shelf per 1U. And one would probably want some means of ensuring appropriate driver support, or an appropriate Linux (or other) setup for said hardware.
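On the sleep/suspend point, at least a Linux install makes this configurable: systemd-logind can be told to ignore the lid switch entirely. A minimal sketch of the relevant settings (these are standard logind options; whether a given laptop's firmware also cooperates with the lid closed is a separate question):

```ini
# /etc/systemd/logind.conf - keep a lid-closed laptop running as a server
[Login]
HandleLidSwitch=ignore
HandleLidSwitchExternalPower=ignore
HandleLidSwitchDocked=ignore
IdleAction=ignore
```

After editing, `systemctl restart systemd-logind` (or a reboot) applies the change.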
While I can see it working, depending on shipping costs I can definitely see some problematic bits.
I’m curious if they remove the displays. Not every laptop works with the display closed and it might cause heat issues that throttle the CPU or reduce the life of the machine to run it like that long-term.
There is no way they are partnering with Hetzner, or charging just 7€/month flat rate... they specifically want to know the model of the laptop, and offer to send a courier to your door...
I would be really surprised if this was a scam. It doesn't have the smell of a scam at all. Who would target a very tech savvy audience just to get old laptops?
Given that the "sign up" link goes to a survey form, my guess is this is just some idea someone had and they made this page to see if anyone actually wants it before they put any effort into making it happen.
Colo scams are pretty common. Some percentage of people will offer to send expensive laptops, and the scammers can discard the rest of "interested customers".
It is not viable to colo old laptops; it's a regulatory nightmare, and Hetzner would NEVER accept those in their datacenters. It is also absurd to think they are partnering with Hetzner to begin with.
It makes no sense to believe they would even EXPORT laptops from Europe to the US if you choose the US location. It just makes no sense, so I don't get why I am getting downvoted.
I don't want to crap on peoples ideas. Really, I don't.
But taking some computer out of a closet with unknown hardware and turning it into a server, at scale, is an impossible scheme.
The only way to make it work would be to buy hundreds of laptops at once, refurb them, fit new storage, and standardize on custom power delivery. Because who wants hundreds of laptop PSUs plugged into power strips? And those do in fact die.
And then there's the horror of manually removing the wifi hardware and batteries. Battery disposal is an issue, and having worked on hundreds of laptops, some of them are major pains in the neck to get to the battery. Consumer HPs come to mind: the bottom cover can be difficult to remove without breaking any of the clips.
Yeah, this is a stupid idea. Old laptops don't have good performance per watt compared to new servers once you factor in that they are many, many times slower.
A ton of old batteries in one place. The batteries themselves are probably not a concern, but if something happens to the facility, then you have a ton of problems.
Security of the facility is a concern if someone can get in and walk out with an armful of laptops.
Laptops don’t scale from a stacking standpoint. Sure, close the lids and line them up; then you’ll have a lot of failures. Older laptops are intended to cool through the keyboard and the top vents by the screen.
+ The usual limiting factor in data centers is power, and laptops, built for battery life, could deliver more work per watt than comparably old servers.
+ Laptops are generally compact and so achieve greater rack densities than individual colo servers. I figure 34 to 51 laptops could be stored in 9 or 10U, either 2 or 3 rows deep by 17 wide.
+ Shipping a laptop to a co-lo data center is cheaper than a 1U server.
~ Reusing electronics saves e-waste and reduces unnecessary consumption, either old servers or old laptops.
- Laptops lack ECC RAM.
- Laptops typically don't use nearly as fast CPUs or RAM as contemporaneous servers.
- Laptops are limited in their storage options.
- Laptops lack the remote, lights-out management of real servers.
- Repairing failed components is more difficult on old laptops than on old servers.
~ Old laptops tend not to have usable batteries, so there's unlikely to be much of an inherently distributed battery-backup capability.
- Old laptop batteries of various origins could be a li-ion NMC fire hazard at scale.
~ Reusing old stuff at any sort of scale would prefer standardization, and it's sometimes difficult to amass many of the same discontinued model.
Conclusion: Do it if it works for you. It's kinda cool.
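The density guess in the list above can be sanity-checked with quick arithmetic. The laptop thickness and usable rack width below are rough assumptions, not measurements from the site:

```python
# Rough rack-density estimate for laptops stored vertically on edge.
# Dimensions are approximate assumptions for a typical 14-15" laptop.

RACK_WIDTH_MM = 450        # usable interior width of a 19" rack (approx.)
LAPTOP_SLOT_MM = 25        # per-laptop slot width, with breathing room

per_row = RACK_WIDTH_MM // LAPTOP_SLOT_MM   # 18 slots; call it 17 with margin
for rows_deep in (2, 3):
    total = 17 * rows_deep
    print(f"{rows_deep} rows deep x 17 wide = {total} laptops in ~9-10U")
```

That reproduces the 34-51 range, versus roughly 9-10 conventional 1U servers in the same space.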
I think it's one of those ideas that only works with nostalgia or hoarding impulses to support it.
I think normal virtualization approaches are far more power efficient, at a fleet level, than any kind of cluster of laptop scenarios. You can pile in the cores and amortize the costs of memory controllers etc. over a large set of guests.
It is a funny way to get features of both worlds. One reason to want colo (rather than VMs) is for predictability, but laptops still give you the funny throughput problems, because of thermal throttling instead of competing guests.
Sure, that can work for individuals and small groups with physically separate high availability. It may be faster and simpler to just find a replacement, but I'm thinking about it from a permaculture perspective: sometimes old parts inventory exists somewhere for cheap, or it's only a small broken component that could be fixed, avoiding unnecessary e-waste and more spending on consumption just to fix a problem.
The typical enterprise server lifecycle of 4-6 years purposely throws away uncertain remaining value: budgets need to be spent, there's risk aversion to repairing what's considered "outdated", and newer equipment is faster and more energy-efficient. I'd guess the lifecycle is about the same for enterprise and personal laptops too.
Eeek, I can't imagine what this is like if it scales. What happens to the fire risk when there's 20,000 laptops with aging batteries all sitting together? I hope they take the batteries out; then again, many laptops use the battery to smooth out power fluctuations.
Laptops aren't designed to be servers. Peg your laptop's CPU and GPU at 100% and see how long it lasts; I've done this before and the answer is about two months. Sure, this effort isn't targeting that workload, but how many bad apples does it take to start a fire? On their page they say "kubernetes server - no problem", and Kubernetes DOES keep the CPUs busy; not pegged, but busy enough that they won't step down their frequency.
I admire the effort to reuse old tech, but boy oh boy would I not want to be a sysadmin here!
My old Lenovo T420 has been running 24/7 pegged as a multi-camera DVR since 2011, with no issues whatsoever. Of course the battery is removed, but I don't see many decent laptops struggling under load for prolonged periods.
I worked for a place that did something akin to this in the early 2010s. Someone figured out how to add 32-bit company laptops to the virtualization cluster (likely because they were using one as a stand-in for a server that was planned but not yet purchased), and once that work had been done they just kept "retiring" unserviceable company laptops to the cluster. Imagine a standard wire metro rack crammed into a telecom closet beside a normal server rack. Now imagine that metro rack literally full of Toshiba Satellite Pros from about 2005-9. The cluster hosted virtual machines for testing.
No fires, no hardware problems. No special cooling other than the mini-split that was in the closet to cool the server rack. They just kept trucking. But modern hardware is much more high strung and I don't doubt you'd have weird failures.
Edit: Back then VMs were how things were done and RAM was seemingly always the bottleneck by a mile, so the cluster did add up to a meaningful amount of extra performance compared to not having it.
Uh, yeah, I mean we 'colo' at work because it's cheaper than buying a Windows server with multiple RDP licenses. We have some legacy stuff that must be run on site... so we buy $200 laptops and people can remote in for years.
This seems very sketchy. Give us your laptop and we promise we won't keep it...
> © 2024 CoLaptop. All rights reserved.
Website copyright is out of date by two years... And the website has been online since then. https://crt.sh/?q=colaptop.pages.dev
> Thank you for your interest. Please submit the form below and we'll get back to you within 2 working days.
> - Team @ CoLaptop.com
Also, colaptop.com is not even registered anymore. If I had to guess, the pages.dev site stayed up but the domain and its email lapsed.
> > © 2024 CoLaptop. All rights reserved.
> Website copyright is out of date by two years... And the website has been online since then. https://crt.sh/?q=colaptop.pages.dev
That's exactly what it should be then. A copyright notice lists the year of publication. Not the current year.
> A proper copyright notice consists of three elements: a © symbol, the year of publication, and the copyright owner’s name.
https://copyrightalliance.org/faqs/what-is-copyright-notice/
1) You don't have to keep copyrights up to date (and in fact you don't have to put them at all), 2) Every single startup i've seen on HN is sketchy af. Racking laptops in a cage at a Hetzner DC is probably the least sketchy product i've seen here.
And honestly, not a terrible idea, I have old laptops that would work as a VPS. $7/month for somebody to host a public server for me, and not on my crappy residential isp? All I have to lose is an old laptop I haven't touched in 5 years? Sign me up
(they do need a real domain before i'll give them money tho, lol)
Yeah, but for $6/mo you can get a tiny Linode or Digital Ocean droplet and not worry about hardware failing. It's true that a laptop probably has more resources than the smallest VMs, but it has no remote management interface and can't scale if you suddenly had a surge of traffic.
> Yeah but for $6/mo you can get a tiny linode or digital ocean droplet
That gets you, what, 1 "vCPU" with maybe a gig of ram and a couple of dozen gig of disk.
If you (or a friend) work for a company of any size, there's probably a cupboard full of laptops that won't upgrade to Win11 sitting there doing nothing, which you could get for free just by asking the right person. It'll have 4 or 8 cores, each of which is more powerful than the "vCPU" in that droplet. It'll have 8 or maybe 16 gigs of RAM and at least half a TB of disk, and depending on the laptop it can quite likely be configured with half a TB of fast NVMe storage and a few TB of slower spinning rust.
If you want 8 vCPUs/cores, 16GB of RAM, and 500GB of SSD, all of a sudden Digital Ocean looks more like $250/month.
If you are somewhere in that grey area where you need more than 1 vCPU and 1GB of memory, grabbing the laptop out of the cupboard that your PM or one of the admin staff upgraded from last year, and shipping it off to a datacenter with your flavour of linux installed, seems like it's worth considering.
Hell, get together with a friend and have two laptops hosted for 14 Euro/month between you, and be each other's "failing hardware" backup plan...
> ...no remote management interface...
I bet colos will plug a KVM into your hardware and give you remote access to that KVM. I also bet rachelbythebay has at least one article that talks about the topic.
> ...can't scale if you suddenly had a surge of traffic.
1) If your public server serves entirely or nearly-entirely static data, you're going to saturate your network before you saturate the CPU resources on that laptop.
2) Even if it isn't, computers are way faster than folks give them credit for when you're not weighing them down with Kubernetes and/or running swarms of VMs. [0]
3) <https://www.usenix.org/system/files/conference/hotos15/hotos...> (2015)
[0] These are useful tools. But if you're going to be tossing a laptop in a colo (or buying a "tiny linode or [DO] droplet"), YAGNI.
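To put rough numbers on point 1: the link speed and response size below are assumptions for illustration, not anything from the site, but they show how quickly the network becomes the ceiling.

```python
# Back-of-envelope: at what request rate does the uplink saturate when
# serving static content? Both constants are illustrative assumptions.

LINK_BITS_PER_SEC = 1_000_000_000   # assume a 1 Gbps colo uplink
RESPONSE_BYTES = 100 * 1024         # assume ~100 KB per response

link_bytes_per_sec = LINK_BITS_PER_SEC / 8
network_bound_rps = link_bytes_per_sec / RESPONSE_BYTES
print(f"network-bound ceiling: ~{network_bound_rps:.0f} req/s")
```

That's on the order of 1,200 req/s before the NIC is full; even an old multi-core laptop serving static files out of page cache can comfortably exceed that on the CPU side.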
It could be a pre-sales site to estimate demand.
Colocation itself, though, isn't new at all. There are lots of different form factors to host: servers, Mac minis, and laptops are conceivable too, since they share the same kinds of parts that Mac minis have.
> Website copyright is out of date by two years...
Can you explain how a copyright can be "out of date by two years"?
I always thought the copyright notice should reflect the year of creation, and that it's actually bad (from a legal POV) to always show the current year through scripting.
The premise was kinda dumb, wouldn't be surprised if its just a scam.
So many people want to believe in this sort of thing for various reasons that I get fatigued at the very thought of trying to explain to people who believe in it earnestly that it is not a good idea. (e.g. commercial hosting services are really competitive; for a long time the cost of computing has been going down over time though I don't know if that is reversing because we've hit the end of the real Moore's law [1] or if it is a temporary blip)
[1] the motor behind it is cost reduction, once that stops it stops because we can't afford it anymore!
Well, it exists, but it exists if you’re willing to buy server hardware on eBay, hustle to get old parts working together, negotiate a good deal on a cabinet, get space from ARIN and announce it and so on. There are probably 10-50x cost efficiencies vs. renting 5 year old CPU families on AWS at huge markup.
A laptop isn’t the way to do that though. And your typical VC-fueled startup isn’t going to know how to do it either. It takes a very narrow slice of competence to be able to do that correctly.
More likely a prank.
I think it's most likely testing the waters for a real offering. It's not that weird. Many colo data centers already have policies about hosting laptops because it's already something that happens. It just isn't common and usually isn't for hosting servers.
If the battery in the laptop is still good, it comes with its own UPS. My MBPs haven't had an ethernet port in a minute, so do you have to supply your own adapters as well? You could fit ~15 MBPs on their edge in 9RUs. That'd be an interesting-looking rack. Not quite a blade chassis. It'd be rather boring looking, as there's no blinky-blinkies.
Putting a UPS in a rack is a prosumer/corporate IT thing, it’s not done in real datacenters.
They typically have their own UPS in another room and multiple power lanes. And it’s going to be much more reliable than a laptop battery.
I didn't really think that any of what I wrote would be taken seriously to the point of needing a retort. I mentioned blade servers and knew rack-unit measurements, which as context clues should have suggested I was familiar with actual data center equipment.
I would like to put my Raspberry Pi Pico in colocation, would it work?
There are a number of places that colocate normal Raspberry Pi.
https://lowendbox.com/blog/little-machines-in-big-datacenter...
I am sure that some of them either already colocate Pico ones too, or are willing to do so if asked.
The title says PoC, so I presume it's a PoC.
> Give us your laptop
There's no way to read this without hearing a Scottish accent. It's like a sleeper agent activation phrase.
https://www.youtube.com/watch?v=hKfAjlW6E30
> Website copyright is out of date by two years
It's fixed now.
And someone bought the .com domain: https://crt.sh/?id=25447880244
What if it’s a compute Ponzi scheme?
> Your old laptop packs more CPU power, RAM, and storage than their entry-level offerings - and with us, you'll pay just €7/month for professional hosting
This is basically the same price as the cheapest options on Hetzner: https://snipboard.io/C9epWo.jpg. Sure my old laptop does have more RAM and a bigger SSD, but I bet it's also less reliable than Hetzner's servers, and is likely to suddenly die some day. So is the tradeoff really worth it? It's hard for me to believe that this is a genuine improvement for most things. The only definite winning case I can think of is if I have a process I want to run, but I don't care if it just suddenly stops working. But when would that ever be the case? and to save a couple dollars per month?
Edit: Maybe this is what github is doing :P
> I bet it's also less reliable than Hetzner's servers, and is likely to suddenly die some day
I’m a happy Hetzner customer but I have had servers that I rented from them die a couple of times.
I rent physical servers from them that have been previously rented to other customers. At some point hard drives fail.
However, I have solid backup setup in place (ZFS send and recv to other physical hosts in different physical locations) with that in mind, so I haven’t lost data with Hetzner. But if I naively did not have any backup then data would have gotten lost a couple of times.
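For anyone curious what that pattern looks like, here's a minimal sketch of incremental ZFS send/recv replication. The pool, snapshot, and host names are made up, and the script only builds and prints the commands rather than running them:

```python
# Sketch of one incremental ZFS replication step: snapshot the dataset,
# then send the delta since the previous snapshot to a remote host over
# SSH. All names (tank/data, backup1.example.net) are hypothetical.

def zfs_replication_commands(dataset, prev_snap, new_snap, remote):
    """Build the shell commands for an incremental zfs send/recv."""
    snapshot = f"zfs snapshot {dataset}@{new_snap}"
    send_recv = (
        f"zfs send -i {dataset}@{prev_snap} {dataset}@{new_snap}"
        f" | ssh {remote} zfs recv -F {dataset}"
    )
    return [snapshot, send_recv]

for cmd in zfs_replication_commands("tank/data", "daily-1", "daily-2",
                                    "backup1.example.net"):
    print(cmd)
```

In practice you'd run something like this from cron or a systemd timer, rotating snapshot names, toward hosts in different physical locations as the commenter describes.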
Well, yeah, but that's not really a Hetzner thing. That's just computers in general.
Just monitor them so you can act proactively.
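A sketch of what "monitor so you can act proactively" can mean for disks: watch the SMART counters that predict failure and alert the moment they move. The attribute values below are inline sample data; a real setup would read them from `smartctl` output:

```python
# Toy proactive-monitoring check: flag a drive once SMART counters that
# predict failure (reallocated/pending sectors) become non-zero.
# Values are sample data here, not read from a real drive.

FAILURE_PREDICTORS = {"Reallocated_Sector_Ct", "Current_Pending_Sector"}

def drive_warnings(smart_attrs):
    """Return the names of failure-predicting attributes with raw value > 0."""
    return [name for name, raw in smart_attrs.items()
            if name in FAILURE_PREDICTORS and raw > 0]

sample = {"Reallocated_Sector_Ct": 8,
          "Current_Pending_Sector": 0,
          "Power_On_Hours": 41000}

print(drive_warnings(sample))  # a drive worth replacing before it dies
```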
Of course. Just pointing out that even if the hardware might be server grade, doesn’t mean one can assume that the risk of hardware failure is negligibly low. And that one always needs to have offsite backups.
Not sure how Hetzner works, but do they have IDRAC type access to their servers and/or remote hands available to fix stuff? Guess you'd be on the hook for that sort of thing here, making the Hetzner price more appealing if they do include that kind of functionality.
For physical machines, of course yes.
The linked one is a VPS, so fixing any trouble is easier.
> Edit: Maybe this is what github is doing :P
Announcing the new "mobile" tier on azure.
Great idea but is this real?
It's a page hosted on Cloudflare's "pages.dev" service. Their method of contact is a Google Form, which does list an email address on the domain "CoLaptop [dot] com", but that domain does not work as a web address.
I'm not sure they have their act together.
Old laptops as low-cost servers? Absolutely: build a homelab in your own basement, rent a cheap VPS, set up WireGuard and voila, instant data center for tens of dollars per month. It's not production grade but you'll learn a ton.
But colocation?
Strip away the learning component and add production uptime requirements - why would you even consider using crusty old laptops for this? If you have production grade needs, look to a standard cloud provider or, at the very least, a colo facility where you can put production-grade equipment.
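The "cheap VPS + WireGuard" homelab route mentioned above really is only a few lines of config. A minimal sketch of the VPS side, where every key, IP, and name is a placeholder, not a tested setup:

```ini
; /etc/wireguard/wg0.conf on the rented VPS (placeholders throughout)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
; the old laptop in your basement
PublicKey = <laptop-public-key>
AllowedIPs = 10.0.0.2/32
```

Bring it up with `wg-quick up wg0`, then forward the VPS's public ports to the laptop's tunnel address (10.0.0.2 here) with whatever firewall you use, and the laptop is reachable without touching your residential ISP's NAT.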
They aren't targeting big companies for sure, but maybe a small or medium-sized office could make use of this.
I don't see it. Hobby projects can use a VPN tunnel to make a data center from local equipment. Real projects that choose colocation have uptime requirements that simply can't be met by random consumer hardware. The Venn diagrams don't intersect.
There's no middle ground where you try to run a real business on old laptops. That's insane. You either keep things small/hobby and stay simple, or graduate to production-grade equipment once you have real requirements.
The middle ground, taking on production colocation problems plus the unreliability of random hardware, sounds like the worst of both worlds. There are both simpler and more robust options.
The problem with hosting locally is using residential internet.
In Australia, for example, we're capping out at 100Mbit/s upload speeds on plans that cost ~US$70/mo and regularly go down for maintenance.
In other countries with cheap symmetrical plans this may make more sense.
They aren't targeting anyone, it seems (and it looks like they aren't serious at all).
Just do the math: for a measly €2000 a month, the salary of a cashier in Amsterdam, you would already need about 286 clients, and that's before taxes and other costs.
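A quick sanity check on that break-even, assuming the advertised €7/month flat rate:

```python
import math

price_per_client = 7    # EUR/month, the advertised flat rate
target_revenue = 2000   # EUR/month, roughly one modest Amsterdam salary

clients_needed = math.ceil(target_revenue / price_per_client)
print(clients_needed)  # -> 286
```

And that is gross revenue for a single salary; add the datacenter's own colo and power costs and the required client count climbs further.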
I have always dreamed of substituting a really expensive rack of servers with a couple of elderly laptops, with their built-in UPS, handy screens, keyboards and trackpads. However, for pet projects, I now have a better way of being a cheapskate.
Some ecommerce software stacks really need gargantuan amounts of RAM and CPU, which gets expensive in the cloud. However, with some software it is possible to have everything massively cached, with the cloud doing the caching and the origin server in my basement, accessible only from the allowed caching layer, which keeps the setup reasonably secure and cheap.
Downsides to this, having customer details in the basement rather than a secure facility, but how many developers have huge customer databases just casually lying around on USB sticks and whatnot? It happens.
The core density is really low. You can run a 96-core Epyc from the previous generation at 700 W, and that's a lot of compute. It makes sense for a home server (and I have an old Mac playing that role at home), but otherwise I don't think it makes sense unless you're taking off the display and racking them super tight.
Even then, you’re probably better off with Cloudflare tunnel and using it as a home server.
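For the Cloudflare Tunnel route mentioned above, the config really is tiny. An illustrative `cloudflared` config where the tunnel UUID and hostname are placeholders:

```yaml
# /etc/cloudflared/config.yml (illustrative; UUID and hostname are placeholders)
tunnel: 6ff42ae2-765d-4adf-8112-31c55c1551ef
credentials-file: /etc/cloudflared/6ff42ae2-765d-4adf-8112-31c55c1551ef.json

ingress:
  - hostname: app.example.com
    service: http://localhost:8080
  # a catch-all rule is required as the final entry
  - service: http_status:404
```

Run `cloudflared tunnel run` on the laptop and it dials out to Cloudflare's edge, so the machine is reachable at the hostname without any inbound port forwarding on your home connection.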
> We're based in Amsterdam and aim to work with Hetzner
I wonder if Hetzner knows their aim.
> We might modify your laptop to remove or power down the battery, wireless radios, etc. to ensure it can be used safely in the data center.
Yeah, just use the DC's UPS.
The folks that run the colo I keep our servers in would beat me to death with a shoe if I did either of these things:
- Mount something in a rack not firmly attached to brackets or a shelf
- Install anything with a battery larger than you'd find in a RAID card
Not to mention all the other ways this is sub-par in terms of airflow, density, serviceability, out-of-band management, etc.
I get the allure of it, but I wouldn't really want my gear anywhere near a bunch of laptops stuck in a cabinet.
It’s not about your battery, it’s the battery in all the other laptops that would have me concerned. Plenty of fire risk.
Not just the batteries. User supplied power bricks sounds like an incredibly bad idea at pretty much any scale.
Wait, power bricks are a fire hazard?
Just gonna point this out since I noticed it a few weeks ago and the notice is still there: Hetzner has paused selling new colocation service: https://www.hetzner.com/colocation/
So this is probably a joke site or a scam.
I work for IPinfo and we operate a distributed network consisting of around 1,400 servers. I think we have reached a point where it is extremely hard for us to purchase VPSes from interesting ASNs.
To support lots of ISPs, universities, and different organizations we have been asking them if they have an old laptop lying around that they can host our software on. Goal is to reach 70,000 probes within the next couple of years.
It is a simple probe software and we share some data or we can pay 20-30 bucks a month for it. We have a couple of NUCs in remote regions but no laptops yet. Basically, we are even happy if an ISP (or any one) hosts our software from a laptop dangling by a charging cable from a socket in some random corner.
We can send over a RPI or NUC, but with remote hands, and setup and all that it can get quite expensive. So, we always first ask if they have an old laptop lying around and can install our software there.
For us, at least, we are not interested in the hardware aspect. We are interested in the network. The old laptop approach only acts as a last resort. We will be more than happy to go with the predictability of a traditional VPS hosted in a traditional data center. Colocation, no matter what form it takes, involves a lot of moving parts.
Interesting challenge! My first thought: 70k probes is a lot, and having to set that up is quite a task. Why not develop a phone app with exit-node capabilities (similar to Tailscale) so you can use that for probing? The real win is that people move around, getting you even more data points from other networks.
We actually have app-based data collection capabilities and initiatives. Our goal, or more appropriately, vision, is to map the internet in real time. This involves SSH access to devices to run different forms of measurements at a very high frequency and have control over those devices.
Managing 70k probes is not going to be super hard.
Managing 1,400 servers is just a normal business operation, not a technical challenge. Each probe has a standard OS-level configuration. Automation and configuration are deployed from a central system. Each probe is actively monitored and troubleshot. Data is dumped to a data warehouse. We make incremental improvements to our network. When servers go down, we talk to vendors.
We do a lot of novel engineering across the infrastructure, data, and research teams. Having a nearly identical set of servers really allows us to focus on product and performance engineering, not troubleshooting engineering. Application-based probing, I assume, would complicate things quite a bit, as there are different operating systems, different devices, etc.
For us, lately the challenge is not technical. It has been exclusively procurement. This quarter (https://ipinfo.io/blog/probenet-q1-2026-expansion), we exclusively focused on regional diversity which involved outreach to national ISPs or telecoms. Securing servers from telecoms is an extremely bureaucratic and expensive process. So, we are hoping to partner up with eyeball networks and the larger NOG community.
Any recommendations on inexpensive colo for personal projects/servers? A few years ago ran across a few links for places to host a box and I didn't save them, and have regretted it.
ISTR one was basically just industrial office space that was running a lower-tier colo, and another was some guys in a metro area that got a rack in a data center and were spreading the cost around with other like-minded folks. At my work I have machines in an Iron Mountain facility, but for personal projects I don't need anything like that; I'd just like something more capable than the couple of AWS VMs I'm paying $80/mo for.
I've been using Hetzner and OVH. I used to use GCP and AWS; my bills are now 1/10th of what they were.
If you do not use their platform-specific features, it's better to run on bare metal with redundancy.
Colocating a bunch of lithium-ion heat pillows all in one place, what could go wrong!
Most laptops work perfectly fine with the battery removed, and for those that don't, replacing it with a large capacitor is usually a solution.
There is one scenario it would be good for: people running stock-trading programs often need a better network and more always-on environment than they can get at home.
Does anybody know if they also accept mac minis? Or is the keyboard/display a fundamental requirement to their offering?
tons of places do mac mini colo, https://www.macminivault.com/mac-mini-colocation/
Marco tells us that if you have 48 Mac minis, buy them yourself and rent a rack.
How does that work if you don't live near the colo?
Most colos provide remote hands - a recent ATP talks about it.
https://www.colocrossing.com/managed-service/remote-hands/
Become the Cloud.
https://appleinsider.com/articles/26/04/07/giant-mac-mini-cl...
Not sure if this is legit... I could see it working well enough if they require the laptop to support at least, say, Thunderbolt 3/USB4; then they can use a single connection to a management/dock interface that includes a network connection (1Gb/2.5Gb).
The trouble is a lot of laptops won't power on with the screen closed and have heavy sleep/suspend behaviors in general. Then there's general airflow in whatever shelving system is used with the laptops, assuming 2-4 laptops per shelf per 1U. And one would probably want some means of ensuring appropriate driver support, or an appropriate Linux or other setup for said hardware.
While I can see it working, depending on shipping costs can definitely see some problematic bits.
lots of proxmox clusters in basements run on old laptops. my pile of t480s beats any cloud vm (except when my ISP goes down).
I presently use an extra laptop to run batch compute jobs. Easy, fast.
I’m curious if they remove the displays. Not every laptop works with the display closed and it might cause heat issues that throttle the CPU or reduce the life of the machine to run it like that long-term.
This is CLEARLY a scam.
There is no way they are partnering with Hetzner, or charging just €7/month flat rate... they specifically want to know the model of the laptop, and offer to send a courier to your door...
I would be really surprised if this was a scam. It doesn't have the smell of a scam at all. Who would target a very tech savvy audience just to get old laptops?
Given that the "sign up" link goes to a survey form, my guess is this is just some idea someone had and they made this page to see if anyone actually wants it before they put any effort into making it happen.
Colo scams are pretty common. Some percentage of people will offer to send expensive laptops, and the scammers can discard the rest of "interested customers".
It is not viable to colo old laptops, a regulatory nightmare; Hetzner would NEVER accept those in their datacenters. It is also absurd to think they are partnering with Hetzner to begin with.
It makes no sense to believe they will even EXPORT laptops from Europe to the US if you choose the US location. It just makes no sense, so I don't get why I am getting downvoted.
I don't want to crap on people's ideas. Really, I don't.
But taking some computer out of a closet, with unknown hardware, and turning it into a server, at scale, is an impossible scheme.
The only way to make it work would be to buy hundreds of laptops at once, refurb them, fit new storage, and standardize on custom power delivery, because who wants hundreds of laptop PSUs plugged into power strips? And those do in fact die.
And then there's the horror of manually removing wifi hardware and batteries. Battery disposal is an issue. And having worked on hundreds of laptops, some of them are major pains in the neck to get to the battery. Consumer HPs come to mind: the bottom cover can be difficult to remove without breaking any of the clips.
Point of Reference: 27 years in web hosting
Say what you want about an old laptop, they sure are a lot faster than a $150/mo azure VM. And to be clear, I mean a _LOT_ faster.
I looked it up for specifics.
Right now the closest I can see is that $121/mo gets you 4 Xeon Platinum 8370C cores and 16GiB of RAM [0] (storage not included!).
Somebody Geekbenched that config here [1]: 1274 single-core, 4256 multi-core.
That's kinda terrible, ngl. A mini PC with last-gen mobile parts like the Ryzen 5 7640HS gets 2610 single-core and 10768 multi-core [2].
[0] https://azure.microsoft.com/en-us/pricing/details/virtual-ma...
[1] https://browser.geekbench.com/v6/cpu/17547159
[2] https://browser.geekbench.com/v6/cpu/17541586
That's saying a lot about Azure, not the laptops.
A friend of mine sent it to me and it seems like an interesting option now that hardware pricing has gone insane?
This is the most vibe-coded looking website possible
It’s as if Claude Code and Bootstrap 3 had had an illegitimate child.
Yeah, this is a stupid idea. Old laptops don't have good performance per watt compared to new servers once you factor in that they are many, many times slower.
This is never a good idea.
A ton of old batteries in one place. The batteries themselves are probably not a concern, but if something happens to the facility, then you have a ton of problems.
Security of the facility is a concern if someone can get in and walk out with an armful of laptops.
Laptops don't scale from a stacking standpoint. Sure, close the lids and line them up; then you'll have a lot of failures. Older laptops are intended to cool through the keyboard and the top vents by the screen.
that's how my university did a linux cluster for exercises
This seems fishy...
7 euro a month and unlimited bandwidth? Seems unlikely.
Hmm, there might be something to this:
+ The usual limiting factor in data centers is power, so laptops could be better optimized for efficiency per watt than comparable old servers.
+ Laptops are generally compact and so achieve greater rack densities than individual co-lo servers. I'm thinking about 34 or 51 laptops could be stored in 9 or 10U either 2 or 3 rows deep by 17 wide.
+ Shipping a laptop to a co-lo data center is cheaper than a 1U server.
~ Reusing electronics saves e-waste and reduces unnecessary consumption, either old servers or old laptops.
- Laptops lack ECC RAM.
- Laptops typically don't use nearly as fast CPUs or RAM as contemporaneous servers.
- Laptops are limited in their storage options.
- Laptops lack remote, lights-out management of real servers.
- Repairing old failed laptop components is more difficult than old servers.
~ Old laptops tend not to have usable batteries, so there's unlikely to be much of an inherently distributed battery-backup capability.
- Old laptop batteries of various origins could be a li-ion NMC fire hazard at scale.
~ Reusing old stuff at any sort of scale would prefer standardization, and it's sometimes difficult to amass many of the same discontinued model.
Conclusion: Do it if it works for you. It's kinda cool.
I think it's one of those ideas that only works with nostalgia or hoarding impulses to support it.
I think normal virtualization approaches are far more power efficient, at a fleet level, than any kind of cluster of laptop scenarios. You can pile in the cores and amortize the costs of memory controllers etc. over a large set of guests.
It is a funny way to get features of both worlds. One reason to want colo (rather than VMs) is for predictability, but laptops still give you the funny throughput problems, because of thermal throttling instead of competing guests.
Aside from this probably being a scam or a dead project, they do say they either remove or disable the batteries. Either way, the battery can be removed.
> - Repairing old failed laptop components is more difficult than old servers.
I think it's a "run it until it's dead" kind of thing.
Sure, that can work for individuals and small groups with physically separate high availability. It may be faster and simpler to find another replacement, but I'm thinking about it from a permaculture perspective: sometimes old parts inventory exists somewhere for cheap, or it's only a small broken component that could be fixed, avoiding unnecessary e-waste and spending more money on consumption to fix a problem.
The typical enterprise server lifecycle of 4-6 years purposefully throws away uncertain remaining value, because budgets need to be spent, because of risk aversion to repairing what's considered "outdated", and possibly to acquire faster and more energy-efficient equipment. I would guess the lifecycle length is about the same for enterprise and personal laptops too.
Eeek, I can't imagine what this is like if it scales. What happens to the fire risk when there's 20,000 laptops with aging batteries all sitting together? I hope they take the batteries out; however, many laptops use batteries to smooth out power fluctuations.
Laptops aren't designed to be servers. Peg your laptop CPU and GPU at 100% and see how long it lasts; I've done this before and the answer is about two months. Yep, sure, this effort isn't targeting that workload, but how many bad apples does it take to start a fire? On their page they say "kubernetes server - no problem", and Kubernetes DOES keep the CPUs busy: not pegged, but busy enough that they won't step down their frequency.
I admire the effort to reuse old tech, but boy oh boy would I not want to be a sysadmin here!
My old Lenovo t420 has been running 24/7 pegged as a multi-camera DVR since 2011, no issues whatsoever. Of course the battery is removed, but I don't see many decent laptops struggling running under load for prolonged periods.
I worked for a place that did something akin to this in the early 2010s. Someone figured out how to add 32-bit company laptops to the virtualization cluster (likely because they were using one as a stand-in for a server that at the time would have been in the works but not yet purchased), and once that work had been incurred they just kept "retiring" unserviceable company laptops to the cluster. Imagine a standard wire metro rack crammed in a telecom closet beside a normal server rack. Now imagine that metro rack literally full of Toshiba Satellite Pros from about 2005-9. The cluster hosted virtual machines for testing.
No fires, no hardware problems. No special cooling other than the mini-split that was in the closet to cool the server rack. They just kept trucking. But modern hardware is much more high strung and I don't doubt you'd have weird failures.
Edit: Back then VMs were how things were done and RAM was seemingly always the bottleneck by a mile, so the cluster did add up to a meaningful amount of extra performance compared to not having it.
Wait, what's the point of this if I can have my old laptop running in my garage?
Colocation has reliable power, reliable environmental conditions, and internet connections that are better suited for running servers.
In theory: the data center they'd put your laptop in has a much faster, and more reliable, internet connection than your garage.
They can't profit from your garage.
pages.dev, you can't be serious.
I just don't understand why it isn't acceptable to use pages.dev.
Why must we all spend money on a domain to show off our projects?
Looks like an April 1st article, but there is no date on it.
sounds like a battery fire waiting to happen
Yeah for dev purposes perhaps. Production would be another story.
Uh, yeah, I mean we "colo" at work because it's cheaper than buying a Windows server with multiple RDP licenses. We have some legacy stuff that must be run on site... so we buy $200 laptops and people can remote in for years.
Is this how we bring "works on my machine" in production? /s
"But it works on my machine!"
"Great. Put your laptop in this box, and we'll send it to the DC."
"Done!"