I have 25Gbps from Init7 at home. My "router" is a Minisforum MS-01 with a second-hand Mellanox ConnectX-5, running VyOS.
My main home server is a Supermicro SYS-510D-4C-FN6P. It has dual 25Gbps ports onboard, plus an Intel E810-XXVDA4T with another 4x 25Gbps ports.
Both of them are perfectly capable of saturating their ports with stock Linux forwarding, no DPDK, VPP, or anything like that, without breaking a sweat. And both were substantially cheaper than the machine in the article.
Is there something I'm missing? Why does this workstation need a ~$1000 motherboard and a ~$1000 Xeon CPU? Those two components alone cost more than either of my computers and seem like severe overkill.
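For reference, the "stock forwarding" above is just the kernel's ordinary forwarding path. A minimal sketch of the relevant sysctl knobs on a plain Linux box (an illustration only; the poster's actual VyOS config isn't shown in this thread):

```
# /etc/sysctl.d/99-forwarding.conf -- minimal sketch, not the poster's config
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
```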
SCION is much slower than normal IP.
Huh?
"SCION OSS border router performance reached a ceiling of around 400k-500k packets per second, which is roughly equivalent to 5-6 Gbit/s at a 1500-byte MTU." vs. 1.4 M PPS for IP (on an older CPU) https://toonk.io/linux-kernel-and-measuring-network-throughp...
Ah. Thanks!
My understanding is that the setup needs to let them do packet routing at those speeds, not just send and receive, in order to simulate SCION.
Ah, so they need to hold giant routing tables in memory and do lookups in them or something like that?
It does not look like it [1]. It appears to be a protocol that enumerates your exact path, interface by interface, in every data packet, so you can just blindly forward to the next hop written in the packet itself.
My guess is that a competent, efficient implementation should be able to run the forwarding logic at ~30-100 million packets per second per core. At a 1500-byte MTU that is roughly 360-1,200 Gb/s per core, so you would bottleneck on memory bandwidth if you do even a single copy.
[1] https://www.ietf.org/archive/id/draft-dekater-scion-dataplan...
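To make that concrete, here is a minimal sketch of the hop-field idea in Go. The struct layout and field names are simplified illustrations, not the actual wire format from [1]:

```go
package main

import "fmt"

// hopField is a simplified stand-in for the hop fields described in [1].
type hopField struct {
	consIngress uint16  // interface the packet should arrive on
	consEgress  uint16  // interface to forward it out of
	mac         [6]byte // per-hop message authentication code (see below)
}

// packet carries its full path; the router never consults a routing table.
type packet struct {
	currHF int        // index of the current hop field
	path   []hopField // the path written into the packet by the sender
}

// nextEgress is the entire "routing" decision: read the pre-computed hop
// field and advance the pointer. No longest-prefix-match lookup needed.
func nextEgress(p *packet) (uint16, error) {
	if p.currHF >= len(p.path) {
		return 0, fmt.Errorf("path exhausted")
	}
	hf := p.path[p.currHF]
	p.currHF++
	return hf.consEgress, nil
}

func main() {
	p := &packet{path: []hopField{{consIngress: 1, consEgress: 7}}}
	out, _ := nextEgress(p)
	fmt.Println("forward out interface", out) // -> 7
}
```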
Don't forget checking the MACs (the per-hop message authentication codes).
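Each hop field carries a MAC that the border router must verify before forwarding. A sketch of the shape of that check (the draft specifies an AES-based MAC; HMAC-SHA256 stands in below only because it is in Go's standard library, and the key and byte layout are illustrative assumptions):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// verifyHopMAC checks a truncated per-hop MAC in constant time.
// SCION's draft uses an AES-based MAC over the hop field; HMAC-SHA256
// is a stand-in here, and the key/layout are illustrative assumptions.
func verifyHopMAC(key, hopFieldBytes, packetMAC []byte) bool {
	h := hmac.New(sha256.New, key)
	h.Write(hopFieldBytes)
	expected := h.Sum(nil)[:len(packetMAC)] // hop MACs are truncated
	return hmac.Equal(expected, packetMAC)
}

func main() {
	key := []byte("per-AS secret key (assumption)")
	hf := []byte{0x00, 0x01, 0x00, 0x07} // fake hop field bytes
	h := hmac.New(sha256.New, key)
	h.Write(hf)
	mac := h.Sum(nil)[:6] // sender side: compute the truncated MAC
	fmt.Println("MAC ok:", verifyHopMAC(key, hf, mac)) // -> true
}
```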
Most of this was "enthusiasts playing with big-boy stuff", but it turned out OK in the end.
Nice write-up! For this sort of thing, I have leaned towards AMD Epyc, Intel E810, and DPDK for the software stack. Unfortunately, the Supermicro H13SSL line of mobos lately appears to have become near-unobtainable, with ridiculous 6+ month lead times.
Why that mobo specifically?
No idea. You can still get one-off boards here and there, but buying anything in quantity has been tricky. I can only surmise that Supermicro's resources are largely tied up with the AI data center build-out, with everything else relegated to short runs.
It is too bad this important work needed to be done on the cheap. You'd think that if the Swiss National Bank was involved, you could get a proper budget...
It would have been a lot easier to focus on the important implementation details if the server were an off-the-shelf Lenovo datacenter server (SD550?) with a pair of 100 Gb/s NVIDIA cards in it.
(Source: last month I set up a machine like this for a colleague to do approximately the same task. I spent "copy and paste the production server config" time on it, not a week.)
Wow, 249 CHF for 8x fans is insane. The grip Noctua has on people! Nice workstation.
They aren't cheap, but Noctua's latest 120mm fans are arguably as good as it gets, in quantifiable ways: https://www.hwcooling.net/en/noctua-nf-a12x25-g2-pwm-the-kin...
Personally, I was always a fan of just going with the largest fans possible. I'm surprised we don't see more cases designed around 140mm and larger; 200mm is much less common but has a more pleasing noise profile.
I'm also a fan of that sort of setup. A Fractal Meshify 2 XL will fit a bunch of 140mm fans, or you can get the Torrent, which is smaller but has 2x 180mm fans up front. I have both and would recommend them, though the Torrent is a tight fit for a big board. Also, the shield on the back of the Asus W790 motherboards interferes with the cable routing grommets on the motherboard tray, so you have to remove them.
Oxide Computer has entered the chat...
Link?
https://oxide.computer/blog/the-cloud-computer
Noctua makes really good fans, I'm told. Want to get on their level and make a similar amount of money? In a world of slop, quality engineering is valuable.