Two-person team, eighteen months bootstrapped.
We just shipped Synrix, a flat fixed-width memory-mapped lattice that holds fifty million persistent nodes on an 8 GB Jetson Orin Nano, with 186 ns hot-path lookup latency (under 3.2 cycles steady-state), a working set 33× larger than RAM via kernel-managed streaming, and full ACID persistence that survives kill -9 and power yanks with zero corruption. CPU-only: no GPU, no cloud, no telemetry. It speaks the Redis RESP protocol, so it works as a drop-in replacement.
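For anyone wondering how a flat fixed-width layout gets lookups that cheap, here's a minimal sketch of the general pattern, not our actual code: one mmap over an array of fixed-size records, so a read is a single offset computation and the kernel handles paging. The struct layout, record width, and file name below are all made up for illustration.

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define NODE_SIZE 128  /* made-up fixed record width */

    typedef struct {
        uint64_t key;
        uint8_t  payload[NODE_SIZE - sizeof(uint64_t)];
    } Node;

    int main(void) {
        int fd = open("lattice.dat", O_RDWR);  /* hypothetical file name */
        if (fd < 0) return 1;
        struct stat st;
        if (fstat(fd, &st) != 0) return 1;

        /* MAP_SHARED lets the kernel's page cache stream cold pages from
         * disk on demand, which is how a mapping can exceed physical RAM. */
        Node *nodes = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (nodes == MAP_FAILED) return 1;

        /* Fixed width means a lookup is base + index * sizeof(Node):
         * no pointer chasing, no allocator, no hashing on the hot path. */
        uint64_t idx = 12345;
        printf("node %llu key = %llu\n",
               (unsigned long long)idx,
               (unsigned long long)nodes[idx].key);

        munmap(nodes, st.st_size);
        close(fd);
        return 0;
    }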
Demo on the page: raw tegrastats, no cuts, power cable pulled mid-run, and everything comes back exactly where it left off.
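On the durability side, the real commit path is more involved than we can paste here, but the textbook two-msync ordering below shows the kind of invariant that makes a pulled cable recoverable: the commit flag only becomes durable after the payload does, so recovery never trusts a half-written record. Again, the names and layout are illustrative, not the Synrix format.

    #include <stddef.h>
    #include <string.h>
    #include <sys/mman.h>

    typedef struct {
        unsigned char data[120];
        unsigned int  crc;        /* checksum of data */
        unsigned int  committed;  /* 1 only after data is durable */
    } Record;

    /* Write-then-publish: payload first, flag second, each forced to
     * stable storage with MS_SYNC. A crash between the two msyncs leaves
     * committed == 0, and recovery simply skips the record. */
    static int commit_record(void *map_base, size_t map_len, Record *r,
                             const unsigned char *src, unsigned int crc) {
        memcpy(r->data, src, sizeof r->data);
        r->crc = crc;
        if (msync(map_base, map_len, MS_SYNC) != 0)  /* step 1: payload */
            return -1;
        r->committed = 1;
        return msync(map_base, map_len, MS_SYNC);    /* step 2: flag */
    }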
We’ve never seen anything hit these numbers on commodity edge hardware before.
Curious what people think:
For real-world robotics, drones, or autonomy, is sub-200 ns persistent lookup actually useful or just a benchmark flex?
Are there workloads where surviving total power loss with zero data loss would change architecture decisions?
Has anyone else ever gotten close to 50M persistent nodes on a Jetson without a GPU or external storage?
What would you try to break first if you had this running on your board tomorrow?
Happy to run it live on anyone’s hardware, share perf and cachegrind traces, or just talk through the weirdest edge cases you’ve seen. Feel free to check out our website for more info!