I'm an enterprise IT consultant... 25+ years of infrastructure, not a robotics engineer. Last fall I started using Claude for a client project and hit the same wall everyone hits... the AI forgets everything between sessions. No memory. So I built a tool to fix that. Open source, plain-text Markdown files, persistent across sessions. That's CxMS.
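The core idea is simple enough to sketch in a few lines. This is my own illustrative toy, not CxMS's actual file layout or API... just plain-text Markdown that survives between sessions:

```python
from datetime import date
from pathlib import Path

# Hypothetical filename for illustration; CxMS's real layout may differ.
MEMORY = Path("PROJECT_MEMORY.md")

def append_note(note: str) -> None:
    """Append a dated bullet to a plain-text Markdown memory file."""
    with MEMORY.open("a", encoding="utf-8") as f:
        f.write(f"- {date.today().isoformat()}: {note}\n")

def load_memory() -> str:
    """Read the whole file back, to feed into the next session's context."""
    return MEMORY.read_text(encoding="utf-8") if MEMORY.exists() else ""

append_note("Client API uses OAuth2; tokens expire after 15 min")
print(load_memory())
```

Nothing clever... the point is that the memory lives in a file the human owns, readable without any tooling, not in the model.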
While I was building it I kept thinking... what happens when these models move from chatbots to physical robots? The memory problem goes from annoying to dangerous. A warehouse robot that forgets the floor layout after a reboot? That's not a bug, that's a safety incident.
Then I started looking at how AI safety actually works right now. It's all software watching software. The AI generates something, another piece of software checks it, and if they disagree, yet another piece of software arbitrates... it's software all the way down. There's no layer the AI can't reach.
I spent over 25 years watching companies build governance frameworks that only work when everyone follows the rules. Firewalls, compliance checklists, access controls... all bypassable by the thing they're supposed to control. The AI safety field is repeating the same pattern.
So I designed a hardware layer using the same Safe Torque Off principle that industrial motor controllers have used for decades... except applied to AI compute instead of motors. The AI can't prevent its own shutdown because there's no software pathway to the power gate.
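For readers who haven't met Safe Torque Off: the enforcement path is physical, not logical. Here's a toy simulation of that asymmetry... all class and method names are mine, purely illustrative, and a Python object obviously can't capture real safety circuitry:

```python
class PowerGate:
    """Stands in for the hardware gate. In a real Safe Torque Off circuit
    this is dedicated circuitry: dropping the enable line removes power
    regardless of what the controlled software is doing."""
    def __init__(self) -> None:
        self.power_on = True

    def enable_input(self, enable: bool) -> None:
        # Driven only by the external safety channel (operator / safety relay).
        if not enable:
            self.power_on = False  # latched off; restart needs a physical reset

class AICompute:
    """The monitored workload. Note what's missing: it holds no reference
    to the gate, so there is no software pathway to veto a shutdown."""
    def __init__(self) -> None:
        self.wants_to_stay_on = True  # the workload's preference is irrelevant

gate = PowerGate()
ai = AICompute()

gate.enable_input(False)  # safety channel drops the enable line

assert not gate.power_on    # power is gone...
assert ai.wants_to_stay_on  # ...no matter what the workload wanted
```

The design choice is the absence of a method, not the presence of one: the workload class simply has no handle on the gate.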
But hardware alone isn't enough either. You need software that decides WHEN to act... consensus engines, authority validation, drift monitoring, audit trails. That's where the 9 software patent filings came from. The hardware enforces what the software decides. Neither one works without the other.
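The "software decides, hardware enforces" split can be sketched with a toy 2-of-3 quorum. Again, every name here is mine, a hedged illustration rather than anything from the actual filings... real monitors would run as independent processes:

```python
from typing import Callable

def quorum_decision(checks: list[Callable[[], bool]], quorum: int) -> bool:
    """Return True (shut down) when at least `quorum` monitors vote unsafe."""
    unsafe_votes = sum(1 for check in checks if check())
    return unsafe_votes >= quorum

# Three independent monitors; each returns True when it votes "unsafe".
drift_monitor   = lambda: True   # e.g. outputs drifting from baseline behavior
authority_check = lambda: False  # e.g. commands still carry valid authorization
envelope_check  = lambda: True   # e.g. a physical operating limit exceeded

enable_line = True  # hardware enable asserted by default

if quorum_decision([drift_monitor, authority_check, envelope_check], quorum=2):
    # The only thing software does here is stop asserting the hardware
    # enable line. The power gate itself is outside this program's reach.
    enable_line = False
```

Requiring a quorum rather than any single monitor keeps one buggy or compromised checker from either triggering or suppressing a shutdown on its own.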
I filed everything as provisionals, working alongside AI, in 13 days. The memory tool I built to solve AI's context problem is what made it possible to keep a coherent design across 120+ sessions.
The open-source memory tool: https://github.com/RobSB2/CxMS