I'd be interested, if anyone has suggestions, in MPC applied to ML/AI systems -- it seems this is an underserved technique/concern in ML engineering, and I'd expect to see more on it.
There is a big overlap between Optimal Control and Reinforcement Learning, in case you didn't know.
Also, Steve Brunton does a lot of work on the interface between control theory and ML on his channel: https://www.youtube.com/channel/UCm5mt-A4w61lknZ9lCsZtBw/pla...
Another thing to keep in mind: AI/ML surrogates that can evaluate expensive functions quickly can also be integrated as an information source in model predictive control algorithms.
Exactly. ML models such as autoencoders can also be used for reduced-order modeling / dimensionality reduction, e.g. for MPC of fluid systems.
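To make that concrete, here's a rough sketch of the pattern (PyTorch, with made-up dimensions and layer sizes; just the shape of the idea, not a real fluid model): an autoencoder compresses the full-order state into a small latent vector, a small MLP serves as the surrogate for the latent dynamics, and an MPC rollout can then happen in that cheap latent space instead of the full simulator.

    import torch
    import torch.nn as nn

    STATE_DIM, LATENT_DIM, CTRL_DIM = 4096, 8, 2   # made-up sizes

    class Autoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(STATE_DIM, 256), nn.ReLU(),
                                     nn.Linear(256, LATENT_DIM))
            self.dec = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                                     nn.Linear(256, STATE_DIM))

        def forward(self, x):
            z = self.enc(x)           # encode full-order state to latent
            return self.dec(z), z     # reconstruction + latent code

    # surrogate for the latent dynamics: z_next ~ f(z, u)
    latent_dynamics = nn.Sequential(nn.Linear(LATENT_DIM + CTRL_DIM, 64), nn.ReLU(),
                                    nn.Linear(64, LATENT_DIM))

    ae = Autoencoder()
    x_t = torch.randn(1, STATE_DIM)   # stand-in for a flow-field snapshot
    u_t = torch.zeros(1, CTRL_DIM)    # candidate control input
    _, z_t = ae(x_t)
    z_next = latent_dynamics(torch.cat([z_t, u_t], dim=-1))   # cheap one-step rollout

Both networks would of course need to be trained on simulation data first; the point is just that once they are, each predictive step inside the MPC loop costs a couple of tiny matrix multiplies instead of a full-order solve.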
there's a lot of work in the broad area. most of it doesn't engage with the classical control theory literature (arguably it should).
some keywords to search for recent hot research would be "world model", "decision transformer", "active inference", "control as inference", "model-based RL".
Author of the post here - happy to answer any questions.
Great job! I'm working on a similar blog post, and it was fun seeing how you approached it. I was surprised the wasm implementation is fast enough; I was even considering writing WebGPU compute shaders for my solver.
hi, this is brilliant, thank you! I will definitely go through it soon.
I have been trying to figure something out for a while but maybe haven't quite found the right paper for it to click just yet: how would you mix this with video feedback on a real robot? Do you forward-predict the position and then have some way of telling whether the simulated image and reality overlap?
I've tried grounding models like CogVLM and YOLO, but often the bounding box is only barely good enough to face an object, not to actually reach out and pick it up.
There are grasping datasets, but then I think you still have to train a new model for your given object + gripper pair - so I'm not clear where the MPC part comes in.
so I guess I'm just asking for any hints/papers that might make it easier for a beginner to grasp.
thanks :-)
OT, but can you share the CSS you're using for your site (the blog)? I love how clean it is.
I ended up making my own theme, but my starting point was PicoCSS: https://picocss.com
Beautiful stuff, great post!
Thank you, really appreciate that.
Here's a (hacky) demo of MPC using MuJoCo in the browser: https://klowrey.github.io/mujoco_wasm/
I hacked it together using MPPI, and it only works on the cartpole model so I wouldn't have to dwell in JavaScript too long; just click the 'MPPI Controller' button and you can perturb the model and watch it recover.
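In case it helps anyone reading along, the core MPPI update is pretty compact. Here's a rough NumPy sketch of the same idea on a cartpole (my own guesses for the dynamics, cost, and hyperparameters; this is not the code behind the demo): sample noisy perturbations around a nominal control sequence, roll each one out through the model, and blend the perturbations with softmax weights on their rollout costs.

    import numpy as np

    def cartpole_step(x, u, dt=0.02, mc=1.0, mp=0.1, l=0.5, g=9.81):
        # x = [cart position, cart velocity, pole angle (0 = upright), pole angular velocity]
        pos, vel, th, thdot = x
        s, c = np.sin(th), np.cos(th)
        tmp = (u + mp * l * thdot**2 * s) / (mc + mp)
        thacc = (g * s - c * tmp) / (l * (4.0 / 3.0 - mp * c**2 / (mc + mp)))
        acc = tmp - mp * l * thacc * c / (mc + mp)
        return x + dt * np.array([vel, acc, thdot, thacc])

    def cost(x, u):
        pos, vel, th, thdot = x
        return 10 * th**2 + 0.1 * thdot**2 + 0.5 * pos**2 + 0.001 * u**2

    def mppi_step(x0, U, samples=256, lam=1.0, sigma=2.0):
        horizon = len(U)
        noise = np.random.randn(samples, horizon) * sigma   # control perturbations
        costs = np.zeros(samples)
        for k in range(samples):
            x = x0.copy()
            for t in range(horizon):
                u = U[t] + noise[k, t]
                x = cartpole_step(x, u)
                costs[k] += cost(x, u)
        w = np.exp(-(costs - costs.min()) / lam)             # importance weights
        w /= w.sum()
        return U + w @ noise                                 # updated nominal sequence

    x = np.array([0.0, 0.0, 0.3, 0.0])   # start with the pole tipped over a bit
    U = np.zeros(30)                     # nominal control sequence
    for _ in range(200):
        U = mppi_step(x, U)
        x = cartpole_step(x, U[0])       # apply the first control
        U = np.roll(U, -1); U[-1] = 0.0  # warm-start the next solve

The update rule itself is just this weighted average; presumably the hard part in the browser is making the rollouts fast enough per frame.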
I am so upset that my math skills lagged behind the rest of my technical skills; I struggle greatly to grok the math in this. I also find it quite difficult to brush them up enough to be happy with my level of understanding. I know I am able to: in high school I probably could have tackled this, but not now, with so many things sloshing around in my brain.
I love this kind of stuff because it seems like a roughly equal blend of art, science, and engineering.
I've taken the linked Russ Tedrake class and have to say I loved this. Please make more!
Thank you, I appreciate that!
I am delighted to see a renewed interest in the field of systems control! This is awesome work!