I find the "Can you ..." phrasing used in this demo/project fascinating. I would have expected the LLM to basically say "Yes I can, would you like me to do it?" to most of these questions, rather than directly and immediately executing the action.
It's funny that we're getting so much attention funneled towards the thought-to-machine I/O problem now that LLMs are on the scene.
If the improvements are beneficial now, then surely they were beneficial before.
Prior to LLMs, though, we could have been making judicious use of simple algorithmic approaches to interpret natural-language constructs as a command language. We didn't see a lot of interest in it.
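To make the point concrete, here is a minimal sketch of the kind of purely algorithmic natural-language command handling that was possible pre-LLM: a handful of regex patterns mapped to shell commands. The command set and patterns are hypothetical, chosen only to illustrate the approach.

```python
import re

# Hypothetical pattern table: each entry pairs a regex over a natural-language
# request with a handler that builds the corresponding shell command.
PATTERNS = [
    (re.compile(r"(?:can you )?(?:list|show) (?:the )?files(?: in (?P<dir>[\w/~.-]+))?", re.I),
     lambda m: f"ls {m.group('dir') or '.'}"),
    (re.compile(r"(?:can you )?delete (?P<file>[\w/~.-]+)", re.I),
     lambda m: f"rm {m.group('file')}"),
]

def parse_command(text):
    """Map a natural-language request to a shell command, or None if unrecognized."""
    for pattern, handler in PATTERNS:
        m = pattern.match(text.strip())
        if m:
            return handler(m)
    return None

print(parse_command("Can you list the files in /tmp?"))  # -> ls /tmp
print(parse_command("delete notes.txt"))                 # -> rm notes.txt
print(parse_command("what time is it"))                  # -> None
```

It's brittle compared to an LLM, but it shows that the "natural phrasing in, action out" loop didn't require one.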
I finally got around to trying this out. Here's how to run it using uvx (so you don't need to install anything first):
I took the simplest route and pasted in an OpenAI API key, then typed my question. It generated a couple of chunks of Python, asked my permission to run them, ran them, and gave me a good answer. Here's the transcript: https://gist.github.com/simonw/f78a2ebd2e06b821192ec91963995...