I like to digitize my handwritten notes. For the basic transcription, there isn't much prompting to do, but I do tell it the stylistic choices I make and how to interpret them as markdown, like mapping open to-dos drawn as circles, and closed to-dos drawn as circles with Xs, to their markdown equivalents.
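The markdown equivalents here are presumably GitHub-style task-list syntax, so a transcribed list would come out something like this (the items are made up for illustration):

```markdown
- [ ] book dentist appointment   (drawn on paper as an open circle)
- [x] send weekly review         (drawn as a circle with an X)
```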
I also upload some common templates for things like my weekly reviews and tell it to use a template if applicable.
I'm sure if I drew diagrams and told it it could use Mermaid, it'd do a good job too. Would like to try when I get the chance.
It saves _so_ much time getting written notes into text. Writing things out helps me plan, but I much prefer to have content digital for syncing, backups, and searching.
This is all in a Claude project for reuse, but I've found most LLMs do a solid job, even the cheap ones like Gemini 2.5 flash (or whatever the low cost current Gemini model is).
The most valuable queries are the ones whose answer I already know in advance; I'm just too lazy to craft the answer myself. Just like you did. If I were assigned your exact same task with Terraform (something I don't have much experience with), I wouldn't be able to successfully query the LLM to do the job.
I was working with MediaPipe BlazePose, which gives 33 pose points in world space, but wanted "the pose to always point forward" (virtually this prompt exactly).
It one-shotted 600 lines of code which did the job perfectly. It understood from context the center of the body, how to calculate the body normal, and to rotate each point around that, all while handling edge cases to avoid errors. It would've taken me hours if not days to tweak it manually to work.
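The core of that transformation fits in far fewer lines once you know the trick. This is my own minimal reconstruction, not the generated code: it assumes y is the vertical axis, uses MediaPipe Pose's standard landmark indices (11/12 shoulders, 23/24 hips), and yaws all points about the torso center so the body normal lands on +z.

```python
import math

# BlazePose world-landmark indices (MediaPipe Pose convention)
LEFT_SHOULDER, RIGHT_SHOULDER = 11, 12
LEFT_HIP, RIGHT_HIP = 23, 24

def face_forward(points):
    """Rotate 33 (x, y, z) world points about the vertical (y) axis
    so the body's forward normal points along +z. Returns a new list."""
    lh, rh = points[LEFT_HIP], points[RIGHT_HIP]
    ls, rs = points[LEFT_SHOULDER], points[RIGHT_SHOULDER]

    # Body center: midpoint of the four torso landmarks (x and z only).
    cx = (lh[0] + rh[0] + ls[0] + rs[0]) / 4
    cz = (lh[2] + rh[2] + ls[2] + rs[2]) / 4

    # Left-to-right hip vector projected onto the ground plane.
    across_x, across_z = rh[0] - lh[0], rh[2] - lh[2]
    norm = math.hypot(across_x, across_z)
    if norm < 1e-9:  # degenerate pose: hips coincide in the xz-plane
        return list(points)

    # Body normal = the "across" vector rotated 90 degrees in xz.
    fwd_x, fwd_z = -across_z / norm, across_x / norm

    # Yaw needed to bring that normal onto +z, then rotate every
    # point about the vertical axis through the body center.
    yaw = math.atan2(fwd_x, fwd_z)
    c, s = math.cos(-yaw), math.sin(-yaw)
    out = []
    for x, y, z in points:
        dx, dz = x - cx, z - cz
        out.append((cx + dx * c + dz * s, y, cz - dx * s + dz * c))
    return out
```

The real generated code apparently also handled edge cases beyond the degenerate-hips check sketched here.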
another example just from today:
I merely selected an entire JSON file, with a two-word prompt: "generate schema"
it one-shotted a 600-line JSON schema, plus a 200-line TypeScript schema, a 150-line Python dataclass model, and a README, all completely unsolicited!
(cursor agent mode)
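A toy version of what "generate schema" has to do, inferring a JSON Schema from a single sample value, fits in a page of Python. This is just an illustrative sketch; real generators handle unions, optional fields, and string formats far more carefully:

```python
def infer_schema(value):
    """Infer a minimal JSON Schema fragment for one sample value.

    Toy implementation: treats every object key as required and
    types arrays from their first element only.
    """
    if isinstance(value, bool):      # check bool before int: bool is an int subclass
        return {"type": "boolean"}
    if isinstance(value, int):
        return {"type": "integer"}
    if isinstance(value, float):
        return {"type": "number"}
    if isinstance(value, str):
        return {"type": "string"}
    if value is None:
        return {"type": "null"}
    if isinstance(value, list):
        return {"type": "array", "items": infer_schema(value[0]) if value else {}}
    if isinstance(value, dict):
        return {
            "type": "object",
            "properties": {k: infer_schema(v) for k, v in value.items()},
            "required": list(value),
        }
    raise TypeError(f"unsupported JSON type: {type(value)!r}")
```

Hallucinated or dropped properties (as in the reply below) are exactly what this kind of mechanical inference avoids, at the cost of reading no semantics into the data.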
Have you checked it yet?
Last time I did that from an OpenAPI file rather than transforming it (I was lazy), it hallucinated a bunch of properties and left a load off too.
That was a year or so ago.
a lot changed in a year
Two words: “devil's advocate”.
Any time I'm trying to think through something, want an “opinion” about design choices, or wonder if I'm missing something, I type those two words in so it will be critical.
My next favorite prompt, “I’m having an issue with $X and having a hard time tracking it down. Help me work backwards. Don’t assume anything. Ask me clarifying questions as needed”. It’s great for rubber ducking.
For AWS troubleshooting, I ask it to give me AWS CLI commands to help it help me debug, and to always append “ | pbcopy” so I can just paste the output back.
To be honest, the thing I find myself doing most is asking the LLM to keep things to a set number of sentences or paragraphs.
"In 4 sentences, how would you do x".
"In 2 paragraphs summarise the pros and cons of y".
They're not really specific coding tasks, but I ask these types of questions a lot because often I'm not trying to become an expert or deeply understand something, just to get a feel for the consensus view.
LLMs tend to be verbose by default.
In terms of coding I often ask, "Don't make changes, but how would you improve this piece of code?" Or "Don't make changes, but what's wrong with this test?".
I find Cursor at least loves to make changes when I didn't really want it to. I was just asking for some thoughts / opinions.
At least two of the VSCode AI plugins I've tried out have an "ask" mode that explicitly does not change anything.
how many r's in strawberry
"add tests to"