27 comments

  • Barathkanna 9 minutes ago

    This is really clever. Dotprompt as a thin, pipe-friendly layer around LLMs feels way more ergonomic than spinning up a whole agent stack. The single-file + stdlib approach is a nice touch too. How robust is the JSON schema enforcement when chaining multiple steps?
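
    (For illustration, assuming runprompt reads stdin and prints the model's reply to stdout, chaining might look something like the sketch below; the file names are made up.)

        # the first prompt emits JSON per its output schema; the second consumes it
        cat meeting-notes.txt \
          | runprompt summarize.prompt \
          | runprompt extract-actions.prompt \
          > actions.json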

  • cootsnuck 5 hours ago

    This is pretty cool. I like using snippets to run little scripts I have in the terminal (I use Alfred a lot on macOS). Right now I just make LLM requests manually in the scripts when needed, but I'd actually rather have a small library of prompts and then be able to pipe inputs and outputs between different scripts. This seems pretty perfect for that.

    I wasn't aware of the whole ".prompt" format, but it makes a lot of sense.

    Very neat. These are the kinds of tools I love to see. Functional and useful, not trying to be "the next big thing".

  • dymk 6 hours ago

    Can the base URL be overridden so I can point it at e.g. Ollama or any other OpenAI-compatible endpoint? I’d love to use this with local LLMs, for the speed and privacy boost.
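
    (For context, Ollama exposes an OpenAI-compatible API at a fixed local address; the snippet below is a hypothetical sketch, and the variable name is a placeholder rather than a documented runprompt option.)

        # Ollama's OpenAI-compatible endpoint runs on localhost:11434 by default;
        # the variable and prompt file name here are placeholders.
        export OPENAI_BASE_URL="http://localhost:11434/v1"
        echo "hello" | runprompt greet.prompt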

  • gessha 2 hours ago

    Just like Linus being content with other people working on solutions to common problems, I’m so happy that you made this! I’ve had this idea for a long time but haven’t had the time to work on it.

  • tomComb 5 hours ago

    Everything seems to be about agents. Glad to see a post about enabling simple workflows!

  • oddrationale 5 hours ago

    Interesting! Seems there is a very similar format by Microsoft called `.prompty`. Maybe I'll work on a PR to support either `.prompt` or `.prompty` files.

    https://microsoft.github.io/promptflow/how-to-guides/develop...

    • chr15m 4 hours ago

      Oh interesting. Will investigate, thanks!

  • __MatrixMan__ 5 hours ago

    It would be cool if there were some cache (invalidated by hand, potentially distributed across many users) so we could get consistent results while iterating on the later stages of the pipeline.

    • stephenlf 4 hours ago

      That’s a great idea. Store inputs/outputs in XDG_CACHE_HOME/runprompt.sqlite.
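
      A rough sketch of that idea as a wrapper script (assuming the sqlite3 and sha256sum CLIs are available, and that `runprompt FILE` reads stdin and writes the response to stdout):

          #!/bin/sh
          # usage: cache-runprompt.sh FILE.prompt  (reads stdin like runprompt itself)
          # Reuses the cached response when the prompt file and stdin are identical;
          # delete the sqlite file to invalidate by hand.
          db="${XDG_CACHE_HOME:-$HOME/.cache}/runprompt.sqlite"
          mkdir -p "$(dirname "$db")"
          sqlite3 "$db" 'CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, output TEXT);'

          input=$(cat)
          key=$(printf '%s\n%s' "$(cat "$1")" "$input" | sha256sum | cut -d' ' -f1)

          cached=$(sqlite3 "$db" "SELECT output FROM cache WHERE key = '$key';")
          if [ -n "$cached" ]; then
              printf '%s\n' "$cached"
          else
              output=$(printf '%s' "$input" | runprompt "$1")
              escaped=$(printf '%s' "$output" | sed "s/'/''/g")
              sqlite3 "$db" "INSERT OR REPLACE INTO cache (key, output) VALUES ('$key', '$escaped');"
              printf '%s\n' "$output"
          fi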

    • chr15m 4 hours ago

      Do you mean you want responses cached to e.g. a file based on the inputs?

  • journal 2 hours ago

    I literally vibe-coded a tool like this. It supports image in, audio out, and archiving.

  • stephenlf 4 hours ago

    Fun! I love the idea of throwing LLM calls into a bash pipe.

  • cedws 5 hours ago

    Can it be made directly executable with a shebang line?

    • _joel 5 hours ago

      It already has one: https://github.com/chr15m/runprompt/blob/main/runprompt#L1

      If you curl/wget a script, you still need to chmod +x it. Git doesn't have this issue, as it retains the executable bit in the file metadata.
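
      For example (assuming the usual raw.githubusercontent.com layout for that repo):

          curl -LO https://raw.githubusercontent.com/chr15m/runprompt/main/runprompt
          chmod +x runprompt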

      • vidarh 5 hours ago

        I'm assuming the intent was to ask if the *.prompt files could have a shebang line.

           #!/usr/bin/env runprompt
           ---
           .frontmatter...
           ---

           The prompt.

        Would be a lot nicer, as then you can just +x the prompt file itself.
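
        With that in place, usage would be something like this (file names made up):

           chmod +x summarize.prompt
           cat notes.txt | ./summarize.prompt
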
    • chr15m 4 hours ago

      That's on my TODO list for tomorrow, thanks!

  • ltbarcly3 4 hours ago

    Ooof, I guess vibecoding is only as good as the vibecoder.

  • orliesaurus 5 hours ago

    Why this over the .md files I already make, which can be read by any agent CLI (Claude, Gemini, Codex, etc.)?

    • jsdwarf 4 hours ago

      Claude.md is an input to Claude Code, which requires a monthly plan subscription north of €15/month. The same applies to Gemini.md, unless you are OK with them using your prompts to train Gemini. The Python script works with a pay-per-use API key.

    • garfij 4 hours ago

      Do your markdown files have frontmatter configuration?
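
      (For anyone unfamiliar: a .prompt file carries YAML frontmatter in the dotprompt style, roughly like the sketch below; the model name is made up and the exact fields runprompt honors may differ.)

          ---
          model: openai/gpt-4o-mini
          input:
            schema:
              text: string
          output:
            format: json
            schema:
              summary: string
          ---
          Summarize the following text: {{text}}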

  • swah 5 hours ago

    That's pretty good, now let's see simonw's one...

  • stephenlf 4 hours ago

    Seeing lots of good ideas in this thread. I am taking the liberty of adding them as GH issues.