30 comments

  • thorum 2 hours ago

    Developed by Jordan Hubbard of NVIDIA (and FreeBSD).

    My understanding/experience is that LLM performance in a language scales with how well the language is represented in the training data.

    From that assumption, we might expect LLMs to actually do better with an existing language for which more training code is available, even if that language is more complex and seems like it should be “harder” to understand.

    • adastra22 10 minutes ago

      I don’t think that assumption holds. For example, only recently have agents started getting Rust code right on the first try, but that hasn’t mattered much in practice because the Rust compiler and linters give such good feedback that the agent immediately fixes whatever goof it made.

      This does fill up context a little faster, but (1) not as much as debugging the same problem would in a dynamic language, and (2) better agentic frameworks are coming that “rewrite” context history for dynamic, on-the-fly context compression.

    • vessenes 2 hours ago

      A lot of this depends on your workflow. A language with great typing, type checking, and good compiler errors will work better in a loop than one with a lot of surface overhead and syntax complexity, even if the latter is well represented. This is the instinct behind, e.g., https://github.com/toon-format/toon, a JSON-alternative format. They test LLM accuracy with the format against JSON, and it generally comes out slightly ahead.

      Additionally just the ability to put an entire language into context for an LLM - a single document explaining everything - is also likely to close the gap.

      I was skimming some nano files and while I can't say I loved how it looked, it did look extremely clear. Likely a benefit.

    • Zigurd an hour ago

      It's not just how well the language is represented. Obscure-ish APIs can trip up LLMs. I've been using Antigravity for a Flutter project that uses ATProto. Gemini is very strong at Dart coding, which makes picking up my 17th managed language a breeze. It's also very good at Flutter UI elements. It was noticeably less good at ATProto and its Dart API.

      The characteristics of the failures have been interesting: as I anticipated, an over-ambitious refactoring was a train wreck, though easily reverted. But something as simple as regenerating Android launcher icons in a Flutter project was a total blind spot. I had to Google that like some kind of naked savage running through the jungle.

    • nxobject 2 hours ago

      I think it's depressingly true of any novel language/framework at this point, especially if they have novel ideas.

    • cmrdporcupine an hour ago

      Not my experience, honestly. With a good code base for it to explore and good tooling, and a really good prompt I've had excellent results with frankly quite obscure things, including homegrown languages.

      As others said, the key is feedback and prompting. In a model with long context, it'll figure it out.

      • rocha 29 minutes ago

        But isn't this inefficient, since the agent has to "bootstrap" its knowledge of the new language every time its context window is reset?

        • adastra22 13 minutes ago

          No, it gets it “for free” just by looking around when it is figuring out how to solve whatever problem it is working on.

    • whimsicalism 2 hours ago

      easy enough to solve with RL probably

  • forgotpwd16 9 minutes ago

    Seems like a simplified Rust with partial prefix notation (the rationale that this is better for LLMs is based on vibes, really) that compiles to C. Similar languages were posted here not too long ago: Zen-C => more features, no prefix notation / Rue => no prefix notation, compiles directly to native code (no C target). Surprisingly, compared to other LLM-"optimized" languages, it isn't much concerned with token efficiency.

  • deepsquirrelnet 21 minutes ago

    At this point, I am starting to feel like we don’t need new languages, but new ways to create specifications.

    I have a hypothesis that an LLM can act as a pseudocode-to-code translator, where the pseudocode can tolerate a mixture of code-like and natural-language specification. The benefit being that it formalizes the human as the specifier (which must be done anyway) and the LLM as the code writer. This might also enable lower-resource “non-frontier” models to be more useful. Additionally, it allows tolerance of syntax mistakes or, in the worst case, plain natural language if needed.

    In other words, I think LLMs don’t need new languages; we do.

  • cadamsdotcom 40 minutes ago

    There’s both efficacy and token efficiency to consider here.

    Seems unlikely for an out-of-distribution language to be as effective as one that’s got all the training data in the world.

    Really needs an agent-oriented “getting started” guide to put in the context, and evals vs. the same task done with Python, Rust etc.

  • simonw 2 hours ago

    I went looking for a single Markdown file I could dump into an LLM to "teach" it the language and found this one:

    https://github.com/jordanhubbard/nanolang/blob/main/MEMORY.m...

    Optimistically I dumped the whole thing into Claude Opus 4.5 as a system prompt to see if it could generate a one-shot program from it:

      llm -m claude-opus-4.5 \
        -s https://raw.githubusercontent.com/jordanhubbard/nanolang/refs/heads/main/MEMORY.md \
        'Build me a mandelbrot fractal CLI tool in this language' \
        > /tmp/fractal.nano
    
    Here's the transcript for that. The code didn't work: https://gist.github.com/simonw/7847f022566d11629ec2139f1d109...

    So I fired up Claude Code inside a checkout of the nanolang repo, told it how to run the compiler, and let it fix the problems... which DID work. Here's that transcript:

    https://gisthost.github.io/?9696da6882cb6596be6a9d5196e8a7a5...

    And the finished code, with its output in a comment: https://gist.github.com/simonw/e7f3577adcfd392ab7fa23b1295d0...

    So yeah, a good LLM can definitely figure out how to use this thing given access to the existing documentation and the ability to run that compiler.

    • nodja an hour ago

      I think you need to either feed it all of ./docs or give your agent access to those files so it can read them as reference. The MEMORY.md file you posted mentions ./docs/CANONICAL_STYLE.md and ./docs/LLM_CORE_SUBSET.md and they in turn mention indirectly other features and files inside the docs folder.

    • hahahahhaah 15 minutes ago

      But are you losing LLM horsepower that could go toward solving the actual task by doing so?

  • JamesTRexx 2 hours ago

    So, then if I want to use a certain terminal text editor to create a clone of it in nanolang, I'd end up typing nano nano.nano on the command line.

    I might accidentally summon a certain person from Ork.

    • jll29 16 minutes ago

      Make sure to create a "Getting Started" video with Nano Banana.

  • jmward01 an hour ago

    Just scanning through this: it looks interesting and is totally needed, but I think it is missing future use-cases and a discussion of decoding. It is all well and good to define a simple language focused on testing and the like, but what about live LLM control and interaction via a programming language? Sort of a conversation in code? Data streams in and function calls stream out, with syntax designed to minimize mistakes in calls and optimize the stream. What I mean by this is special block declarations like:

    ``` #this is where functions are defined and should compile and give syntax errors ```

    :->r = some(param)/connected(param, param, @r)/calls(param)<-:

    (Yeah, ugly, but the idea is there.) The point being that the behavior could change. In the streaming world it may, for instance, have guarantees about what executes and what doesn't in case of errors. Maybe transactional guarantees in the stream blocks, compared to pure compile-time optimization in the other blocks? The point here isn't that this is the golden idea, but that we should probably think more about the use cases. High on my list of use cases to consider (I think):

    - language independence: LLMs are multilingual and this should be multilingual from the start.

    - support streaming vs definition of code.

    - Streaming should consider parallelism/async in the calls.

    - the language should consider cached token states to call back to (define the 'now' for optimal result management; basically, the language can tap into LLM properties that matter).

    Hmm... That is the top of my head thoughts at least.

  • spicybright 2 hours ago

    One novel part here is that every function is required to have tests that run at compile time.

    I'm still skeptical of the value-add of teaching a custom language to an LLM instead of using something like Lua or Python and applying constraints like test requirements on top.
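    That constraint could plausibly be bolted onto an existing language. A minimal sketch in Python (the decorator and names here are hypothetical, not anything nanolang or another tool actually provides): attach test cases to a function and run them at definition time, so the module fails to import unless every declared case passes.

    ```python
    def with_tests(*cases):
        """Each case is (args, expected); run them when the function is defined."""
        def deco(fn):
            for args, expected in cases:
                got = fn(*args)
                assert got == expected, (
                    f"{fn.__name__}{args} returned {got!r}, expected {expected!r}"
                )
            return fn
        return deco

    # The module fails to import unless both cases pass,
    # mimicking "tests required at compile time".
    @with_tests(((2, 3), 5), ((-1, 1), 0))
    def add(a, b):
        return a + b
    ```

    A linter could then reject any function defined without the decorator, approximating the compile-time guarantee without a new language.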

  • fizlebit an hour ago

    Looks a bit like Rust. My peeve with Rust is that it makes error handling too much donkey work. In a large class of programs you just care that something failed, and you want a good description of that thing:

      context("Loading configuration from {file}")
    
    Then you get a useful error message by unfolding all the errors at whatever point in the program it makes sense to talk to a human, e.g. logs, RPC errors, etc.

    Failed: Loading configuration from .config because: couldn't open file .config because: file .config does not exist.

    It shouldn't be harder than a context call in functions. But somehow Rust conspires to require all this error-type conversion and question marks. It is all just a big uncomfortable donkey game, especially when you have nested closures forced to return errors of a specific type.

    • jll29 11 minutes ago

      I like your "context" proposal, because it adds information about developer intention to error diagnostics, whereas showing e.g. a call stack would just provide information about the "what?", not the "why?" to the end user facing an error at runtime.

      (You should try to get something like that into various language specs; I'd love for you to succeed with it.)

      EDIT: typo fixed.

    • wazzaps an hour ago

      You just described how the popular "anyhow" and "snafu" crates implement error handling
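      For reference, a minimal sketch of that style with the anyhow crate (the helper names load_config/render_error are made up for illustration):

      ```rust
      use anyhow::{Context, Result};
      use std::fs;

      // Attach a human-readable context line to the underlying io error.
      fn load_config(path: &str) -> Result<String> {
          fs::read_to_string(path)
              .with_context(|| format!("Loading configuration from {path}"))
      }

      // Unfold the whole error chain into one message, outermost context first.
      fn render_error(err: anyhow::Error) -> String {
          let mut out = String::from("Failed: ");
          for (i, cause) in err.chain().enumerate() {
              if i > 0 {
                  out.push_str(" because: ");
              }
              out.push_str(&cause.to_string());
          }
          out
      }

      fn main() {
          if let Err(e) = load_config("/no/such/dir/.config") {
              println!("{}", render_error(e));
          }
      }
      ```

      The `?` operator plus `.with_context(...)` at each fallible call is roughly the "context command in functions" described above.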

  • sheepscreek an hour ago

    Really clean language where the design decisions have led to fewer traps (cond is a good choice).

    It’s peculiar to see s-expressions mixed together with imperative style. I’ve been experimenting along similar lines - mixing s-expressions with ML style in the same dialect (for a project).

    Having an agentic partner toiling away with the lexer/parser/implementation details is truly liberating. It frees the human to explore crazy ideas that would not have been feasible for a side/toy/hobby project earlier.

  • abraxas 2 hours ago

    It seems that something that does away with human friendly syntax and leans more towards a pure AST representation would be even better? Basically a Lisp but with very strict typing might do the trick. And most LLMs are probably trained on lots of Lisps already.
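    As a toy illustration of that idea (everything here is hypothetical, not an existing language): programs as bare ASTs, written as nested tuples in Python, with a strict checker over them.

    ```python
    INT, BOOL = "int", "bool"

    # Operator signatures: (argument types, result type).
    SIGS = {
        "+": ([INT, INT], INT),
        "<": ([INT, INT], BOOL),
    }

    def typecheck(expr):
        """Strictly type-check an s-expression-style AST of nested tuples."""
        if isinstance(expr, bool):  # bool before int: bool is a subclass of int
            return BOOL
        if isinstance(expr, int):
            return INT
        op, *args = expr
        if op == "if":
            cond, then, alt = (typecheck(a) for a in args)
            if cond != BOOL or then != alt:
                raise TypeError(f"ill-typed 'if': ({cond}, {then}, {alt})")
            return then
        params, ret = SIGS[op]
        actual = [typecheck(a) for a in args]
        if actual != params:
            raise TypeError(f"{op}: expected {params}, got {actual}")
        return ret
    ```

    Here `("if", ("<", 1, 2), 10, 20)` checks as `int`, while `("+", 1, True)` is rejected; the model emits the AST directly and never has to get surface syntax right.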

  • boutell 2 hours ago

    I feel like the time for this was two years ago, and LLMs are now less bothered by remembering syntax than I am. It's a nice lisp-y syntax though.

  • prngl 2 hours ago

    Looks nice!