Types and Neural Networks

(brunogavranovic.com)

46 points | by bgavran 6 hours ago ago

8 comments

  • toolslive 2 hours ago

    > This is what most programmers do. They type raw text into the editor; the compiler either processes it into structured data, or returns an error the programmer has to internalise before resubmitting.

    In the 1980s, structural editors were quite popular (e.g. the BASIC editor on the ZX81). Using these, it is impossible for the programmer to create text that is not a valid program.

    • jnpnj 27 minutes ago

      I never saw any structured editor on these machines. How did they operate? Grammar-guided insertion?

  • Daffrin 40 minutes ago

    The connection between type systems and neural net structure is underexplored in practice. One thing I'd add: when you're dealing with multi-modal inputs in production — say, mixed structured and unstructured content — the type-safety problem compounds. You end up with implicit contracts at inference boundaries that are very hard to enforce.

    Has the author written anything on how this applies to transformer architectures specifically? The attention mechanism seems like a place where a richer type theory would be genuinely useful.

  • Xmd5a 4 hours ago

    Related:

    https://cybercat.institute/2025/05/07/neural-alchemy/

    https://cybercat.institute/2026/02/20/categorical-semantics-...

    https://cybercat.institute/2025/10/16/dependent-optics-ii/

    > The reason I put off starting the series for so long is one of the same reasons blocking the writing of the paper: some of the introductory material is some of the most difficult to write. It has been such a long time that I no longer know how to adequately explain why the problem is so difficult.

    My sympathies to Jules

  • big-chungus4 5 hours ago

    So the model generates code, and let's say it is wrongly typed; we then take the rightly typed version and use cross-entropy between them? Is that right? That just sounds like typical training, unless you can somehow take arbitrary code that the model generated and automatically find the rightly typed version, so you won't need a dataset for it.

    • yorwba 5 hours ago

      Rather than letting the model generate arbitrary code and type-checking it afterward, the author wants to pre-restrict the output with templates that are well-typed by construction and only let the model make choices between valid alternatives in that restricted output space.
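A toy sketch of what that restriction looks like at sampling time (my own illustration, not code from the post; the token strings and the `valid_tokens` set are made up): probability mass on type-invalid alternatives is dropped, and the model only chooses among the remaining well-typed options.

```python
import math

def softmax(logits):
    """Turn a dict of token -> logit into a probability distribution."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

def constrained_step(logits, valid_tokens):
    """Drop tokens the type system forbids in this position,
    then renormalize over the remaining valid alternatives."""
    masked = {t: v for t, v in logits.items() if t in valid_tokens}
    return softmax(masked)

# Toy example: the model "wants" to emit the string literal `"abc"`
# where an Int is required, so it is masked out and only the
# Int-valued alternatives remain.
logits = {'"abc"': 2.0, "0": 1.0, "1 + 1": 0.5}
probs = constrained_step(logits, valid_tokens={"0", "1 + 1"})
```

The point of the distinction upthread is that the choice between the surviving alternatives is still driven by the original logits; the restriction only narrows the menu.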

  • woolion 4 hours ago

    I'm not sure what to make of TFA (I don't have time right now to investigate in detail, but the subject is interesting). It starts by saying you can stop generation as soon as you have an output that can't be completed -- and there are already more advanced techniques that do that. If your language is typed, then you can use a "proof tree with a hole" and check whether there's a possible completion of that tree. References are "Type-Constrained Code Generation with Language Models" and "Statically Contextualizing Large Language Models with Typed Holes".
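For anyone unfamiliar with the "proof tree with a hole" idea, here is a deliberately tiny sketch of my own (monomorphic, prefix notation, not taken from either paper): track the stack of types still owed by a partial term, and reject generation the moment a token contradicts the hole it is supposed to fill.

```python
# Toy typed language in Polish (prefix) notation.
# token -> (argument types, result type); all made up for illustration.
SIG = {
    "add": (("Int", "Int"), "Int"),
    "lt":  (("Int", "Int"), "Bool"),
    "not": (("Bool",), "Bool"),
    "0":   ((), "Int"),
    "1":   ((), "Int"),
    "true":  ((), "Bool"),
    "false": ((), "Bool"),
}

def completable(prefix, goal):
    """Can this prefix still be completed to a term of type `goal`?
    `holes` is the stack of types still owed. Since every type here
    has a literal, a prefix is completable exactly when no token has
    contradicted its hole so far."""
    holes = [goal]
    for tok in prefix:
        if not holes:
            return False  # term already complete, extra tokens
        args, result = SIG[tok]
        if result != holes.pop():
            return False  # type mismatch: reject generation early
        holes.extend(reversed(args))  # first argument ends up on top
    return True
```

So `["lt"]` is a fine start for a `Bool` goal, while `["add"]` can be rejected after one token -- the "stop as soon as it can't be completed" behavior TFA opens with. With polymorphism this check stops being a stack walk, which is where the cited papers come in.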

    Then it switches to using an encoding that would be more semantic, but I think the argument is a bit flimsy: it compares chess to the plethora of languages that LLMs can spout somewhat-correct code for (which is behind the success of this generally incorrect approach). What I found more dubious is that it brushed off syntactical differences to say "yeah, but they're all semantically equivalent". Which, it seems to me, is kind of the main problem here: basically any proof is an equivalence of two things, but it can be arbitrarily complicated to see it. If we consider this problem solved, then sure, we can get better things...

    I think without some e.g. Haskell PoC showing great results these methods will have a hard time getting traction.

    Please correct any inaccuracies or incomprehension in this comment!

    • bgavran 2 hours ago

      Author here - thanks for engaging.

      On existing techniques: the Type-Constrained Generation paper is discussed in the blog post (under Constrained Decoding), and I'd group typed holes in the same bucket.

      The problem with those methods is that they're inference-time: they don't update the weights. In this case, constrained decoding prevents the model from saying certain things without changing what the model wants to say. This is especially problematic the more complex your type system gets, without even taking into account that, for many of these systems, type inference is undecidable.
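      To make the inference-time vs. training-time distinction concrete, a toy numerical sketch (my own numbers and setup, not the author's method): masking renormalizes the distribution but leaves the logits alone, whereas a cross-entropy gradient step actually moves them, using the standard softmax + cross-entropy gradient d(loss)/d(logit_i) = p_i - y_i.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

# Toy "model": logits over three constructor choices; index 2 is the
# one the type checker would forbid in this context.
logits = [0.0, 1.0, 3.0]

# Inference-time masking: forbid index 2 and renormalize over the rest.
# The logits (and the weights that produced them) are untouched -- the
# model still "wants" to say the forbidden thing next time.
masked = softmax(logits[:2])

# Training-time alternative: one manual cross-entropy gradient step
# toward the correct choice (say index 0), which actually moves the
# logits, i.e. changes what the model wants to say.
p = softmax(logits)
target = [1.0, 0.0, 0.0]
lr = 1.0
logits = [l - lr * (pi - yi) for l, pi, yi in zip(logits, p, target)]
```

      After the update, the probability the model itself assigns to the correct constructor goes up; masking alone never does that.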

      Meaning, if I give you a starting string, in the presence of polymorphism and lambdas you might not always be able to tell whether it completes to a term of a particular type.

      On the syntactic difference: I'd gently reframe. The question isn't whether syntactically different programs are semantically equivalent, it's that regardless of which form you pick, the existing methods don't let the model learn the constructor choice.

      That's what the next section is about.