43 comments

  • jbellis 2 days ago

    If you're looking for something more complete and actively maintained, check out https://github.com/GumTreeDiff/gumtree.

    (I evaluated semantic diff tools for use in Brokk but I ultimately went with standard textual diff; the main hangup that I couldn't get past is that semantic diff understandably works very poorly when you have a syntactically invalid file due to an in-progress edit.)

    • affyboi a day ago

      Note that diffsitter isn’t abandoned or anything. I took a year off working and just started a new job so I’ve been busy. I’ve got a laundry list of stuff I want to do with this project that will get done (at some point)

    • ilyagr a day ago

      In case anybody happens to be interested in testing `gumtree` with https://github.com/jj-vcs/jj, I think I got them to work together. See https://github.com/GumTreeDiff/gumtree/wiki/VCS-Integration#... (assumes Docker).

    • aiiizzz 21 hours ago

      This is cool. I really think this kind of thing, integrated with LLMs for code editing, will be wonderful. The days of manually typing code are coming to an end. I was looking for something better than meld, and this might be it.

    • pfdietz a day ago

      The interesting problem here is how to produce a robust parse tree for invalid inputs, in the sense of stably parsing large sections of the text in ways that don't change too much. The tree would have to be an extension of an actual parse tree, with nodes indicating sections that couldn't be fully parsed or had errors. The diff algorithm would also have to be robust in the face of such error nodes.

      For the parsing problem, maybe something like Earley's algorithm, modified to minimize an error term?

      You need this kind of robust parser for languages with preprocessors.
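
      Tree-sitter's error recovery is already a rough version of this: ERROR and MISSING nodes extend an otherwise ordinary parse tree over the regions it couldn't make sense of. A minimal sketch of inspecting them, assuming the py-tree-sitter bindings (tree_sitter >= 0.22) and the tree_sitter_python grammar package:

          # Parse a syntactically invalid file and print where the error
          # nodes land; everything outside them is a normal parse tree.
          import tree_sitter_python as tspython
          from tree_sitter import Language, Parser

          parser = Parser(Language(tspython.language()))

          # The first def is unfinished, as in a file mid-edit.
          src = b"def broken(:\n\ndef fine():\n    return 1\n"
          tree = parser.parse(src)

          def walk(node, depth=0):
              mark = " <-- error region" if node.type == "ERROR" or node.is_missing else ""
              print("  " * depth + node.type + mark)
              for child in node.children:
                  walk(child, depth + 1)

          walk(tree.root_node)

      Whether those error regions stay stable under further edits is exactly the open question for diffing.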

      • WorldMaker 14 hours ago

        I think the easiest trick here is to stop thinking about it as a parsing problem and consider it only as a lexing problem. A good lexer either emits no error tokens or keeps error states to a minimum, and it gets back to a regular stream of tokens as quickly as it can. This is why we trust "simple" lexers as our syntax highlighters in most IDEs: they are fast, and they handle unfinished and malformed documents just fine (we write those all the time as we work in our editors).

        My experience many years back with using just a syntax-highlighting-tuned lexer to build character-level diffs showed a lot of promise: https://github.com/WorldMaker/tokdiff
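
        Roughly the shape of the idea (a toy sketch with a crude regex lexer and difflib, not tokdiff itself):

            # Lex both versions into word/space/punctuation tokens, then
            # diff the token streams instead of lines. Malformed input
            # still lexes fine, which is the whole point.
            import difflib
            import re

            TOKEN = re.compile(r"\w+|\s+|.")

            def token_diff(old, new):
                a, b = TOKEN.findall(old), TOKEN.findall(new)
                out = []
                for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, a, b).get_opcodes():
                    if op in ("delete", "replace"):
                        out.append("-" + "".join(a[i1:i2]))
                    if op in ("insert", "replace"):
                        out.append("+" + "".join(b[j1:j2]))
                return out

            print(token_diff("awesome = 3.14", "awesome = 4.56"))  # ['-3', '+4', '-14', '+56']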

      • o11c a day ago

        Unfortunately, this depends on making good decisions during language design; it's not something you can retrofit with a new lexer and parser.

        One very important rule is: no token can span more than one (possibly backslash-extended) line. This means having neither delimited comments (use multiple single-line comments; if your editor is too dumb for this you really need a new editor) nor multi-line strings (but you can do implicit concatenation of a string-literal flavor that includes the trailing newline; as a side effect this fixes the indentation problem).

        If you don't follow this rule, you might as well give up on robustness, because how else are you going to ever resynchronize after an error?

        For parsing you can generally just aggressively pop on mismatched parens, unexpected semicolons, or keywords only allowed in a top-ish level context. Of course, if your language is insane (like C typedefs), you might not be able to parse the next top-level function/class anyway. GNU statement-expressions, by contrast, are an actually useful thing that requires some thought. But again, language design choices can mitigate this (such as making classes values, template arguments equivalent to array indexing, and statements expressions).
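
        A sketch of that "aggressively pop" recovery (the token names and sync keywords are made up for illustration):

            # On a token that can't belong to the current construct, unwind
            # to the nearest enclosing bracket or statement boundary and
            # resume parsing from there.
            SYNC_KEYWORDS = {"def", "class"}  # top-ish level keywords

            def recover(tokens, i, open_brackets):
                """Skip forward from position i to a plausible sync point."""
                while i < len(tokens):
                    t = tokens[i]
                    if t in (")", "]", "}"):
                        if open_brackets:
                            open_brackets.pop()  # pop the innermost construct
                        return i + 1             # stray closers are dropped
                    if t == ";" or t in SYNC_KEYWORDS:
                        return i                 # resume at a statement boundary
                    i += 1
                return i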

        • conartist6 18 hours ago

          Error recovery is a dead-end tech for all the reasons you say.

          If people want to move forward they'll look past it. Garbage in, garbage out.

        • pfdietz a day ago

          > how else are you going to ever resynchronize after an error?

          An error-cost-minimizing dynamic programming parser could do this.
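
          A toy of what I mean, for the smallest possible grammar (balanced parens), with deletion as the only repair at unit cost (assumptions mine):

              # Interval DP: minimum number of repairs so that s[i:j] parses.
              # A real version would run over a full grammar, in the spirit
              # of error-correcting Earley/CYK parsing.
              from functools import lru_cache

              def min_repair(s):
                  @lru_cache(maxsize=None)
                  def cost(i, j):
                      if i >= j:
                          return 0
                      best = 1 + cost(i + 1, j)  # repair (drop) s[i]
                      for k in range(i + 1, j):
                          if s[i] == "(" and s[k] == ")":
                              best = min(best, cost(i + 1, k) + cost(k + 1, j))
                      return best
                  return cost(0, len(s))

              print(min_repair("(()("))  # 2 repairs needed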

          • o11c a day ago

            That fundamentally misunderstands the problem in multiple ways:

            * this is still during lexing, not yet to parsing

            * there are multiple valid token sequences that vary only with a single character at the start of the file (example below). This is very common with Python multi-line strings in particular, since they are widely used as docstrings.
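
            A tiny demonstration of that second point:

                # Both variables hold Python source. In `good`, x = 1 is
                # code; in `bad`, one missing quote makes everything after
                # the opening """ part of an unterminated string.
                good = '"""doc"""\nx = 1\n'
                bad = '"""doc""\nx = 1\n'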

            • pfdietz 14 hours ago

              One could fold lexing into the parsing and do error cost minimization on both.

    • pests a day ago

      I watched a video long ago about how the Roslyn C# compiler handled this but I forget the details.

  • ilyagr a day ago

    https://github.com/Wilfred/difftastic/wiki/Structural-Diffs is a nice list of alternatives.

    Difftastic itself is great as well! The author wrote up nice posts about its design: https://www.wilfred.me.uk/blog/2022/09/06/difftastic-the-fan..., https://difftastic.wilfred.me.uk/diffing.html.

  • modderation 16 hours ago

    This looks interesting! I've been building a similar tool that uses TreeSitter to follow changes to AST contents across git commits, with the addition of tying the node state to items in another codebase. In short, if something changes upstream, the corresponding downstream functionality can be flagged for review.

    The ultimate goal is to simplify the building and maintenance of a port of an actively-maintained codebase or specification by avoiding the need to know how every last upstream change corresponds to the downstream.

    Just from an initial peek at the repo, I might have to take a look at how the author is processing their TreeSitter grammars -- writing the queries by hand is a bit of a slow process. I'm sure there are other good ideas in there too, and Diffsitter looks like it'd be perfect for displaying the actual semantic changes.

    Early prototype, heavily relies on manual annotations in the downstream: https://github.com/NTmatter/rawr

    (yes, it's admittedly a "Rewrite it in Rust" tool at the moment, but I'd like it to be a generic "Rewrite it in $LANG" in the future)

  • esafak a day ago

    Someone make a semantic diff splitter please! Break up big commits into small, atomic, meaningful ones.

    • ethan_smith a day ago

      Check out git-imerge or git-absorb which can help with this problem by intelligently splitting or absorbing changes into the right commits.

    • 0x457 a day ago

      • esafak a day ago

        I can't make sense of that link. How many parts was the diff split up into, and along what lines?

        • 0x457 a day ago

          Yeah, I don't know why I linked that as an example. I wanted to show the structure of a patch. Each commit of a patch already has everything ready to be processed and chunked IF you keep them small, atomic, and semantically meaningful. As in, make smaller commits.

          • mdaniel a day ago

            > > Someone make a semantic diff splitter please! Break up big commits into small, atomic, meaningful ones.

            > Each commit of a patch already has everything ready to be processed and chunked IF you keep them - small, atomic, semantically meaningful. As in do smaller commits.

            Reads like:

            User1: I need help with my colleagues who do not make independent, small, semantically intact commits

            User2: well, have you tried making smaller, more independent, semantically intact commits?

            ---

            My interpretation of the wish is to convert this, where they have intermixed two semantically independent changes in one diff:

                --- a/alpha.py
                +++ b/alpha.py
            
                 def doit():
                -    awesome = 3.14
                +    awesome = 4.56
            
                -    print("my dog is fluffy")
                +    print("my cat is fluffy")
            
            into this

                --- a/alpha.py
                +++ b/alpha.py
            
                 def doit():
                -    awesome = 3.14
                +    awesome = 4.56
            
                     print("my dog is fluffy")
            
                --- a/alpha.py
                +++ b/alpha.py
            
                 def doit():
                     awesome = 3.14
            
                -    print("my dog is fluffy")
                +    print("my cat is fluffy")
            
            where each one could be cherry-picked at will because they don't semantically collide

            The semantics part would be knowing that this one could not be split in that manner, because the cherry-pick would change more than just a few lines; it would change the behavior:

                --- a/alpha.py
                +++ b/alpha.py
            
                 def doit():
                -    the_weight = 3.14
                +    the_weight = 4.56
            
                -    print("my dog weighs %f", the_weight)
                +    print("my cat weighs %f", the_weight)
            
            I'm sure these are very contrived examples, but they're the smallest ones I could whip up offhand.
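
            A sketch of the mechanical half of such a splitter: break one unified diff into standalone per-hunk patches. The hard part, deciding whether two hunks semantically collide (like the shared the_weight variable above), would need an AST or dataflow check on top of this.

                # Split a unified diff into single-hunk patches. Line offsets
                # in later hunks would need fixing up in a real tool; this
                # only shows the shape of the problem.
                def split_patch(diff_text):
                    header, hunks, current = [], [], None
                    for line in diff_text.splitlines():
                        if line.startswith(("--- ", "+++ ")):
                            header.append(line)
                        elif line.startswith("@@"):
                            if current:
                                hunks.append(current)
                            current = [line]
                        elif current is not None:
                            current.append(line)
                    if current:
                        hunks.append(current)
                    return ["\n".join(header + hunk) for hunk in hunks]
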
  • vrm a day ago

    This is neat! I think in general there are really deep connections between semantically meaningful diffs (across modalities) and supervision of AI models. You might imagine a human-in-the-loop workflow where the human makes edits to a particular generation and then those edits are used as supervision for a future implementation of that thing. We did some related work here: https://www.tensorzero.com/blog/automatically-evaluating-ai-... on the coding use case, but I'm interested in all the different approaches to the problem, and especially in less structured domains.

  • jacobr 19 hours ago

    Could the next-gen version control system just store ASTs? Does this already exist?

    Every user gets their own preferred formatting, and linters and tools could operate on already-parsed trees.
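
    Something like this toy of the formatting half, using Python's ast module (Python >= 3.9 for ast.unparse); the stored artifact would be the tree, with formatting applied only on checkout:

        import ast

        committed = "def f(  a,b ):\n        return a+b\n"
        tree = ast.parse(committed)  # what the VCS would store
        print(ast.unparse(tree))     # rendered per user on checkout:
                                     # def f(a, b):
                                     #     return a + b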

    • haradion 18 hours ago

      The Unison programming language is built around that idea: https://www.unison-lang.org/docs/the-big-idea/

      • conartist6 18 hours ago

        BABLR is building that! It's entirely fair to say that BABLR takes Unison's approach and allows it to be used with every programming language.

    • williamdclt 17 hours ago

      This is an idea that comes back often, and has merit of course.

      The thing is that this means sacrificing the great advantage of plain text, which is that it is enormously interoperable: we use a huge number of text-based tools to work with source code, including non-code-specific ones (grep, sed…)

      Also, code is meant to be read by humans: things like alignment and line breaks really do matter (although opinions often differ about the “right” way)

      And of course the lesser (?) problem of invalid ASTs.

      • WorldMaker 9 hours ago

        I don't think invalid ASTs are a "lesser" problem; they're a pretty big one: we want to be able to source-control work in progress and partially complete things. There are a lot of reasons you might not want to or be able to finish a bit of code and yet still want to record what you've done and where you are (to pick it back up later, to get other developers' eyes on a sketch or an outline, to save it to backup systems, etc). Those are often important steps in development, though it is easy to forget how common they are when you think about software as finished/buildable artifacts only.

        I know a lot of people think source control should only have buildable code, but that's what CI processes are for and people use source control (and diffs) for a lot of things that don't need to pass CI 100% of the time.

      • conartist6 11 hours ago

        These are all solvable problems, and I know because I have built a solution that demonstrates how they can all be solved at the same time.

    • foo42 12 hours ago

      You might want to check out eyg lang (eat your greens), as I think the idea there is explicitly that syntax is a user preference and the AST is the _real_ language.

    • tempfile 14 hours ago

      Isn't this one of the basic ideas of Lisp?

  • mertleee a day ago

    This is really cool.

    Although, for more exotic applications parsing structured data, I've found Langium far more capable as a platform. TypeScript is also a pleasant departure from common AST tools.

  • john_max_1 a day ago

    How does it compare to diffmerge?

  • pmkary a day ago

    What a genius idea.

    • affyboi a day ago

      Nah I think most people could make something like this in a weekend

  • dcre a day ago

    See also https://mergiraf.org/ for a tool that uses ASTs to resolve (some) merge conflicts.

  • the__alchemist a day ago

    Is there an anti-tree-sitter version too?

    • davepeck a day ago

      yes, although it's sort of the same as Context-Free-Typing-sitter

  • Iwan-Zotow a day ago

    Integration with VS Code?