109 comments

  • alexchantavy 7 days ago

    Been using this for https://github.com/cartography-cncf/cartography and am very happy, thanks for building this.

    Automated review tools like this are especially important for an open source project, because you have to maintain a quality bar to keep yourself sane, but if you're too picky then no one from the community will want to contribute. AI tools are like linters and have no feelings, so they will give the feedback that you as a reviewer may have been hesitant to give, and that's awesome.

    Oh, and on the product itself, I think it's super cool that it comes up with its own rules to check for, based on conventions and patterns that you've enforced over time. E.g. we use it to make sure that all function calls that pull from an upstream API are decorated with our standard error handler.
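
    For illustration, the convention looks roughly like this (a hedged sketch; handle_api_errors and get_instances are made-up names, not our actual code):

        import functools

        def handle_api_errors(func):
            # hypothetical standard error handler: wraps upstream API calls
            # so failures are logged and re-raised consistently
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    # log/classify the upstream failure, then re-raise
                    raise
            return wrapper

        @handle_api_errors  # the custom rule flags upstream API calls missing this
        def get_instances(client):
            return client.describe_instances()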

    • pomarie 7 days ago

      Thanks for sharing that, Alex! Definitely love having an AI be the strict reviewer so that the human doesn't have to be.

      • sitkack 7 days ago

        Being able to carry the emotional budget over into the creative bucket is the most goddamn win-win corporate speak I've ever accidentally typed on HN. This is a wonderful strategy.

  • justanotheratom 8 days ago

    This is an awesome direction. A few thoughts:

    It would be awesome if the custom rules were generalized on the fly from ongoing reviewer conversations. Imagine two devs quibbling about line length in a PR; in a future PR, the AI reminds everyone of that convention.

    Would this work seamlessly with AI Engineers like Devin? I imagine so.

    This will be very handy for solo devs as well; even those who don't use coding copilots could benefit from an AI reviewer, as long as it doesn't waste their time.

    Maybe multiple AI models could review the PR at the same time, and over time we promote the ones whose feedback is accepted more often.

    • allisonee 8 days ago

      Appreciate the feedback! We currently auto-suggest custom rules based on your comment history (and .cursorrules); continuing to suggest rules from ongoing review conversations is now on the roadmap thanks to your suggestion!

      On working with Devin: Yes, right now we're focused on code review, so whatever AI IDE you use would work. In fact, it might even be better with autonomous tools like Devin since we focus on helping you (as a human) understand the code they've written faster.

      Interesting idea on multiple AI models -- we were also separately toying with the idea of having different personas (security, code architecture). Will keep this one in mind!

    • 8organicbits 7 days ago

      Line length isn't something I'd want reviewed in a PR. Typically I'd set up a linter with relevant limits and defer to that, ideally enforced via pre-commit or directly in my IDE. Line length isn't an AI feature; it's largely a solved problem.
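
      For example, a minimal pre-commit config along these lines settles it before a PR ever opens (the rev is illustrative; pin whatever version you actually use):

          repos:
            - repo: https://github.com/pycqa/flake8
              rev: 7.0.0
              hooks:
                - id: flake8
                  args: [--max-line-length=100]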

    • pomarie 8 days ago

      These are all amazing ideas. We actually already see a lot of solo devs using mrge precisely because they want something to catch bugs before code goes live—they simply don't have another pair of eyes.

      And I absolutely love your idea of having multiple AI models review PRs simultaneously. Benchmarking LLMs can be notoriously tricky, so a "wisdom of the crowds" approach across a large user base could genuinely help identify which models perform best for specific codebases or even languages. We could even imagine certain models emerging as specialists for particular types of issues.

      Really appreciate these suggestions!

  • eqvinox 7 days ago

    Threw a random PR at it… of the 11 issues it flagged, only 1 was appropriate, and that one was also caught by pylint :(

    (mixture of 400 lines of C and 100 lines of Python)

    It also didn't flag the one SNAFU that really broke things (which, to be fair, wasn't caught by human review either; it showed up as an ASAN fault in tests).

    • allisonee 7 days ago

      sorry to hear that it didn't catch all the issues! if you downvote/upvote or reply directly to the bot comment @mrge-io <feedback>, we can improve it for your team.

      We take all of these into consideration when improving our AI, and your direct reply will fine-tune comments for your repository only.

      • eqvinox 7 days ago

        That's good to know, but — assuming my sample of size 1 isn't a bad outlier; I should really try a few more — there's another problem: I don't think we'd be willing to sink time into tuning a currently-free subscription service that can be yanked at any time. And I'm in a position to say it is highly unlikely that we'd pay for the service.

        (We already have problems with our human review being too superficial; we've recently come to a consensus that we're letting too much technical debt slip in, in the sense of unnoticed design problems.)

        Now the funny part is that I'm talking about a FOSS project with nVidia involvement ;D

        But also: this being a FOSS project, people have opened AI-generated PRs. Poor AI-generated PRs. This is indirectly hurting the prospects of your product (by reputation). Might I suggest adding an AI-generated-PR detector, if possible? (It's not in our guidelines yet, but I expect we'll be prohibiting AI-generated contributions soon.)

        • allisonee 7 days ago

          totally get where you're coming from--many big open source repos have also been using it for a while and have seen some false positives, but have generally felt that the overall quality was worth it. would love to continue having you try it out, but also understand that maintaining a FOSS project is a ton of work!

          if you have specific feedback on the pr--feel free to email at contact@mrge.io and i'll take a look personally and see if we can adjust anything for your repo.

          nice idea on the fully AI-generated PRs! something on our roadmap is to better highlight PRs or chunks that were likely auto-generated. stay tuned!

  • bryanlarsen 8 days ago

    It looks like graphite.dev has pivoted into this space too. Which is annoying, because I'm interested in graphite.dev's core non-AI product, which appears to be stagnating from my perspective -- they still don't have GitLab support after several years.

    • pomarie 8 days ago

      Yeah, noticed that too—what's the core graphite.dev feature you're interested in? PR stacking, by chance?

      If that's it, we actually support stacked PRs (currently in beta, via CLI and native integrations). My co-founder, Allis, used stacked PRs extensively at her previous company and loved it, so we've built it into our workflow too. It's definitely early-stage, but already quite useful.

      Docs if you're curious: https://docs.mrge.io/overview

      • bryanlarsen 8 days ago

        Yes, stacked PRs and a rebase-only flow. Unfortunately we're a GitLab shop. Today's task is a particularly hairy review; it's too bad I can't try you out.

        • pomarie 8 days ago

          Ah, totally get it—that’s frustrating. GitLab support is on our roadmap, so hopefully we can help you out soon.

          In the meantime, good luck with that hairy review—hope it goes smoothly! If you're open to it, I'd love to reach out directly once GitLab support is ready.

          • bryanlarsen 8 days ago

            Email is in profile. You're welcome to add me to your list.

    • atombender 6 days ago

      Same. I'm not at all impressed with Graphite as a code stacking tool; Aviator looks much nicer. But I recently started using Graphite's AI review tool, and it's also really poor: all of the corrections it has suggested so far were wrong, except one that fixed an obvious typo in a comment.

  • pyfon 7 days ago

    There are a few of these already. Is this a land-grab play, i.e. use investment to get the big accounts, then all the compliance ticks, then dominate?

    AI or conventional bots for PRs are neat though. Where I work we have loads of them checking all sorts of criteria. Most are rules-based, e.g. someone from this list must review if this folder changes. Kinda annoying when getting the PR in, but overall great for quality control. We are using an LLM for commenting on potential issues too. (Sorry, I don't have any influence to help them consider yours.)

  • dimal 7 days ago

    Looks interesting. I’m a bit confused about how it knows the codebase, and about the custom rules interface. I generally have coding standards docs in the repo. Can it simply be made aware of those docs instead of requiring me to maintain two sets of instructions (one written for humans, and one in the mrge interface for AI)? I could imagine that without being highly aware of a team’s standards, the usefulness of its review would be pretty poor. Getting general “best practices” type stuff wouldn’t be helpful.

  • mdaniel 8 days ago

    I see on your website that you claim the subprocessors are SOC 2 Type 2 certified, but it doesn't appear that you claim anything about your own SOC 2 status (in progress, certified, not interested). I mention this because I suspect the breach risk is not that OpenAI gets popped, but rather that a place which gathers continuously updated mirrors of source code does. The sandbox idea only protects the projects from one another, not from a malicious actor injecting some bad dep into your supply chain.

    • pomarie 8 days ago

      That's a very good point. We actually just kicked off our own SOC 2 certification process last week—I hadn't updated the website yet, but I'll go ahead and do that now. Thanks for raising this!

      Appreciate the feedback around security as well; protecting against supply-chain attacks is definitely top of mind for us as we build this out.

      • mdaniel 8 days ago

        I know I'm not supposed to mention website issues here, but since you brought it up I wanted to bring to your attention that the "fade in on scroll" isn't doing you any favors for getting the information out of your head and into the heads of your audience. That observation then went to 11 when I scrolled back up and the entire page was solid black, not even showing me the things it had previously swooshed into visibility. It's your site, do what makes you happy, but I just wanted to ensure you were aware of the tradeoff you were making

        • pomarie 8 days ago

          Hey, thanks again—really appreciate the heads-up! Could you point me to the specific section where you're seeing the fade in on scroll? Also, what browser are you using?

          I don't remember adding that feature, so it might be a bug.

          • mdaniel 7 days ago

            All of them? Firefox 137 on macOS Intel

            After watching it sit there for 5-10 seconds before loading the section 'data-framer-name="Join"' I decided to inspect the element after it did load to see what it was doing. That's when I spotted all the JS and data attributes implying it was likely built with one of those drag-and-drop site builders, which explains why it may be behaving in an unexpected way for you. It also explains why it may default to "fade in on scroll" behavior, if my experience is any indication, because marketing folks _love_ that shit

            • pomarie 7 days ago

              That's so weird – I can't repro at all! I'll keep digging... If anyone else reading this is also experiencing this, please shout!

  • thuanao 7 days ago

    It's been useful at our company. My only gripe is I'd like to run it locally. I don't want the feedback after I open a PR.

    • pomarie 7 days ago

      Super useful, thanks for the feedback! We're definitely thinking of building something that would run the reviews in your IDE directly, before you push the code.

  • dyeje 8 days ago

    I've been evaluating AI code review vendors for my org. We've trialed a couple so far. For me, taking the workflow out of GitHub is a deal breaker. I'm trying to speed things along, not upend my whole team's workflow. What's your take on that?

    • pomarie 7 days ago

      Yeah, that's a totally legit point!

      The good news with mrge is that it works just like any other AI code reviewer out there (CodeRabbit, Copilot for PRs, etc.). All AI-generated review comments sync directly back to GitHub, and interacting with the platform itself is entirely optional. In fact, several people in this thread mentioned they switched from Copilot or CodeRabbit because they found mrge's reviews more accurate.

      If you prefer, you never need to leave GitHub at all.

    • berrazuriz 7 days ago

      maybe blar.io works. Worth a try

  • gslepak 7 days ago

    Looked at it, but as a security person, I have to recommend against it as it requires permissions to act on behalf of repository maintainers. That is asking for trouble, and represents a backdoor into every project that signs up for it.

    • allisonee 7 days ago

      thanks for bringing this up, and totally understand the concern. we are committed to security, and we never write to or access your code without your action--the only reason that permission is necessary is so that you can merge or 1-click commit the suggestions the AI has posted.

      • rsavage 7 days ago

        Agree with the above commenter.

        We would be happy to try it, except that it has write/merge permissions.

        One-click and auto-merge are nice to have. Having the bot (and your company) able to deploy any code changes to production (by accident, via hack, etc.) is a no-go.

        Suggest making them optional features, and offering a comments-only, read-access version.

        Not sure if it’s possible, but if the permissions could exclude specific branches, that would be OK as well.

        But there needs to be no way a malicious actor could write/merge to main.
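
        For reference, that split maps onto GitHub App permission scopes roughly as follows (an illustrative sketch, not mrge's actual configuration):

            metadata: read
            contents: read        # read code for review context; no pushes
            pull_requests: write  # post review comments and suggestions
            # one-click commit and auto-merge additionally require
            # contents: write -- exactly the scope we'd want to avoid granting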

  • kerryritter 8 days ago

    This looks like a cool solve for this problem. Some of the other tools I tried didn't seem to contextualize the app, so the comments were surface level and trite.

    I'm on Bitbucket so will have to wait :)

    • pomarie 8 days ago

      Thanks, really appreciate that! Yeah, giving the AI the ability to fetch the context it needs was a big challenge (since larger codebases can't all fit in an LLM's context window)

      And totally hear you on Bitbucket—it's definitely on our roadmap. Would love to loop back with you once we get closer on that front!

  • ukuina 8 days ago

    How does this work for large monorepos?

    If the repo is several GB, will you clone the whole thing for every review?

    • allisonee 8 days ago

      good q! today, we'd clone the whole thing, but we're actively looking into solutions for that atm (i.e. only cloning the relevant subdirs)

      for custom rules, we do handle large monorepos by allowing you to add an allowlist (or exclude list) via glob patterns.
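
      for illustration, scoping a rule might look something like this (generic glob syntax; the paths are made up):

          include: services/payments/**   # rule only applies in this subtree
          exclude: **/generated/**        # skip codegen output
          exclude: **/*_test.py           # skip tests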

  • bilalq 7 days ago

    This looks really cool. We've been using Graphite for a long time now, and been pretty happy with it. It was a huge step up from base Github review workflows, and the AI reviewer does point out real issues from time to time.

    I watched your demo vid, and the things that stuck out to me were the summarizing of changes, the grouping of file changes by concept, and the diagram generation. Graphite does generate summaries of PRs if you ask it to, but it's an extra step that replaces the user-authored PR description. I see that you have stacked diff support too.

    I probably don't want to spend the time/energy to migrate my team off Graphite anytime soon, but would be interested in evaluating mrge. Is the billing per reviewer of PRs or by author of PRs? And how long is the free trial? I'm always reluctant to sign up for limited time free trials because I don't know if I'll actually have time to commit to assessing the tool in that time window.

    • allisonee 7 days ago

      thanks for the feedback, and glad to hear that parts of our platform resonate. let me know if we can help onboard the team in the future if that makes it easier; it should be quick to switch, as we also have our own cli. right now, our billing will be per author. our free trial is 2 weeks, but if you start it and don't trigger any reviews, we're happy to restart it later for you. just contact us at contact@mrge.io!

  • timfsu 8 days ago

    Happy mrge user here - congrats on the launch! It’s encouraged our team to do more stacked PRs and made every review a bit nicer

    • allisonee 8 days ago

      thanks Tim! So glad it's been helping your team move faster

    • pomarie 8 days ago

      Really appreciate the feedback, really happy it's helping you :)

  • C-Pec 5 days ago

    Huge fan of this direction. Code review is one of those critical bottlenecks that hasn’t seen much UX innovation in years; excited to see a tool rethink it from first principles. The logical grouping of diffs and the ephemeral sandbox approach both sound super thoughtful. Also love the idea of making the review experience feel more like Linear. I will suggest that my dev team gives it a try!

    • allisonee 5 days ago

      Glad to hear the product resonates! Hope your team likes it, and please share any feedback as you try it out. If helpful, I can also start a direct Slack channel between our teams; just reach out to contact@mrge.io with your Slack emails!

  • KyleForness 8 days ago

    happy user here—our team moved from coderabbit to mrge, and everyone seems to love how much more useful the AI comments are

    • pomarie 8 days ago

      Really happy to hear mrge is useful! :) Thanks for sharing

    • allisonee 8 days ago

      thanks for the feedback! Glad that our ai reviewer has been useful to your team!

  • LinearEntropy 7 days ago

    The call to action button says "Get Started for Free", while the pricing page lists $20/month.

    Clicking the get started button immediately wants me to sign up with github.

    Could you explain on the pricing page (or just to me) what the 'free' is? I'm assuming a trial of 1 month or 1 PR?

    I'm somewhat hesitant to add any AI tooling to my workflows, however this is one of the use cases that makes sense to me. I'm definitely interested in trying it out; I just think it's odd that this isn't explained anywhere I could find.

    • allisonee 7 days ago

      thanks for bringing this up! we're currently free (unlimited PRs) and will soon bill $20-$30/month per active user (anyone who has committed a PR).

      We'll try to make this clearer!

  • bilekas 8 days ago

    > We know cloud-based review isn't for everyone, especially if security or compliance requires local deployments. But a cloud approach lets us run SOTA AI models without local GPU setups, and provide a consistent, single AI review per PR for an entire team.

    I feel like that’s being glossed over a bit too quickly. Isn't your target market primarily larger teams, who are most likely to have security and privacy concerns?

    I guess, is there something on the roadmap to maybe offer that later?

    • pomarie 8 days ago

      Definitely—larger teams do typically have more stringent security and privacy requirements, especially if they're already using self-hosted GitHub. Self-hosted or hybrid deployment is definitely on our radar, and as we grow, it's likely we'll offer a self-hosted version specifically to support those larger teams.

      If that's something your team might need, I'd love to chat more and keep you posted as we explore this!

  • polskibus 7 days ago

    I got Claude Desktop to perform code review using MCP in a JetBrains IDE. I don’t know why you would prefer a cloud-based pipeline with a separate CR tool to that. This way I can extend the review process the way I want, for example adding feature specs, etc. There are other issues with LLM-based CR, but I think MCP (or a similar protocol) is the way to go.

  • justinl33 7 days ago

    This is good. PR review has been completely neglected basically from day 0.

    Did some self-research on Reddit about why (https://www.reddit.com/r/github/comments/1gtxqy6/comment/lxv...)

  • frabona 7 days ago

    This is super well done - love the approach with cloud-based LSP and the focus on making reviews actually faster for humans.

    • pomarie 7 days ago

      Thanks for the encouragement!

  • rushingcreek 7 days ago

    I love this idea. We experimented with building an AI coding agent that we showed to a small set of users, and the most common feedback was confusion over what exactly the agent did. And so, I think that something like this can solve that problem, especially as AI performs increasingly complicated edits.

  • auscompgeek 8 days ago

    I wanted to check this out, so I installed the GitHub app on my account, with access to all my personal repos. However, when I went looking for one of my repos (auscompgeek/sphinxify) I couldn't find it. It looks like I can only see the first 100 repos in the dashboard? I have a lot of forks under my account…

    • pomarie 8 days ago

      Quick update – we've merged a fix which should be live in ~15 mins! Thanks for reporting this :)

    • allisonee 8 days ago

      sorry about that! we're looking into this now--if you go back to https://github.com/apps/mrge-io-dev/installations/select_tar... and just add repos you want to use us with under the "select repositories", that should unblock you until we fix it in the next hour or so.

      • allisonee 8 days ago

        just to follow up--the fix for this is landing! thanks for surfacing it

  • ggarnhart 7 days ago

    Heyo, your launch video is unlisted on YouTube. Maybe intentional, but you might benefit from having it be public :)

  • mushufasa 8 days ago

    Honest initial reaction to your pitch:

    > Cursor for code review

    Isn't Cursor already the "Cursor for code review"?

    • allisonee 8 days ago

      appreciate the honest reaction! We'll think about this more, what we were trying to get at is that cursor is more about code writing, and we're tackling the review/collaboration side :) curious if anything else would have immediately stuck out to you more?

      • mushufasa 8 days ago

        I think I got the pitch meaning immediately: this is a specialized ai tool for code review.

        That said, that doesn't sound like something very useful when I already use an AI code editor for code review. And GitHub already supports CI/CD automations for AI code review tools. Maybe I just don't see value in an extra tool for this.

  • jFriedensreich 8 days ago

    Great that AI is seemingly reviving the stalled PR/review space. I just hope that human and local workflows will not be an afterthought, or even made harder by these tools. It's also a great chance for stacked PRs and jujutsu to shake up the market.

    • pomarie 8 days ago

      Definitely! As AIs write a lot more code, I think that the PR/review space is going to become way more important.

      If you're interested in stacked PRs, you should definitely check them out on mrge -- we natively support them (in beta atm): https://docs.mrge.io/ai-review/overview

      • jFriedensreich 7 days ago

        The beta setting for stacked PRs seems to have no effect for me. Reading the mention of a CLI in the docs for PR stacks gives me shivers. Please don't say you are implementing it like graphite, which is the absolute worst way to do it and makes graphite useless for the sapling and jujutsu users who would need it most. You can also reach me at mrge@ntr.io; would be happy to chat!

  • _insu6 8 days ago

    I've tried something similar in the past. The concept is cool, but so far the solutions I've seen haven't been very useful in terms of comment quality and ability to catch bugs.

    Hope this is the right time, as this would be a huge time-saver for me

    • allisonee 8 days ago

      We had heard the same from a few early users, but they've commented that our AI is more context-aware/useful. Of course, that's just anecdotal. We'd love to give you a free trial (https://mrge.io/invite?=hn) and get your feedback on quality/bug catching. Feel free to reach out at contact@mrge.io if you have any questions too!

  • william_stokes 8 days ago

    I was wondering if it has information about previous commits with deleted code? Sometimes we make a change and later realize that the previous code worked better; would mrge be able to understand that?

    • allisonee 8 days ago

      that's a good question! today, we don't look at previous commits--but that's something we'll consider for the future roadmap. curious if this happens often to your team? and if so, how you generally gauge "better" (on the prev commits)

  • Nijikokun 7 days ago

    the biggest issue i've had with things like this is that ai doesn't understand context very well. anything beyond the context window creates hallucinations, and it starts making up things that may exist in one location but tries to apply them to a completely unrelated scenario. would be curious whether this does understand the connected pieces appropriately and catches things that break those connections -- otherwise it's just another linter?

    • pomarie 7 days ago

      Definitely! Giving the AI the ability to fetch the context it needs was a big challenge (since larger codebases can't all fit in an LLM's context window) – it's not perfect yet, but the tools it has do give it a remarkable amount of insight into the overall codebase.

  • victorbjorklund 8 days ago

    Would be great to have support for GitLab also (have a project there that I would love to try this on and I can't switch it to GitHub)

    • allisonee 8 days ago

      On the roadmap! If you're happy to share your email for an early link when we do support it, send to contact@mrge.io

  • mw3155 8 days ago

    in the demo video i see that you can apply a recommended code change with one click. how do you make sure that the code still works after the AI changes it?

    also, i tried some other ai review tools before. one big issue was always that they are too nice and even miss obvious bad changes. did you encounter these problems? did you mitigate this via prompting techniques or finetuning?

    • pomarie 8 days ago

      Great questions!

      For applying code changes with one-click: we keep suggestions deliberately conservative (usually obvious one-line fixes like typos) precisely to minimize risks of breaking things. Of course, you should confirm suggestions first.

      Regarding AI reviewers being "too nice" and missing obvious mistakes—yes, that's a common issue and not easy to solve! We've approached it partly via prompt-tuning, and partly by equipping the AI with additional tools to better spot genuine mistakes without nitpicking unnecessarily. Lastly, we've added functionality allowing human reviewers to give immediate feedback directly to the AI—so it can continuously learn to pay attention to what's important to your team.

      • mw3155 8 days ago

        thanks for answering! will definitely check out the tool when i have the chance. best of luck building this!

  • mandeepj 6 days ago

    From the video: devs spend 30% to 50% of their time on code reviews; with AI writing code, it's only going to get worse.

    So, buy this AI tool (Merge) to review code written by another AI tool?

    Instead of a code review tool, why not have it be a static analyzer? Overall, the whole process would take much less time.

  • deveshanand18 8 days ago

    As far as I can see, this doesn't directly integrate with GitHub (we currently use CodeRabbit on GitHub)? Is it on your timeline?

    • allisonee 8 days ago

      good question! we currently support a direct integration with github via a github app. we'll make that clearer in the post.

  • yoavz 8 days ago

    Excellent product, congrats on the launch guys!

  • mmmeff 7 days ago

    Any plans to support GitHub Enterprise on different URLs? Would love to give this a try with my team.

  • Arindam1729 8 days ago

    I've used CodeRabbit for code review; it does pretty cool work.

    How is this different from that?

    • pomarie 8 days ago

      Great question!

      We've heard from users who've tried both that our AI reviewer tends to catch more meaningful issues with less noise -- but that's really something you should try for yourself and find out! (The great thing is that it's really easy to start using.)

      Beyond the AI agent itself (which is somewhat similar to CodeRabbit), our biggest differentiation comes from the human review experience we've built. Our goal was to create a Linear-like review workflow designed to help human reviewers understand and merge code faster.

  • manmal 7 days ago

    Is that the four-letter domain PG recently tweeted about? Congrats!

    • pomarie 7 days ago

      It's possible! What was the tweet?

  • stitched2gethr 7 days ago

    > subtle AI-written bugs slipped through unnoticed, and we (humans) increasingly found ourselves rubber-stamping PRs without deeply understanding the changes.

    I'm not even going to add to this.

    • allisonee 7 days ago

      glad to see this resonates - seems like many of us are experiencing the same challenges with AI codegen!

  • _jayhack_ 8 days ago

    If you are looking for an alternative that can also chat with you in Slack, create PRs, edit/create/search tickets in Linear, search the web, and more, check out codegen.com

  • benjbrooks 6 days ago

    would love to use this but def need soc2 to justify it to our security folks

    • allisonee 5 days ago

      we're in the process of getting our soc2 approved, stay tuned!

  • decide1000 7 days ago

    Any plans for supporting Gitlab?

    • allisonee 7 days ago

      on our roadmap! please reach out at contact@mrge.io if you want to get notified on launch :)

  • tomasen9987 8 days ago

    This looks interesting!

  • landkittipak 8 days ago

    This looks incredible!

  • JofArnold 8 days ago

    Congrats on the launch. Another happy user here. (Caught a really sneaky issue too!)

    • pomarie 8 days ago

      Thanks for sharing that Jof! Glad it's helpful :)

  • nikolayasdf123 8 days ago

    why not GitHub Copilot?

    • pomarie 8 days ago

      Great question!

      We've heard from users who've tried both that our AI reviewer tends to catch more meaningful issues with less noise -- but that's really something you should try for yourself and find out! (The great thing is that it's really easy to start using.)

      Beyond the AI agent itself (which is somewhat similar to Copilot), our biggest differentiation comes from the human review experience we've built. Our goal was to create a Linear-like review workflow designed to help human reviewers understand and merge code faster.

      • nikolayasdf123 7 days ago

        you are in rough competition: competing with GitHub (Microsoft) on model quality, inference cost, and GitHub UI integration (one-button click, comment replies, code diff, the rest of the GitHub UI ecosystem) -- and don't get me started about training of LLMs... they are not likely to break up Microsoft anytime soon. it is going to be tough!

  • axelb78 7 days ago

    Looks awesome!

  • thefourthchime 8 days ago

    One personal niggle: "Code Review For The AI Era". I hate when people say era in relation to AI because it reminds me of Google's tasteless Gemini era thing.

    • allisonee 8 days ago

      that makes total sense, thanks for the feedback! we debated this for a bit--will keep in mind for the next design pass on the site :)