28 comments

  • senko 6 hours ago

    Apparently that's Codium, who have recently renamed themselves to Qodo: https://www.qodo.ai/blog/introducing-qodo-a-new-name-the-sam... (TIL)

    • SquareWheel 4 hours ago

      It seems like their new name is literally "Qodo (formerly Codium)", parenthetical included. At first I thought they were just including it in the blog post for clarity, but they literally write it out that way a dozen times. It's also included as part of their new logo and in the site title.

      I've never seen anything like that before. It feels like a search+replace operation gone awry.

  • decide1000 2 hours ago

    I use Tabnine. It supports many models, including Claude. I find the output better than Copilot's. My IDEs are from JetBrains, and I work mainly in Python and PHP.

  • gaze 5 hours ago

    I guess this is as good a place as any to ask -- what's everyone's favorite AI code assist tool?

    • mattnewton 5 hours ago

      Cursor. First one I've tried that seems like it's more than a neat demo.

      - but I'm weird and I usually disable tab completion; I find having generations pop up while I'm typing slows me down. I've gotta read them and think about them, and it feels like it's giving me ADD, so I've always been kind of a Copilot hater. Lots of people find it more productive, and a fancy version of it is on by default in Cursor. However, Cursor implemented a bunch of different interfaces well, not just the Copilot one, and I find the chat window in your editor a huge productivity win for churning out boilerplate or refactors. There are a lot of one-off refactors annoying enough that I wouldn't want to dedicate an afternoon to them, but now they take me just a few minutes of reviewing AI changes.

      • written-beyond 5 hours ago

        Exactly why I never bought Copilot; I got a ChatGPT subscription instead and prompt for the stuff I need.

        I do sort of regret it, too. Sometimes you just want to give more context and it's a hassle in the moment, figuring out what it is you need to paste so the model has adequate context to generate something valid. Also, Claude is orders of magnitude better than anything from ChatGPT. Both are terrible at implementing abstract, completely novel code blocks, but ChatGPT is significantly more "markov-y" when generating any code. When Claude gets things wrong, it feels like a more human mistake.

        Anyway, with 50% of HN obsessing over Cursor, is it worth it? I couldn't get it to open projects I have in WSL2, and I kind of gave up at that point. I've gotten far with Claude's free tier, and $20 just for Cursor seems steep for something that's not as stable.

        Have you tried Zed's autocompletion, or read about others' experience with it? Zed seems to have a more stable foundation than any of these VSCode forks.

        • mattnewton an hour ago

          My trajectory was Sublime -> VSCode -> Cursor. I tried Zed, but I didn't need any of the collaboration features, didn't notice any speed increase over normal VSCode usage, and was generally just less productive than in VSCode with all the extensions I had configured. Cursor imported all of those, and my keyboard-shortcut muscle memory still worked right after install. For me at least, moving over was completely seamless.

          Like you, I started using LLMs in a chat window, copy-pasting code back and forth (though I think Claude 3.5 Sonnet was the first model that felt worth the hassle). The Cursor workflow basically indexes all the files in your codebase to figure out what to copy into the LLM, then scrapes the LLM output to figure out where to paste it into your code. It works like 95% of the time, which is way more than I expected, and falls back to the manual copy-paste workflow pretty easily. It also comes at a time when models are good enough to be handed gobs of context and trusted with more than a handful of lines at a time (Claude 3.5 Sonnet really shines here).

          The whole experience just feels very polished and well thought out. One great (not really well documented?) feature is the .cursorrules file, which gets invisibly pasted into every context window. There you can say things like "use double quotes in JavaScript" or "prefer functional paradigms and always include type annotations in Python" to avoid having to make those edits for consistency and style when the LLM fails to pick them up from the surrounding code. You can commit this file so teammates get the same part of the prompt; now everyone's autocomplete consistency and style is improved.
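          For reference, a .cursorrules file is just plain text in the repo root; a minimal, made-up example might look like this (the specific rules here are invented for illustration):

          ```
          Use double quotes in JavaScript.
          Prefer functional paradigms in Python and always include type annotations.
          Match the naming conventions of the surrounding code.
          ```

          Since the whole file is pasted into the prompt, it's worth keeping short and declarative.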

          It's difficult to overstate how nice it feels to have that copy-paste back and forth automated, just a keyboard shortcut away; it really is much better than I expected. So yes, I would try Cursor.

        • infecto 3 hours ago

          Cursor has been my favorite so far, though I've never tried Codium. Copilot was the winner before that, but honestly it's just tab completion. I tried JetBrains' but it felt janky and slow. Cursor's tab completion feels nicer: it's super fast and will suggest updates based on code changes. I like being able to quickly get it to write some code updates, which it returns as green/red lines like a GitHub PR. The flow is really nice for me and I'm looking forward to the future.

    • ghawkescs 3 hours ago

      Same question, but for VSCode plugins. Besides Copilot, what is everyone using? Claude support is a huge plus.

      • Y_Y 2 hours ago

        Emacs.

    • edm0nd 2 hours ago

      I've been loving Claude Sonnet for Python.

    • jonathaneunice 5 hours ago

      Cursor.

      All in on tab completion and its other UI/UX advances (generate, chat, composer, ...)

    • nicce 4 hours ago

      Zed's integrated tools have been more than enough for me.

    • sunaookami 3 hours ago

      Cody

    • victorbjorklund 4 hours ago

      Aider AI.

    • Alifatisk 5 hours ago

      Cursor.

    • aberoham 4 hours ago

      aider-chat

  • gronky_ 5 hours ago

    I tried generating the same test with all 5 models in Qodo Gen.

    o1 is very slow - like, you can go get a coffee while it generates a single test (if it doesn't time out in the middle).

    o1-mini, though, worked really well. It generated a good test and wasn't noticeably slower than the other models.

    My feeling is that o1-mini will end up being more useful for coding than o1, except maybe for some specific instances where you need very deep analysis.

    • superfrank 5 hours ago

      How well did it work for generating tests? I was looking for an AI test-generation tool yesterday and came across this, but it wasn't clear how good it is.

      (before I get a bunch of comments about not letting AI write tests, this is for a hobby side project that I have a few hours a week to work on. I'm looking into AI test generation because the alternative is no tests)
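      For what it's worth, for small pure functions these tools mostly emit straightforward pytest cases. The sketch below is a made-up illustration of that style (the slugify function and tests are invented, not Qodo's actual output):

      ```python
      # Hypothetical function under test, invented for illustration.
      def slugify(title: str) -> str:
          """Turn a post title into a lowercase, hyphen-separated URL slug."""
          return "-".join(title.lower().split())

      # The kind of cases an AI test generator typically proposes:
      # a happy path, a trivial input, and a whitespace edge case.
      def test_slugify_basic():
          assert slugify("Hello World") == "hello-world"

      def test_slugify_single_word():
          assert slugify("Python") == "python"

      def test_slugify_collapses_whitespace():
          assert slugify("  a   b ") == "a-b"
      ```

      For a hobby project that's often plenty; it's still worth skimming the generated cases rather than trusting coverage numbers blindly.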

  • haliliceylan 5 hours ago

    How is that free ???

    • rtsil 5 hours ago

      Presumably "free" refers to users on their free plan, which does not include code generation/autocomplete except for tests.