I'd much, much prefer Aide to continue as a CLI tool or as a VSCode plugin. Every fork of VSCode ends up with IDE maintenance bugs that never get addressed and slowly the effort implodes as the bug surface becomes too wide.
Do you want to spend 90% of your time on AI or on troubleshooting odd Linux VSCode bugs in your fork? I'd highly recommend the team evaluate a different direction to ensure sustainable growth.
That's a fair point; a significant part of our four-person team had to skill up on the VSCode codebase to be able to meaningfully make changes to it.
I would love to know your workflow. You mention a CLI tool or a VSCode plugin: which of them works for you? What's missing from them where Aide could fill the gap?
Why is a fork required? I use the Cline plugin for VS Code and it seems to be able to do more things, like update code directly, create new files, etc.
The fork was necessary for the UX we wanted to go for. I do agree that an extension can also satisfy your needs (and it clearly does in your case).
Having a deeper integration with the editor allows for some really nice paradigms:
- Rollbacks feel more native, in the sense that I do not lose my undo or redo stack
- cmd+k is more in line with what you would expect, with a floating widget for input instead of it being shown at the very top of your screen, which is the case with any extension for now.
Going further, the changes which Microsoft is making to enable Copilot editing features are only open to "copilot-chat" and no other extension (fair game for Microsoft, IMHO).
So keeping these things in mind, we designed the architecture in a way that we can go towards any interface (editor/extension). We did put energy into making this work deeply with the VSCode ecosystem of APIs and also added our own.
If the editor does not work to our benefit, we will take a call on moving to a different interface, and that's where an extension or cloud-based solution might also make sense.
After using Cursor (another AI-focused fork) I'm 100% on the fork train. AI built natively into the IDE presents another layer of speed and isn't subject to the limitations of the extension system (which is awesome in its own right; not a knock on it).
I was on the fork train for a while, but Cursor keeps having weird issues with indexing, IntelliSense, and not being able to save files when format-on-save is enabled. I wound up going back to VSCode with Cline, and I use OpenRouter to save money via prompt caching. To my knowledge, Cursor doesn't have Claude Sonnet's computer use enabled yet, which is a total game changer, and Cline does. I'll check back in a few months, but instead of paying $20 a month for Cursor Pro I can put $20 in credits into OpenRouter and fully leverage the latest Claude model and features.
I've been using it recently to have Cline check my local dev server to review its changes and iterate if there is anything off with the design changes it has made. Example prompt:
I have attached a screenshot of an updated design for the Navbar component. My local dev server is running at localhost:3000. Update the component to match the new designs and double check your changes after save.
There are quite a few things! On VSCode's direction (I am making my own assumptions from what I have learned):
- VSCode is working on the "working set" direction to make multi-file edits work
- Their idea of bringing in other extensions is via the provider API, which only Copilot has access to (so you can't use them if you are not a Copilot subscriber)
So just taking these things at face value, I think there is lots to innovate on.
No editor (biased view of mine) has really captured the idea of a pair programmer working alongside you. Even now the most beloved feature is Copilot or Cursor Tab with the inline completions.
So we are a ways away from a saturated market, or even feature-set-level saturation. Until we get there, I do think forks have a way to work towards their own version of an ideal AI-native editor, and I do think the editors of the future will look different given the upward trend of AI abilities.
> No editor (bias view of mine) has really captured the idea of a pair programmer working alongside you.
I had this feeling for the first time with Cline. It adjusted the code, accessed the terminal, rebuilt the image, ran the container, saw an error, suggested new edits, ran again... All while only asking for a few confirmations. And it's very verbose: it tells you with details what it's doing every step of the way.
But I migrated to the new Copilot a few days ago because I was easily spending $5 in a day.
> There's such a difference in feel that may be rooted in a philosophy, but boils down to how much the creator's vision aligns with my own.
Hard agree! I do think AI will find its way into our productivity tool kit in different ways.
There are still so many ways we can go about doing this. A/B comparisons aside, I do feel that giving people the power to mold the tool to work for themselves is the right way.
Just an FYI, there is extant code called Aide[0][1] (Advanced Intrusion Detection Environment) as well that's under active development/maintenance.
Since they perform vastly different functions (the above is a replacement for tripwire[2]), it's unlikely anyone will be confused, but be aware that it exists. There's always someone who will mistake one for the other.
VSCode forks are not new; there are many companies out there building towards this vision. What sets us apart is partly our philosophy (deeply integrating into the editor), partly the tech stack (running everything, to the dot, locally), giving developers control over LLM usage, and other niceties (like rollbacks, which I think are paramount).
Fair point! We are not taking a stab at Cursor in any way (it's a great product).
In terms of features I do believe we are differentiated enough; the workflows we came up with are different, and we are all about giving the data (prompts + responses) back to the user.
The sidecar is not the killer feature; it's one of the many things which tie the whole experience together.
Good callout on the codestoryai account being suspended, we are at @aide_dev
Can be a cultural idiomatic thing. For me (BrEng) "I'm sorry but..." doesn't explicitly mean "I regret/apologise for what I'm about to say" - in fact it's an intensifier.
Open Source; giving full ownership of the data to the users; running completely locally; we want to make sure you can use Aide no matter the environment you are in.
In my previous life at Facebook, I worked on the infra team on a cluster manager similar to Kubernetes; that's where I first heard the term sidecar. Something about the concept of a binary running alongside the pod, powering other related things, felt strong.
For the most part, this is the inspiration for naming the AI brain: sidecar.
Unfortunately, there are a distinctly limited number of words that can communicate a particular concept - the pigeon-hole principle suggests duplication will be inevitable.
Ambiguity's only really a problem when the same term is used multiple ways in similar contexts. I think it's very unlikely that anyone will get confused between these two usages.
In a world that prioritizes precise user control over the output text, how do you justify the value of relinquishing such control even to provide actions the user would have made already? It only takes a single bad edit to make the user lose all trust.
Yeah, bad edits are especially bad when we keep building on top of them.
Our first take on solving this is with rollbacks, which allow you to delete edits up to a point in the conversation. So if you do notice a bad edit, you can undo it.
After this, there is the proactive agent, which checks its work again and suggests further changes it needs to make. You can give feedback and guide it.
With LLMs we do lose a bit of control, but I think the editor should work to solve this.
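The rollback described above can be modeled as checkpointing: each conversation exchange records the edits it produced, and rolling back to exchange k drops the later exchanges and returns their edits for reverting, newest first. A minimal sketch (the names and structure are mine, not Aide's internals):

```python
from dataclasses import dataclass, field

@dataclass
class Exchange:
    prompt: str
    edits: list  # file edits produced by this exchange

@dataclass
class Conversation:
    exchanges: list = field(default_factory=list)

    def add(self, prompt, edits):
        self.exchanges.append(Exchange(prompt, edits))

    def rollback_to(self, index):
        """Drop every exchange after `index`; return the edits to revert, newest first."""
        dropped = self.exchanges[index + 1:]
        self.exchanges = self.exchanges[: index + 1]
        return [e for ex in reversed(dropped) for e in reversed(ex.edits)]
```

Reverting newest-first matters because later edits may build on earlier ones, which is exactly the compounding-bad-edit problem above.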
Oh for sure! I do want to talk about how Rust really helped us so many times when doing refactors or building new features; it's part of the reason why we were able to iterate so quickly on the AI side of things and ship features.
Any tips for using Aide with another text editor? I.e. I'm not going to work outside of my preferred text editor (Helix at the moment), so I'm curious about software which has a workflow around this, rather than trying to move me to a new text editor.
The binary is fairly agnostic to the environment, so there is a possibility to make it work elsewhere.
It's a bit non-trivial, but I would be happy to brainstorm and talk more about this.
I have just downloaded and set it up to talk to Anthropic. I have some unused API credits that I've been trying to find a use for. Haha.
To be honest, I was very confused by the @'ing. Is it possible to always pin the current open file to the context, perhaps in a FIFO fashion (other than the manually pinned files)? The UX around pinning / @'ing is not super clear to me.
Otherwise still getting the hang of it, and generally pretty neat.
There is pinned context on the very top where you can pin the files which you frequently use.
We will start including the open file by default in the context very soon (one of the gotchas here is that the open file might not be related to the question you have).
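The FIFO suggestion above could look something like this: pinned files always stay in context, while recently opened files fill a bounded queue that evicts the oldest entry. This is purely my sketch of the parent's idea (class and method names are hypothetical, not Aide's API):

```python
from collections import OrderedDict

class ContextFiles:
    def __init__(self, max_recent=3):
        self.pinned = []             # user-pinned, always included
        self.recent = OrderedDict()  # recently opened files, oldest first
        self.max_recent = max_recent

    def pin(self, path):
        if path not in self.pinned:
            self.pinned.append(path)

    def opened(self, path):
        # Re-opening a file moves it to the newest slot.
        self.recent.pop(path, None)
        self.recent[path] = True
        while len(self.recent) > self.max_recent:
            self.recent.popitem(last=False)  # evict oldest (FIFO)

    def context(self):
        return self.pinned + [p for p in self.recent if p not in self.pinned]
```

The cap keeps the context from growing unboundedly, which also limits the "open file isn't related to the question" blast radius.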
I would take that as a compliment; big fan of Zed (I hope their extension ecosystem soon allows us to plug sidecar into Zed).
Tbh I did try out their implementation and it still feels early. One of the key differences we went for was to allow the user to move freely between chat and editing mode.
I was able to successfully install the AppImage. Perhaps it was just me, but I did not find it as easy to work with as I imagined; better, easier-to-follow tutorials with images might help. I was hoping, for example, to highlight code and have Aide just intuitively ingest that piece of code and work with it, but the process is not that easy. It's probably me, but at least I was able to get it to install :)
Yep! And AWS Bedrock gives you also plenty of other models on the back end, plus better control over rate limits. (But for us the important thing is data residency, the code isn't uploaded anywhere.)
Yup! Feel free to add the client support; you are on the right track with the changes.
To test the whole flow out here are a few things you will want to do:
- https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186... (you need to create the LLMProperties object over here)
- add support for it in the broker over here: https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186...
- after this you should be at the very least able to test out Cmd+K (highlight and ask it to edit a section)
- In Aide, if you go to User Settings: "aide self run", you can tick this and then run your local sidecar so you are hitting the right binary (kill the binary running on port 42424; that's the webserver binary that ships along with the editor)
If all of this sounds like a lot, you can just add the client and I can also take care of the plumbing!
Hmm, looks like this is still a pretty early project for me. :)
My experience:
1. I didn't have a working installation window after opening it for the first time. Maybe what fixed it was downloading and opening some random javascript repo, but maybe it was rather switching to "Trusted mode" (which makes me a bit nervous but ok).
2. Once the assistant window input became active, I wrote something short like "hi", but nothing happened after pressing ctrl-Enter. I rage-clicked around a bit, so it's possible I queued multiple requests. About 30 seconds later, I suddenly got a reply (something like "hi, what do you want me to do"). That's... not great latency. :)
3. Once I got it working, I opened the sidecar project and sent my second assistant prompt. I got back this response after a few tens of seconds: "You have used up your 5 free requests. Please log in for unlimited requests." (Idk what these 5 requests were...)
I gave it one more go by creating an account. However after logging in through the browser popup, "Signing in to CodeStory..." spins for a long time, then disappears but AIDE still isn't logged in. (Even after trying again after a restart.)
> 2. Once the assistant window input became active, I wrote something short like "hi", but nothing happened after pressing ctrl-Enter. I rage-clicked around a bit, so it's possible I queued multiple requests. About 30 seconds later, I suddenly got a reply (something like "hi, what do you want me to do"). That's... not great latency. :)
Yup, that's because of the traffic and the LLM rate limits :( We are getting more TPM right now, so the latency spikes should go away; I had half a mind to spin up multiple accounts to get higher TPM, but oh well... If you do end up using your own API key, there is no latency at all. Right now the requests get pulled into a global queue, so that's probably what's happening.
> 3. Once I got it working, I opened the sidecar project and sent my second assistant prompt. I got back this response after a few tens of seconds: "You have used up your 5 free requests. Please log in for unlimited requests." (Idk what these 5 requests were...)
The auth flow being wonky is on us; we did fuzz-test it a bit, but as with any software, it slipped through the cracks. We were even wondering whether to skip auth completely if you are using your own API keys; that way there is zero-touch interaction with our LLM proxy infra.
Thanks for the feedback though, I appreciate it, and we will do better.
Just general coding, mostly python. Seems to me that Qwen 2.5, especially the upcoming bigger coder model might be the best performing coding model for 24GB VRAM setups
I know... we could have built something from the ground up (like Zed did), but we had to pick a battle between building a new editor from scratch or building from a solid foundation (VSCode).
We are a small team right now (4 of us) and have been users of VSCode, so instead of building something new, putting energy into building from VSCode made a lot more sense to us.
Been using Cursor since launch. Really frustrating how they charge per message (500/mo) instead of by token usage. Like, why should a one-line code suggestion cost the same as refactoring an entire class? Plus it's been losing context lately and making rookie mistakes.
Tried Zed AI but couldn't get into it - just doesn't feel as smooth as Cursor used to. GitHub Copilot is even worse. Feels like they're so obsessed with keeping that $10/month price point that they've gutted everything. Context window is tiny and the responses are barebones.
I used Cursor for many months, but found that I couldn't deal with how slow and workflow-interrupting VSCode feels, so I went back to Zed.
I tried out and abandoned Zed AI, but I've found that Zed + aider is a really excellent setup – for me, at least.
For smaller things, Zed's inline Copilot setup works (nowadays) just as well as Cursor's and for things that are even a little bigger than tiny, I pull up aider and prompt the change the exact same way that I did with Cursor's Composer before.
I'm an odd case because I'm a pretty expert-level programmer, but I find that aider with o1 is helpful for hairy, expansive, and tedious things that would otherwise irritate.
I do remember the Nexus 4 (Jelly Bean OS). I was fascinated at the time that you could play games on Android, and I ran the Android emulator on my desktop at that point (I was young and needed the games, haha).
I tried Cursor and found it annoying. I don’t really like talking to AI in IDE chat windows. For whatever reason, I really prefer a web browser. I also didn’t like the overall experience.
I’m still using Copilot in VS Code every day. I recently switched from OpenAI to Claude for the browser-based chat stuff and I really like it. The UI for coding assistance in Claude is excellent. Very well thought out.
Claude also has a nice feature called Projects where you can upload a bunch of stuff to build context which is great - so for instance if you are doing an API integration you can dump all the API docs into the project and then every chat you have has that context available.
As with all the AI tools you have to be quite careful. I do find that errors slip into my code more easily when I am not writing it all myself. Reading (or worse, skimming) source code is just different than writing it. However, between type safety and unit testing, I find I get rid of the bugs pretty quickly and overall my productivity is multiples of what it was before.
I am on day 8 of Cursor's 14-day trial. If things continue to go well, I will be switching from Webstorm to Cursor for my Typescript projects.
The AI integrations are a huge productivity boost. There is a substantial difference in the quality of the AI suggestions between using Claude on the side, and having Claude be deeply integrated in the codebase.
I think I accepted about 60-70% of the suggestions Cursor provided.
Some highlights of Cursor:
- Wrote about 80% of a Vite plugin for consolidating articles in my blog (built on remix.run)
- Wrote a Github Action for automated deployments. Using Cursor to write automation scripts is a tangible productivity boost.
- Made meaningful alterations to a libpg_query fork that allowed it to be cross-compiled to iOS. I have very little experience with C compilation, it would have taken me a substantially long time to figure this out.
There are some downsides to using Cursor though:
- Cursor can get too eager with its suggestions, and I'm not seeing any easy way to temporarily or conditionally turn them off. This was especially bad when I was writing blog posts.
- Cursor does really well with Bash and Typescript, but does not work very well with Kotlin or Swift.
- This is a personal thing, but I'm still not used to some of the shortcuts that Cursor uses (Cursor is built on top of VSCode).
I would not be able to leave a JetBrains product for Kotlin, or Xcode for Swift.
Overall it's so unfortunate that JetBrains doesn't have a Cursor-level AI plugin*, because JetBrains IDEs by themselves are so much more powerful than base-level VS Code that it actually erases some small portion of the gains from AI...
(* people will link many Jetbrains AI plugins, but none are polished enough)
I probably would switch to Cursor for Swift projects too if it weren't for the fact that I will still need Xcode to compile the app.
I also agree that the non-AI parts of JetBrains products are much better than the non-AI parts of Cursor. JetBrains' refactoring tools are still unmatched.
That said, I think the AI part is compelling enough to warrant the switch. There are code rewrite tasks that JetBrains would struggle with, that LLMs can do fairly easily.
JetBrains is very interesting; what are the best-performing extensions out there for it?
I do wonder what API-level access we get over there as well. For sidecar to run, we need LSP + a web/panel for the UX part (deeper editor-layer access, like the undo and redo stack, would also be cool but isn't totally necessary).
It's great that Cursor is working for you. I do think LLMs in general are far, far better at TypeScript and Python compared to other languages (it reflects the training data).
What features of Cursor were the most compelling to you? I know their autocomplete experience is elite, but I'm wondering if there are other features which you use often!
Their autocomplete experience is decent, but I've gotten the most value out of Cursor's "chat + codebase context" (no idea what it's called). The feature where you feed it the entire codebase as part of the context, and let Cursor suggest changes to any parts of the codebase.
Ohh interesting... I tried it on a couple of big repos and it was a bit of a miss for me. How large are the codebases you work on? I want to get a sense check on where the behavior deteriorates with embedding + GPT-3.5-based reranker search (not sure if they are doing more now!)
That's a good metric to aim for... creating a full local index for 600k lines is pretty expensive, but there are a bunch of heuristics which can take us pretty far:
- looking at git commits
- making use of recently accessed files
- keyword search
If I set these constraints and allow for maybe around 2 LLM round trips, we can get pretty far in terms of performance.
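Combined, those three signals amount to a cheap pre-ranking pass that runs before any LLM call. An illustrative sketch with made-up weights (none of this is Aide's actual scoring):

```python
def rank_files(files, recent_commits, recently_opened, query_keywords):
    """Score candidate files with cheap heuristics; higher is better.

    files: list of {"path": str, "text": str} dicts.
    recent_commits / recently_opened: sets of paths.
    """
    def score(f):
        s = 0.0
        if f["path"] in recent_commits:    # touched in recent git history
            s += 2.0
        if f["path"] in recently_opened:   # open or recently viewed in the editor
            s += 1.5
        # keyword overlap with the user's question
        s += sum(1.0 for kw in query_keywords if kw in f["text"].lower())
        return s
    return sorted(files, key=score, reverse=True)
```

Only the top few ranked files would then go into the (budgeted) LLM round trips, avoiding a full index of the repository.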
It really depends on what you're doing. AI is great for generating a ton of text at once but only a small subset of programming tasks clearly benefit from this.
Outside of this it's an autocomplete that's generally 2/3rds incorrect. If you keep coding as you normally do and accept correct solutions as they appear you'll see a few percentage productivity increase.
For highly regular patterns you'll see a drastically improved productivity increase. Sadly this is also a small subset of the programming space.
One exception might be translating user stories into unit tests, but I'm waiting for positive feedback to declare this.
I can give my broader feedback:
- Codegen tools today are still not great:
The lack of context, and not using LSP, really hurt the quality of the generated code.
- Autocomplete is great
Autocomplete is pretty nice. IMHO it helps finish your thoughts and code faster; it's like IntelliSense but better.
If you are working on a greenfield project, AI codegen really shines today and there are many tools in the market for that.
With Aide, we wanted it to work for engineers who spend >= 6 months on the same project and there are deep dependencies between classes/files and the project overall.
For quick answers, I have a renewed habit of going to o1-preview or Sonnet 3.5 and then fact-checking that with Google (I have not been to Stack Overflow in a long while now).
Do give AI coding a chance, I think you will be excited to say the least for the coming future and develop habits on how to best use the tool.
What we found was that it's not just about having access to these tools, but about smartly performing `go-to-definition`, `go-to-reference`, etc. to grab the right context as and when required.
Every LLM call in between slows down the response time, so there is a fair bit of heuristics we use today to sidestep that process.
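Concretely, this kind of context gathering is a bounded graph walk: start from the symbols in the edited region, follow go-to-definition edges, and stop once a hop or symbol budget is hit so no extra LLM round trips are needed. A toy version over a prebuilt symbol graph (the graph shape and budgets are illustrative, not sidecar's implementation):

```python
from collections import deque

def gather_context(start, definitions, max_hops=2, max_symbols=5):
    """BFS over a symbol graph; `definitions` maps a symbol to the symbols it references."""
    seen, queue = {start}, deque([(start, 0)])
    out = []
    while queue and len(out) < max_symbols:
        sym, depth = queue.popleft()
        out.append(sym)
        if depth == max_hops:
            continue                       # budget exhausted: don't expand further
        for ref in definitions.get(sym, []):  # "go-to-definition" edges
            if ref not in seen:
                seen.add(ref)
                queue.append((ref, depth + 1))
    return out
```

In a real editor the `definitions` edges would come from LSP queries rather than a dict, but the budget logic is the same.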
I'm using Copilot in VScode every day, it works fine, but I mostly use it as glorified one-line autocomplete. I almost never accept multi-line suggestions, don't even look at them.
I tried to use AI deeper, like using aider, but so far I just don't like it. I'm very sensitive to the tiny details of code and AI almost never got it right. I guess actually the main reason that I don't like AI is that I love to write code, simple as that. I don't want to automate that part of my work. I'm fine with trivial autocompletes, but I'm not fine with releasing control over the entire code.
What I would love is to automate interaction with other humans. I don't want to talk to colleagues, boss or other people. I want AI to do so and present me some short extracts.
GitHub Copilot in either VS Code or JetBrains IDEs. Having more or less the same experience across multiple tools is lovely and meets me where I am, instead of making me get a new tool.
The chat is okay, the autocomplete is also really pleasant for snippets and anything boilerplate heavy. The context awareness also helps. No advanced features like creating entirely new structures of files, though.
Of course, I’ll probably explore additional tools in the future, but for now LLMs are useful in my coding and also sometimes help me figure out what I should Google, because nowadays seemingly accurate search terms return trash.
Cursor works amazing day to day. Copilot is not even comparable there. I like but rarely use aider and plandex. I'd use them more if the interface didn't take me completely away from the ide. Currently they're closer to "work on this while I'm taking a break".
I was deep into AI coding experiments since last December before all the VS Code Extensions and IDEs came out.
I wrote a few scripts to get to a semi-automated workflow where I have control over the source-code context and the code-editing portion, because I believe I can do better than AI in those areas.
I'm still copy/pasting between VS Code and ChatGPT. I just don't want to invest/commit yet because this workflow is good enough for me. It lets me chat about design, architecture, UX, and product in the same context as the code, which I find helpful.
Pros
- Only one subscription needed
- Very simple
- Highly flexible/adaptive to what part of workflow I'm in
Cons
- More legwork
- Copy/pasting sometimes results in errors due to incomplete context
I've been building and using these tools for well more than a year now, so here's my journey on building and using them (ORDER BY DESC datetime).
(1) My view now (Nov 2024) is that code building is very conversational and iterative. You need to be able to tweak aspects of generated code by talking to the LLM. For example: "Can you use a params object instead of individual parameters in addToCart?". You also need the ability to sync generated code into your project, run it, and pipe any errors back into the model for refinement. So basically, a very incremental approach to writing it.
For this I made a Chrome plugin, which allowed ChatGPT and Claude to edit source code (using Chrome's File System APIs). You can see a video here: https://www.youtube.com/watch?v=HHzqlI6LLp8
(2) Earlier this year, I thought I should build a VS Code plugin. It actually works quite well, allows you to edit code without leaving VSCode. It does stuff like adding dependencies, model selection, prompt histories, sharing git diffs etc. Towards the end, I was convinced that edits need to be conversations, and hence I don't use it as much these days.
(3) Prior to that (2023), I built this same thing in CLI. The idea was that you'd include prompt files in your project, and say something like `my-magical-tool gen prompt.md`. Code would be mostly written as markdown prompt files, and almost never edited directly. In the end, I felt that some form of IDE integration is required - which led to the VSCode extension above.
All of these tools were primarily built with AI. So these are not hypotheticals. In addition, I've built half a dozen projects with it; some of it code running in production and hobby stuff like webjsx.org.
Basically, my takeaway is this: code editing is conversational. You need to design a project to be AI-friendly, which means smaller, modular code which can be easily understood by LLMs. Also, my way of using AI is not auto-complete based; I prefer generating from higher level inputs spanning multiple files.
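The generate-run-refine loop described in (1) boils down to: generate source, execute it, and feed any traceback back in as the next turn. A sketch of that control flow, with a stub callable standing in for the LLM:

```python
import subprocess
import sys
import tempfile

def refine(generate, max_rounds=3):
    """generate(error) -> source string; rerun until the code executes cleanly."""
    error = None
    for _ in range(max_rounds):
        source = generate(error)  # error is None on the first round
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(source)
        result = subprocess.run([sys.executable, f.name],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return source
        error = result.stderr  # pipe the traceback back into the model
    raise RuntimeError("could not converge")
```

With a real model, `generate` would append the traceback to the conversation and request a fix; the incremental, conversational feel comes from this loop rather than from any single prompt.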
That's a great way to build a tool which solves your need.
In Aide as well, we realised that the major missing loop was the self-correction one; it needs to iteratively expand and do more.
Our proactive agent is our first stab at that. We also realised that the flow from chat -> edit needs to be very free-form, and the edits are a bit more high-level.
I do think you will find value in Aide; do let me know if you get a chance to try it out.
1) Cost. More people have ChatGPT/Claude than CoPilot. And it's cheaper to load large contexts into ChatGPT than into the API. For example, o1-preview is $15/million tokens. And it's a fixed $20/m that someone can use for everything else as well.
Of course, there are times when I just use the VS Code plugin via API as well.
2) I want to stay in VS Code. So that excludes some of the options you mentioned
3) I don't find tiled VSCode + ChatGPT much of a hindrance.
4) Things might have improved a bit, but previously the Web-based chat interface was more refined/mature than the integrated interface.
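On the cost point in 1), the break-even is simple arithmetic: at o1-preview's roughly $15 per million input tokens, a flat $20/month subscription wins once usage passes about 1.3M tokens a month (output tokens are priced higher, which only lowers that threshold). The conversation sizes below are my own illustrative numbers:

```python
API_PRICE_PER_MTOK = 15.0   # o1-preview input pricing, USD per million tokens
FLAT_MONTHLY = 20.0         # flat ChatGPT subscription price

# Token volume at which the API bill matches the flat subscription.
breakeven_tokens = FLAT_MONTHLY / API_PRICE_PER_MTOK * 1_000_000
print(round(breakeven_tokens))  # 1333333, i.e. ~1.3M tokens/month

# Example: ten 20k-token conversations a day, 30 days a month.
monthly_tokens = 10 * 20_000 * 30
api_cost = monthly_tokens / 1_000_000 * API_PRICE_PER_MTOK
print(api_cost)  # 90.0 USD via the API, versus the flat 20
```

So for anyone reusing large contexts daily, the fixed-price web UI is several times cheaper than the metered API.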
Besides Claude.vim for "AI pair programming"? :)
(tbh it works well only for small things)
I'm using Codeium and it's pretty decent at picking up the right context automatically, usually it autocompletes within ~100kLoC project quite flawlessly. (So far I haven't been using the chat much, just autocomplete.)
I never got the appeal of having the AI directly in your editor, I've tried Copilot and whatever JetBrains are calling their assistant and I found it mostly just got in the way. So for me it's no AI in editor and ChatGPT in a browser for when I do need some help.
VS Code plugins. Codeium at home, GitHub Copilot at work. Both are good; probably equivalent.
Codeium recently pushed an annoying update that limits your ctrl-I prompt to one line, and it is lost if you lose focus, e.g. to check another file. There is a GH issue for that.
It kept truncating files only about 600 lines long. It also seems to rewrite the entire file each time instead of just sending diffs like aider does, making it super slow.
Oh, I see your point now. It's weird that they are not doing search-and-replace style editing.
Although now that OpenAI also has Predicted Outputs, I think this will improve, and it won't make mistakes while rewriting longer files.
The 600-line limit might be due to the output token limit on the LLM (not sure what they are using for the code rewriting).
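Search-and-replace style editing, as aider does it, sidesteps both the output-token cap and the truncation problem: the model emits only small SEARCH/REPLACE hunks and the client applies them locally. A minimal applier for one hunk (my simplification of the idea, not aider's actual parser):

```python
def apply_edit(source: str, search: str, replace: str) -> str:
    """Apply one SEARCH/REPLACE hunk; the search text must match exactly once."""
    count = source.count(search)
    if count != 1:
        raise ValueError(f"search block matched {count} times, expected exactly 1")
    return source.replace(search, replace)
```

Requiring exactly one match is the safety valve: an ambiguous or stale hunk fails loudly instead of silently editing the wrong spot, which matters when the model's view of the file has drifted.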
It's not nearly as helpful as Claude.ai - it seems to only want to do the minimum required. On top of that it will quite regularly ignore what you've asked, give you back the exact code you gave it, or even generate syntactically invalid code.
It's amazing how much difference the prompt must make because using it is like going back to gpt3.5 yet it's the same model.
This is literally a totally different piece of software with a completely unrelated use case. Changing the name would make as much sense as renaming a hammer because someone invented a screwdriver.
You have just woken up from the cryosleep you entered in 2024. The year is 2237. GPT-64 and its predecessors have been around for nigh on 100 years. But there has been no civilizational upheaval. Your confusion is cleared when you check the inter-agent high-speed data bus. You expect this to be utterly incomprehensible, but both the human and AI data is clearly visible. It is a repeating pattern. The agents are mimicking human behavior perfectly and you can’t tell which is which. All data transmitted has the same form:
$word is already a name for a project. Stop copying it. Change your name.
Mankind and His Machine Children have met The Great Filter.
I don't think it's going to take us 200 years to kick the habit of using global namespaces for friendly names, maybe 80. Recognizing a name and rendering it as a disambiguation based on my location in the trust graph should be a feature of the text box, not something that I have to think about.
> which one of them work for you?

Plugin, which gives you access to the UI.
Make a plugin that interfaces with a CLI tool! Best of both worlds, I think!
Why is a fork required? I use the cline plugin for VS Code and it seems to be able to be able to more things, like update code directly, create new files, etc.
The fork was necessary for the UX we wanted to go for. I do agree that an extension can also satisfy your needs (and it clearly does in your case).
Having a deeper integration with the editor allows for some really nice paradigms:
- Rollbacks feel more native, in the sense that I do not lose my undo or redo stack.
- Cmd+K is more in line with what you would expect, with a floating widget for input instead of it being shown at the very top of your screen, which is the case with any extension for now.
Going further, the changes Microsoft is making to enable Copilot editing features are only open to "copilot-chat" and no other extension (fair game for Microsoft, IMHO). So keeping these things in mind, we designed the architecture in a way that lets us target any interface (editor or extension). We did put energy into making this work deeply with the VSCode ecosystem of APIs, and also added our own.
If the editor does not work to our benefit, we will take a call on moving to a different interface, and that's where an extension or a cloud-based solution might also make sense.
After using Cursor (another AI focused fork) I'm 100% on the fork train. AI built natively into the IDE presents another layer of speed and isn't subject to the limitations of the extension system (which is awesome in its own right, not a knock on it).
I was on the fork train for a while, but Cursor keeps having weird issues with indexing, IntelliSense, and not being able to save files when format-on-save is enabled. I wound up going back to VSCode with Cline, and I use OpenRouter to save money via prompt caching. To my knowledge Cursor doesn't have Claude Sonnet's computer use enabled yet, which is a total game changer, and Cline does. I'll check back in a few months, but instead of paying $20 a month for Cursor Pro, I can put $20 of credits into OpenRouter and fully leverage the latest Claude model and features.
How do you use computer use? I do find it a very interesting API layer to play around with.
I've been using it recently to have Cline check my local dev server to review its changes, and iterate if there is anything off with the design changes it has made. Example prompt:
I have attached a screenshot of an updated design for the Navbar component. My local dev server is running at localhost:3000. Update the component to match the new designs and double check your changes after save.
But doesn't Cline consume lots of tokens?
Links to the project, I'm guessing these :)
https://github.com/codestoryai/aide
https://aide.dev/
You missed this one: https://github.com/codestoryai/sidecar (Sidecar, the AI brains). And https://github.com/codestoryai/aide (Aide, the editor).
What is the privacy policy? Do you get to see my code and secrets? Does anybody else? I don't want you to. Nothing personal.
We don't. In your user settings, type in "disable all telemetry" and we won't see a thing.
What telemetry are you sending without this setting?
Nothing. https://github.com/codestoryai/ide/blob/0eb311b7e4d7d63676ad... shows that we get back an undefined PosthogClient instead of populating it.
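For anyone curious about the pattern being described: returning an undefined client when telemetry is off, so every call site simply no-ops. Here's a minimal hypothetical sketch of that guard (names invented for illustration; this is not the actual Aide code):

```typescript
// Hypothetical telemetry guard: when the user disables telemetry, the
// factory returns undefined, and call sites optional-chain through it,
// so no event is ever constructed or sent.
interface PosthogClient {
  capture(event: string, properties?: Record<string, unknown>): void;
}

function createTelemetryClient(telemetryEnabled: boolean): PosthogClient | undefined {
  if (!telemetryEnabled) {
    return undefined; // no client object, no network calls
  }
  const sent: string[] = [];
  return {
    capture(event) {
      sent.push(event); // a real client would POST to the telemetry backend here
    },
  };
}

const client = createTelemetryClient(false);
client?.capture("editor_opened"); // no-op: client is undefined
```

The nice property of this design is that there is no "send but drop" path: with telemetry disabled, the client object never exists at all.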
Genuine question: with VSCode going all in on this direction, what's left for forks like this?
There are quite a few things! On VSCode's direction (I am making my own assumptions from the learnings I have):
- VSCode is working on the working-set direction of making multi-file edits work.
- Their idea of bringing in other extensions is via the provider API, which only Copilot has access to (so you can't use them if you are not a Copilot subscriber).
So just taking these things at face value, I think there is lots to innovate on. No editor (biased view of mine) has really captured the idea of a pair programmer working alongside you. Even now the most beloved feature is Copilot or Cursor Tab with the inline completions.
So we are a ways away from a saturated market, or even feature-set-level saturation. Until we get there, I do think forks have a way to work towards their own version of an ideal AI-native editor. I do think the editors of the future will look different, given the upward trend of AI abilities.
> No editor (biased view of mine) has really captured the idea of a pair programmer working alongside you.
I had this feeling for the first time with Cline. It adjusted the code, accessed the terminal, rebuilt the image, ran the container, saw an error, suggested new edits, ran again... All while only asking for a few confirmations. And it's very verbose: it tells you with details what it's doing every step of the way.
But I migrated to the new Copilot a few days ago because I was easily spending $5 in a day.
Betting on Microsoft messing up on UX side, as always.
Looks interesting, is there a binary for Mac OS? I'd rather not build from scratch just to demo.
For the people comparing to Cursor on features, I suspect the winner is going to be hard to articulate in an A:B comparison.
There's such a difference in feel that may be rooted in a philosophy, but boils down to how much the creator's vision aligns with my own.
Yes there is, we have the binary link on our website but putting it here:
- arm64 build: https://github.com/codestoryai/binaries/releases/download/1....
- x86 build: https://github.com/codestoryai/binaries/releases/download/1....
> There's such a difference in feel that may be rooted in a philosophy, but boils down to how much the creator's vision aligns with my own.
Hard agree! I do think AI will find its way into our productivity toolkit in different ways. There are still so many ways we can go about doing this; A:B comparisons aside, I do feel that giving people the power to mold the tool to work for themselves is the right way.
Just an FYI, there is extant code called Aide[0][1] (Advanced Intrusion Detection Environment) as well that's under active development/maintenance.
Since they perform vastly different functions (the above is a replacement for tripwire[2]), it's unlikely anyone will be confused, but be aware that it exists. There's always someone who will mistake one for the other.
[0] https://aide.github.io/
[1] https://en.wikipedia.org/wiki/Advanced_Intrusion_Detection_E...
[2] https://en.wikipedia.org/wiki/Open_Source_Tripwire
Is there a comparison with Cursor I can read?
What differentiates Aide from all the existing tools in this space like Cursor?
VSCode forks are not new; there are many companies out there building towards this vision. What sets us apart is partly our philosophy (deeply integrating into the editor), partly the tech stack (running everything, to the dot, locally), and giving developers control over LLM usage, plus other niceties (like rollbacks, which I think are paramount).
> What sets us apart is partly our philosophy (deeply integrating into the editor)
i'm so sorry but what do you think cursor's philosophy is
> also other niceties (like rollbacks
yep in cursor too
I know you're new, so just being gentle, but try to focus on some kind of killer feature (which I guess is sidecar?)
also https://x.com/codestoryai seems suspended fyi
Fair point! We are not taking a stab at Cursor in any way (it's a great product).
In terms of features I do believe we are differentiated enough; the workflows we came up with are different, and we are all about giving the data (prompts + responses) back to the user.
The sidecar is not the killer feature; it's one of the many things that tie the whole experience together.
Good callout on the codestoryai account being suspended; we are at @aide_dev.
You still link to it on your home page.
great catch! Thank you for pointing this out
> I'm so sorry but what do you think cursor's philosophy is
I've never understood why people say sorry for cases like these?
Because without the sorry the comment may be interpreted in an aggressive/dismissive tone. For this issue, adding a sorry is an easy fix.
Can be a cultural idiomatic thing. For me (BrEng) "I'm sorry but..." doesn't explicitly mean "I regret/apologise for what I'm about to say" - in fact it's an intensifier.
It can definitely be used that way in American English as well, though maybe less common?
Yeah, I am with you on this one; for me, it worsens the statement even more. Maybe because I've heard people use it in a way that's passive-aggressive.
Feature wise, what are you offering I wouldn’t get in Cursor today?
Aide seems to have a good open source license (Cursor is proprietary)
Open source; giving full ownership of the data to users; running completely locally. We want to make sure you can use Aide no matter the environment you are in.
Is the "sidecar" open source too?
Yes, it is! https://github.com/codestoryai/sidecar
Confusingly, "Sidecar" is the name Apple uses for their feature of having an iPad serve as a second screen/touch interface for a Mac:
https://support.apple.com/en-us/102597
It's also the name of a k8s thing: https://kubernetes.io/docs/concepts/workloads/pods/sidecar-c...
And AIDE is also the name for an Android IDE or an Advanced Intrusion Detection Environment
In my previous life at Facebook, I worked on the infra team, on a cluster manager similar to Kubernetes; that's where I first heard the term sidecar. Something about the concept of a binary running alongside the pod, powering other related things, felt strong. In most part, this is the inspiration for naming the AI brain "sidecar".
I believe I get the metaphor. Why is it confusing?
Overloading the term with a second technological meaning.
Unfortunately, there are a distinctly limited number of words that can communicate a particular concept - the pigeon-hole principle suggests duplication will be inevitable.
Ambiguity's only really a problem when the same term is used multiple ways in similar contexts. I think it's very unlikely that anyone will get confused between these two usages.
So what? Just because Apple calls something a Retina display doesn't mean that others cannot call stuff displays.
read it as sAIDEcar
In a world that prioritizes precise user control over the output text, how do you justify the value of relinquishing such control even to provide actions the user would have made already? It only takes a single bad edit to make the user lose all trust.
Yeah, bad edits are especially bad when we keep building on top of them.
Our first take on solving this is with rollbacks, which allow you to delete edits up until a point in the conversation. So if you do notice a bad edit, you can do that.
After this, there is the proactive agent, which checks its work again and suggests more changes it needs to make; you can give feedback and guide it.
With LLMs we do lose a bit of control, but I think the editor should work to solve this.
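The rollback idea being described (deleting edits up to a point in the conversation) can be modeled as a turn-tagged edit history. A hypothetical sketch, not Aide's actual implementation; all names are invented:

```typescript
// Hypothetical model of conversation-scoped rollback: each applied edit
// is tagged with the conversation turn that produced it, so rolling back
// to turn N simply drops every edit from later turns.
interface AppliedEdit {
  turn: number;        // conversation exchange that produced this edit
  file: string;
  description: string;
}

class EditHistory {
  private edits: AppliedEdit[] = [];

  record(edit: AppliedEdit): void {
    this.edits.push(edit);
  }

  // Undo everything after (and not including) the given turn.
  rollbackTo(turn: number): AppliedEdit[] {
    const reverted = this.edits.filter((e) => e.turn > turn);
    this.edits = this.edits.filter((e) => e.turn <= turn);
    return reverted; // a real editor would re-apply inverse diffs here
  }

  current(): AppliedEdit[] {
    return this.edits;
  }
}
```

The key point is that rollback operates on the conversation's edit log, separate from the editor's own undo/redo stack, which stays untouched.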
I appreciate the earnest replies; these do in fact address my concerns.
This looks great. Would love some blog posts about your experience building this out with Rust!
Oh, for sure! I do want to talk about how Rust helped us so many times when doing refactors or building new features; it's part of the reason we were able to iterate so quickly on the AI side of things and ship features.
Any tips for using Aide with another text editor? I.e., I'm not going to work outside of my preferred text editor (Helix at the moment), so I'm curious about software which has a workflow around this, rather than trying to move me to a new text editor.
I also use Helix and I've been getting some mileage out of aider, the CLI tool. Confusing name, as I don't believe aider is affiliated with Aide.
Do you know if Helix exposes the LSP APIs all the way to the editor? If it does, doing the integration should be trivial.
I'm fairly sure helix-gpt does this, though I haven't tried it.
Reading the code for helix-gpt over here (https://github.com/leona/helix-gpt/blob/2a047347968e63ca55e2...), it looks like the architecture for the extension is based around getting the diagnostic event and then passing that along to the chat.
The readme also talks about how LSP services are not exposed properly yet; my takeaway is that it's not complete yet, but surely doable.
hmm.... I do think it can be extended to work outside of just the VSCode environment.
If you look at the sidecar side of things: https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186... these are the main APIs we use
On the editor side: https://github.com/codestoryai/ide/blob/0eb311b7e4d7d63676ad...
These are the access points we need
The binary is fairly agnostic to the environment, so there is a possibility of making it work elsewhere. It's a bit non-trivial, but I would be happy to brainstorm and talk more about this.
First editor I've seen recently that defaults to turning off the minimap.
I won't shut up about this; I don't understand how such a useless "feature" became the norm in modern IDEs.
I have just downloaded and set it up to talk to Anthropic. I have some unused API credits that I've been trying to find a use for. Haha.
To be honest, I was very confused by the @'ing. Is it possible to always pin the currently open file to the context, perhaps in a FIFO fashion (other than the manually pinned files)? The UX around pinning/@'ing is not super clear to me.
Otherwise still getting the hang of it, and generally pretty neat.
There is pinned context at the very top, where you can pin the files you frequently use.
We will start including the open file by default in the context very soon. (One of the gotchas here is that the open file might not be related to the question you have.)
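The "include open files in a FIFO fashion" suggestion from above could look like a bounded recency list alongside the explicitly pinned set. A hypothetical sketch (invented names, not the Aide implementation):

```typescript
// Hypothetical FIFO of recently opened files to auto-include as context,
// alongside explicitly pinned files. The oldest entry falls off when the
// capacity is exceeded; pinned files are never evicted.
class ContextFiles {
  private recent: string[] = [];

  constructor(
    private capacity: number,
    private pinned: Set<string> = new Set(),
  ) {}

  onFileOpened(path: string): void {
    if (this.pinned.has(path)) return; // pinned files are tracked separately
    this.recent = this.recent.filter((p) => p !== path); // dedupe: move to newest
    this.recent.push(path);
    if (this.recent.length > this.capacity) {
      this.recent.shift(); // evict the oldest recent file
    }
  }

  contextSet(): string[] {
    return [...this.pinned, ...this.recent];
  }
}
```

This keeps the context bounded (so an unrelated old file ages out on its own) while still guaranteeing pinned files are always present.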
FYI the youtube embed on https://docs.codestory.ai/features is broken (both Firefox and Chrome, MacOS).
https://support.mozilla.org/1/firefox/132.0.1/Darwin/en-US/x...
RIP, didn't expect that to happen. This is the embedded video btw https://www.youtube.com/watch?v=i8ZXMgnFSo8 putting it here for prosperity
You mean "posterity", not "prosperity" :)
This is very similar to the Zed editor. How much did you get inspired by them? And what are the differences between yours and their implementations?
I would take that as a compliment, big fan of Zed (I hope their extension ecosystem allows for us to plugin sidecar into Zed soon)
Tbh I did try out their implementation and it still feels early; one of the key differences we went for was allowing the user to move freely between chat and editing modes.
Theirs feels much more detailed and thoughtful than yours.
Yours, for example, doesn't allow one to insert diagnostics, or to insert all open tabs and all open files at once.
They also allow jumping from editing to chatting mode by simply hitting Cmd+Enter.
Hmm, you are right; the ergonomics of providing context are more powerful in Zed. Feedback taken, we will work on it.
We implicitly take in all the diagnostics on the files https://github.com/codestoryai/sidecar/blob/e5408782a3bfa461...
This is a fork of VSCode, which means people can't use the extension store anymore, right?
They can from the openvsx store https://open-vsx.org/
We also import your extensions automatically (safeguarding against the ones under Microsoft's license).
You can also just download it from the VSCode marketplace webpage and drag and drop it in.
Hmm, any time frame for when Linux (.deb,flatpak) binaries will be available?
You should be able to use this: https://github.com/codestoryai/binaries/releases/download/1.... Let me know if that does not work.
All our binaries are listed out here: https://github.com/codestoryai/binaries/releases/tag/1.94.2....
You could also use this script to set everything up: curl -sL https://raw.githubusercontent.com/codestoryai/binaries/main/... | bash (you can see the source of the script too).
I was able to successfully install the AppImage. Perhaps it was just me, but I did not find it as easy to work with as I imagined. Better, easier-to-follow tutorials with images might help. I was hoping, for example, to highlight code and have Aide intuitively ingest that piece of code and work with it, but the process is not that easy. It's probably me, but at least I was able to get it to install :)
Looks like the download links from your landing page are broken?
Whoops... on it (we got rate limited by GitHub). In the meanwhile, check this out: https://github.com/codestoryai/binaries/releases/tag/1.94.2....
It's fixed!
Any short-term plans for Claude via AWS Bedrock? (That's for me personally a blocker for trying it on our main codebase.)
Thanks for your interest in Aide!
If I understood that correctly, it would mean supporting Claude via the AWS Bedrock endpoint; we will make that happen.
If the underlying LLM does not change, then adding more connectors is pretty easy. I will ping the thread with updates on this.
Yep! And AWS Bedrock gives you also plenty of other models on the back end, plus better control over rate limits. (But for us the important thing is data residency, the code isn't uploaded anywhere.)
Is it ~just a matter of adding another file to https://github.com/codestoryai/sidecar/blob/main/llm_client/... ?
I could take a look too - another way for me to test Aide by working with it to implement this. :-)
(https://github.com/pasky/claude.vim/blob/main/plugin/claude_... is sample code with basic wrapper emulating Claude streaming API with AWS Bedrock backend.)
Yup! Feel free to add the client support; you are on the right track with the changes.
To test the whole flow out, here are a few things you will want to do:
- https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186... (you need to create the LLMProperties object over here)
- Add support for it in the broker over here: https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186...
- After this you should, at the very least, be able to test out Cmd+K (highlight and ask it to edit a section).
- In Aide, if you go to User Settings: "aide self run", you can tick this and then run your local sidecar so you are hitting the right binary (kill the binary running on port 42424; that's the webserver binary that ships with the editor).
If all of this sounds like a lot, you can just add the client and I can also take care of the plumbing!
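For readers following along: the sidecar itself is Rust, but the shape of the change being discussed — a new provider client implementing a shared completion interface, registered with a broker keyed by provider — can be sketched language-agnostically. Here's a hypothetical TypeScript outline; every name here is invented for illustration and is not the sidecar's actual API:

```typescript
// Hypothetical broker pattern: each provider implements one completion
// interface; the broker dispatches by provider key. Adding AWS Bedrock
// support then means writing one new client and registering it.
interface CompletionRequest {
  prompt: string;
  model: string;
}

interface LLMClient {
  complete(req: CompletionRequest): Promise<string>;
}

class AnthropicClient implements LLMClient {
  async complete(req: CompletionRequest): Promise<string> {
    return `[anthropic:${req.model}] stubbed response`; // real client calls the API
  }
}

// A new Bedrock client slots in by implementing the same interface.
class BedrockClient implements LLMClient {
  async complete(req: CompletionRequest): Promise<string> {
    return `[bedrock:${req.model}] stubbed response`; // real client signs AWS requests
  }
}

class LLMBroker {
  private clients = new Map<string, LLMClient>();

  register(provider: string, client: LLMClient): void {
    this.clients.set(provider, client);
  }

  async complete(provider: string, req: CompletionRequest): Promise<string> {
    const client = this.clients.get(provider);
    if (!client) throw new Error(`no client registered for ${provider}`);
    return client.complete(req);
  }
}
```

The appeal of this structure is exactly what the comment above implies: if the underlying model is the same, a new endpoint is mostly plumbing, not new logic.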
Hmm, looks like this is still a pretty early project for me. :)
My experience:
1. I didn't have a working installation window after opening it for the first time. Maybe what fixed it was downloading and opening some random JavaScript repo, but maybe it was rather switching to "Trusted mode" (which makes me a bit nervous, but OK).
2. Once the assistant window input became active, I wrote something short like "hi", but nothing happened after pressing Ctrl-Enter. I rage-clicked around a bit; it's possible I queued multiple requests. About 30 seconds later, I suddenly got a reply (something like "hi, what do you want me to do"). That's... not great latency. :)
3. Since I got it working, I opened the sidecar project and sent my second assistant prompt. I got back this response after a few tens of seconds: "You have used up your 5 free requests. Please log in for unlimited requests." (Idk what these 5 requests were...)
I gave it one more go by creating an account. However, after logging in through the browser popup, "Signing in to CodeStory..." spins for a long time, then disappears, but Aide still isn't logged in. (Even after trying again after a restart.)
One more thought: maybe you got DDoSed by HN?
> 2. Once the assistant window input became active, I wrote something short like "hi", but nothing happened after pressing Ctrl-Enter. I rage-clicked around a bit; it's possible I queued multiple requests. About 30 seconds later, I suddenly got a reply (something like "hi, what do you want me to do"). That's... not great latency. :)
Yup, that's because of the traffic and the LLM rate limits :( We are getting more TPM right now, so the latency spikes should go away. I had half a mind to spin up multiple accounts to get higher TPM, but oh well... If you do end up using your own API key, there is no added latency; right now the requests get pulled into a global queue, so that's probably what's happening.
> 3. Since I got it working, I opened the sidecar project and sent my second assistant prompt. I got back this response after a few tens of seconds: "You have used up your 5 free requests. Please log in for unlimited requests." (Idk what these 5 requests were...)
The auth flow being wonky is on us; we did fuzz-test it a bit, but as with any software, it slipped through the cracks. We were even wondering whether to skip auth completely if you are using your own API keys; that way there is zero-touch interaction with our LLM proxy infra.
Thanks for the feedback though, I appreciate it, and we will do better.
I see Qwen 2.5 is not listed on your website, is plugging in different llms supported as well?
Honestly, we can; I haven't prompted it enough. What do you want to use the model for?
Just general coding, mostly Python. It seems to me that Qwen 2.5, especially the upcoming bigger coder model, might be the best-performing coding model for 24GB VRAM setups.
Aide.dev is similar to aider.chat, except Aide is an IDE while Aider is a CLI.
AIDE == AI + IDE (that was our take on the name)
Ohhh
sigh more Electron
I know... We could have built something from the ground up (like Zed did), but we had to pick a battle between building a new editor from scratch or building on a solid foundation (VSCode). We are a small team right now (four of us) and have been users of VSCode, so instead of building something new, putting energy into building from VSCode made a lot more sense to us.
Is this "VC less" AI IDE?
Been using Cursor since launch. Really frustrating how they charge per message (500/mo) instead of by token usage. Like, why should a one-line code suggestion cost the same as refactoring an entire class? Plus it's been losing context lately and making rookie mistakes.
Tried Zed AI but couldn't get into it - just doesn't feel as smooth as Cursor used to. GitHub Copilot is even worse. Feels like they're so obsessed with keeping that $10/month price point that they've gutted everything. Context window is tiny and the responses are barebones.
I used Cursor for many months, but found that I couldn't deal with how slow and workflow-interrupting VSCode feels, so I went back to Zed.
I tried out and abandoned Zed AI, but I've found that Zed + aider is a really excellent setup – for me, at least.
For smaller things, Zed's inline Copilot setup works (nowadays) just as well as Cursor's and for things that are even a little bigger than tiny, I pull up aider and prompt the change the exact same way that I did with Cursor's Composer before.
I'm an odd case because I'm a pretty expert-level programmer, but I find that aider with o1 is helpful for hairy, expansive, and tedious things that would otherwise irritate.
You can try Cline and Aider.
Not only is the name Aide already used by another project, it’s even also an IDE.
https://www.android-ide.com/
TIL. I thought we had covered the ground when grepping for Aide. Funny that it's also an editor.
It is a pretty well established IDE. I used it back on a Nexus 4 when that phone was actually "recent" to give you some context.
I do remember the Nexus 4 (Jelly Bean OS). I was fascinated at the time that you could play games on Android, and I ran the Android emulator on my desktop at that point (I was young and needed the games, haha).
Looks rather dead.
I'm curious - what does the AI coding setup of the HN community look like, and how has your experience been so far?
I want to get some broader feedback before completely switching my workflow to Aide or Cursor.
I tried Cursor and found it annoying. I don’t really like talking to AI in IDE chat windows. For whatever reason, I really prefer a web browser. I also didn’t like the overall experience.
I’m still using Copilot in VS Code every day. I recently switched from OpenAI to Claude for the browser-based chat stuff and I really like it. The UI for coding assistance in Claude is excellent. Very well thought out.
Claude also has a nice feature called Projects where you can upload a bunch of stuff to build context which is great - so for instance if you are doing an API integration you can dump all the API docs into the project and then every chat you have has that context available.
As with all the AI tools you have to be quite careful. I do find that errors slip into my code more easily when I am not writing it all myself. Reading (or worse, skimming) source code is just different than writing it. However, between type safety and unit testing, I find I get rid of the bugs pretty quickly and overall my productivity is multiples of what it was before.
This is me also, I don't like the UX/DX of Cursor and such just yet.
I can't tell if it is a UX thing or if it also doesn't suit my mental model.
I religiously use Copilot, and then paste stuff into Claude or ChatGPT (both pro) when needed.
I am on day 8 of Cursor's 14-day trial. If things continue to go well, I will be switching from Webstorm to Cursor for my Typescript projects.
The AI integrations are a huge productivity boost. There is a substantial difference in the quality of the AI suggestions between using Claude on the side, and having Claude be deeply integrated in the codebase.
I think I accepted about 60-70% of the suggestions Cursor provided.
Some highlights of Cursor:
- Wrote about 80% of a Vite plugin for consolidating articles in my blog (built on remix.run)
- Wrote a Github Action for automated deployments. Using Cursor to write automation scripts is a tangible productivity boost.
- Made meaningful alterations to a libpg_query fork that allowed it to be cross-compiled to iOS. I have very little experience with C compilation, it would have taken me a substantially long time to figure this out.
There are some downsides to using Cursor though:
- Cursor can get too eager with its suggestions, and I'm not seeing any easy way to temporarily or conditionally turn them off. This was especially bad when I was writing blog posts.
- Cursor does really well with Bash and Typescript, but does not work very well with Kotlin or Swift.
- This is a personal thing, but I'm still not used to some of the shortcuts that Cursor uses (Cursor is built on top of VSCode).
I would not be able to leave a Jetbrains product for Kotlin, or XCode for Swift
Overall it's so unfortunate that Jetbrains doesn't have a Cursor-level AI plugin* because Jetbrains IDEs by themselves are so much more powerful than base level VS Code it actually erases some small portion of the gains from AI...
(* people will link many Jetbrains AI plugins, but none are polished enough)
I probably would switch to Cursor for Swift projects too if it weren't for the fact that I will still need Xcode to compile the app.
I also agree with the non-AI parts of JetBrains' offerings being much better than the non-AI parts of Cursor. JetBrains' refactoring tooling is still unmatched.
That said, I think the AI part is compelling enough to warrant the switch. There are code rewrite tasks that JetBrains would struggle with, that LLMs can do fairly easily.
JetBrains is very interesting; what are the best-performing extensions out there for it?
I do wonder what API-level access we get over there as well. For sidecar to run, we need LSP + a web/panel for the UX part (deeper editor-layer access, like the undo and redo stack, would also be cool, but not totally necessary).
You can get both by using Aider (yes, confusingly similar name). https://aider.chat
It does the multi-file editing with asking to add files etc, but as a CLI/local web app tool.
No way I'd ever use a CLI tool to augment my work in an IDE. Completely backwards.
It's great that Cursor is working for you. I do think LLMs in general are far, far better at TypeScript and Python compared to other languages (it reflects the training data).
What features of Cursor were the most compelling to you? I know their autocomplete experience is elite, but I'm wondering if there are other features you use often!
Their autocomplete experience is decent, but I've gotten the most value out of Cursor's "chat + codebase context" (no idea what it's called). The feature where you feed it the entire codebase as part of the context, and let Cursor suggest changes to any parts of the codebase.
Ohh, interesting... I tried it on a couple of big repos and it was a bit of a miss for me. How large are the codebases you work on? I want to get a sense check on where the behavior deteriorates with embedding + GPT-3.5-based reranker search (not sure if they are doing more now!).
Largest repo I used with Cursor was about 600,000 lines long
That's a good metric to aim for... Creating a full local index for 600k lines is pretty expensive, but there are a bunch of heuristics which can take us pretty far:
- looking at git commits
- making use of recently accessed files
- keyword search
If I set these constraints and allow for maybe around 2 LLM round trips, we can get pretty far in terms of performance.
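As a rough illustration, the cheap signals mentioned above (git history, recently accessed files, keyword hits) can be combined into a single score before spending any LLM round trips. A hypothetical sketch; the weights and names are invented, not Aide's implementation:

```typescript
// Hypothetical scoring of candidate files for context retrieval,
// combining cheap repository signals. Weights are made up; a real
// system would tune them against retrieval quality.
interface FileSignals {
  path: string;
  recentCommits: number;   // how often the file appears in recent git log
  recentlyOpened: boolean; // was it in the editor's recent-files list
  keywordHits: number;     // naive keyword matches against the query
}

function scoreFile(s: FileSignals): number {
  return s.recentCommits * 2 + (s.recentlyOpened ? 5 : 0) + s.keywordHits * 3;
}

function topCandidates(files: FileSignals[], k: number): string[] {
  return [...files]
    .sort((a, b) => scoreFile(b) - scoreFile(a))
    .slice(0, k)
    .map((f) => f.path);
}
```

The top-k files from a pass like this could then be handed to the LLM for the one or two round trips mentioned above, instead of embedding the whole 600k-line repo.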
It really depends on what you're doing. AI is great for generating a ton of text at once but only a small subset of programming tasks clearly benefit from this.
Outside of this, it's an autocomplete that's generally two-thirds incorrect. If you keep coding as you normally do and accept correct suggestions as they appear, you'll see a few percentage points of productivity increase.
For highly regular patterns you'll see a drastically improved productivity increase. Sadly this is also a small subset of the programming space.
One exception might be translating user stories into unit tests, but I'm waiting for positive feedback to declare this.
I can give my broader feedback:
- Codegen tools today are still not great: the lack of context and not using the LSP really burn down the quality of the generated code.
- Autocomplete is great: it helps you finish your thoughts and code faster; it's like IntelliSense, but better.
If you are working on a greenfield project, AI codegen really shines today and there are many tools in the market for that.
With Aide, we wanted it to work for engineers who spend >= 6 months on the same project and there are deep dependencies between classes/files and the project overall.
For quick answers, I have a renewed habit of going to o1-preview or Sonnet 3.5 and then fact-checking that with Google (I haven't been to Stack Overflow in a long while now).
Do give AI coding a chance, I think you will be excited to say the least for the coming future and develop habits on how to best use the tool.
> Codegen tools today are still not great: The lack of context and not using LSP really burns down the quality of the generated code
Have you tried Aider?
They've done some discovery on this subject, and it's currently using tree-sitter.
Yup, I have.
We also use tree-sitter for the smarts of understanding symbols (https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186...), and also the editor for talking to the language server.
What we found was that it's not just about having access to these tools, but about smartly performing `go-to-definition`, `go-to-reference`, etc. to grab the right context as and when required.
Every LLM call in between slows down the response time, so there is a fair bit of heuristics we use today to sidestep that process.
I'm using Copilot in VScode every day, it works fine, but I mostly use it as glorified one-line autocomplete. I almost never accept multi-line suggestions, don't even look at them.
I tried to use AI deeper, like using aider, but so far I just don't like it. I'm very sensitive to the tiny details of code and AI almost never got it right. I guess actually the main reason that I don't like AI is that I love to write code, simple as that. I don't want to automate that part of my work. I'm fine with trivial autocompletes, but I'm not fine with releasing control over the entire code.
What I would love is to automate interaction with other humans. I don't want to talk to colleagues, boss or other people. I want AI to do so and present me some short extracts.
GitHub Copilot in either VS Code or JetBrains IDEs. Having more or less the same experience across multiple tools is lovely and meets me where I am, instead of making me get a new tool.
The chat is okay, the autocomplete is also really pleasant for snippets and anything boilerplate heavy. The context awareness also helps. No advanced features like creating entirely new structures of files, though.
Of course, I’ll probably explore additional tools in the future, but for now LLMs are useful in my coding and also sometimes help me figure out what I should Google, because nowadays seemingly accurate search terms return trash.
yeah I am also getting the sense that people want tooling which meets them in their preferred environment.
Do you use any of the AI features which go for editing multiple files or doing a lot more in the same instruction?
Cursor works amazingly well day to day. Copilot is not even comparable there. I like but rarely use aider and plandex. I'd use them more if the interface didn't take me completely away from the IDE. Currently they're closer to "work on this while I'm taking a break".
Have you tried the latest Copilot with Workspace, where you can use Claude and add files to the context?
I've been deep into AI coding experiments since last December, before all the VS Code extensions and IDEs came out.
I wrote a few scripts to get to a semi-automated workflow where I have control over the source-code context and the code-editing portion, because I believe I can do a better job than AI in those areas.
Eventually I built my own desktop app, 16x Prompt: https://prompt.16x.engineer/
I'm still copy/pasting between VS Code and ChatGPT. I just don't want to invest/commit yet because this workflow is good enough for me. It lets me discuss design, architecture, UX, and product in the same context as the code, which I find helpful.
Pros
- Only one subscription needed
- Very simple
- Highly flexible/adaptive to what part of workflow I'm in
Cons
- More legwork
- Copy/pasting sometimes results in errors due to incomplete context
I've been building and using these tools for well over a year now, so here's my journey building and using them (ORDER BY datetime DESC).
(1) My view now (Nov 2024) is that code building is very conversational and iterative. You need to be able to tweak aspects of generated code by talking to the LLM. For example: "Can you use a params object instead of individual parameters in addToCart?". You also need the ability to sync generated code into your project, run it, and pipe any errors back into the model for refinement. So basically, a very incremental approach to writing it.
For this I made a Chrome plugin, which allowed ChatGPT and Claude to edit source code (using Chrome's File System APIs). You can see a video here: https://www.youtube.com/watch?v=HHzqlI6LLp8
The code is here, but it's WIP and for very early users, so please don't give negative reviews yet: https://github.com/codespin-ai/codespin-chrome-extension
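The generate-run-refine loop described in (1) can be sketched as follows. This is a toy illustration, not the Chrome extension's code: the hypothetical `ask_llm` stands in for a real ChatGPT/Claude call (here it just returns a known-good fix so the loop terminates), and the loop runs the candidate code and pipes any error back into the model.

```python
import subprocess
import sys
import tempfile
import textwrap

def ask_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call (hypothetical).
    A real workflow would send the failing code and stderr to the model."""
    return textwrap.dedent("""\
        def add_to_cart(params):
            return params["qty"] * params["price"]
        print(add_to_cart({"qty": 2, "price": 5}))
    """)

def generate_and_refine(initial_code: str, max_rounds: int = 3) -> str:
    """Run the code; on failure, feed the error back to the model and retry."""
    code = initial_code
    for _ in range(max_rounds):
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return code  # runs cleanly: done
        # Pipe the error back into the model for the next iteration.
        code = ask_llm(f"This code fails:\n{code}\nError:\n{result.stderr}")
    return code

buggy = "print(add_to_cart({'qty': 2, 'price': 5}))"  # NameError: not defined
fixed = generate_and_refine(buggy)
```

The interesting design question is the same one the comment raises: the sync-run-refine cycle has to be cheap enough that conversational tweaks ("use a params object instead") feel incremental rather than batch.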
(2) Earlier this year, I thought I should build a VS Code plugin. It actually works quite well and allows you to edit code without leaving VS Code. It does stuff like adding dependencies, model selection, prompt histories, sharing git diffs, etc. Towards the end, I was convinced that edits need to be conversations, and hence I don't use it as much these days.
Link: https://github.com/codespin-ai/codespin-vscode-extension
(3) Prior to that (2023), I built this same thing in CLI. The idea was that you'd include prompt files in your project, and say something like `my-magical-tool gen prompt.md`. Code would be mostly written as markdown prompt files, and almost never edited directly. In the end, I felt that some form of IDE integration is required - which led to the VSCode extension above.
Link: https://github.com/codespin-ai/codespin
All of these tools were primarily built with AI, so these are not hypotheticals. In addition, I've built half a dozen projects with them; some of that is code running in production, and some is hobby stuff like webjsx.org.
Basically, my takeaway is this: code editing is conversational. You need to design a project to be AI-friendly, which means smaller, modular code which can be easily understood by LLMs. Also, my way of using AI is not auto-complete based; I prefer generating from higher level inputs spanning multiple files.
that's a great way to build a tool which solves your need.
In Aide as well, we realised that the major missing loop was self-correction: the agent needs to iteratively expand its context and do more.
Our proactive agent is our first stab at that. We also realised that the flow from chat -> edit needs to be very free-form, and the edits are a bit more high level.
I do think you will find value in Aide, do let me know if you got a chance to try it out
>I do think you will find value in Aide, do let me know if you got a chance to try it out
Is there a relationship between Aide and Aider or it's just a name resemblance?
just a name resemblance, no relationship otherwise
> I do think you will find value in Aide, do let me know if you got a chance to try it out
Absolutely, will do it over the weekend. Best of luck with the launch.
But why use the web interface instead of Copilot, Cursor, Zed, Cline, Aider?
There are some advantages.
1) Cost. More people have ChatGPT/Claude than Copilot. And it's cheaper to load large contexts into ChatGPT than into the API. For example, o1-preview is $15/million tokens via the API, while the subscription is a fixed $20/mo that someone can use for everything else as well.
Of course, there are times when I just use the VS Code plugin via the API as well.
2) I want to stay in VS Code, so that excludes some of the options you mentioned.
3) I don't find tiled VS Code + ChatGPT much of a hindrance.
4) Things might have improved a bit, but previously the web-based chat interface was more refined/mature than the integrated one.
Besides Claude.vim for "AI pair programming"? :) (tbh it works well only for small things)
I'm using Codeium and it's pretty decent at picking up the right context automatically, usually it autocompletes within ~100kLoC project quite flawlessly. (So far I haven't been using the chat much, just autocomplete.)
Any reason you don't use the chat often? Or maybe it's not your use case?
I'm not the parent poster, but in my case I very rarely use it because it's not in the Neovim UI; it opens in a browser.
I've also had some issues where it doesn't seem to work reliably, but that could be related to my setup.
yeah, I am learning that on Neovim you can own a buffer region and use that for the AI back-and-forth... it's a very interesting space
I never got the appeal of having the AI directly in your editor, I've tried Copilot and whatever JetBrains are calling their assistant and I found it mostly just got in the way. So for me it's no AI in editor and ChatGPT in a browser for when I do need some help.
VS Code plugins: Codeium at home, GitHub Copilot at work. Both are good, probably equivalent.
Codeium recently pushed an annoying update that limits your Ctrl-I prompt to one line, and the prompt is lost if you lose focus, e.g. to check another file. There is a GH issue for that.
Cursor works well: it uses RAG on your code to provide context, and it can directly reference the latest docs of whatever you're using.
Not perfect, but good for incrementally building things and finding bugs.
Neovim + CopilotChat + CopilotLSP + Copilot subscription. You can dump your context, autocomplete, chat and select Claude or O1. Best deal so far.
What does the AI coding setup of the HN community look like?
GitHub copilot and copilot chat in Jetbrains IDE. Cut and paste to Claude for anything else.
Using Cursor and it's been great!
The founders care a lot about the development experience, and it shows.
Yet to try others, but I'm already satisfied, so it hasn't felt necessary.
VS Code + Cline + OpenRouter with the Claude 3.5 Sonnet (20241022) model. It's unreal the shit it can do.
I tried GH copilot again recently with Claude. It was complete shit. Dog slow and gave incomplete responses. Back to aider.
What was so bad about it? Genuinely curious, because they did make quite a bit of noise about the integration.
It kept truncating files that were only about 600 lines long. It also seems to rewrite the entire file each time instead of just sending diffs like aider does, making it super slow.
Interestingly, I had this problem with Claude (their web chat) and not Copilot. However, there were times where it was unresponsive.
Oh, I see your point now. It's weird that they are not doing search-and-replace style editing. Although now that OpenAI also has Predicted Outputs, I think this will improve, and it won't make mistakes while rewriting longer files.
The 600-line limit might be due to the output token limit of the LLM (not sure what they are using for the code rewriting).
Yeah I guess it's a response limit. It makes it a deal breaker though.
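The search-and-replace style editing discussed above can be sketched roughly like this. The format is a loose simplification inspired by aider's edit blocks, not aider's actual implementation: the model emits only the span to change, and the tool patches that span instead of rewriting the whole file, which keeps output token usage (and the truncation risk) small.

```python
def apply_edit(file_text: str, search: str, replace: str) -> str:
    """Apply one search/replace edit block to a file's contents.
    Requires the search text to match exactly once, so an ambiguous
    or stale edit fails loudly instead of corrupting the file."""
    count = file_text.count(search)
    if count != 1:
        raise ValueError(f"search block matched {count} times, expected 1")
    return file_text.replace(search, replace)

original = "def total(x, y):\n    return x + y\n"
patched = apply_edit(
    original,
    search="    return x + y\n",
    replace="    return x + y  # TODO: handle tax\n",
)
```

The exact-match-once check is the important part: if the model's search block is stale or ambiguous, you want a hard error and a retry rather than a silently mangled file.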
It's not nearly as helpful as Claude.ai - it seems to only want to do the minimum required. On top of that it will quite regularly ignore what you've asked, give you back the exact code you gave it, or even generate syntactically invalid code.
It's amazing how much difference the prompt must make, because using it is like going back to GPT-3.5, yet it's the same model.
AIDE has been around for 25 years: https://aide.github.io/
IMHO the right thing would be to use another name.
I ... did not know that.
We should probably pick another name then
There's also https://aider.chat/, which is... close.
This is literally a totally different piece of software with a completely unrelated use case. Changing the name would make as much sense as renaming a hammer because someone invented a screwdriver.
The name is perfect, AI + IDE = Aide. You should keep it.
Maybe add a hyphen; ai-ide
You probably shouldn’t.
[dead]
[flagged]
You have just woken up from the cryosleep you entered in 2024. The year is 2237. GPT-64 and its predecessors have been around for nigh on 100 years. But there has been no civilizational upheaval. Your confusion is cleared when you check the inter-agent high-speed data bus. You expect this to be utterly incomprehensible, but both the human and AI data is clearly visible. It is a repeating pattern. The agents are mimicking human behavior perfectly and you can’t tell which is which. All data transmitted has the same form:
Mankind and His Machine Children have met The Great Filter.

I don't think it's going to take us 200 years to kick the habit of using global namespaces for friendly names, maybe 80. Recognizing a name and rendering it as a disambiguation based on my location in the trust graph should be a feature of the text box, not something that I have to think about.
Sounds like the scene in the movie Idiocracy where the Roomba is stuck in a corner and keeps repeating "floor is now clean".
hahaha
If it's a fork of VS Code, it should be trivial to also support Linux and Windows. Why is it macOS only?
There are download links for other platforms in the footer, and below the main CTA button it says "Looking for other platforms?", which takes you there.
It supports all platforms. We took inspiration from the macOS Spotlight search widget for inline editing.
I think it's confusing that only one big green button shows "Download for Mac" while the other download links are in the footer.
Ugh... in that case our platform detection code is not working as expected. We will look into that.