AI assistants have become an invaluable part of my development workflow over the past year, and especially over the last several months. I started with Copilot, switched to Cody for a few months, and am now using Continue.dev.
I was really happy using Cody (made by Sourcegraph) and was happily recommending it to friends, so I wanted to lay out why I stopped using it and switched to Continue.dev.
Background on AI Assistants
If you aren’t familiar with AI assistants, they are frontends to what are usually the same underlying LLMs.
They may use different models for various components, run some things locally or remotely, etc., but they more or less have access to the same backend models.
AI assistants bring two distinct pieces of value:
- User experience, by integrating into your coding workflow.
- Smart usage of context to feed to the underlying models.
The second part largely determines how useful any query to the LLM can be. It’s all about getting the right context for any given query.
When you ask a question or want autocompletion, the model needs to be fed some subset of your codebase or docs to give the best response. Knowing exactly what context to provide is a pretty difficult problem, as you need a good understanding of the codebase to know which parts are actually relevant to any query.
Sourcegraph’s entire business is in understanding and searching codebases. They should have a huge advantage in gathering the right context for queries.
I can work around UX issues, but I am willing to pay real money for a perceivable improvement in LLM answer quality, and Sourcegraph is best positioned to do that.
And until I hit this particularly problematic bug, I was an extremely happy customer.
The Problem
My Bug Report
Cody would consistently refuse to answer questions in two of my repositories when allowing it to choose context from the codebases. After hitting this issue multiple times, I reached out to their support team to understand what was happening.
The errors were essentially, “Your request was blocked, here’s an ID to use with customer service.”
All customer service could tell me about my specific request was that it contained the word “Reset”, which caused the request to be blocked.
Here was my response, which I think sums up the situation well, so I am including it as is:
Thank you for the information. With that I was able to find the problem, though I would suggest that this is forwarded to your development team for triage.
Cody is including context from the Changelog file. In both repos, the changelog file contains the line “- Add a clear command to reset the session”.
Removing that line allows the request to go through.
I would suggest that Cody is updated to handle that situation more elegantly, as Cody is the one that decided to include that line as context. I was not even directly asking a question about the Changelog file.
The infringing line that breaks requests in both repos:

- Add a clear command to reset the session
I’m…not going to edit my auto-generated Changelog so that I can use Cody in those repos, and there seems to be no way for me to otherwise fix this on my side without maintaining a local diff to that file.
Thanks for the help :)
It doesn’t actually matter, but I do have some guesses as to why the word “Reset” was blocked, which I’ve included down below.
No Workarounds Exist… Unless You Pay for Enterprise
The only way I could fix this would be to configure context filtering in Cody, to tell it, “Hey, never include the changelog file.” This isn’t ideal either. What if I used the phrase “reset the session” in a code comment too?
But even then, Cody doesn’t make that very easy.
Cody apparently did have a way to filter individual files like that, but IIRC it was always behind an “experimental” flag and is now restricted to their Enterprise plan.
Which, like, come on, that’s a crazy thing to put behind a paywall.
Continue just has a .continueignore file that works exactly like .gitignore.
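For illustration, here’s a minimal .continueignore that would have solved my problem (the filenames are just examples from my situation; it supports the same pattern syntax as .gitignore):

```
# Excluded from Continue's context/indexing, .gitignore-style patterns
CHANGELOG.md
dist/
*.min.js
```

Drop it in the repo root, and Continue stops pulling those files into context. No Enterprise plan required.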
It’s Just a Bug. Why Leave?
Yes, and it’s fixable, but:
- I was only using Cody for finding the right context. They had one job.
- Apparently, they try to block the word “reset” (or more likely “reset the session”, see below). I use that word occasionally, and banning the word “reset” is insane.
- Their context filtering is apparently so bad that it doesn’t filter out context that they themselves block.
I literally paid them to do a single job for me, and they were so bad at it that they didn’t take into account their own requirements.
The Solution
I switched to Continue.dev. In the spirit of all good OSS projects, they have worse UX but are way more configurable. I want to pay them to improve their UX, but it’s free (bring your own keys) and I haven’t figured out where I can give them money.
I use a mix of a local LLM for autocompletion and bring my own API keys to use my preferred chat/embed/re-ranking remote models. For some weird reason (ahem… money), Cody requires the Enterprise plan to bring your own API keys, which would actually save Sourcegraph money: you’d be paying Sourcegraph just to be a plugin while paying a different provider for the actual models.
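As a rough sketch of that setup, here’s what a Continue config.json can look like. The specific model names, providers, and key placeholders below are illustrative assumptions, not my exact setup; check Continue’s docs for the current schema:

```json
{
  "models": [
    {
      "title": "Remote chat model (BYO key)",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-latest",
      "apiKey": "<YOUR_API_KEY>"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Local autocomplete",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  },
  "embeddingsProvider": {
    "provider": "openai",
    "model": "text-embedding-3-small",
    "apiKey": "<YOUR_API_KEY>"
  }
}
```

The point is that each component (chat, autocomplete, embeddings) is independently swappable, which is exactly the configurability Cody paywalls.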
I also highly recommend that everyone using an AI assistant go through Continue.dev’s documentation and configure it. It was fantastic for helping me understand what each of the assistant’s components actually does. Even if you use something else, it can be really helpful. Links to the User Guide and the instructions to Customize the plugin.
Appendix
Why Does Cody Hate “Reset”?
So, this is their full explanation:
Checking the reference ID you provided, the request seems to have been blocked because it included the word or phrase “Reset,” which our system flagged. For your privacy, I am unfortunately not able to see the full prompt, so I can’t verify exactly what you sent, but I suggest you check the contents of your prompt.
This is just a precaution to avoid any potentially harmful actions. The prompt_prefix I shared above gives us a glimpse of the user’s input and the code referenced, which may have triggered the block.
I’m guessing they actually block “reset the session” or a similar intent, not just the word “Reset.”
What they’re probably blocking is an attempt to steal the full prompt of the request. Basically, if you managed to “reset the session,” you could figure out what Cody was including as instructions or as context. The only reason to block such a thing would be to protect their business in some way, since knowing what the prompt contains could be used to compete with them.
So yes, I was kind of annoyed that Cody’s one job was to include good context, and it included context that broke its own filtering to protect itself.