For the last week or so, Gemini Code Assist has been completely unusable. The vast majority of prompts fail, either with "There was a problem getting a response" or with truncated output. I've tried everything I can think of: reinstalling the Gemini Code Assist extension in VS Code, reinstalling VS Code itself, restarting, logging out and back in, using an entirely different computer, switching to the insiders branch, limiting context to small files and very simple requests. It's essentially useless at this point. Am I missing something obvious or doing something wrong?
I am wondering the same thing. I have been experiencing this especially since 1pm today, July 10th.
My current workaround: tell it explicitly to break its output into diffs of no more than 20 lines, and to break any explanation of what it is about to do into small chunks of no more than 4 paragraphs.
I'm getting work done, but if this continues, the best AI coding co-worker is going to die very quickly.
Thanks, that's helpful. I'd tried a lot of things with prompts, but not an explicit instruction regarding the length of the diff. That's getting some output at least. Hopefully this will let me limp through finishing the refactoring Gemini got completely stuck on. I really don't want to do it all by hand. And hopefully they fix the model soon; it's close to useless in its current state.
"Continue with the implementation. Provide the next diff with no more than 50 lines." seems to be getting me somewhere at least. It's painful, but something. Very much appreciate that tip.
Here's the latest version of my chat with Gemini 2.5 Pro in a browser about the situation. https://g.co/gemini/share/468e913a3251 Contains an amusing hallucination that the situation has been acknowledged by Google and posted as an incident that is being worked on. Sadly, nope.
I actually have a file full of canned responses as I work through this mess. That you are getting 50-line diffs done at a time is great. I'm hitting failures with diffs of more than 5 or 6 changes.
Some of my canned responses, which I have to repeat over and over to GCA, since it cannot seem to remember the workaround rules:
-------------------------------------------
Same. Indexing the workspace takes 100% CPU on my M4 Mac. Also tried the insiders version, no luck. It wasn't always like this, but it has been for probably the last 2 weeks. I have to just disable it.
You're lucky if every other prompt gives a usable output. And let's not even start on the major temporary file bloat: in one day it took up just under 20GB of temporary folder space! And then there's the blatant gaslighting and lying, and analyzing the wrong files. Pretty bad experience in the last few days; before that it was working OK. The only thing it has going for it is its context window. But I've spent the last 20 minutes having to clean up its mess before I can even start working.
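For anyone wanting to keep an eye on the bloat, here's a quick shell sketch for measuring it. The temp path is an assumption (standard $TMPDIR / /tmp); the extension's actual cache location may differ on your setup:

```shell
# Total size of the temp directory ($TMPDIR on macOS, /tmp as a fallback)
du -sh "${TMPDIR:-/tmp}"

# Ten largest entries inside it, so stale caches stand out
du -sh "${TMPDIR:-/tmp}"/* 2>/dev/null | sort -rh | head -n 10
```

Running this before and after a GCA session makes it easy to see how much it left behind.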
Things seem to have improved somewhat today. I'm still running into errors, and bizarrely I now can't seem to get Gemini to ignore my .git folder even when using an .aiexclude file, so it gives errors saying some context was ignored or that I exceeded my token limit. But diffs are coming through more reliably. Hopefully the worst of this is behind us and things will continue to improve. Fingers crossed.
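For anyone comparing notes: my understanding is that .aiexclude sits in the workspace root and uses gitignore-style patterns, so excluding .git should look something like this (the extra entries are just illustrations of the syntax):

```
# .aiexclude in the workspace root, gitignore-style patterns
.git/
node_modules/
dist/
```

This is what I'm using, and it's still being ignored, so the problem seems to be on the tool's side rather than the file format.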
Having the same issue; it gets stuck on indexing the workspace.
I decided to try it again last night and got some joy: it lagged, but it worked and didn't crash. But then this morning it crashed almost instantly while indexing the workspace.
I've had some luck with it today, but I have to be extremely specific about when I will allow diffs or code blocks in the output. Still lagging, but the pay-off is great feedback when it doesn't crash.
Things are very slow today (10-15 minutes to process a request), but there appear to have been some significant changes. It seems to be working again for the most part. cloudcode_cli is using a lot of processor capacity locally (so much so that I wasn't able to stay on a Zoom call while a GCA request was still working away in the background), but it is at least getting through complex requests and completing output. It's also displaying notes on what it's "thinking" about in real time, so there do appear to have been major changes. Hopefully the worst of this experience is behind us now and response times and reliability will improve.