Constant reanalysing of entire project in VSCode with diagnosticMode workspace #7000
-
Based on the symptoms you're reporting, it sounds like pyright is running short on heap space and is discarding all of its internal cache (including cached file contents, cached diagnostics, and cached type information) to avoid crashing and to make forward progress. There will naturally be some project size beyond which a type checker will run out of memory, and it sounds like you've gone past that. If you're able to keep the entire working set in memory (which is easy for a more typical project), then you shouldn't see all files reanalyzed after a change. However, if the internal memory caches are being purged repeatedly, new edits will force pyright to do much more repeated work. You can verify my theory by enabling verbose logging.

Since your computer has 16 GB, you could try running pyright or pylance with a larger heap size. Then again, based on the last paragraph you appended to your post above, you might not have sufficient memory to do this. The memory usage for the command-line version of pyright should be much lower than the language server version, especially if you're setting …
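For the command-line version, one way to experiment with a larger heap is Node's standard heap-size flag, assuming you're using the npm-distributed pyright (which runs under Node); the 8192 MB value below is only an example, not a recommendation:

```bash
# Raise the Node heap ceiling for a one-off command-line pyright run.
# Assumes the npm-installed pyright (a Node program); 8192 MB is an
# arbitrary example value.
NODE_OPTIONS="--max-old-space-size=8192" pyright
```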
-
That does make sense. From the logs I can see the cache being dropped, so between those two log lines pyright effectively throws away its analysis state. In the meantime I've tried a few things on my side.

I also wanted to ask: what would be the recommended approach to working with Pyright on a large codebase? I am mostly worried about the case where an engineer starts working on a feature branch, pivots for a bit to another one and then goes back to their work. How can they see all the type errors that their feature branch introduces if they have to work with `openFilesOnly`?

Given all that, any guidance would be appreciated.
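To make that scenario concrete, the kind of scoped check I have in mind is something like the sketch below (the `main` base branch and pyright-on-PATH are assumptions of the sketch, not anything from our actual setup):

```bash
# Rough sketch: type-check only the Python files touched on the current
# feature branch relative to main ("main" is an assumed base branch).
changed=$(git diff --name-only --diff-filter=d main...HEAD -- '*.py')
# Caveat: errors the branch introduces in files it does NOT touch won't
# show up here, which is exactly the gap I'm asking about.
[ -n "$changed" ] && pyright $changed
```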
-
So, we discussed this in the scrum and decided to gather some more info before deciding how to address this.
-
I am not sure how much of this falls into Pylance vs Pyright, so sorry if this is the wrong place to ask.
I am working on migrating a relatively large monorepo (Pylance reports 5.5k source files found; with some `bash` I estimated it to be around 1.5M lines, and the venv is another 14.5k files and 4.4M lines) from using `mypy` to `pyright`. One important feature for me is being able to see all errors in the workspace, so that I can focus on a single module, check all the errors in it, fix them, mark the module as done and go to the next one. For that I've set `"python.analysis.diagnosticMode"` to `"workspace"`. Since I also have `"python.analysis.userFileIndexingLimit": 8000` (I also tried `-1`), I had hoped that after the initial indexing, changing only one thing in a single file wouldn't make analysis times too horrible, but it does take a noticeable amount of time to update inlay hints or to show the description of a new symbol when hovering. It gets much worse when you want to change several things; then the editor feels like it starts getting clogged by the amount of processing it has to perform.

I decided to run a test: set the log level to tracing, open a file that I thought wouldn't be imported in too many places, wait for the logging to finish (the entire codebase was analyzed, i.e. parsed, bound and checked), clear the output, and add a space to a comment in that file. The logs exploded again; I waited until they were done, saved them, redacted them, and here they are:
https://gist.github.com/lukaspiatkowski/6098003d2c3c4f68a71995528748490e
Some pieces of my configs that might be relevant:
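settings.json (just restating the two `python.analysis` values quoted above; I also tried `-1` for the indexing limit):

```json
{
  "python.analysis.diagnosticMode": "workspace",
  "python.analysis.userFileIndexingLimit": 8000
}
```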
pyrightconfig.json:
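The exact file isn't reproduced here; as a rough illustration of its shape it looks something like the sketch below, where the paths and options are placeholders rather than the project's real values:

```json
// Illustrative placeholders only -- not the actual project config.
{
  // first-party source roots
  "include": ["src"],
  // keep the 14.5k-file venv and caches out of the analyzed set
  "exclude": ["**/__pycache__", ".venv"],
  "typeCheckingMode": "basic"
}
```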
Things to also point out:

- There is one file (`/Users/lukas/m5a/ma1/s0e/tdd.py`) that often causes `Long operation` warnings; it is a ~5k-line-long list of all available actions and it imports a very substantial portion of the codebase from various places.
- I've tried running `pyright -w <submodule>`, but after printing `Watching for file changes...` it never reacts to my changes. EDIT: I just didn't know that `pyright -w` outputs all the type errors on every file change, and I simply hadn't noticed the change in the output.
- I'm using pyright `1.1.394` from the command line and Pylance `2025.2.101` in VSCode `1.97.2`, running on an M1 MacBook Pro with 16GB RAM.

So my questions are:
1. Is it expected that with `workspace` `diagnosticMode` the entire codebase is reanalysed on every change?
2. Should I just go with `diagnosticMode` set to `openFilesOnly` and open hundreds of files at once (a single module, roughly as sketched below), or is there some other workaround?
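Just to make the second option concrete, by "open hundreds of files at once" I mean something along these lines, with `src/some_module` as a placeholder path:

```bash
# Open every Python file of one module as editor tabs so that
# openFilesOnly still surfaces that module's diagnostics.
# "src/some_module" is a placeholder, not a real path in the repo.
code $(find src/some_module -name '*.py')
```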
P.S.
While writing this message I tried my line of testing (opening a single file, adding a space to a comment and watching the logs explode) a few times, and after one of the attempts I got harassed by macOS with the message "Your system has run out of application memory", asking me to force quit some apps. VSCode was reported to be using 42GB, so I killed it. I have 16GB of RAM and 39GB of swap. After starting VSCode again I see in `htop` that swap is almost completely used, but RAM is around 5GB. I've tried to stress pyright by changing a few things and I can see the CPU maxed out, but memory still stays around 5.5GB at most... So I'm not sure what the issue was on macOS's side, or what exactly they mean by "application memory".