vaclavHala opened this issue 5 months ago
I am also quite puzzled by this issue. A document that is not shown via `vscode.window.showTextDocument` does not stay in sync with file updates made outside of VSCode, and if you need to change the document through VSCode, it inevitably causes the document to be displayed in the UI. There should be an official API that allows updating a `vscode.TextDocument` in the background (without displaying it), or a way to keep its content synchronized in the background as the operating system updates the file.
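One possible workaround is to apply edits through a `WorkspaceEdit` instead of a `TextEditor`, which does not require calling `showTextDocument`. This is only a sketch (the function name and parameters are illustrative, not an existing API), and it assumes `workspace.applyEdit` does not force the document to become visible in your VSCode version:

```typescript
import * as vscode from 'vscode';

// Sketch: replace the full content of a document without showing it in an editor.
// "updateDocumentInBackground" is an illustrative name, not an existing API.
async function updateDocumentInBackground(uri: vscode.Uri, newText: string): Promise<boolean> {
  // openTextDocument loads (or returns the cached) document without displaying it
  const document = await vscode.workspace.openTextDocument(uri);
  const fullRange = new vscode.Range(
    document.positionAt(0),
    document.positionAt(document.getText().length)
  );
  const edit = new vscode.WorkspaceEdit();
  edit.replace(uri, fullRange, newText);
  // applyEdit modifies the in-memory document; no call to showTextDocument is made
  return vscode.workspace.applyEdit(edit);
}
```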
Does this issue occur when all extensions are disabled?: Yes
VSCode seems to fail to clear the cached content of a `TextDocument` when the underlying file is deleted and recreated with different content. This makes it impossible for me to write automated tests for refactoring in a file using the `TextDocument` and `TextEditor` APIs, as subsequent tests observe stale content left over by previous tests interacting with the same file. Note that I'm trying to tell VSCode to drop all cached info about the file by invoking `workbench.action.revertAndCloseActiveEditor` while the file in question is open in an editor, before removing and recreating it, but there still seems to be some info about the document left over.

What I'm really trying to achieve is to set up a predictable initial state for each of my tests that use the document and editor APIs. I've read in many places that VSCode does not expose the lifecycle of a `TextDocument` to extensions by design, but here this abstraction seems to leak, as I'm unable to isolate individual tests short of restarting the whole VSCode process.

Steps to Reproduce:
```
git clone -b stale-text-document git@github.com:vaclavHala/vscode-workspace-race.git
cd vscode-workspace-race/stale-text-document
npm i
npm run test
```
Expected result: The test passes
Actual result: The test fails; the second test observes the stale content left over from the first test.
The reproduction code condenses what would normally be two subsequent tests and their setup/teardown hooks.
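Roughly, the shape of the reproduction is the following. This is a paraphrased sketch, not the actual code from the repository; the file path, content strings, and test names are placeholders, and the Mocha `suite`/`test` globals from the standard VSCode extension test runner are assumed:

```typescript
import * as assert from 'assert';
import * as vscode from 'vscode';

// Placeholder path; in the real repro the file lives in the test workspace
const fileUri = vscode.Uri.file('/tmp/magic-file.txt');

// Setup hook: close any editor on the file, then delete and recreate it
async function recreateFile(content: string): Promise<void> {
  await vscode.commands.executeCommand('workbench.action.revertAndCloseActiveEditor');
  try {
    await vscode.workspace.fs.delete(fileUri, { useTrash: false });
  } catch {
    // file may not exist yet on the first run
  }
  await vscode.workspace.fs.writeFile(fileUri, Buffer.from(content));
}

suite('stale TextDocument', () => {
  test('first test sees its own content', async () => {
    await recreateFile('content A');
    const doc = await vscode.workspace.openTextDocument(fileUri);
    await vscode.window.showTextDocument(doc);
    assert.strictEqual(doc.getText(), 'content A'); // passes
  });

  test('second test should see the new content', async () => {
    await recreateFile('content B');
    const doc = await vscode.workspace.openTextDocument(fileUri);
    await vscode.window.showTextDocument(doc);
    assert.strictEqual(doc.getText(), 'content B'); // fails: stale 'content A' is returned
  });
});
```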
Also, putting a `sleep` in between seems to help, but it is not deterministic and needs to be pretty long, which makes the whole suite run much longer than it should.

Using a different file for each test is possible, but it would complicate the suite significantly: some of the file names are "magic" in the domain (the name of the file determines its meaning), so I'd also have to instrument the actual code under test to expect a different "magic" name (computed dynamically, e.g. using a global number sequence) in each test case.
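For completeness, a sketch of that per-test-file alternative (names are illustrative; the point is that the generated "magic" name would also have to be threaded through the code under test):

```typescript
import * as vscode from 'vscode';

// Global sequence so every test gets a fresh file and never sees a stale document
let testCounter = 0;

function nextMagicFileUri(workspaceRoot: vscode.Uri): vscode.Uri {
  testCounter += 1;
  // e.g. magic-file-1.txt, magic-file-2.txt, ...; the code under test would
  // also need to be told to expect this generated name instead of the fixed one
  return vscode.Uri.joinPath(workspaceRoot, `magic-file-${testCounter}.txt`);
}
```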