matepek / vscode-catch2-test-adapter

Catch2, Google Test and doctest Adapter for the VSCode
https://marketplace.visualstudio.com/items?itemName=matepek.vscode-catch2-test-adapter
MIT License

Missing additional context from GoogleTest failures #436

Closed: philippewarren closed this issue 1 month ago

philippewarren commented 6 months ago


Describe the bug

Using the extension, additional context passed to GoogleTest assertions is lost in the in-line report (screenshot). It is shown in the Test Results panel (screenshot) and when running the test executable directly (screenshot).

When using a macro that already prints some information, I can get the context to show by adding at least two leading spaces (screenshot). This does not work with the FAIL() macro, however many spaces I add.

This probably calls for an improvement to the output parser that decides what is shown in-line and what is not.

Another case is exceptions. With this test, no overlay can be shown at all (screenshot). The information is displayed in the Test Results panel (screenshot).

Ideally, this context should be displayed to help debug failed tests. It is currently possible to look for it in the Test Results panel, but the overlay is easier to use, so it would be nice to have it there too.

To Reproduce

  1. On Ubuntu 20.04, with libgtest-dev installed from apt, create `tests.cpp`:

```cpp
// tests.cpp
#include <gtest/gtest.h>

TEST(ThisTest, FailsWithContext) { FAIL() << "This test fails with context"; }

TEST(ThisTest, AlsoFailsWithContext) { ASSERT_EQ(1, 2) << "Value of [" << 1 << "] is not equal to [" << 2 << "]"; }

TEST(ThisTest, Throws) { throw true; }
```

  2. Compile with:

```bash
g++ -pthread tests.cpp -lgtest_main -lgtest -lpthread
```

  3. Configure the extension to look for `a.out`.
  4. Run all tests.

Desktop

(The reproducible example above was tested on the host. My own project is more complex but gives the same results; it runs in Docker.)

Regression bug?

No, I tried 4.1.0 and it did not work. Before that, the overlay did not work (4.0.0), and the test explorer did not show anything at all (< 4.0.0).

**Log** (optional but recommended) The log was taken in a fresh Ubuntu 20.04 Docker container, using the exemple above. ```js [2024-05-15 19:53:44.389] [INFO] proc starting /root/test_cpp_testmate_issue/a.out [ '--gtest_color=no', '--gtest_filter=ThisTest.FailsWithContext:ThisTest.AlsoFailsWithContext:ThisTest.Throws', '--gtest_also_run_disabled_tests' ] /root/test_cpp_testmate_issue/a.out [2024-05-15 19:53:44.402] [INFO] proc started 4426 /root/test_cpp_testmate_issue/a.out { shared: { workspaceFolder: { uri: f { scheme: 'file', authority: '', path: '/root/test_cpp_testmate_issue', query: '', fragment: '', _formatted: 'file:///root/test_cpp_testmate_issue', _fsPath: '/root/test_cpp_testmate_issue' }, name: 'test_cpp_testmate_issue', index: 0 }, log: { _logger: { configSection: 'testMate.cpp.log', workspaceFolder: undefined, outputChannelName: 'C++ TestMate', inspectOptions: [Object], includeLocation: false, targets: [Array], nextInspectOptions: undefined, configChangeSubscription: [Object] } }, testController: { controller: { items: [Object], label: [Getter/Setter], refreshHandler: [Getter/Setter], id: [Getter], createRunProfile: [Function: createRunProfile], createTestItem: [Function: createTestItem], createTestRun: [Function: createTestRun], invalidateTestResults: [Function: invalidateTestResults], resolveHandler: [Getter/Setter], dispose: [Function: dispose] }, testItem2test: WeakMap { } }, executeTask: [Function (anonymous)], sendRetireEvent: [Function: d], varToValue: [ { resolve: /\$\{workspaceFolder(?:\[(?:(-?[0-9]+)|(-?[0-9]+)?:(-?[0-9]+)?)\])?\}/, rule: [Function: rule] }, { resolve: /\$\{workspaceDirectory(?:\[(?:(-?[0-9]+)|(-?[0-9]+)?:(-?[0-9]+)?)\])?\}/, rule: [Function: rule] }, { resolve: '${workspaceName}', rule: 'test_cpp_testmate_issue' }, { resolve: /\$\{assert(?::([^}]+))?\}/, rule: [Function: rule] }, { resolve: '${osPathSep}', rule: '/' }, { resolve: '${osPathEnvSep}', rule: ':' }, { resolve: /\$\{config:([^}]+)\}/, rule: [Function: rule] }, { resolve: /\$\{command:([^}]+)\}/, rule: [AsyncFunction: rule] } ], rngSeed: null, execWatchTimeout: 10000, _execRunningTimeout: 600000, execParsingTimeout: 5000, isNoThrow: false, enabledTestListCaching: false, enabledStrictPattern: null, googleTestTreatGMockWarningAs: 'nothing', googleTestGMockVerbose: 'default', hideUninterestingOutput: true, _execRunningTimeoutChangeEmitter: o { w: 0, f: undefined, g: undefined, j: undefined, u: undefined, q: [Function (anonymous)], t: undefined }, _cancellationTokenSource: N { f: R { a: false, b: null }, g: undefined }, cancellationToken: R { a: false, b: null }, onDidChangeExecRunningTimeout: [Function (anonymous)], taskPool: { _maxTaskCount: 1, _runningTaskCount: 1, _waitingTasks: [] }, buildProcessChecker: { _log: { _logger: [Object] }, _checkIntervalMillis: 2000, _defaultPattern: /(^|[/\\])(bazel|cmake|make|ninja|cl|c\+\+|ld|clang|clang\+\+|gcc|g\+\+|link|icc|armcc|armclang)(-[^/\\]+)?(\.exe)?$/, _lastChecked: 0, _finishedP: Promise { undefined }, _finishedResolver: [Function (anonymous)], _timerId: undefined } }, name: undefined, description: undefined, varToValue: [ { resolve: '${filename}', rule: 'a.out' }, { resolve: '${relDirpath}', rule: '.' 
}, { resolve: /\$\{filename(?:\[(?:(-?[0-9]+)|(-?[0-9]+)?:(-?[0-9]+)?)\])?\}/, rule: [Function: rule] }, { resolve: /\$\{relPath(?:\[(?:(-?[0-9]+)|(-?[0-9]+)?:(-?[0-9]+)?)\])?\}/, rule: [Function: rule] }, { resolve: /\$\{absPath(?:\[(?:(-?[0-9]+)|(-?[0-9]+)?:(-?[0-9]+)?)\])?\}/, rule: [Function: rule] }, { resolve: /\$\{relDirpath(?:\[(?:(-?[0-9]+)|(-?[0-9]+)?:(-?[0-9]+)?)\])?\}/, rule: [Function: rule] }, { resolve: /\$\{absDirpath(?:\[(?:(-?[0-9]+)|(-?[0-9]+)?:(-?[0-9]+)?)\])?\}/, rule: [Function: rule] }, { resolve: '${extFilename}', rule: '.out' }, { resolve: '${baseFilename}', rule: 'a' }, { resolve: /\$\{workspaceFolder(?:\[(?:(-?[0-9]+)|(-?[0-9]+)?:(-?[0-9]+)?)\])?\}/, rule: [Function: rule] }, { resolve: /\$\{workspaceDirectory(?:\[(?:(-?[0-9]+)|(-?[0-9]+)?:(-?[0-9]+)?)\])?\}/, rule: [Function: rule] }, { resolve: '${workspaceName}', rule: 'test_cpp_testmate_issue' }, { resolve: /\$\{assert(?::([^}]+))?\}/, rule: [Function: rule] }, { resolve: '${osPathSep}', rule: '/' }, { resolve: '${osPathEnvSep}', rule: ':' }, { resolve: /\$\{config:([^}]+)\}/, rule: [Function: rule] }, { resolve: /\$\{command:([^}]+)\}/, rule: [AsyncFunction: rule] }, { resolve: /\$\{cwd(?:\[(?:(-?[0-9]+)|(-?[0-9]+)?:(-?[0-9]+)?)\])?\}/, rule: [Function: rule] } ], path: '/root/test_cpp_testmate_issue/a.out', options: { cwd: '/root/test_cpp_testmate_issue', env: { HOSTNAME: '9644d2087a41', HOME: '/root', PATH: '/root/.vscode-server/bin/dc96b837cf6bb4af9cd736aa3af08cf8279f7685/bin/remote-cli:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', VSCODE_AGENT_FOLDER: '/root/.vscode-server', SHELL: '/bin/bash', PWD: '/root/.vscode-server/bin/dc96b837cf6bb4af9cd736aa3af08cf8279f7685', VSCODE_HANDLES_SIGPIPE: 'true', VSCODE_AMD_ENTRYPOINT: 'vs/workbench/api/node/extensionHostProcess', VSCODE_HANDLES_UNCAUGHT_ERRORS: 'true', VSCODE_NLS_CONFIG: '{"locale":"en","osLocale":"en","availableLanguages":{}}', REMOTE_CONTAINERS_IPC: '/tmp/vscode-remote-containers-ipc-98cbc6e5-0003-4575-91a8-c707a29ab12d.sock', LS_COLORS: '', REMOTE_CONTAINERS_SOCKETS: '["/tmp/vscode-ssh-auth-98cbc6e5-0003-4575-91a8-c707a29ab12d.sock","/tmp/.X11-unix/X2"]', SHLVL: '1', _: '/usr/bin/cat', SSH_AUTH_SOCK: '/tmp/vscode-ssh-auth-98cbc6e5-0003-4575-91a8-c707a29ab12d.sock', DISPLAY: ':2', REMOTE_CONTAINERS_DISPLAY_SOCK: '/tmp/.X11-unix/X2', REMOTE_CONTAINERS: 'true', BROWSER: '/root/.vscode-server/bin/dc96b837cf6bb4af9cd736aa3af08cf8279f7685/bin/helpers/browser.sh', VSCODE_CWD: '/root/.vscode-server/bin/dc96b837cf6bb4af9cd736aa3af08cf8279f7685', ELECTRON_RUN_AS_NODE: '1', VSCODE_IPC_HOOK_CLI: '/tmp/vscode-ipc-0d7dcfb9-95a6-4778-8304-adc5be2f6268.sock' } }, _frameworkSpecific: {}, markAsSkipped: false, executableCloning: false, debugConfigData: undefined, runTask: { before: [], beforeEach: [], after: [], afterEach: [] }, spawner: s {}, resolvedSourceFileMap: {}, log: { _logger: { configSection: 'testMate.cpp.log', workspaceFolder: undefined, outputChannelName: 'C++ TestMate', inspectOptions: { depth: 3 }, includeLocation: false, targets: [ [a] ], nextInspectOptions: undefined, configChangeSubscription: { dispose: [Function (anonymous)] } } }, workspaceFolder: { uri: f { scheme: 'file', authority: '', path: '/root/test_cpp_testmate_issue', query: '', fragment: '', _formatted: 'file:///root/test_cpp_testmate_issue', _fsPath: '/root/test_cpp_testmate_issue' }, name: 'test_cpp_testmate_issue', index: 0 }, workspacePath: '/root/test_cpp_testmate_issue', testController: { controller: { items: { size: [Getter], forEach: [Function: forEach], 
replace: [Function: replace], add: [Function: add], delete: [Function: delete], get: [Function: get], toJSON: [Function: toJSON], [Symbol(Symbol.iterator)]: [Function: [Symbol.iterator]] }, label: [Getter/Setter], refreshHandler: [Getter/Setter], id: [Getter], createRunProfile: [Function: createRunProfile], createTestItem: [Function: createTestItem], createTestRun: [Function: createTestRun], invalidateTestResults: [Function: invalidateTestResults], resolveHandler: [Getter/Setter], dispose: [Function: dispose] }, testItem2test: WeakMap { } }, cancellationToken: R { a: false, b: null }, taskPool: { _maxTaskCount: 1, _runningTaskCount: 1, _waitingTasks: [] }, executeTask: [Function (anonymous)], parallelizationPool: { _maxTaskCount: 1, _runningTaskCount: 1, _waitingTasks: [] }, optionsHash: '89cf4b' } [ '--gtest_color=no', '--gtest_filter=ThisTest.FailsWithContext:ThisTest.AlsoFailsWithContext:ThisTest.Throws', '--gtest_also_run_disabled_tests' ]; [2024-05-15 19:53:44.405] [INFO] Test ThisTest.FailsWithContext has started. [2024-05-15 19:53:44.405] [DEBUG] findSourceFilePath: tests.cpp => /root/test_cpp_testmate_issue/tests.cpp [2024-05-15 19:53:44.405] [DEBUG] findSourceFilePath: /root/test_cpp_testmate_issue/tests.cpp => /root/test_cpp_testmate_issue/tests.cpp [2024-05-15 19:53:44.405] [INFO] Test ThisTest.FailsWithContext has stopped. [2024-05-15 19:53:44.406] [INFO] Test ThisTest.AlsoFailsWithContext has started. [2024-05-15 19:53:44.406] [DEBUG] findSourceFilePath: tests.cpp => /root/test_cpp_testmate_issue/tests.cpp [2024-05-15 19:53:44.406] [DEBUG] findSourceFilePath: /root/test_cpp_testmate_issue/tests.cpp => /root/test_cpp_testmate_issue/tests.cpp [2024-05-15 19:53:44.406] [INFO] Test ThisTest.AlsoFailsWithContext has stopped. [2024-05-15 19:53:44.406] [INFO] Test ThisTest.Throws has started. [2024-05-15 19:53:44.406] [INFO] Test ThisTest.Throws has stopped. [2024-05-15 19:53:44.407] [INFO] proc close: /root/test_cpp_testmate_issue/a.out [ 1, null ] [2024-05-15 19:53:44.407] [INFO] proc finished: /root/test_cpp_testmate_issue/a.out ```
matepek commented 6 months ago

You see the problem correctly: how could this extension know that the next line is related to the previous error? gtest output is just plain text, not well defined like XML or JSON.

But this case might be special in the sense that after a failure we just expect the test to finish. Note, though, that if a destructor printed something to stdout, that would be included too.

philippewarren commented 6 months ago

Maybe everything before the [ FAILED ] marker could be displayed? Even if it is not directly related to the failure, having too much unrelated context seems better than missing context.

Google Test has an option for structured output:

  --gtest_output=(json|xml)[:DIRECTORY_PATH/|:FILE_PATH]
      Generate a JSON or XML report in the given directory or with the given
      file name. FILE_PATH defaults to test_detail.xml.

Not sure since which version it exists, and it only writes to a file, but it could be an alternative.
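
For illustration, here is a minimal TypeScript sketch (not the extension's actual code) of how the JSON report could be consumed after a run. The field names `testsuites`, `testsuite`, `failures` and `failure` match the reports I have seen from recent GoogleTest versions but should be double-checked; `failureMessagesFromJsonReport` and the report path are made up for the example.

```typescript
import { spawnSync } from 'child_process';
import { readFileSync } from 'fs';

// Hypothetical helper: run the test binary with --gtest_output=json and read
// the full failure messages (which include the streamed-in << context) from
// the report file once the process has exited.
export function failureMessagesFromJsonReport(
  binary: string,
  reportPath = '/tmp/gtest_report.json',
): Map<string, string[]> {
  spawnSync(binary, [`--gtest_output=json:${reportPath}`], { encoding: 'utf8' });

  const report = JSON.parse(readFileSync(reportPath, 'utf8'));
  const messages = new Map<string, string[]>();

  for (const suite of report.testsuites ?? []) {
    for (const test of suite.testsuite ?? []) {
      const failures = (test.failures ?? []).map((f: { failure: string }) => f.failure);
      if (failures.length > 0) {
        messages.set(`${suite.name}.${test.name}`, failures);
      }
    }
  }
  return messages;
}
```

The obvious limitation is the one raised below: the report is only written when the process exits, so it cannot drive live progress on its own.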

matepek commented 6 months ago

If I remember correctly, the problem with that is that it is not continuous: it just spits out the result all at once at the end, so it cannot be parsed while the executable is running. They might have changed that since, I don't know.

Parsing until the end would work for "assert" but not for "except". I don't think it's possible to tell which one is happening.

gavinltomra commented 5 months ago

> If I remember correctly, the problem with that is that it is not continuous: it just spits out the result all at once at the end, so it cannot be parsed while the executable is running.

There's a --gtest_stream_result_to option specifically intended to send test progress to IDEs, but sadly it isn't yet available on Windows (see google/googletest#3989).
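
A rough sketch of how that flag could be wired up on the platforms where it is available; the binary name, the port handling and the plain logging are illustrative, and the events GoogleTest writes to the socket are not parsed here.

```typescript
import * as net from 'net';
import { spawn } from 'child_process';

// Open a local TCP server and point the test binary at it with
// --gtest_stream_result_to=HOST:PORT. As I understand the flag, GoogleTest
// connects to that address and streams test events as text while the run is
// in progress; here they are only logged, not parsed.
const server = net.createServer(socket => {
  socket.on('data', chunk => console.log('[gtest event]', chunk.toString().trim()));
});

server.listen(0, '127.0.0.1', () => {
  const { port } = server.address() as net.AddressInfo;
  const child = spawn('./a.out', [`--gtest_stream_result_to=127.0.0.1:${port}`], {
    stdio: 'inherit', // regular stdout still flows in parallel with the stream
  });
  child.on('exit', () => server.close());
});
```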

Though that doesn't help much with output capturing; for that, the best approach is probably still to report everything between the [ RUN ] and [ OK ]/[ FAILED ] markers for the test as a whole.
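
As a sketch of that heuristic (assuming the usual `[ RUN      ]`, `[       OK ]` and `[  FAILED  ]` console markers; `collectTestBlocks` is a made-up name, not the extension's parser):

```typescript
// Collect every stdout line between a test's [ RUN ] marker and its
// [ OK ] / [ FAILED ] marker, so trailing context such as
// `FAIL() << "This test fails with context"` is not dropped.
const RUN_RE = /^\[\s*RUN\s*\]\s+(\S+)/;
const END_RE = /^\[\s*(?:OK|FAILED)\s*\]\s+\S+/;

export function collectTestBlocks(stdout: string): Map<string, string[]> {
  const blocks = new Map<string, string[]>();
  let currentTest: string | undefined;

  for (const line of stdout.split(/\r?\n/)) {
    const run = RUN_RE.exec(line);
    if (run) {
      currentTest = run[1];
      blocks.set(currentTest, []);
      continue;
    }
    if (END_RE.test(line)) {
      currentTest = undefined; // lines after the marker belong to no test
      continue;
    }
    if (currentTest !== undefined) {
      blocks.get(currentTest)!.push(line);
    }
  }
  return blocks;
}
```

The captured block may contain unrelated stdout, but as noted earlier in the thread, too much context is arguably better than missing context.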

> Parsing until the end would work for "assert" but not for "except". I don't think it's possible to tell which one is happening.

At least for output generated solely by the assertions themselves, anything between one failure report and the next relates to the former failure. Unfortunately, this gets muddied if the test also writes additional output to stdout/stderr directly, since that output usually precedes the failure it relates to, and as you said there isn't a good way to tell purely from stdout where the output of the previous failure ends and the manual stdout for the next failure begins.

Parsing the XML output gives you only the specific failure messages; perhaps some combination of all three would be ideal? (Streaming output for live progress, XML parsing for the failure overlays, and stdout for the Test Results panel, perhaps compared against the XML messages to decide which failure each stdout line "belongs" to.)

The XML output is generated in addition to regular stdout, so they can both be used together, and until streaming output is more widely supported you could continue to guess progress from stdout until the tests finish and the XML is available.
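
And a small sketch of that combination: given the per-test stdout block (for example from `collectTestBlocks` above) and the failure messages taken from the XML/JSON report, attribute each stdout line to the failure whose report text contains it. The function name and the containment check are just one possible heuristic, not the extension's behaviour.

```typescript
// Split a test's captured stdout block into per-failure context plus leftover
// output that no report message accounts for (e.g. direct prints from the test).
export function splitBlockByFailures(
  blockLines: string[],
  failureMessages: string[],
): { perFailure: string[][]; unattributed: string[] } {
  const perFailure: string[][] = failureMessages.map(() => []);
  const unattributed: string[] = [];

  for (const line of blockLines) {
    const idx = failureMessages.findIndex(msg => line.length > 0 && msg.includes(line));
    if (idx >= 0) {
      perFailure[idx].push(line);
    } else {
      unattributed.push(line);
    }
  }
  return { perFailure, unattributed };
}
```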

matepek commented 1 month ago
Fixed in v4.12.1. This issue was mentioned in [CHANGELOG.md](./CHANGELOG.md) under a released entry, so it is assumed to be fixed. User verification is always welcome.