You see the problem correctly: how could this extension know that the next line is related to the previous error? gtest output is just text, not well defined like XML or JSON.
But this might be special in the sense that after a failure we just expect the test to finish. Note, though, that if a destructor printed something to stdout, that would end up there too.
Maybe everything before the [ FAILED ] marker could be displayed? Even if it is not directly related to the failure, having too much unrelated context seems better than missing context.
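For illustration, a minimal sketch of that idea in TypeScript (the function and its shape are hypothetical; only the [ RUN ]/[ OK ]/[ FAILED ] markers come from Google Test's actual output):

```ts
// Hypothetical sketch: collect every stdout line seen since the last marker
// and attribute it to the test named by the next [  FAILED  ] line.
function attributeOutput(lines: string[]): Map<string, string[]> {
  const byTest = new Map<string, string[]>();
  let buffer: string[] = [];
  for (const line of lines) {
    const failed = line.match(/^\[\s+FAILED\s+\]\s+([\w./]+)/);
    if (failed) {
      // Everything before the [ FAILED ] marker, related or not.
      byTest.set(failed[1], buffer);
      buffer = [];
    } else if (/^\[\s+(RUN|OK)\s+\]/.test(line)) {
      buffer = []; // drop output that belongs to a passing test
    } else {
      buffer.push(line);
    }
  }
  return byTest;
}
```

A real parser would also have to skip the [ FAILED ] lines repeated in the final summary, and, as noted above, destructor output would still land in the buffer.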
Google Test has an option for structured output:

```
--gtest_output=(json|xml)[:DIRECTORY_PATH/|:FILE_PATH]
    Generate a JSON or XML report in the given directory or with the given
    file name. FILE_PATH defaults to test_detail.xml.
```
Not sure since which version it exists, and it only writes to a file, but it could be an alternative.
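A rough sketch of consuming the report after the run (Node/TypeScript; the field names match what recent Google Test versions document for the JSON report, but treat the exact structure as an assumption):

```ts
import { execFileSync } from "node:child_process";
import { readFileSync } from "node:fs";

// Ask for a JSON report next to the binary (paths here are made up).
try {
  execFileSync("./a.out", ["--gtest_output=json:report.json"]);
} catch {
  // Failing tests make the process exit non-zero; the report is still written.
}

const report = JSON.parse(readFileSync("report.json", "utf8"));
for (const suite of report.testsuites ?? []) {
  for (const test of suite.testsuite ?? []) {
    for (const f of test.failures ?? []) {
      console.log(`${suite.name}.${test.name}: ${f.failure}`);
    }
  }
}
```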
If I remember correctly, the problem with that is that it is not continuous: it just writes the whole result at once at the end, so it cannot be parsed while the executable is running. They might have changed that since; I don't know.
Parsing until the end would work for "assert" but not for "except". I don't think it's possible to tell which one is happening.
> If I remember correctly, the problem with that is that it is not continuous: it just writes the whole result at once at the end, so it cannot be parsed while the executable is running.
There's a `--gtest_stream_result_to` option specifically intended to send test progress to IDEs, but sadly it isn't yet available on Windows (see google/googletest#3989).
Though that doesn't help much with output capturing; for that, probably the best way is still to report everything between the [RUN] and [OK]/[FAILED] markers for the test as a whole.
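On platforms where the flag works, the stream is a plain TCP connection. A minimal listener sketch (if I recall correctly the events arrive one per line as key=value pairs, but I haven't verified the exact format, so this just echoes whatever arrives):

```ts
import { createServer } from "node:net";

// Hypothetical sketch: receive Google Test's streamed results.
// Start the tests with: ./a.out --gtest_stream_result_to=localhost:9999
const server = createServer((socket) => {
  socket.setEncoding("utf8");
  socket.on("data", (chunk) => {
    for (const line of chunk.toString().split("\n")) {
      if (line.trim()) console.log("gtest event:", line);
    }
  });
});
server.listen(9999, "127.0.0.1");
```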
> Parsing until the end would work for "assert" but not for "except". I don't think it's possible to tell which one is happening.
At least for output solely generated by the assertions themselves, anything between one failure report and the next is all related to the former failure. Unfortunately this gets muddied if the test also writes additional output to stdout/stderr directly, as that output usually precedes the failure it relates to instead, and as you said there isn't a good way to tell, purely from stdout, where the output of the previous failure ends and manual stdout for the next failure begins.
Parsing the XML output does give you only the specific failure messages; perhaps some combination of all three would be ideal? (streaming output for live progress, XML parsing for failure overlays, stdout for the Test Results panel; perhaps comparing against the XML messages to decide which failure each stdout line "belongs" to?)
The XML output is generated in addition to regular stdout, so they can both be used together, and until streaming output is more widely supported you could continue to guess progress from stdout until the tests finish and the XML is available.
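To make the matching idea concrete, a sketch (entirely hypothetical; a real implementation would use a proper XML parser instead of this crude regex, which also ignores XML attribute escaping, and matching the first line of each XML failure message against stdout is just one possible heuristic):

```ts
// Hypothetical sketch: use the failure messages from the XML report to guess
// where one failure's stdout ends and the next failure's manual output begins.
function splitStdoutByFailures(stdout: string, xml: string): string[][] {
  // Crude extraction of each <failure message="..."> attribute.
  const messages = [...xml.matchAll(/<failure message="([^"]*)"/g)].map(
    (m) => m[1]
  );
  const lines = stdout.split("\n");
  // Locate the stdout line where each failure message starts.
  const starts = messages
    .map((msg) => lines.findIndex((l) => l.length > 0 && msg.startsWith(l)))
    .filter((i) => i >= 0)
    .sort((a, b) => a - b);
  // Everything between two start points "belongs" to the earlier failure.
  return starts.map((start, i) =>
    lines.slice(start, starts[i + 1] ?? lines.length)
  );
}
```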
**Describe the bug**
Using the extension, additional context passed to GoogleTest is lost in the inline report. The context is shown in the Test Results panel, and when running the test executable directly.
When using a macro that already prints some information, I can get the context to show by adding at least two leading spaces. This does not work with the `FAIL()` macro, however many spaces I add. This is probably an improvement to make to the output parser that determines what is shown and what is not.
Another case is exceptions. With the `Throws` test below, no overlay can be shown at all; the information is only displayed in the Test Results panel.
Ideally, this information should be displayed to help debug failed tests. It is currently possible to look for it in the Test Results panel, but the overlay is easier to use, so it would be nice to have it there.
**To Reproduce**
With `libgtest-dev` installed from `apt`:

```cpp
#include <gtest/gtest.h>

TEST(ThisTest, FailsWithContext) {
  FAIL() << "This test fails with context";
}

TEST(ThisTest, AlsoFailsWithContext) {
  ASSERT_EQ(1, 2) << "Value of [" << 1 << "] is not equal to [" << 2 << "]";
}

TEST(ThisTest, Throws) {
  throw true;
}
```
Build and run the resulting `a.out` through the extension. GoogleTest versions tested: the one shipped by `apt` and 1.14.0 (`vcpkg`). (The reproducible example above was tested on the host. My own project is more complex, but gives the same results; it runs in Docker.)
**Regression bug?**
No, I tried 4.1.0 and it did not work. Before that, the overlay did not work (4.0.0), or the test explorer did not show anything (< 4.0.0).
**Log** (optional but recommended)
The log was taken in a fresh Ubuntu 20.04 Docker container, using the example above.

```js
[2024-05-15 19:53:44.389] [INFO] proc starting /root/test_cpp_testmate_issue/a.out [
  '--gtest_color=no',
  '--gtest_filter=ThisTest.FailsWithContext:ThisTest.AlsoFailsWithContext:ThisTest.Throws',
  '--gtest_also_run_disabled_tests'
] /root/test_cpp_testmate_issue/a.out
[2024-05-15 19:53:44.402] [INFO] proc started 4426 /root/test_cpp_testmate_issue/a.out {
  shared: {
    workspaceFolder: {
      uri: f {
        scheme: 'file',
        authority: '',
        path: '/root/test_cpp_testmate_issue',
        query: '',
        fragment: '',
        _formatted: 'file:///root/test_cpp_testmate_issue',
        _fsPath: '/root/test_cpp_testmate_issue'
      },
      name: 'test_cpp_testmate_issue',
      index: 0
    },
    log: {
      _logger: {
        configSection: 'testMate.cpp.log',
        workspaceFolder: undefined,
        outputChannelName: 'C++ TestMate',
        inspectOptions: [Object],
        includeLocation: false,
        targets: [Array],
        nextInspectOptions: undefined,
        configChangeSubscription: [Object]
      }
    },
    testController: {
      controller: {
        items: [Object],
        label: [Getter/Setter],
        refreshHandler: [Getter/Setter],
        id: [Getter],
        createRunProfile: [Function: createRunProfile],
        createTestItem: [Function: createTestItem],
        createTestRun: [Function: createTestRun],
        invalidateTestResults: [Function: invalidateTestResults],
        resolveHandler: [Getter/Setter],
        dispose: [Function: dispose]
      },
      testItem2test: WeakMap {
```