GoogleCodeExporter opened 9 years ago
Thank you for taking an interest in the open-vcdiff package and for sending this detailed bug report.
> The following unit tests report errors to stderr:
> vcencoder_test
> addrcache_test
> vcdecoder_test
> codetable_test
>
> However, all these tests output "[ PASSED ]" to stdout
The ERROR lines of output are expected as part of the unit tests.
When the open-vcdiff encoder or decoder encounters an error, it logs an error
message to stderr and returns "false" from the interface -- for example, from
VCDiffStreamingDecoder::DecodeChunk(), defined in vcdecoder.h.
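To make that concrete, here is a minimal sketch of how a caller can check those return values rather than relying on the stderr log (the signatures follow the usage described in the open-vcdiff README; treat vcdecoder.h in your checkout as authoritative):

#include <string>
#include "google/vcdecoder.h"

// Decode 'delta' against 'dictionary', appending the result to *target.
// Returns false if the decoder reported an error (which it will also have
// logged to stderr).
bool DecodeDelta(const std::string& dictionary,
                 const std::string& delta,
                 std::string* target) {
  open_vcdiff::VCDiffStreamingDecoder decoder;
  decoder.StartDecoding(dictionary.data(), dictionary.size());
  if (!decoder.DecodeChunk(delta.data(), delta.size(), target)) {
    return false;  // corrupt delta: an ERROR line has been written to stderr
  }
  return decoder.FinishDecoding();  // false if the delta ended mid-window
}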
As an example, in the following three lines of output from the build:
[ RUN ] CodeTableTest.MissingAdd
ERROR: VCDiff: Bad code table; there is no opcode for inst ADD, size 0, mode 0
[ OK ] CodeTableTest.MissingAdd
... the first and third lines are produced by the unit test framework, while the middle ERROR line is the error message reported by open-vcdiff. The unit test CodeTableTest.MissingAdd makes sure that if open-vcdiff is presented with an invalid code table, it reports an error condition. So the error is the correct, expected result of the test.
If a unit test fails, the unit test executable will return a non-zero exit code, which will cause Visual Studio to report a failure condition and stop the build. For open-vcdiff 0.2, if none of the unit tests fails, you will see the following line at the end of a complete rebuild:
========== Build: 18 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
This is enough to assure you that none of the unit tests has failed.
If you are integrating open-vcdiff into an environment in which you don't want any error logging information sent to stderr, then you can change the definition of LogMessage in logging.cc (part of the vcdcom project) to take some other action when a message is logged. Your replacement definition must return a class that has the << operator defined for strings, integers, etc., because the LOG macro has the same syntax as a C++ output stream like cerr.
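For instance, a no-op replacement could look like the following sketch. It assumes the LOG macro expands to a call to LogMessage() followed by << insertions; the real function name and signature in logging.cc must be matched exactly:

// Hypothetical replacement that discards all log messages instead of
// writing them to stderr.
class NullLogStream {
 public:
  // Accept and ignore anything the LOG macro streams: strings, ints, etc.
  template <typename T>
  NullLogStream& operator<<(const T& /*value*/) { return *this; }
};

NullLogStream& LogMessage() {
  static NullLogStream null_stream;  // shared sink; messages vanish here
  return null_stream;
}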
> vcdecoder_test error output mentions "VCDIFF delta file"; there's no such
> file provided nor generated by another test.
The term "delta file" has a particular meaning in the context of VCDIFF: it
refers
to the output of the encoder, or the input of the decoder. It does not
necessarily
refer to a file on disk. See Section 4 of RFC 3284
(http://www.ietf.org/rfc/rfc3284.txt), which describes the expected contents of
a "delta file". In the case of vcdecoder_test, the "delta file" is actually a
static series of hard-coded bytes, which are used to confirm that the decoder
produces the correct output (or the proper error message) given the delta file
as
input.
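For illustration, a test in that style might look like this sketch (the test name and input bytes here are made up, not the actual contents of vcdecoder_test):

#include <string>
#include "google/vcdecoder.h"
#include "gtest/gtest.h"

TEST(SketchDecoderTest, InvalidDeltaFileIsRejected) {
  // The "delta file" here is just a hard-coded byte string.  It does not
  // begin with the VCDIFF magic number (0xD6 0xC3 0xC4, per RFC 3284),
  // so the decoder should log an ERROR line and return false.
  const std::string kBadDelta = "this is not a VCDIFF delta file";
  const std::string kDictionary = "hello world";

  open_vcdiff::VCDiffStreamingDecoder decoder;
  std::string output;
  decoder.StartDecoding(kDictionary.data(), kDictionary.size());
  EXPECT_FALSE(decoder.DecodeChunk(kBadDelta.data(), kBadDelta.size(),
                                   &output));
}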
Please don't hesitate to add more comments, or post to the open-vcdiff discussion group (http://groups.google.com/group/open-vcdiff) if you have any more questions or concerns.
Saludos,
Lincoln Smith
Software Engineer
Google
Original comment by openvcd...@gmail.com on 10 Sep 2008 at 11:31
P.S. I thought about suppressing the ERROR messages to make the unit test output cleaner, but in most cases it is important to verify not just that the encoder or decoder produces some sort of error, but that the error message is the correct one.
Saludos,
lincoln
Original comment by openvcd...@gmail.com on 10 Sep 2008 at 11:56
All you say is perfectly sound. Agreed. Unit tests must exercise as many execution paths as possible, including error handling.
> I thought about suppressing the ERROR messages to make the
> unit test output cleaner, but in most cases it is important
> to verify not just that the encoder or decoder produces
> some sort of error, but that the error message is the correct one
I think that makes the task less automated, because it involves looking through the output in order to verify the error messages. It somewhat counters the philosophy of automated batch testing. The strictest approach would be to have the test environment capture and verify the error messages, thus fully encapsulating the tested code. And this would make the output cleaner, too. Though I do not know if googletest facilitates that.
Original comment by s...@sl.iae.nsk.su on 11 Sep 2008 at 1:25
> I think that makes the task less automated, because it involves looking
> through the output in order to verify the error messages. It somewhat
> counters the philosophy of automated batch testing. The strictest
> approach would be to have the test environment capture and verify the
> error messages, thus fully encapsulating the tested code. And this
> would make the output cleaner, too. Though I do not know if googletest
> facilitates that.
I agree with you. Verifying the test output by hand is not ideal and can cause regressions to go unnoticed. Currently we only have automated checks for the output of the death tests (those that are expected to crash or exit).
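For instance, a death test can check the message written to stderr against a regular expression (a sketch with hypothetical names, not a test from the actual suite):

#include <cstdio>
#include <cstdlib>
#include "gtest/gtest.h"

// Hypothetical helper standing in for a code path that is expected to die.
static void DieWithBadCodeTable() {
  std::fprintf(stderr, "ERROR: VCDiff: Bad code table\n");
  std::abort();
}

TEST(SketchDeathTest, ReportsBadCodeTable) {
  // The second argument is a regular expression matched against the
  // output the dying process writes to stderr.
  EXPECT_DEATH(DieWithBadCodeTable(), "Bad code table");
}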
I'll change this defect to an enhancement request and keep it open.
The goal will be to print encoder/decoder errors only if they differ from
the expected output.
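One possible shape for that check is to capture stderr around the decoder call and compare it to the expected message (a sketch; CaptureStderr() and GetCapturedStderr() live in googletest's testing::internal namespace, so they are not a stable public API):

#include <string>
#include "gtest/gtest.h"

TEST(SketchCaptureTest, ErrorMessageMatchesExpectation) {
  testing::internal::CaptureStderr();
  // ... exercise the decoder with a known-bad delta file here ...
  const std::string captured = testing::internal::GetCapturedStderr();
  // Fail if the logged message is not the one we expect.
  EXPECT_NE(captured.find("Bad code table"), std::string::npos);
}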
Original comment by openvcd...@gmail.com on 11 Sep 2008 at 7:21
Original comment by openvcd...@gmail.com on 6 Aug 2010 at 10:57
Original issue reported on code.google.com by s...@sl.iae.nsk.su on 8 Sep 2008 at 11:01