BurntSushi opened this issue 9 years ago
It seems to me like the right solution here is to stop implicitly using a buffer in the implementation, because it makes for surprising behavior in cases where a buffer isn't needed (or is harmful). In particular, it seems strange to have methods called `from_writer` and `from_reader` that implicitly wrap the given reader/writer in a buffer. Instead, we should require the caller to do that explicitly, or have constructors with different names that make it clear a buffer will be used internally (and that therefore signal to the caller that the encoder must be flushed).
@BurntSushi I agree that it's better to require the caller to use a buffer explicitly. It will also remove the overhead of buffering when writing to a `Vec` or any other in-memory target.
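For concreteness, a minimal sketch of what the explicit convention could look like; the `Encoder` shape, `encode_bytes`, and `into_inner` below are hypothetical stand-ins for illustration, not this crate's actual API:

```rust
use std::fs::File;
use std::io::{self, BufWriter, Write};

// Hypothetical encoder shape, for illustration only: it does no
// buffering of its own and simply writes to whatever `W` it is given.
struct Encoder<W: Write> {
    wtr: W,
}

impl<W: Write> Encoder<W> {
    fn from_writer(wtr: W) -> Encoder<W> {
        Encoder { wtr }
    }

    fn encode_bytes(&mut self, bytes: &[u8]) -> io::Result<()> {
        self.wtr.write_all(bytes)
    }

    // Hand the writer back so the caller can flush it explicitly.
    fn into_inner(self) -> W {
        self.wtr
    }
}

fn main() -> io::Result<()> {
    // Writing to a file: the caller opts into buffering, so it is
    // obvious that the buffer must be flushed before the data is durable.
    let mut enc = Encoder::from_writer(BufWriter::new(File::create("out.cbor")?));
    enc.encode_bytes(&[0xA1])?;
    enc.into_inner().flush()?;

    // Writing to a Vec: no BufWriter, hence no buffering overhead and
    // nothing to flush.
    let mut enc = Encoder::from_writer(Vec::new());
    enc.encode_bytes(&[0xA1])?;
    let bytes: Vec<u8> = enc.into_inner();
    println!("{:?}", bytes);
    Ok(())
}
```

With this shape, buffering (and therefore the need to flush) is visible at the call site, and in-memory writers pay no extra cost.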
When writing to a fixed-size buffer, the encoder should fail if it needs to write more data than will fit in the buffer. Currently, the encoder writes as much as it can and then stops without reporting an error.
It seems like this can be attributed to the internal use of `io::BufWriter`. Here's a small reproduction of the core bug:
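Something along these lines, using a fixed-size `&mut [u8]` slice as the `Write` sink (a sketch, not necessarily the exact original):

```rust
use std::io::{self, Write};

fn main() {
    // A fixed-size sink: `&mut [u8]` implements `Write` and reports a
    // short write once the slice is full.
    let mut storage = [0u8; 2];
    let mut buf = io::BufWriter::new(&mut storage[..]);

    // "Succeeds": all four bytes land in BufWriter's internal buffer,
    // which is much larger than the two-byte sink underneath.
    buf.write_all(&[1, 2, 3, 4]).unwrap();

    // Calling `buf.flush()` here would report the failure described below.
}
```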
This succeeds presumably because the bytes were indeed successfully written to the buffer. However, if one calls `buf.flush()`, then an error is reported because the entire contents of the buffer could not be written.

If we look at an example from #6, it finishes with the output `len: 1, contents: [161]`. Namely, no error is reported:
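In spirit, the example looks like the following, with `BufWriter` standing in for the encoder's internal buffering (the trailing bytes are made up; `0xA1` is the `161` from the output):

```rust
use std::io::{self, Write};

fn main() {
    // A one-byte sink, standing in for the fixed-size buffer from #6.
    let mut storage = [0u8; 1];
    {
        let mut buf = io::BufWriter::new(&mut storage[..]);
        // Pretend these four bytes are the encoder's output; 0xA1 (161)
        // is the first of them.
        buf.write_all(&[0xA1, 0x62, 0x68, 0x69]).unwrap();
        // `buf` is dropped at the end of this block. Drop flushes, the
        // flush fails (only one byte fits), and the error is discarded.
    }
    println!("len: {}, contents: {:?}", storage.len(), storage);
    // Prints: len: 1, contents: [161]
}
```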
This is because the data from the buffer is flushed when the encoder is dropped. That flush should produce an error, but errors are ignored in destructors.
If we change the code above (same, but with a call to `enc.flush()`):
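Continuing the sketch above (with the real encoder, this is the `enc.flush()` call):

```rust
use std::io::{self, Write};

fn main() {
    let mut storage = [0u8; 1];
    let mut buf = io::BufWriter::new(&mut storage[..]);
    buf.write_all(&[0xA1, 0x62, 0x68, 0x69]).unwrap();
    // The explicit flush surfaces the error that Drop would otherwise
    // swallow: the buffered data does not fit in the one-byte sink.
    buf.flush().unwrap();
}
```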
Then it will indeed fail, because the underlying buffer flush will fail.
cc @tailhook