Thanks for taking the reins on this. I was thinking about the stream a little bit, and here's my proposal if you want to go forward with this:
The stream is created with the parameters baseStream, CompressionFormat, streamMode and stride. StreamMode would dictate whether this is an encoding stream or a decoding stream. Stride would be the number of bytes per image line (4 * Width), but it would need to be divisible by 16. An internal buffer of 4 * stride would be created when the stream is initialized to store uncompressed data, and another internal buffer would be created to accommodate the encoded blocks. The current internal position within the buffer would also be tracked, but not exposed as a public variable. The Position property can still denote the position in the base stream.
If the BcStream is a decoding stream, CanWrite returns false and CanRead returns true. For encoding it's vice versa.
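Roughly, the skeleton I have in mind would look something like this. This is only a sketch: BcStreamMode, the field names and the EncodedBatchSize helper are placeholders I made up for illustration, while CompressionFormat is the library's existing enum.

```csharp
using System;
using System.IO;

// Hypothetical name for the proposed StreamMode parameter.
public enum BcStreamMode { Decode, Encode }

public class BcStream : Stream
{
    private readonly Stream _baseStream;
    private readonly CompressionFormat _format;   // the library's existing format enum
    private readonly BcStreamMode _mode;
    private readonly int _stride;                 // bytes per image line (4 * width), must be divisible by 16

    private readonly byte[] _decodedBuffer;       // 4 * stride: four lines of raw RGBA data
    private readonly byte[] _encodedBuffer;       // the same four lines in their BC-encoded form
    private int _bufferPosition;                  // position inside _decodedBuffer, not exposed publicly
    private int _bufferLength;                    // amount of valid decoded data currently buffered

    public BcStream(Stream baseStream, CompressionFormat format, BcStreamMode mode, int stride)
    {
        if (stride % 16 != 0)
            throw new ArgumentException("Stride must be divisible by 16.", nameof(stride));

        _baseStream = baseStream;
        _format = format;
        _mode = mode;
        _stride = stride;

        _decodedBuffer = new byte[4 * stride];
        _encodedBuffer = new byte[EncodedBatchSize(format, stride)];
    }

    // Decoding streams are read-only, encoding streams are write-only.
    public override bool CanRead => _mode == BcStreamMode.Decode;
    public override bool CanWrite => _mode == BcStreamMode.Encode;
    public override bool CanSeek => false;

    // Position can still report the position in the base stream.
    public override long Position
    {
        get => _baseStream.Position;
        set => throw new NotSupportedException();
    }

    public override long Length => _baseStream.Length;
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();

    private static int EncodedBatchSize(CompressionFormat format, int stride)
    {
        // stride / 16 = number of 4x4 blocks covered by four image lines.
        var blocksPerBatch = stride / 16;
        var bytesPerBlock = format == CompressionFormat.Bc1 ? 8 : 16; // simplified: 8 bytes for BC1/BC4 blocks, 16 otherwise
        return blocksPerBatch * bytesPerBlock;
    }

    // Read, Write and Flush are sketched further below.
}
```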
Decoding in the Read method is done by filling the compressed data buffer with bytes from the underlying stream and decoding all of it into the uncompressed buffer. We then return the requested number of bytes from the uncompressed buffer and increment the internal position. When the end of the internal buffer is reached, a new batch is read from the base stream and decoded into the internal uncompressed buffer. When no more bytes can be read from the underlying stream, zero is returned.
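The Read loop could then look roughly like this, continuing the skeleton above. DecodeBatch is a hypothetical helper that decodes one batch of BC blocks back into four RGBA lines.

```csharp
public override int Read(byte[] buffer, int offset, int count)
{
    if (!CanRead)
        throw new NotSupportedException("This BcStream was opened for encoding.");

    var totalRead = 0;
    while (totalRead < count)
    {
        // Refill once the previously decoded batch has been fully consumed.
        if (_bufferPosition == _bufferLength)
        {
            // A real implementation would loop until the whole batch is read.
            var read = _baseStream.Read(_encodedBuffer, 0, _encodedBuffer.Length);
            if (read == 0)
                break;                     // base stream exhausted; returns 0 if nothing was copied at all

            DecodeBatch(_encodedBuffer, _decodedBuffer);   // hypothetical: BC blocks -> raw RGBA lines
            _bufferLength = _decodedBuffer.Length;
            _bufferPosition = 0;
        }

        var available = Math.Min(count - totalRead, _bufferLength - _bufferPosition);
        Array.Copy(_decodedBuffer, _bufferPosition, buffer, offset + totalRead, available);
        _bufferPosition += available;
        totalRead += available;
    }

    return totalRead;
}
```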
Writing is done similarly in batches. When written to, the uncompressed data is copied into the internal buffer. Once the buffer is full, it is encoded into the compressed buffer and written to the underlying stream. Flush should not write any data to the underlying stream, but should call the base stream's Flush instead.
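And the matching Write and Flush, under the same assumptions, with EncodeBatch as the hypothetical counterpart that encodes four buffered RGBA lines into BC blocks:

```csharp
public override void Write(byte[] buffer, int offset, int count)
{
    if (!CanWrite)
        throw new NotSupportedException("This BcStream was opened for decoding.");

    while (count > 0)
    {
        // Copy incoming RGBA data into the internal buffer.
        var toCopy = Math.Min(_decodedBuffer.Length - _bufferPosition, count);
        Array.Copy(buffer, offset, _decodedBuffer, _bufferPosition, toCopy);
        _bufferPosition += toCopy;
        offset += toCopy;
        count -= toCopy;

        // Once four full lines are buffered, encode them and push the blocks to the base stream.
        if (_bufferPosition == _decodedBuffer.Length)
        {
            EncodeBatch(_decodedBuffer, _encodedBuffer);   // hypothetical: raw RGBA lines -> BC blocks
            _baseStream.Write(_encodedBuffer, 0, _encodedBuffer.Length);
            _bufferPosition = 0;
        }
    }
}

// Flush does not push a partial batch out; it only flushes the base stream.
public override void Flush() => _baseStream.Flush();
```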
This kind of design would be able to handle writing large amounts of RGBA data to a compressed file without a large memory footprint. It would also limit image dimensions to multiples of four in both directions, which should be documented.
What are your thoughts on this?
I don't quite understand the reason behind StreamMode. As for the current implementation, BcStream is meant to take in a stream of compressed data. So Read decodes the data from the base stream, and Write encodes the data to the baseStream. Why split that behaviour? That is rather unstream-like.
Some kind of batching is a possibility, however. Maybe we can utilize BufferedStream for this in some capacity.
That's how GZipStream and DeflateStream work, and they fulfill a similar role. When you open one of them, you specify whether you're going to use it for compressing or decompressing.
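For example (rawData and buffer are just placeholder arrays here):

```csharp
using System.IO;
using System.IO.Compression;

var rawData = new byte[1024];
var buffer = new byte[1024];

// Compress: bytes written to the GZipStream end up compressed in output.gz.
using (var output = File.Create("output.gz"))
using (var gzip = new GZipStream(output, CompressionMode.Compress))
{
    gzip.Write(rawData, 0, rawData.Length);
}

// Decompress: bytes read from the GZipStream are the decompressed contents again.
using (var input = File.OpenRead("output.gz"))
using (var gzip = new GZipStream(input, CompressionMode.Decompress))
{
    var bytesRead = gzip.Read(buffer, 0, buffer.Length);
}
```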
The buffers should probably live in the BcStream to guarantee correct behaviour.
Those streams only do that because those compression formats don't support random access. A BC-encoded image, however, does. So if your underlying data cannot be accessed randomly, you need such a mode switch; otherwise you don't.
Basically, if you can determine the exact position of the requested data in both its encoded and decoded state, then a mode switch is not necessary.
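To make that concrete: for a fixed-size block format, both offsets can be computed directly from the pixel coordinates, roughly like this (x, y, width and blockByteSize are just illustrative inputs; BC1/BC4 use 8 bytes per 4x4 block, the other BC formats 16):

```csharp
// Offset of pixel (x, y) in the decoded RGBA8 data:
long decodedOffset = ((long)y * width + x) * 4;

// Offset of the 4x4 block containing that pixel in the encoded data,
// assuming the blocks are stored row-major:
long blocksPerRow = width / 4;
long blockIndex = (y / 4) * blocksPerRow + (x / 4);
long encodedOffset = blockIndex * blockByteSize;
```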
We can leave it out then, but it still does not make sense to me to encode to and decode from the same stream. At the very least, the BcStream implementation would have to guard against messing things up.
It does make sense: you decode FROM the base stream and encode TO the base stream. That's also how CryptoStream works. A CryptoStream decrypts data FROM the base stream and encrypts data INTO it.
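For example (a simplified sketch, with the key handling glossed over):

```csharp
using System.IO;
using System.Security.Cryptography;

using var aes = Aes.Create();
var plaintext = new byte[1024];

// Encrypt INTO the base stream: bytes written come out encrypted in the file.
using (var output = File.Create("encrypted.bin"))
using (var encrypt = new CryptoStream(output, aes.CreateEncryptor(), CryptoStreamMode.Write))
{
    encrypt.Write(plaintext, 0, plaintext.Length);
}

// Decrypt FROM the base stream: bytes read are the decrypted plaintext again.
using (var input = File.OpenRead("encrypted.bin"))
using (var decrypt = new CryptoStream(input, aes.CreateDecryptor(), CryptoStreamMode.Read))
{
    var read = decrypt.Read(plaintext, 0, plaintext.Length);
}
```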
So in the end this BcStream is a wrapper that produces valid BC data in the baseStream at all times.
So we are thinking the same then. I just wanted to make sure that the same instance of a stream is not used for both, since that would mess up the internal buffers.
I mean, the same stream instance IS used in both, just in different directions.
Wouldn't you use it like this though:
```csharp
// Encode
var fs = File.OpenWrite("filename");
var bcs = new BcStream(fs, CompressionFormat.Bc1);
bcs.Write(raw_rgba_data);
```

```csharp
// Decode
var fs = File.OpenRead("filename");
var bcs = new BcStream(fs, CompressionFormat.Bc1);
var data = new byte[length_of_data];
int bytesRead = bcs.Read(data);
```
Not like:
```csharp
var fs = File.Open("filename", FileMode.Open);
var bcs = new BcStream(fs, CompressionFormat.Bc1);
var data = new byte[raw_rgba_data.Length];
bcs.Write(raw_rgba_data);
int bytesRead = bcs.Read(data);
```
The second one doesn't make sense.
I don't exactly know what the second code snippet is even meant to achieve, so that might be the part that makes no sense in general. But your first snippet is exactly how it would and should be done, and also how I already implement it.
So a new instance of the stream is created. We were talking about the same thing then.
Oh, your first snippet was meant to actually open two separate streams? Then no, it isn't correct. It is the same baseStream for both operations, as already mentioned. Maybe we can discuss this in more detail on some platform with less delay? I have Discord.
This will be redone after more discussion and closed for now.
This implements my current idea for a BcStream as discussed in issue #27.