ultraq closed this 4 years ago
Gonna close this one - many of the encoding/decoding methods can't know the resulting size of the buffer ahead of time, and so I'd end up needing to make something that grows/shrinks accordingly, and that doesn't sound efficient.
Gonna retry this - some things I've learned make it a bit easier, eg: the IMA ADPCM decoder always has a fixed 4x compression ratio, or I can pass a "decompressed size hint" parameter for the cases where I need to.
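For the fixed-ratio case, the idea above could look something like this minimal Java sketch (class and method names are hypothetical, not from the actual codebase):

```java
import java.nio.ByteBuffer;

class ImaAdpcmSizing {

	// IMA ADPCM always expands 4x, so the decoder can size its own
	// output buffer from the input alone, no caller hint needed.
	static final int COMPRESSION_RATIO = 4;

	static ByteBuffer outputBufferFor(ByteBuffer input) {
		return ByteBuffer.allocate(input.remaining() * COMPRESSION_RATIO);
	}
}
```

eg: a 1024-byte compressed chunk would get a 4096-byte output buffer.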
Gave it another shot, and found another reason to keep it as is: the decoder doesn't really have the context to know what is best. eg: if we wanted to save allocations and reduce the number of GC runs, then it might be best to reuse an existing buffer rather than creating a new one every time. But the decoder doesn't know this, so its only real option is to always create new buffers to fit the data. Sure, it could try to reuse a single buffer, but again that would have to be controlled from the calling code, which has more context as to what is or isn't a good idea.
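The caller-controls-the-buffer style being kept here could be sketched like this (toy Java, hypothetical names; the copy loop stands in for real decoding work):

```java
import java.nio.ByteBuffer;

class ReusableDecode {

	// The decoder writes into whatever buffer the caller supplies, so
	// the calling code decides whether to reuse one buffer across many
	// calls (fewer allocations/GCs) or allocate a fresh one each time.
	static void decodeInto(ByteBuffer input, ByteBuffer output) {
		while (input.hasRemaining() && output.hasRemaining()) {
			output.put(input.get()); // stand-in for real decoding
		}
		output.flip(); // ready the output for reading
	}
}
```

A caller with the right context can then do `scratch.clear()` and pass the same `scratch` buffer into `decodeInto` on every iteration of its loop, something a decoder that always allocates internally can't offer.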
To make the methods more "function-ish", have these interfaces return the resulting data rather than expecting a destination buffer to be passed in.
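The "function-ish" style this issue proposes might look like the following Java sketch (interface and class names are illustrative only; the passthrough copy stands in for real decompression):

```java
import java.nio.ByteBuffer;

// Proposed style: decode returns a freshly allocated result instead of
// filling a destination buffer supplied by the caller.
interface Decoder {
	ByteBuffer decode(ByteBuffer input);
}

class PassthroughDecoder implements Decoder {

	// Toy implementation: copies the input into a new right-sized
	// buffer; a real decoder would decompress into it instead.
	public ByteBuffer decode(ByteBuffer input) {
		ByteBuffer result = ByteBuffer.allocate(input.remaining());
		result.put(input).flip();
		return result;
	}
}
```

This reads nicely at the call site, but as the comments above conclude, it forces the decoder to allocate on every call, taking the reuse decision away from the code that actually has the context to make it.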