zesinger / libserum

Cross platform dynamic library code for Serum file format
GNU General Public License v2.0

compression per frame #39

Open mkalkbrenner opened 9 months ago

mkalkbrenner commented 9 months ago

@zesinger @jsm174 At the moment, a cRZ file is stored compressed and gets fully decompressed before a table starts. Having all frames decompressed requires a lot of memory, especially if we consider microcontrollers for real pinball machines. Therefore, I suggest compressing single frames individually using miniz within the Serum file. If a frame is matched, the colorized frame could be decompressed on the fly. That would save a huge amount of memory!
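
A minimal sketch of what the per-frame approach could look like with miniz's zlib-style helpers, assuming a raw frame of `frame_size` bytes; the function and variable names here are illustrative, not existing libserum APIs:

```c
#include <stdlib.h>
#include "miniz.h"

/* Compress one frame when writing the Serum file. Returns the compressed
 * size (to be stored next to the compressed data), or 0 on failure. */
mz_ulong compress_frame(const unsigned char* raw_frame, mz_ulong frame_size,
                        unsigned char** comp_frame)
{
    mz_ulong comp_len = mz_compressBound(frame_size);
    *comp_frame = (unsigned char*)malloc(comp_len);
    if (!*comp_frame) return 0;
    /* Lowest/fastest level is enough: the goal is memory, not ratio. */
    if (mz_compress2(*comp_frame, &comp_len, raw_frame, frame_size,
                     MZ_BEST_SPEED) != MZ_OK) {
        free(*comp_frame);
        *comp_frame = NULL;
        return 0;
    }
    return comp_len;
}

/* Decompress a single frame on the fly once it has been matched. */
int decompress_frame(const unsigned char* comp_frame, mz_ulong comp_len,
                     unsigned char* raw_frame, mz_ulong frame_size)
{
    mz_ulong out_len = frame_size;
    return mz_uncompress(raw_frame, &out_len, comp_frame, comp_len) == MZ_OK
           && out_len == frame_size;
}
```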

Regarding ZeDMD, this step would also save time, because in some modes, we compress frames with miniz to reduce the data to be sent. So libzedmd should have an option to directly return the compressed frame.
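
Purely as a hypothetical illustration of that option (none of these names are real libzedmd or libserum APIs): if the frame is already stored as a deflate stream, the colorizer could hand that buffer through unchanged instead of decompressing and re-compressing it before sending.

```c
/* Hypothetical frame reference handed from the colorizer to the send path. */
typedef struct {
    const unsigned char* data; /* deflate stream as stored in the cRZ file */
    unsigned int size;         /* compressed size in bytes */
    int is_compressed;         /* 1 = pass through as-is, 0 = compress before sending */
} zedmd_frame_ref;
```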

zesinger commented 9 months ago

It could be a real-pin version of the Serum file (still open source, of course). As per-frame compression is by far less efficient (and certainly less optimized than decompressing once at start), I prefer to keep the overall compression for virtual pin versions. Some people are already complaining about slowdowns, and adding compression for every frame wouldn't help. So if you post an issue for ColorizingDMD, I could add an rpin save button.

mkalkbrenner commented 9 months ago

First of all, it is great that you will support the proposal. But I don't think it is just an issue for ColorizingDMD. For sure, libserum needs to be able to handle the compression. And instead of an entirely separate format, I prefer to define an extension of the Serum format in a backward-compatible way, for example using the Serum version and a metadata block at the end of the file. One of these metadata entries could be a flag indicating whether frames are compressed.

In general I don't agree with the assumption that decompression will always slow down a system. We're not interested in the highest compression ratio, which would require significant CPU time. Even the lowest and fastest compression will reduce memory consumption by approximately 70% to 80%. And also on a low-end VPin, memory consumption could hurt performance more than a bit of CPU time for decompression, especially if you consider that most players will never see some of the frames.

We could also implement a cache within libserum that keeps an uncompressed frame in memory after its first usage. That cache could be configurable regarding its size and its cleaning strategy. And finally we could add a configuration option to decompress all frames into that cache at loading time. The result would be the same as the current implementation, so we would have all possibilities in one format!

Another advantage would be that we wouldn't need unzipping to the file system anymore, which would be great for VPX Standalone running from USB drives.

I know that all of this might sound complicated, but I'm convinced that there's not much code needed to implement it. And I can work on it. @jsm174, what is your opinion?
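
To make the idea concrete, here is a rough sketch of such a backward-compatible extension plus a small frame cache. The magic value, field names and cache layout are assumptions for illustration only, not a defined format:

```c
#include <stdint.h>

/* Hypothetical metadata block appended after the existing Serum data;
 * old readers simply ignore it. */
typedef struct {
    uint32_t magic;             /* e.g. a marker to detect the optional block */
    uint32_t flags;             /* bit 0: frames are stored individually compressed */
    uint32_t cache_hint_frames; /* suggested decompressed-frame cache size */
} serum_meta_block;

#define SERUM_META_FLAG_COMPRESSED_FRAMES 0x1

/* A very small decompressed-frame cache: keep the last frames that were
 * actually matched so a repeated frame is not inflated twice. */
typedef struct {
    uint32_t frame_id;
    unsigned char* pixels;      /* NULL when the slot is empty */
} cached_frame;

#define CACHE_SLOTS 32
static cached_frame cache[CACHE_SLOTS];

/* Return the cached pixels for frame_id, or NULL if not cached yet. */
unsigned char* cache_lookup(uint32_t frame_id)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].pixels && cache[i].frame_id == frame_id)
            return cache[i].pixels;
    return NULL;
}
```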

mkalkbrenner commented 9 months ago

Let me explain the approach in a bit more detail.

zesinger commented 9 months ago

For the moment, I'll stick to an uncompressed file, but I'll add this possibility in the next generations. Thanks

zesinger commented 9 months ago

The problem with your issue is that I would have to regroup all the data for a given frame, instead of having a common background ID array, a common frame definition array, a common dynamic mask array, a common background mask array, a common palette array, and so on. Because if not:

- the compression ratio would be ridiculous if you compress all the frames separately in all the arrays,
- many pointer tables would be needed,
- and for a single frame you would need to decompress all of them.

So, for me, there will definitely be another release file optimized for memory-lacking systems, but really not for the others.
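
An illustrative sketch of the regrouping described above: all data belonging to one frame is serialized into a single record and deflated as one block, so matching a frame means locating and inflating exactly one stream. The sizes and field names are assumptions, not the actual Serum format:

```c
#include <stdint.h>

/* One frame's worth of data gathered into a single record; this whole
 * struct would be compressed as one block. */
typedef struct {
    uint16_t background_id;             /* entry in the background table */
    uint8_t  frame[128 * 32];           /* colorized frame definition */
    uint8_t  dynamic_mask[128 * 32];    /* dynamic content mask */
    uint8_t  background_mask[128 * 32]; /* background mask */
    uint8_t  palette[64 * 3];           /* per-frame RGB palette */
} serum_frame_record;

/* A single offset table is still needed so frame N's compressed block can
 * be located without decompressing its predecessors. */
typedef struct {
    uint32_t offset;  /* byte offset of the compressed record in the file */
    uint32_t size;    /* compressed size of the record */
} serum_frame_index_entry;
```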