Zeenobit / moonshine_save

A save/load framework for the Bevy game engine.
MIT License
81 stars · 9 forks

Allow saving to any output stream and loading from any input stream #14

Open kleinesfilmroellchen opened 1 month ago

kleinesfilmroellchen commented 1 month ago

moonshine_save is currently hardcoded to save to a file system path. There are a few reasons why saving to any Write stream would be useful:

In the same vein (and especially for something like compression), loading from any Read stream would be useful.

On platforms that have file system access, the current workaround is to use a dynamic path provider that sets up an OS temp file to write into, and then to read that file back. On platforms without file system access (the web; Android and iOS also have limitations), moonshine_save is not usable.
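For reference, a minimal sketch of that workaround using the tempfile crate (how the temporary path actually gets handed to the save pipeline's path provider is elided here):

```rust
use std::fs;
use std::io;

// Sketch of the desktop-only workaround: point the save pipeline at a
// temporary path, wait for the save to finish, then read the bytes back
// and forward them to wherever they actually need to go.
fn read_back_temp_save() -> io::Result<Vec<u8>> {
    let temp = tempfile::NamedTempFile::new()?;
    let path = temp.path().to_path_buf();

    // ...hand `path` to the save pipeline here and let it write the file...

    fs::read(path)
}
```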

API design for this seems very straightforward: instead of having .into_file*, we'd have .into_stream*, and analogously for loading.
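Roughly, a sketch of what the usage could look like; into_file follows the existing builder API as far as I can tell, while into_stream and the way the sink would be supplied are hypothetical:

```rust
use bevy::prelude::*;
use moonshine_save::prelude::*;

fn main() {
    let mut app = App::new();

    // Today: the save pipeline finishes into a filesystem path.
    app.add_systems(PreUpdate, save_default().into_file("world.ron"));

    // Proposed (hypothetical names): finish into any `impl Write` instead,
    // e.g. an in-memory buffer, a network socket, or a compression encoder.
    // In practice the sink would probably be supplied via a resource or a
    // closure so it can be read back after the save systems run:
    //
    //     app.add_systems(PreUpdate, save_default().into_stream(Vec::<u8>::new()));
    //
    // ...and analogously for loading, from any `impl Read`.
}
```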

Zeenobit commented 1 month ago

I specifically designed this crate to be able to do this! :D

That's why the save pipelines are the way they are. Like you said, it should be relatively straightforward to modify them to change the input/output streams. I'm also currently thinking of adding a "processor" to the save pipeline builder (to handle compression).
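For illustration, compression composes naturally once the pipeline writes to a generic stream, because a processor is essentially a writer adaptor. A rough sketch with the flate2 crate (not an API this crate has yet):

```rust
use std::fs::File;
use std::io::Write;

use flate2::write::GzEncoder;
use flate2::Compression;

// A compression "processor" wraps another `Write` and is itself a `Write`,
// so it can sit between the serializer and any sink: a file, an in-memory
// buffer, or a network stream.
fn write_compressed(serialized: &[u8]) -> std::io::Result<()> {
    let file = File::create("world.ron.gz")?;
    let mut encoder = GzEncoder::new(file, Compression::default());
    encoder.write_all(serialized)?;
    encoder.finish()?; // writes the gzip trailer and returns the inner writer
    Ok(())
}
```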

There are 2 main issues blocking me:

  1. Time ⏱️ -- I'll take any help I can get! (Remember to fork from moonshine_core if you'd like to open a PR)
  2. I want to try and make the Save/Load pipelines more symmetric before they start diverging too much.

Specifically regarding the second issue, right now the save pipeline uses a SavePipelineBuilder and DynamicSavePipelineBuilder. These structs aren't as modular as I was hoping, mainly because I've been struggling to store a "partial" pipeline that would allow the user to inject other steps into the pipeline.

I don't think this feature requires injecting an extra step (like you said, we can add .into_stream* as a pipeline "finisher"), but the problem is more on the load side.

There is no Load Pipeline. The API when loading is a bit different, which makes the builder logic a bit more tricky. And without a builder, the different permutations of load configurations would introduce a very messy API.

For example, now we have load_from_file_on_request and load_from_file_on_request_with_mapper. I don't want to add load_from_network_stream_on_request_with_mapper_and_decompressor, so we need a load pipeline.
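As a standalone sketch (not this crate's API) of why a builder scales better than one function name per configuration, each concern becomes a composable step instead of part of the function name:

```rust
use std::io::Read;

// Minimal illustration: the source is any `Read`, and optional steps
// (mapper, decompressor, ...) are added via the builder.
struct LoadPipeline<R: Read> {
    source: R,
    mapper: Option<fn(&mut Vec<u8>)>,
}

fn load_from<R: Read>(source: R) -> LoadPipeline<R> {
    LoadPipeline { source, mapper: None }
}

impl<R: Read> LoadPipeline<R> {
    // Optional steps compose on the builder instead of multiplying names.
    fn with_mapper(mut self, mapper: fn(&mut Vec<u8>)) -> Self {
        self.mapper = Some(mapper);
        self
    }

    fn run(mut self) -> std::io::Result<Vec<u8>> {
        let mut bytes = Vec::new();
        self.source.read_to_end(&mut bytes)?;
        if let Some(mapper) = self.mapper {
            mapper(&mut bytes);
        }
        Ok(bytes)
    }
}

fn main() -> std::io::Result<()> {
    // Any `Read` works as a source: a file, a network socket, or a
    // decompressor wrapping either of them.
    let bytes = load_from(&b"( entities: {} )"[..])
        .with_mapper(|b| b.retain(|c| !c.is_ascii_whitespace()))
        .run()?;
    println!("{}", String::from_utf8_lossy(&bytes));
    Ok(())
}
```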

Zeenobit commented 1 month ago

As of d830dedb86458eb5471742859b1589b6fee18688, there is now a LoadPipelineBuilder.

I finally managed to crack it by using a different format for the load pipeline. This changes the load pipeline syntax entirely, so there is some documentation and clean-up work to be done.

Zeenobit commented 2 weeks ago

Stream support for Save/Load pipelines: 8fe5def887a68c2c5700c444f4193a4db6a42597 and 25721ceff6fbe8db042cf46f6219af5617d22c1b

Documentation is still WIP.

The pipelines could also probably be implemented in terms of each other (since a File is just a stream), but that's an optimization/clean-up pass.
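Roughly, the file-based finisher could delegate to the stream-based one along these lines (function names here are illustrative, not the actual pipeline internals):

```rust
use std::fs::File;
use std::io::{self, Write};
use std::path::Path;

// `File` already implements `Write`, so writing to a file is just the
// stream case with a particular sink.
fn save_into_stream(mut sink: impl Write, serialized: &[u8]) -> io::Result<()> {
    sink.write_all(serialized)?;
    sink.flush()
}

fn save_into_file(path: impl AsRef<Path>, serialized: &[u8]) -> io::Result<()> {
    save_into_stream(File::create(path)?, serialized)
}
```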

kleinesfilmroellchen commented 1 week ago

Sorry for being absent here, I haven't looked at my project in a while. I will try and see how this works versus my tempfile approach and report back.