It wasn't an explicit design goal, but I guess it indeed only works with objects in memory. However, even big files eventually have to be loaded into memory to be worked with.
What exactly are you suggesting should be done instead?
Read only the part of the data that you need at the current moment. Data that has already been processed should be removed from memory or saved to disk immediately.
I'm talking about something like this:
https://stackoverflow.com/questions/63237807/does-ziparchive-load-entire-zip-file-into-memory
https://referencesource.microsoft.com/#WindowsBase/Base/MS/Internal/IO/Zip/ZipArchive.cs,040dfbf7c78aba7a
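For example, something roughly like the sketch below, which uses Foundation's `FileHandle` to keep only one chunk in memory at a time. This is just an illustration of the idea; the paths, chunk size, and per-chunk work are placeholders, and SWCompression is not involved:

```swift
import Foundation

// A minimal sketch of chunk-by-chunk processing with Foundation's FileHandle.
// The paths, chunk size, and per-chunk "work" are placeholders; this does not
// use SWCompression, it only illustrates the idea.
func processInChunks(from inputURL: URL, to outputURL: URL,
                     chunkSize: Int = 4 * 1024 * 1024) throws {
    guard FileManager.default.createFile(atPath: outputURL.path, contents: nil) else {
        throw CocoaError(.fileWriteUnknown)
    }
    let input = try FileHandle(forReadingFrom: inputURL)
    let output = try FileHandle(forWritingTo: outputURL)
    defer {
        try? input.close()
        try? output.close()
    }

    // Only one chunk is held in memory at a time: read, process, write, repeat.
    while let chunk = try input.read(upToCount: chunkSize), !chunk.isEmpty {
        let processed = chunk // placeholder for the actual per-chunk work
        try output.write(contentsOf: processed)
    }
}
```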
OK, I see.
I think there are two questions being asked:
1. The ability to interact with files on the filesystem directly. This functionality is absent by design, since SWCompression is intended to be abstract over the source of the data.
2. The ability to process data in a streaming manner while still not being tied to the actual source. This, on the other hand, is a reasonable feature to have, and it should allow doing what you're suggesting (not keeping all the data in memory).
I have considered the second issue at various points in time, and I think it can be resolved, for example, by providing APIs that take Foundation's `InputStream`, or maybe even `FileHandle`, as an input argument. However, at this point this is all purely theoretical speculation, so I am not even completely sure it will work in the intended way.
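To illustrate, such an `InputStream`-based entry point could look roughly like the sketch below. This is purely speculative: none of these names exist in SWCompression, and the eventual design could be quite different.

```swift
import Foundation

// A rough, purely speculative sketch of an InputStream-based entry point.
// None of the names below exist in SWCompression; they only illustrate
// how a streaming API could avoid holding the whole file in memory.
enum StreamingUnarchiver {
    /// Reads from `stream` in bounded chunks and hands each block of
    /// (eventually decompressed) output to `handler`, so the caller can
    /// write it straight to disk and let it be released.
    static func unarchive(from stream: InputStream,
                          bufferSize: Int = 64 * 1024,
                          handler: (Data) throws -> Void) throws {
        stream.open()
        defer { stream.close() }

        var buffer = [UInt8](repeating: 0, count: bufferSize)
        while stream.hasBytesAvailable {
            let count = stream.read(&buffer, maxLength: buffer.count)
            guard count > 0 else { break }
            // A real implementation would feed these bytes into an
            // incremental decoder; this sketch simply forwards the raw
            // chunk to show the shape of the call.
            try handler(Data(buffer[0..<count]))
        }
    }
}

// Hypothetical usage: stream a large archive from disk to disk.
// let input = InputStream(url: URL(fileURLWithPath: "/path/to/big.archive"))!
// try StreamingUnarchiver.unarchive(from: input) { chunk in
//     // append `chunk` to an output file instead of keeping it in memory
// }
```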
So, to summarize, I guess there are some plans to try to do something along the lines of what you want, but I can't say when it will happen.
In all your tests you work with objects in memory. What about big files, for example 6 or 7 GB?