Closed: ahoarau closed this issue 8 months ago.
Hello Antoine,
I apologise for the very late reply. I have been busy lately and forgot about my hobby project.
The principal design goal of rapidobj is fast parsing. The way it works is, a file is divided into chunks that are parsed in parallel and loaded asynchronously using low-level OS functions. I am no expert on C++ streams, but the word is that they are slow and inflexible; I am concerned that this interface would violate the performance goal.
However, if all you need is to load a file from memory, perhaps a simpler interface (e.g. std::string_view) would work fine?
Could you tell me about your use case, so I can understand the issue you are trying to solve better?
Cheers, Slobo
Hello @guybrush77. I understand your point, and indeed it would hurt performance. Sometimes I can load from an actual file, but I also have a use case where another system loads the data from somewhere, and all I get is a std::istream as input.
I cannot save to a temporary file either.
I understand. I am a bit reluctant to introduce an API call that's guaranteed to be slow. Of course, I can document it, but sometimes people don't read documentation.
How big are the files you are dealing with? Is it an option for you to read the file from the stream and write it to a memory buffer? I could introduce a function like ParseText(const char*, size_t). The downside of this approach is that, for large files, your peak memory usage will be pretty bad.
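The stream-to-buffer workaround described above can be sketched with the standard library alone. Note that ParseText(const char*, size_t) is only a proposed function at this point in the thread, so the call site below is a hypothetical, shown as a comment:

```cpp
#include <istream>
#include <sstream>
#include <string>

// Read an entire std::istream into a contiguous in-memory buffer so it
// could be handed to a hypothetical rapidobj::ParseText(const char*, size_t).
// Caveat from the discussion: the whole file is resident at once, so peak
// memory usage grows with file size.
std::string SlurpStream(std::istream& in) {
    std::ostringstream buffer;
    buffer << in.rdbuf();  // copy everything from the stream's buffer
    return buffer.str();
}

// Usage (the parse call is hypothetical):
//   std::string text = SlurpStream(stream);
//   auto result = rapidobj::ParseText(text.data(), text.size());
```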
If another system loads an .obj file and passes you a std::istream, what if that .obj contained a reference to a material library file (mtllib)? How is this handled? Or maybe it is not a concern in your use case?
Definitely not a concern here.
I have implemented a new API function, rapidobj::ParseStream(std::istream&), on the ParseStream branch. Please let me know if it works for you.
For now, this is considered an experimental feature. In particular, if your stream happens to contain the mtllib keyword, the function will fail with a "Material file not found." error. If I decide to merge this branch into the main branch, I will first need to make the handling of material files more flexible and robust.
Hello! I'm ready to test the branch; could you rebase it? Thanks a bunch!
We second this request. For our CI, we usually want to avoid reading from disk and favor loading data from memory instead. What's the current status here?
These are actually two different requests.
@ahoarau wants to load the file from a std::istream. This is going to be much slower for large files, because reading from a sequential source precludes parallelizing the parse.
@K3rn1n4tor would like to load from memory. The API could look like rapidobj::ParseString(std::string_view) or something to that effect. In this case, parallelization is possible, but it still won't be quite as fast as the regular ParseFile call: rapidobj is optimized for parsing large files stored on block devices.
I will implement rapidobj::ParseStream(std::istream&) first, because @ahoarau was the first to make an API request. The ParseStream branch already exists, but it has diverged from the main branch, so merging might take me a few days. Please stay tuned.
@guybrush77 no worries about istreams being much slower; as long as loading from a file stays fast, that's fine from a user's point of view.
@ahoarau I have added the ParseStream() function. Could you try the latest master? Let me know if it works for you.
@K3rn1n4tor Is ParseStream() sufficient for your purposes? If the files you are loading from memory aren't too large, you should be able to use istrstream. But if they are large, I could implement ParseString() or something similar.
I'm currently working on other items but will test this within the next two weeks. Maybe it's enough to have this. I'll let you know. Otherwise, we need something to operate on char or byte arrays. :)
The new API is implemented, tested and released (v1.1).
Hello, this library seems very nice. Would it be possible to add the ability to open files from std::istream? Thanks!