Closed virtualritz closed 3 years ago
Good point! This would be a useful thing to add.
I have this ready for a PR. But because this added two variants to the finalizer method, I refactored the API a tad. It looks like this:
```rust
let device = oidn::Device::new();
oidn::RayTracing::new(&device)
    .image_dimensions(width, height)
    .albedo_normal(&albedo, &normal)
    .hdr(true)
    .filter_in_place(&mut color)
    .expect("Error denoising image.");
```
There is of course also `albedo()`. This API mirrors the previous `execute(_with_albedo(_normal))` as well as the one before that, which still had `set_albedo()` & `set_normal()`, but without having to expand this to six finalizer variants when adding the option to filter in place.
One could fully revert this to the previous way of using separate `albedo()` and `normal()` methods with compile-time checks, without users of this crate killing themselves, once this ticket is implemented.
The user can finalize with either `filter(&input, &mut output)` or the above `filter_in_place(&mut color)`.
The methods finalizing builders in the wild commonly seem to be a single verb. So instead of copying the overly verbose C++ API, I suggest we use either the descriptive `filter` or the more martial `execute`, but not both.
I also took the liberty of removing the `set_` prefix on the composition methods. In my experience using various builders in Rust, the setter methods have no prefix if there are no getters. For an example with getters see e.g. `std::net::SocketAddr`; for one without, which is very much like oidn-rs, see e.g. `std::process::Command`.
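For instance, `std::process::Command` composes with prefix-free methods and finalizes with a single verb (`output`, `spawn`, `status`):

```rust
use std::process::Command;

fn main() {
    // Composition methods (arg, env, current_dir, …) carry no set_
    // prefix and have no matching getters; the finalizer is one verb.
    let output = Command::new("echo")
        .arg("hello")
        .output()
        .expect("failed to run echo");
    assert!(output.status.success());
    println!("{}", String::from_utf8_lossy(&output.stdout).trim());
}
```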
Sometimes they also start with `with_`, but when I asked about that on the Rust Discord, the old hands said they have no idea why, other than personal preference and an unfortunate absence of guidelines on naming for builder methods.
Let me know how that sits with you.
Hey, if you don't like my change proposal I'm happy to find another way of adding this or go with a suggestion from you.
I think this seems decent. My one thought, though, would be how this would work when re-using the same filter setup again. There does seem to be a bit of first-run overhead in OIDN, so if every time we want to run a filter we have to build a new one, that will add some avoidable overhead to the filtering cost.

However, I'm realizing the way the current API is written may also force this, since we re-commit the filter each time you call the current filter methods. But doing something that keeps the pointers around would put us back to having the lifetime issues from before :(
I'm not sure if there's a good way to provide both options somehow.
Could you open a PR for this? I think this would work well for the API, or we can work on it a bit in the PR. There are few users of this crate, so I think going back and forth a bit on the API isn't a big deal.
> I think this seems decent, my one thought though would be on how this would work with re-using the same filter setup again. There does seem to be a bit of first-run overhead in OIDN, so if every time we want to run a filter we have to build a new one that will add some avoidable overhead to the filtering cost.
Hmm, not sure I understand. I mean this should work:
```rust
let device = oidn::Device::new();
let mut denoiser = oidn::RayTracing::new(&device)
    .image_dimensions(width, height)
    .albedo_normal(&albedo, &normal)
    .hdr(true);

denoiser
    .filter_in_place(&mut color)
    .expect("Error denoising image.");

denoiser
    .albedo_normal(&albedo2, &normal2)
    .filter(&color2, &mut denoised_color2)
    .expect("Error denoising image.");
```
I didn't know there was a first run overhead. Is the overhead in setting up the device or the filter? Is OIDN thread safe?
Ah, so maybe I'm missing something from not having looked at the code yet, but I'd think these lines:

```rust
let mut denoiser = oidn::RayTracing::new(&device)
    .image_dimensions(width, height)
    .albedo_normal(&albedo, &normal)
    .hdr(true);
```

would have to hang on to `albedo` and `normal`, similar to what I had before in the library and removed for https://github.com/Twinklebear/oidn-rs/issues/5?
It seems like there is some first-run overhead for the filter; it could be some memory allocation or just caching effects. In a different test I found the first run of the filter at 1080p took ~176 ms, while the 2nd and onward were more around 120 ms. As for thread safety: yes, but see https://github.com/OpenImageDenoise/oidn#intel-open-image-denoise-api about thread safety and serialization. It sounds like if you really want to make parallel API calls, you should use different devices.
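To pin down where that overhead sits, one could time successive runs of the same filter. A minimal sketch of such a measurement, with a trivial stand-in for the real OIDN filter call (so the numbers here say nothing about OIDN itself):

```rust
use std::time::Instant;

// Stand-in for a real denoise pass: touches the whole buffer once.
fn filter(buf: &mut [f32]) {
    for v in buf.iter_mut() {
        *v = v.sqrt();
    }
}

fn main() {
    // Roughly a 1080p RGB float buffer.
    let mut buf = vec![4.0f32; 1920 * 1080 * 3];
    for run in 0..3 {
        let t = Instant::now();
        filter(&mut buf);
        // First run typically includes allocation/caching effects.
        println!("run {}: {:?}", run, t.elapsed());
    }
}
```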
> Could you open a PR for this?
> Would have to hang on to albedo and normal, similar to what I had before in the library and removed for #5?
Yes, at least until you call `drop` on the builder struct.
See the `denoise_exr` example for how to change the definition order of those slices vs. the builder to pacify the borrow checker.
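That ordering trick can be sketched with a toy builder standing in for the real oidn-rs types (the names below are illustrative, not the crate's actual API): values drop in reverse declaration order, so slices declared *before* the builder outlive every borrow it holds.

```rust
// Toy builder holding a borrowed albedo slice, like the proposed API.
struct Builder<'a> {
    albedo: Option<&'a [f32]>,
}

impl<'a> Builder<'a> {
    fn new() -> Self {
        Builder { albedo: None }
    }
    fn albedo(mut self, a: &'a [f32]) -> Self {
        self.albedo = Some(a);
        self
    }
    // Stand-in for the real denoise call: mixes albedo into color.
    fn filter_in_place(&self, color: &mut [f32]) {
        if let Some(a) = self.albedo {
            for (c, al) in color.iter_mut().zip(a) {
                *c += *al;
            }
        }
    }
}

fn main() {
    let albedo = vec![1.0f32; 4]; // declared first → dropped last
    let mut color = vec![0.5f32; 4];
    let builder = Builder::new().albedo(&albedo);
    builder.filter_in_place(&mut color);
    assert_eq!(color, vec![1.5f32; 4]);
}
```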
If that were to become an issue (that someone reported), I'd suggest adding two extra versions of `albedo()` and `albedo_normal()` that move the respective slice(s) into the builder, i.e. take ownership.
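Such an owning variant could be sketched like this; `Builder` and `albedo_owned` are hypothetical names, not the actual oidn-rs API. Storing the buffer as a `Cow` lets one field hold either a borrowed slice or an owned `Vec`:

```rust
use std::borrow::Cow;

struct Builder<'a> {
    albedo: Option<Cow<'a, [f32]>>,
}

impl<'a> Builder<'a> {
    fn new() -> Self {
        Builder { albedo: None }
    }
    // Borrowing variant: ties the builder to the slice's lifetime.
    fn albedo(mut self, a: &'a [f32]) -> Self {
        self.albedo = Some(Cow::Borrowed(a));
        self
    }
    // Hypothetical owning variant: no borrow, so no lifetime coupling.
    fn albedo_owned(mut self, a: Vec<f32>) -> Self {
        self.albedo = Some(Cow::Owned(a));
        self
    }
}

fn main() {
    // The owned builder borrows nothing, so it can outlive the scope
    // where its data was created.
    let owned = {
        let data = vec![0.25f32; 4];
        Builder::new().albedo_owned(data)
    };
    assert!(matches!(owned.albedo, Some(Cow::Owned(_))));

    let data = vec![0.1f32; 2];
    let borrowed = Builder::new().albedo(&data);
    assert!(matches!(borrowed.albedo, Some(Cow::Borrowed(_))));
}
```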
Implemented in this PR.
The OIDN API docs state that any of the input buffers can be reused for output without causing issues. It would be great to have at least the option of re-using the color input as the output.
With the current Rust API this is not possible, because you can't pass in a `&color` and a `&mut color` at the same time without upsetting the borrow-checker god. Basically we would need an `execute_mut()` or something like that.
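A minimal illustration of the aliasing problem and the in-place escape hatch; `denoise_in_place` is a hypothetical stand-in, not the actual crate API:

```rust
// One exclusive borrow over a single buffer sidesteps the conflict that
// a shared &color plus a mutable &mut color would trigger.
fn denoise_in_place(color: &mut [f32]) {
    // Stand-in for the real filter: reads and writes the same buffer.
    for c in color.iter_mut() {
        *c *= 0.5;
    }
}

fn main() {
    let mut color = vec![2.0f32; 4];
    // denoise(&color, &mut color); // would not compile: shared + mutable borrow
    denoise_in_place(&mut color); // one exclusive borrow, no conflict
    assert_eq!(color, vec![1.0f32; 4]);
}
```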