Can this work with the Mitsuba Blender plugin?
Hi Steven,
The Blender plugin is not directly compatible but you can always modify the .xml
file produced by the plugin manually. See some of the various test scenes in this repository for inspiration on how to do that.
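To give a rough idea, the manual edits usually amount to switching to one of the SMS integrators and tagging the objects involved in the caustic. Just a sketch with placeholder file and material names -- the exact flags and integrator parameters are best copied from one of the test scenes:

```xml
<!-- Sketch: replace the integrator written by the exporter with an SMS one -->
<integrator type="path_sms_ss">
    <integer name="max_depth" value="6"/>
</integrator>

<!-- Sketch: tag the specular object that should cast the caustic
     ("meshes/glass.obj" and "glass-material" are placeholders) -->
<shape type="obj">
    <string name="filename" value="meshes/glass.obj"/>
    <ref id="glass-material"/>
    <boolean name="caustic_caster_single" value="true"/>
</shape>
```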
Best,
Tizian
Hi Tizian, how can I export that .xml? When I pointed the addon to the compiled build and tried exporting a scene, it says this. Maybe I compiled something wrong? I just followed the instructions exactly as in the docs. I also tried using an unofficial exporter I found on GitHub; it creates an .xml plus folders with the .obj files and textures. I tried rendering that through the command line and got an error saying some plugin wasn't found. I'm going to try that again, and this time I'll take a look at your examples folder and see what I can do.
I'm really really interested in playing around with this.
Ok, I tried rendering the teaser scene and this is what it said. Meanwhile, I went to the plugins folder and found it empty.
So I tried building the project once again to show you the output, in case it helps. I went to the specified path in the src folder and there was no `normalmap_to_flakes.exe`.
Hi Zachary, I've added a PR with two small changes to the cmake setup. Can you try to build on that branch maybe? https://github.com/tizian/specular-manifold-sampling/pull/4
Thanks! Best, Tizian
Regarding the Blender plugin: You might need to compile the main Mitsuba 2 repository before following these instructions: https://github.com/mitsuba-renderer/mitsuba2-blender
As pointed out in one of the errors in the screenshot above, using consistent Python versions (3.7.x for Blender 2.82) is very important.
Build successful, but is this okay?
Ok, it says it wants to render it for 4 days xD
Great, so I'll go ahead then and merge the fixes into the main branch. I guess there might be very rare NaNs, but I don't think you have to worry about that.
Regarding the long render times: The default settings in the scripts correspond to the quality that was rendered for the paper -- and many of them render for some fixed time (e.g. 5 minutes). The calculated "remaining time" does not reflect that properly. Also, as mentioned in the "Results" section, we rendered on a cluster node with two Xeon 6132 processors, each with 14 cores at 2.6 GHz, so you might in any case want to reduce the time/samples/resolution.
EDIT: Oh, and I see you're trying to render the teaser scene. That one took a really long time, even on that compute node.
Here's my result with 64 samples at 720p. Is there any way to denoise the image using different passes, etc.?
Looks like it's working, yay! :)
We don't have any denoising support in Mitsuba currently and I'm not 100% sure what would be required for that.
You might be able to get some additional output buffers that are useful for that with the `aov` integrator, but we've never tested this. I'm afraid you're on your own for that one.
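If you want to try it, the `aov` integrator is wrapped around the regular integrator roughly like this -- a sketch from memory of the Mitsuba 2 documentation, untested with the SMS integrators, and the channel names before the colons are arbitrary labels:

```xml
<!-- Sketch: request auxiliary buffers (depth, shading normals) alongside the image.
     Untested in combination with the SMS integrators. -->
<integrator type="aov">
    <!-- "depth_out" / "normal_out" are user-chosen channel names -->
    <string name="aovs" value="depth_out:depth, normal_out:sh_normal"/>

    <!-- the actual image is still produced by the nested integrator -->
    <integrator type="path"/>
</integrator>
```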
(And actually for this specific scene, I don't think you would get any meaningful output from that because the first hit in the scene will always be the glass plane in the front.)
Then... how do I make the render seed random for each render output? Maybe I could get something if I used a temporal denoiser... or if I rendered several times with a new seed and then averaged them.
The `independent` sampler takes a `seed` parameter. But at that point it's not so different from just setting a higher sample count.
With a temporal video denoiser, I could try to render an animation and denoise that in video software. But I need each frame to have a new random seed for that.
Is there any way to not set a sample count and instead set a timer halt condition, to stop rendering once it reaches, let's say, 5 minutes?
Yes, inside the `<integrator>` tags:

```xml
<integer name="samples_per_pass" value="1"/>
<float name="timeout" value="300"/>
```

while setting the `sample_count` to something very high that won't be reached before the timeout.
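Putting both together, it could look roughly like this (a sketch; use whichever SMS integrator your scene already declares):

```xml
<!-- Sketch: render until the timeout instead of a fixed sample budget -->
<integrator type="path_sms_teaser">
    <!-- one sample per pass so the timeout is checked often -->
    <integer name="samples_per_pass" value="1"/>
    <!-- stop after 300 seconds -->
    <float name="timeout" value="300"/>
</integrator>

<sampler type="independent">
    <!-- high enough that the timeout always triggers first -->
    <integer name="sample_count" value="9999"/>
</sampler>
```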
In your pool scene, I can see that the noise is changing each frame, which means that the seed is updating. But in the `independent` sampler declaration for the pool scene, there's nothing to specify this. Does that mean it's doing that by default?
What do you mean each frame? Each frame of the animation? The normal map is animated and changes each frame. This of course affects the sampling and you get different noise.
Is there a variable indicating the frame number that I can plug into the seed, then?
Sorry, at this point I'm a bit confused about what you'd like to achieve. I don't think the normal map frames for the animated sequence are in the repository. Is that what you're interested in?
If you'd like to render the same image with different noise, just changing the `seed` parameter of the sampler should do it.
I'm trying to figure out what I need to set up in the scene file to make it change the seed with every rendered frame. It does support rendering animations, right? At least you guys did it for the paper.
There's gotta be a frame number variable, which I could theoretically use as a new seed every frame.
There is nothing special for animations, just a series of individual renderings. You can do this:

```xml
<sampler type="independent">
    <integer name="sample_count" value="64"/>
    <integer name="seed" value="X"/>
</sampler>
```

and increment `X` by one after each rendering.
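If you don't want to edit the file between frames, Mitsuba's parameter substitution should also work for this. A sketch -- I believe the scene XML supports `<default>` / `$name` placeholders and the command line accepts `-D name=value`, but double-check; `frame` is just a placeholder name here:

```xml
<!-- Sketch: make the seed a scene parameter that defaults to 0 -->
<default name="frame" value="0"/>

<sampler type="independent">
    <integer name="sample_count" value="64"/>
    <!-- reuse the frame number as the per-frame random seed -->
    <integer name="seed" value="$frame"/>
</sampler>
```

and then render each frame with something like `mitsuba -D frame=17 scene.xml` from whatever script drives the animation.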
Ok, I've attached the image rendered in the LuxCore engine and the one rendered for the same amount of time in Mitsuba 2 SMS. What could be wrong with the caustics? Here's my scene file:
```xml
<integrator type="path_sms_teaser">
<integer name="max_depth" value="6"/>
<integer name="rr_depth" value="5"/>
<boolean name="hide_emitters" value="false"/>
<integer name="samples_per_pass" value="1"/>
<float name="timeout" value="300"/>
<boolean name="caustics_enabled" value="true"/>
<boolean name="caustics_biased" value="false"/>
<boolean name="caustics_twostage" value="true"/>
<integer name="caustics_max_iterations" value="20"/>
<float name="caustics_solver_threshold" value="0.00001"/>
<float name="caustics_uniqueness_threshold" value="0.00002"/>
<integer name="caustics_max_trials" value="500"/>
<integer name="caustics_bounces" value="2"/>
<boolean name="non_sms_paths_enabled" value="true"/>
</integrator>
<sensor type="thinlens">
<film type="hdrfilm">
<integer name="width" value="640"/>
<integer name="height" value="480"/>
<string name="file_format" value = "openexr"/>
</film>
<float name="fov" value="39.597755335771296"/>
<float name="aperture_radius" value="0.0"/>
<sampler type="independent">
<integer name="sample_count" value="9999"/>
</sampler>
<transform name="to_world">
<lookat
origin="3.326305389404297, -12.600282669067383, 9.00119400024414"
target="3.023153305053711, -11.808876991271973, 8.470369338989258"
up="-0.1898798793554306, 0.49570244550704956, 0.8474814295768738"
/>
</transform>
</sensor>
<emitter type = "envmap" >
<string name="filename" value="textures/cape_hill_2k.hdr"/>
<float name="scale" value="1.0"/>
</emitter>
<emitter type = "point" >
<spectrum name="intensity" value="1000.0"/>
<transform name="to_world">
<translate value="1.8793362379074097, 1.9558740854263306, 3.1696720123291016"/>
</transform>
</emitter>
<bsdf type="diffuse" id="Material.003">
<rgb name="reflectance" value="0.800000011920929 0.800000011920929 0.800000011920929"/>
</bsdf>
<shape type="obj">
<string name="filename" value="meshes00001/Plane.obj"/>
<ref id="Material.003"/>
</shape>
<bsdf type="dielectric" id="Material.001">
<string name="int_ior" value="water"/>
<string name="ext_ior" value="air"/>
<rgb name="specular_reflectance" value="1.0 1.0 1.0"/>
<rgb name="specular_transmittance" value="1.0, 1.0, 1.0"/>
</bsdf>
<shape type="obj">
<string name="filename" value="meshes00001/Circle.002.obj"/>
<ref id="Material.001"/>
</shape>
```
Seems like you hardcoded integrator types for each scene... How do I work with these or create my own?
Please keep in mind that we try to be very clear in the paper that we don't present a complete rendering algorithm that can be thrown at any sort of scene. Instead, we introduce a new sampling technique that can be used as a building block when designing new algorithms. How to do that in a good way is still very much an open question (e.g. which light source / specular shape(s) you want to sample as a seed path, how to combine it with existing sampling techniques, ...). Especially for cases with multiple specular interactions (such as the one you sent above) these things are really unclear, so the integrators we have that do this should be treated as proofs of concept and are not expected to translate to arbitrary scenes.
If you'd like to experiment with scenes with one specular reflection/refraction (such as most of the scenes in our paper), I'd suggest you take a look at one of the simpler scenes (e.g. the "Ring" scene). There you can see that we flag objects in the scene that will participate in SMS:
A surface/shape that can receive caustics from SMS is tagged with `<boolean name="caustic_receiver" value="true"/>`.
A light source that can be sampled at the start of SMS is tagged with `<boolean name="caustic_emitter_single" value="true"/>`.
A specular surface that will be sampled during SMS is tagged with `<boolean name="caustic_caster_single" value="true"/>`.
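Put together in a scene, this could look roughly like the following -- a sketch modeled on the test scenes; file names, materials and the light intensity are placeholders, and the flags sit directly inside the respective shape/emitter declarations:

```xml
<integrator type="path_sms_ss">
    <integer name="max_depth" value="6"/>
</integrator>

<!-- light source that SMS may sample at the start of a seed path -->
<emitter type="point">
    <spectrum name="intensity" value="100.0"/>
    <boolean name="caustic_emitter_single" value="true"/>
    <transform name="to_world">
        <translate value="0, 0, 5"/>
    </transform>
</emitter>

<!-- specular surface that casts the caustic -->
<shape type="obj">
    <string name="filename" value="meshes/ring.obj"/>
    <bsdf type="conductor"/>
    <boolean name="caustic_caster_single" value="true"/>
</shape>

<!-- diffuse surface that receives the caustic -->
<shape type="obj">
    <string name="filename" value="meshes/plane.obj"/>
    <bsdf type="diffuse"/>
    <boolean name="caustic_receiver" value="true"/>
</shape>
```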
Please have a look at `path_sms_ss` to see how these are then used. It's mostly a normal path tracer, but in specific cases where we know that SMS will perform better, it replaces the default brute-force sampling. There are a lot of comments that I hope are helpful.
Best, Tizian
Oh, I'm sorry, I didn't read the paper, only watched the videos you posted and the video by Two Minute Papers on YT. When I saw that you modified Mitsuba, I assumed you had integrated your new technique into the render engine. You've made it very clear now that it's a proof of concept and doesn't translate to a custom scene setup without additional work from render engine developers.
I'm going to take my time and investigate the possibility of integrating SMS into Cycles.