Autodesk / sitoa

Arnold plugin for Softimage
Apache License 2.0

Softimage can't handle the bucket rate #67

Closed JenusL closed 4 years ago

JenusL commented 5 years ago

I first mentioned this in #63 but thought a new ticket would be better for it. The problem is that Softimage doesn't update the Render Region (and maybe the render window as well) quickly enough for Arnold, which leads to a delay in displaying the rendering.

I have experimented with different bucket sizes, and really large buckets seem to make things better. I just tried the largest bucket size (256) with the GPU renderer and it updates OK. At the default bucket size of 64, the render region is never updated for me with GPU. So I would really appreciate it if someone could brainstorm with me about why Softimage can't handle the smaller buckets. I don't think it's the bucket size itself that matters: smaller buckets mean more buckets, and I think Softimage does get them into the framebuffer but just can't show them (redraw call?) quickly enough.

I will try older versions of SItoA to see when this started to happen, or whether it's always been like this. It could also be something in later Windows 10 versions or newer Nvidia drivers causing it. Anyway, we need to find a fix for it.
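To illustrate the "smaller buckets = more buckets" point, here is a quick back-of-the-envelope sketch (not SItoA code) that counts how many buckets tile a render region at each bucket size, assuming roughly one display update per finished bucket:

```python
import math

def bucket_count(width, height, bucket_size):
    """Number of buckets needed to tile a width x height region."""
    return math.ceil(width / bucket_size) * math.ceil(height / bucket_size)

# Hypothetical 1920x1080 render region.
for size in (16, 32, 64, 128, 256):
    n = bucket_count(1920, 1080, size)
    print(f"bucket size {size:>3}: {n:>5} buckets per pass")
```

At a bucket size of 64 the region is covered by 510 buckets, versus only 40 at 256, so dropping the bucket size can flood the host application with an order of magnitude more update requests per pass.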

JenusL commented 5 years ago

One ugly workaround I can think of is increasing the bucket size automatically at lower IPR levels and when rendering progressive/GPU. But that's only if we can't come up with something better.

JenusL commented 5 years ago

I found the Spectre and Meltdown patches in Windows 10 to be a major slowdown of Softimage with my particular CPU. I disabled the patches on my system and Softimage is orders of magnitude more responsive, like it used to be a year or so ago. Overall, everything in Windows became much more responsive. WARNING! Disable the patches at your own risk and make sure you know what you're doing.

Even after this speedup, Softimage still can't really handle the bucket rate, so I need to try some more things.

@sjannuz When rendering progressive and GPU, what role does the bucket size play, other than setting the size of the buckets written by driver_exr? My idea right now is to override the bucket size when rendering progressive or GPU, in the render region only. I will also do it at the lower IPR levels for standard rendering.

sjannuz commented 5 years ago

I asked Alan about that; here's what he wrote me:

"I'd say that for progressive mode you'll be wanting larger buckets, since what matters there are frequent updates to the entire image in passes, and not so much the tiny regions of a bucket. Progressive passes, with their few samples per pixel per update, are much quicker to refresh than non-progressive. Exactly how much bigger is hard to say. You don't want there to be fewer buckets than cores, since it's slightly more efficient for cores to work on their own buckets in isolation."

JenusL commented 5 years ago

Thanks for checking. Then I will test dynamically changing the bucket size in the render region depending on progressive/GPU, how large a region you draw, etc., and see what happens.

JenusL commented 5 years ago

This is implemented in https://github.com/Autodesk/sitoa/pull/68/commits/91d4b8ae13bc178df895b2de9e774425d667d491 with a new option, enlarge_buckets, that enlarges the buckets automatically. While doing this I discovered a bug in Arnold with certain bucket sizes on GPU. It's reproducible in kick, so I will report it to AD.

JenusL commented 5 years ago

In https://github.com/Autodesk/sitoa/pull/68/commits/da39843d63d351b4d6134ec658b3ad65e2272140 I made the bucket size dependent on the number of CPU cores, so that the total number of buckets is the number of CPU cores × 2.
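The heuristic described above can be sketched roughly as follows. This is not the SItoA code; the function name, the square-bucket assumption, and the 512 px cap are illustrative assumptions:

```python
import math

def enlarged_bucket_size(width, height, num_cores, max_size=512):
    """Pick a bucket side length so the region is covered by
    roughly (num_cores * 2) buckets. Illustrative sketch only;
    the real implementation may round and clamp differently."""
    target_buckets = num_cores * 2
    # Side of a square bucket whose area tiles the region ~target_buckets times.
    side = math.sqrt((width * height) / target_buckets)
    return min(max_size, math.ceil(side))

# Hypothetical example: 4-core machine, 1920x1080 render region.
print(enlarged_bucket_size(1920, 1080, 4))  # -> 510
```

Because the region must still be tiled by whole buckets, the actual bucket count comes out near, not exactly at, cores × 2.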

furby-tm commented 4 years ago

Hey guys, did you ever find a good implementation to fix this, or was simply making the bucket size bigger the only option there was to reduce viewport redraw calls? Obviously full-screen redraws would be optimal here, but I wasn't sure whether that's possible to do in Arnold yet.

JenusL commented 4 years ago

@tyler-furby I added an "Enlarge buckets in progressive IPR" option that makes the buckets large enough that the total number of buckets required to cover an image is the number of CPU cores × 2. If enabled, it's still only active in progressive and GPU rendering. On my 4-core machine I got buckets around 300 px large with GPU, and that helped a lot. Are you having issues with this?

furby-tm commented 4 years ago

Yeah, the best option is to redraw only after the buffer is completely filled with all the buckets, so you aren't issuing a redraw every single time a new bucket is ready for the viewport.

JenusL commented 4 years ago

Softimage handles the redraw and that's not something we can control through the API.

furby-tm commented 4 years ago

Have a callback write to OpenGL via the AtDisplayOutput in the new rendering API's AtRenderUpdateCallback, rather than displaying each time a bucket writes out data in the driver's driver_process_bucket. In driver_process_bucket you can just copy the bucket to the appropriate spot in the framebuffer, but not actually display the framebuffer until you ask for it.

The idea is that this should ensure no extra redraws are issued, by waiting until the entire framebuffer is filled before redrawing again.

However, I'm unfamiliar with the underlying architecture of Softimage; this is how it should work in most other DCC software.
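The deferred-redraw pattern being proposed can be sketched in a framework-free way like this. This is a simplified model, not Arnold or Softimage API code; it assumes buckets within one pass never overlap, and redraw() stands in for whatever viewport update call the host exposes:

```python
class DeferredFrameBuffer:
    """Accumulates bucket writes and only triggers a redraw once
    the whole frame has been covered (one redraw per pass)."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.pixels = [[0] * width for _ in range(height)]
        self.filled = 0    # pixels written in the current pass
        self.redraws = 0   # how many viewport redraws were issued

    def process_bucket(self, x, y, w, h, data):
        # Copy the bucket into place (what a driver's bucket callback would do).
        for j in range(h):
            self.pixels[y + j][x:x + w] = data[j]
        # Assumes buckets in a pass never overlap.
        self.filled += w * h
        if self.filled >= self.width * self.height:  # whole pass is in
            self.redraw()
            self.filled = 0

    def redraw(self):
        self.redraws += 1  # stand-in for the real viewport update

# Four 2x2 buckets cover a 4x4 frame but cause only one redraw.
fb = DeferredFrameBuffer(4, 4)
for bx in range(2):
    for by in range(2):
        fb.process_bucket(bx * 2, by * 2, 2, 2, [[1, 1], [1, 1]])
print(fb.redraws)  # -> 1
```

Per the comment above, though, Softimage itself decides when the Render Region repaints, so whether this pattern can be applied there depends on what its API exposes.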