dreamworksanimation / openmoonray

MoonRay is DreamWorks’ open-source, award-winning, state-of-the-art production MCRT renderer.
https://openmoonray.org/
Apache License 2.0

Vulkan support #78

Open orowith2os opened 1 year ago

orowith2os commented 1 year ago

MoonRay currently relies on proprietary NVIDIA technologies for GPU work, and falls back to the CPU when needed. It would be better to support Vulkan instead, which also handles compute and is an open standard. That would allow MoonRay to work on any system that supports Vulkan, including, but not limited to, Qualcomm, Intel, AMD, and NVIDIA hardware, as well as several operating systems (Windows, Linux, Android, the BSD family, and even macOS via MoltenVK).

As Vulkan also supports graphics, MoonRay could benefit there as well, since Vulkan allows compute and graphics tasks to be integrated closely.

mday-dwa commented 1 year ago

Thank you for your suggestion!

heavyrain266 commented 1 year ago

The Vulkan API is primarily designed for real-time rendering, and its compute shaders aren't really suited to the offline rendering MoonRay performs.

Some host tools could use Vulkan, but not MoonRay itself. For example, if you want real-time previews of your shots, take a look at Pixar's Hydra and MoonRay's Hydra delegate. Hydra has built-in support for OpenGL and Metal, with Vulkan support pending.

MoonRay itself uses software (CPU) rendering, with an XPU mode that attaches a single GPU (e.g. an NVIDIA A6000) to perform a quick denoise of rendered shots using OptiX.

As a side note, AMD's HIP can execute CUDA code. More generally, a better idea would be to use OpenCL as the fallback for the denoiser, if you want to run MoonRay on a single host without CUDA/HIP support, or on a renderfarm whose GPUs can only execute OpenCL, such as those from Intel or Imagination (PowerVR).

orowith2os commented 1 year ago

I figure Vulkan would work really well here: MoonRay is used for rendering, and Vulkan supports both rendering and compute, so at the very least it could handle one of those tasks. I recall being able to mix some OpenCL and Vulkan tasks too.

Wouldn't it basically be a game running at an insanely low refresh rate? The goal isn't to make it like a full game where you can view the scene in real time.

I also thought almost every GPU made in the past few years supports Vulkan. There are even drivers in the works for PowerVR.

heavyrain266 commented 1 year ago

Hmm, I think you don't fully understand how MoonRay really works. It renders shots through parallel computation, known as offline rendering, performed on a single host, a renderfarm, and/or a supercomputer.

MoonRay implements the Monte Carlo path-tracing method, and current GPUs, even stacked together, will be much slower than multiple multithreaded CPUs. This is because CPUs can handle far more rays/bounces in comparison.

Since Vulkan is designed for real-time rendering (at X fps) at home, using a single GPU rather than renderfarms or supercomputers, it will not be able to easily process the terabytes of data MoonRay uses to compose a movie or series. Current GPUs and APIs are simply too primitive to fully support the parallel computation of Monte Carlo path tracing at the quality and ray/bounce counts possible on a stack of CPUs. This is why MoonRay uses a single GPU only for denoising final images.

Edit: because of those limitations, Pixar decided to implement Hydra as a real-time companion to RenderMan, which itself is based on offline CPU rendering with GPU denoising, just like MoonRay. They both call it XPU mode, and if I understand correctly, RenderMan now offloads a few more tasks to the GPU, but not all of them.
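To make the "many CPU cores, independent rays" argument concrete, here is a toy Python sketch (nothing to do with MoonRay's actual code, and the "radiance" function is a made-up stand-in): a path tracer's inner loop is an embarrassingly parallel Monte Carlo sum over rays, so the work splits cleanly across workers whose partial sums are simply added at the end.

```python
# Toy sketch, NOT MoonRay code: the core of a Monte Carlo renderer is an
# embarrassingly parallel sum over independent random ray samples, which
# is why it scales across many CPU cores. In a C++ renderer these would
# be native OS threads; Python's GIL serializes this pool, but the
# structure of the computation is the same.
import random
from concurrent.futures import ThreadPoolExecutor

def trace_batch(seed, n_rays):
    """Stand-in for 'trace n_rays random paths and sum their radiance'.

    Here the 'radiance' of a sample u is just u*u, so the batch mean
    converges to the integral of u^2 over [0, 1], i.e. 1/3.
    """
    rng = random.Random(seed)  # per-worker RNG: no shared state
    return sum(rng.random() ** 2 for _ in range(n_rays))

def render_estimate(n_workers=4, rays_per_worker=100_000):
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(trace_batch, seed, rays_per_worker)
                   for seed in range(n_workers)]
        partial_sums = [f.result() for f in futures]
    # Partial results combine by plain addition -- no synchronization
    # is needed beyond the final reduction.
    return sum(partial_sums) / (n_workers * rays_per_worker)

if __name__ == "__main__":
    print(render_estimate())  # close to 1/3
```

Because the samples are independent, adding more workers (or more renderfarm hosts) changes only how the batches are split, not the result.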

orowith2os commented 1 year ago

Vulkan can be used for real-time rendering, yes, but nothing stops it from being used in renderfarms across several GPUs, or for general compute tasks like AI or image rendering. It was made to better represent how a modern GPU works, not for real-time rendering in particular.

Vulkan was even made with highly parallel tasks in mind, and its low-level nature makes that quite easy to achieve.

From what I see, MoonRay needs:

Vulkan appears to provide all of this.

Based on my current knowledge of how both of these work, it couldn't be a better fit.

Modern GPUs also have dedicated hardware for realistic path tracing, which Vulkan supports as well. Many modern GPUs, even workstation ones, support the Vulkan API.

bobboli commented 1 year ago

Well, Vulkan compute (or its equivalent in other APIs like D3D) is comparable to dedicated GPGPU compute APIs (CUDA, OpenCL) only to some extent. The main difference is that Vulkan is, by nature, a graphics API dedicated to real-time applications (especially video games), and its compute APIs are supplements to the graphics pipeline. As a result, many of its styles, idioms, and abstractions are based on concepts from the graphics side.

For example, most fundamentally, even pointers and member functions do not exist in compute shaders, as opposed to CUDA kernels. Instead, you must manipulate your data through Vulkan abstractions like Uniform Buffer Objects (UBOs), Shader Storage Buffer Objects (SSBOs), and textures (images and samplers). Not to mention that CUDA has tons of mature HPC libraries (e.g., cuBLAS), profiling tools, etc.

In a word, GPGPU is a superset of graphics. Offline rendering, albeit a topic in graphics, is dramatically different from real-time graphics pipelines that utilize hardware rasterization.

I personally am also troubled by CUDA's vendor-specific issues. I would like to do some physics simulation, and I could figure out how to implement it in CUDA. However, even thinking about how to deal with the lack of pointers in compute shaders gives me a headache… So if you have ideas or findings about general guidelines for porting CUDA programs to Vulkan, please let me know 😊

orowith2os commented 1 year ago

> In a word, GPGPU is a superset of graphics. Offline rendering, albeit a topic in graphics, is dramatically different from real-time graphics pipelines that utilize hardware rasterization.

I'm under the impression that offline rendering can be achieved in a similar way to real-time rendering: basically "crank up the details really high and let it cook". What are the differences between rendering an image in MoonRay and in a game? Not computing physics, just rendering the final image.

heavyrain266 commented 1 year ago

MoonRay renders each pixel of the image separately by simulating the behavior of light and its interaction with the surfaces in the scene. This is what we call offline path tracing. To speed up the rendering process, a limited number of samples is used, which results in a pretty noisy image.
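The noise-versus-speed trade-off follows from Monte Carlo statistics: per-pixel error shrinks like 1/sqrt(N) in the sample count, so quadrupling the samples only halves the noise. A tiny Python sketch (using a made-up radiance variable, not MoonRay's estimator) shows the effect:

```python
# Toy sketch (not MoonRay's algorithm): each pixel's value is a Monte
# Carlo estimate, so with few samples per pixel the image is noisy, and
# the error only shrinks like 1/sqrt(N) as samples are added.
import random

def pixel_estimate(n_samples, rng):
    # Stand-in for "simulate one light path and return its radiance":
    # a uniform random variable whose true mean is 0.5.
    return sum(rng.random() for _ in range(n_samples)) / n_samples

def mean_abs_error(n_samples, n_pixels=2000, seed=1):
    """Average per-pixel error over a batch of simulated pixels."""
    rng = random.Random(seed)
    return sum(abs(pixel_estimate(n_samples, rng) - 0.5)
               for _ in range(n_pixels)) / n_pixels

# Going from 16 to 256 samples per pixel (16x the work) cuts the
# average error only by a factor of roughly 4 -- hence the appeal of
# stopping early with a noisy image and denoising it afterwards.
```

This 1/sqrt(N) scaling is exactly why a cheap denoising pass on a noisy image can be much faster than rendering enough samples for a clean one.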

Real-time rendering in games is based on rasterization: the whole frame is rendered at once, and its content is updated based on the player's input and the results of a real-time simulation of the world. The process works by projecting polygons onto the screen, and only the visible content of the scene is rendered, to minimize resource usage and speed things up.
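The "projecting polygons onto the screen" step can be sketched in a few lines (a toy perspective divide, not real engine code; GPUs run this vertex stage in parallel over whole vertex streams, then fill in the pixels each visible triangle covers):

```python
# Toy sketch of the rasterization idea described above, not game-engine
# code: project 3D triangle vertices onto a 2D screen plane with a
# simple perspective divide, discarding geometry behind the camera.

def project_vertex(x, y, z, focal=1.0):
    """Perspective projection: points twice as far away appear half as big."""
    if z <= 0:
        return None  # behind the camera; a real rasterizer would clip this
    return (focal * x / z, focal * y / z)

# A triangle sitting 2 units in front of the camera:
triangle = [(-1.0, -1.0, 2.0), (1.0, -1.0, 2.0), (0.0, 1.0, 2.0)]
screen = [project_vertex(*v) for v in triangle]
# screen == [(-0.5, -0.5), (0.5, -0.5), (0.0, 0.5)]
```

Contrast this with path tracing, which starts from the pixel and traces rays back into the scene: rasterization starts from the geometry and asks which pixels it lands on.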

heavyrain266 commented 1 year ago

This video (link below) from Walt Disney Animation Studios is a pretty good and informative explanation of how path tracing works. It will help you understand the difference between it and the immediate rasterization used in games.

https://www.youtube.com/watch?v=frLwRLS_ZR0