gt-flexr / FleXR

FleXR: A System Enabling Flexibly Distributed Extended Reality
MIT License

Add a kernel of Vulkan pipeline for VR #47

Open jheo4 opened 3 years ago

jheo4 commented 3 years ago

There are a couple of steps to work through to resolve this issue.

sb-renderpass commented 3 years ago

Do you have a VM or docker image with the dependencies pre-installed to replicate the build environment?

jheo4 commented 3 years ago

@sb-renderpass As mentioned, you can pull the docker image jheo4/flexr:focal from Docker Hub. The image is CPU-only because the GPU setup is more complicated due to the dependencies. I set it up with all the dependencies, including Vulkan with the LLVM pipeline.

For your task, I think it would be fine to develop with the CPU image for now; below are some references that can be helpful for setting up.

https://www.pluralsight.com/guides/create-docker-images-docker-hub
https://askubuntu.com/questions/1161646/is-it-possible-to-run-docker-container-and-show-its-graphical-application-window

In short, you can pull the container image from my Docker Hub repo and create a container with the display connected. :) Please try to build flexr in it. Since we have a meeting scheduled tomorrow, I can help you more with any problems you face during these steps.
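For reference, here is a minimal sketch of those steps on an X11 host. The image tag comes from this thread; the display flags are the standard X11-forwarding approach from the askubuntu link above and may need adjusting for your machine:

```sh
# Pull the CPU-only image mentioned above.
docker pull jheo4/flexr:focal

# Allow local containers to talk to the host X server.
xhost +local:docker

# Start a container with the host display attached so GUI windows
# opened inside the container show up on the host.
docker run -it \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  jheo4/flexr:focal bash
```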

sb-renderpass commented 3 years ago

Progress Update:

[image: result]

Next Milestone:

sb-renderpass commented 3 years ago

Progress Update:

[image: result]

Next Milestone:

Not going to add any more graphical features, as this is a good enough baseline and more features would take more time. More focus on FleXR moving forward. However, in the long term we could consider adding the following:

jheo4 commented 3 years ago

It looks great; that's a lot of good progress. BTW, I have a quick question. As you add cameras, do you think there is a way to render them in a single pass? As an example, when you think of VR headsets, there are two eye cameras. Since each camera needs its own rendering instructions, it would require two passes. I just want to hear your thoughts.

sb-renderpass commented 3 years ago

Yes, and that's what is done in VR pipelines.

There are several ways to drive 2 (or more) cameras with a single renderer for VR. One approach is to take a real camera as input and internally generate 2 virtual cameras at an offset from one another, then render the scene twice, configured to output the 2 rendered frames into the same image (or separate ones). This can be done fairly easily with instanced rendering (see the linked explanation); a sketch follows.
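As a rough illustration of the instanced approach (a sketch under assumptions, not FleXR code; drawStereo and the viewport sizes are hypothetical): double the instance count of each draw call, and have the vertex shader pick the per-eye view-projection matrix from gl_InstanceIndex & 1 and route the instance to viewport 0 or 1 via gl_ViewportIndex. Writing gl_ViewportIndex from the vertex shader requires the multiViewport device feature plus VK_EXT_shader_viewport_index_layer.

```cpp
#include <vulkan/vulkan.h>

// Record one draw that covers both eyes: instance i is rendered for
// eye (i & 1), so the vertex shader indexes viewProj[gl_InstanceIndex & 1]
// and sets gl_ViewportIndex accordingly.
void drawStereo(VkCommandBuffer cmd, uint32_t indexCount, uint32_t meshInstances)
{
    // Two side-by-side viewports on one wide render target
    // (hypothetical 1440x1600 per eye). The pipeline must use dynamic
    // viewport state with viewportCount = 2.
    VkViewport eyes[2] = {
        {    0.0f, 0.0f, 1440.0f, 1600.0f, 0.0f, 1.0f },  // left eye
        { 1440.0f, 0.0f, 1440.0f, 1600.0f, 0.0f, 1.0f },  // right eye
    };
    vkCmdSetViewport(cmd, 0, 2, eyes);

    // Twice the instances: one copy of every mesh instance per eye.
    vkCmdDrawIndexed(cmd, indexCount, meshInstances * 2, 0, 0, 0);
}
```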

However, there are a lot of improvements to this approach that exploit the redundancy between the 2 cameras' data. For example, some Vulkan extensions like VK_KHR_multiview were specifically designed for this (see the linked example).
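For reference, a minimal sketch of the multiview route (assumptions: Vulkan 1.1 or the VK_KHR_multiview extension enabled; makeStereoMultiview is a hypothetical helper, not FleXR code). The render pass is created with a view mask covering two views, the framebuffer attachments are 2-layer array views, and the shader reads gl_ViewIndex (GL_EXT_multiview) to pick its per-eye matrices, so the scene is recorded once:

```cpp
#include <vulkan/vulkan.h>

VkRenderPassMultiviewCreateInfo makeStereoMultiview()
{
    // Bit i of the view mask enables view i for subpass 0:
    // 0b11 = views 0 and 1 (left and right eye).
    static const uint32_t viewMask        = 0b11;
    // The correlation mask hints that the two views are spatially
    // close, letting the driver exploit their redundancy.
    static const uint32_t correlationMask = 0b11;

    VkRenderPassMultiviewCreateInfo mv{};
    mv.sType                = VK_STRUCTURE_TYPE_RENDER_PASS_MULTIVIEW_CREATE_INFO;
    mv.subpassCount         = 1;
    mv.pViewMasks           = &viewMask;
    mv.correlationMaskCount = 1;
    mv.pCorrelationMasks    = &correlationMask;
    // Chain this struct into VkRenderPassCreateInfo::pNext when
    // creating the render pass; color/depth attachments must then be
    // 2-layer VK_IMAGE_VIEW_TYPE_2D_ARRAY views.
    return mv;
}
```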