
DepthViewer

(main image)

Using the MiDaS machine learning model, DepthViewer renders 2D videos/images as a 3D object with Unity, for VR.

Try Now

Outdated builds (less effective 3D)

Examples

| Original input (resized) | Plotted (MiDaS v2.1) | Projected | Src |
| --- | --- | --- | --- |
| example1_orig_resized | example1_plotted | example1_projected | # |

So what is this program?

This program is essentially a depthmap plotter with an integrated depthmap inferer and VR support.

(demo: basic usage)

The depthmaps can be cached to a file so that they can be loaded later.

(demo: depthmap caching)

Inputs

Models

The built-in model is the MiDaS v2.1 small model, which is ideal for real-time rendering.

Loading ONNX models

Tested onnx files:

In my experience, dpt_hybrid_384 seems to be more robust on drawn images (i.e. non-photos).

OnnxRuntime GPU execution providers

Recording 360 VR video

If you select a depthfile and its corresponding image/video, a sequence of .jpg files will be generated in Application.persistentDataPath. Go to that directory and execute

ffmpeg -framerate <FRAMERATE> -i %d.jpg <output.mp4>

Where <FRAMERATE> is the original FPS.
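For example, for frames rendered from a 30 fps source:

ffmpeg -framerate 30 -i %d.jpg output.mp4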

To add audio (this copies the video stream from <output.mp4> and the audio stream from <source.mp4>):

ffmpeg -i <source.mp4> -i <output.mp4> -c copy -map 1:v:0 -map 0:a:0 -shortest <output_w_audio.mp4>

Connecting to an image server

The server has to provide a jpg or png bytestring when requested, like this program: it captures the screen and returns a jpg file. I found it to be faster than the built-in screen capture (20 fps for a 1080p video).

Open the console with the backtick (`) key and execute the following (the URL is for the project above, targeting the second monitor):

httpinput localhost:5000/screencaptureserver/jpg?monitor_num=2
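For illustration, here is a minimal sketch of such a server in Python. It uses Flask, Pillow, and mss, none of which the actual screencaptureserver project necessarily uses; all that matters is that a GET request returns jpg bytes.

```python
# minimal_jpg_server.py -- a sketch only, not the actual screencaptureserver.
# Answers each GET request with a jpg bytestring of the requested monitor.
import io

from flask import Flask, Response, request  # pip install flask
from PIL import Image                        # pip install pillow
import mss                                   # pip install mss

app = Flask(__name__)

@app.route("/screencaptureserver/jpg")
def capture_jpg():
    monitor_num = int(request.args.get("monitor_num", 1))
    with mss.mss() as sct:
        shot = sct.grab(sct.monitors[monitor_num])   # raw BGRA pixels
        img = Image.frombytes("RGB", shot.size, shot.bgra, "raw", "BGRX")
    buf = io.BytesIO()
    img.save(buf, format="JPEG")                     # encode to jpg in memory
    return Response(buf.getvalue(), mimetype="image/jpeg")

if __name__ == "__main__":
    app.run(host="localhost", port=5000)             # matches the httpinput URL above
```

With something like this running, the httpinput command above would fetch its frames from the server.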

Importing/Exporting parameters for the mesh

After loading an image or a video while the Save the output toggle is on, enter the console command

e

This saves the current parameters (Scale, ...) into the depthfile so that they can be reused later.

Using ZeroMQ + Python + PyTorch/OnnxRuntime

May be unstable. Implemented after v0.8.11-beta.1.

  1. Run DEPTH/depthpy/depthmq.py. (Also see here for its dependencies, plus pyzmq is required)
  2. In the DepthViewer program, open the console and type zmq 5555.

Use python depthmq.py -h for more options such as the port (default: 5555) and the model (default: dpt_hybrid_384). To use OnnxRuntime instead of PyTorch, add --runner ort with --ort_ep cuda or --ort_ep dml; this requires onnxruntime-gpu or onnxruntime-directml, respectively.
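For example (assuming the long option spellings --port and --model; check -h for the exact names):

python depthmq.py --port 5555 --model dpt_hybrid_384

python depthmq.py --runner ort --ort_ep cuda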

Using ZeroMQ + Python + FFmpeg + PyTorch/OnnxRuntime

Gone are the days of VP9 errors and slow GIF decoding. Implemented after v0.8.11-beta.2.

  1. Run DEPTH/depthpy/ffpymq.py. Add --optimize for the float16 optimization.
  2. In the DepthViewer program, open the console and type zmq_id 5556. Now all video/GIF inputs are passed to the server, which returns the image and the depth. Use zmq_id -1 to disconnect. (See the sketch after this list.)
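Both depthmq.py and ffpymq.py act as ZeroMQ servers that the DepthViewer client connects to over TCP. The exact message format is defined in those scripts and not documented here; the sketch below is only a generic pyzmq reply loop (a simple request-reply socket is an assumption), to illustrate the pattern.

```python
# generic_zmq_rep.py -- NOT the DepthViewer wire format; a generic pyzmq
# reply loop (REQ/REP is an assumption) showing the request-reply pattern.
import zmq  # pip install pyzmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REP)
sock.bind("tcp://*:5556")      # same port passed to the zmq_id console command

while True:
    request = sock.recv()      # bytes sent by the client
    # ... decode the request, decode a frame / run depth inference ...
    reply = b"..."             # the image/depth payload would go here
    sock.send(reply)
```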

Tested formats:

Images

Videos

Others

Notes

Todo

Building

The Unity Editor version used: 2021.3.10f1

ONNX Runtime dll files

These dll files have to be in DEPTH/Assets/Plugins/OnnxRuntimeDlls/win-x64. They are in the NuGet package files (.nupkg); get them from:

Microsoft.ML.OnnxRuntime.Gpu => microsoft.ml.onnxruntime.gpu.1.13.1.nupkg/runtimes/win-x64/native/*.dll

Microsoft.ML.OnnxRuntime.Managed => microsoft.ml.onnxruntime.managed.1.13.1.nupkg/lib/netstandard1.1/*.dll
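Since a .nupkg file is just a zip archive, the dlls can also be pulled out with a few lines of Python (the file names below assume the 1.13.1 packages named above):

```python
# extract_ort_dlls.py -- sketch: copy the native/managed dlls out of the
# .nupkg archives (plain zip files) into the Unity plugin folder.
import fnmatch
import zipfile
from pathlib import Path

DEST = Path("DEPTH/Assets/Plugins/OnnxRuntimeDlls/win-x64")
PACKAGES = {
    "microsoft.ml.onnxruntime.gpu.1.13.1.nupkg": "runtimes/win-x64/native/*.dll",
    "microsoft.ml.onnxruntime.managed.1.13.1.nupkg": "lib/netstandard1.1/*.dll",
}

DEST.mkdir(parents=True, exist_ok=True)
for nupkg, pattern in PACKAGES.items():
    with zipfile.ZipFile(nupkg) as zf:
        for name in zf.namelist():
            if fnmatch.fnmatch(name, pattern):
                (DEST / Path(name).name).write_bytes(zf.read(name))  # flatten into DEST
                print(f"{nupkg}: extracted {name}")
```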

Misc

Libraries used

For Python scripts only:

Misc

Also check out

Remarks

2023 March 9

This project was started in September 2022 with the primary goal of using a monocular depth estimation ML model for VR headsets. I could not find any existing programs that fit this need, except for a closed-source program, VRin (link above). That program (then and still in Alpha 0.2) was the main inspiration for this project, but I needed more features like image inputs, other models, etc. As it was closed source, I grabbed a Unity/C# book and started generating a mesh from a script.

I gradually added features by trial and error rather than through planned development, which made the code a bit messy, and many parts of this program could have been done better. But after a series of iterations, I found the v0.8.7 build good enough for my personal use. So this project is on "indefinite hiatus" from now on, but I'm still open to minor feature requests and bug fixes.

I thank everyone who gave me compliments, advice, bug reports, and criticism.

Thank you.

Chanjin Park parkchamchi@gmail.com

2023 March 21

I'll still be updating this project, but it may be slow since school has started again.