-
Hello, does anyone know if it's possible to train with RGBD images using StyleGAN2? Is there a way to modify the network so that it can work with an extra channel for the image?
Thank you in advance fo…
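A minimal sketch of the data side of this, independent of StyleGAN2's actual loader (the function name and normalization choices here are illustrative assumptions): pack the RGB image and the depth map into a single 4-channel array, so that a generator/discriminator built with `img_channels=4` instead of 3 can consume RGBD samples.

```python
import numpy as np

# Hedged sketch (not StyleGAN2's actual data pipeline): stack an RGB image and
# a depth map into one 4-channel, channels-first array normalized to [-1, 1],
# the usual input range for GAN training.
def make_rgbd(rgb_u8, depth):
    """rgb_u8: (H, W, 3) uint8; depth: (H, W) float. Returns (4, H, W) float32 in [-1, 1]."""
    rgb = rgb_u8.astype(np.float32) / 127.5 - 1.0
    d = depth.astype(np.float32)
    d = (d - d.min()) / max(float(d.max() - d.min()), 1e-8) * 2.0 - 1.0
    rgbd = np.concatenate([rgb, d[..., None]], axis=-1)  # (H, W, 4)
    return rgbd.transpose(2, 0, 1)                       # channels-first for conv nets

sample = make_rgbd(np.zeros((64, 64, 3), np.uint8), np.random.rand(64, 64))
print(sample.shape)  # (4, 64, 64)
```

On the network side, the only structural change in principle is that the first convolution of the discriminator (and the last of the generator) must use 4 channels instead of 3; in the official StyleGAN2 codebases the channel count is typically derived from the training data, so check how `img_channels` is set there.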
-
Hello, I would like to ask: during training, the input in our training files is two images (an RGB image and an RGBD depth map), and the output is an RGBD depth map.
Taking the NYU Dataset as an example, do I need to first convert the .mat files into RGB images and RGBD depth maps before training, or can I just use the .mat files directly?
Thank you!
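If conversion is needed, a sketch of the usual approach: the labeled NYU Depth v2 `.mat` is a MATLAB v7.3 (HDF5) file, so it is typically opened with `h5py` rather than `scipy.io.loadmat`. The key names `images`/`depths` and the axis order below match the commonly reported layout of that file, but verify them against your copy; the helpers untangle the axes into ordinary image arrays you can save as PNGs.

```python
import numpy as np

# Hedged sketch for the labeled NYU Depth v2 .mat (assumed layout:
# f["images"] with shape (N, 3, W, H) uint8, f["depths"] with shape
# (N, W, H) float meters). These helpers reorder the axes into normal
# (H, W, C) image arrays.
def nyu_rgb_to_hwc(raw_rgb):
    """(3, W, H) uint8 from the .mat -> (H, W, 3) uint8 image array."""
    return np.transpose(raw_rgb, (2, 1, 0))

def nyu_depth_to_mm(raw_depth):
    """(W, H) float meters from the .mat -> (H, W) uint16 millimeters (PNG-safe)."""
    return (np.transpose(raw_depth, (1, 0)) * 1000.0).astype(np.uint16)

# Usage with h5py (key names are the commonly reported ones):
#   with h5py.File("nyu_depth_v2_labeled.mat", "r") as f:
#       rgb = nyu_rgb_to_hwc(f["images"][0])
#       depth = nyu_depth_to_mm(f["depths"][0])

print(nyu_rgb_to_hwc(np.zeros((3, 640, 480), np.uint8)).shape)  # (480, 640, 3)
print(nyu_depth_to_mm(np.ones((640, 480), np.float32)).shape)   # (480, 640)
```

Storing depth as 16-bit millimeter PNGs keeps it lossless; 8-bit PNGs would quantize the depth away.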
-
### Proposal
My proposal is to add a new `render_mode` to MuJoCo environments for when RGB **and** Depth images are required as observations, e.g. to create point clouds.
(related issue: #727)
##…
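A sketch of what such a mode would return, using the official `mujoco` Python bindings rather than the Gymnasium wrapper itself (`pack_rgbd`/`render_rgbd` are hypothetical names): `mujoco.Renderer.render()` gives the RGB frame, the same renderer gives depth after `enable_depth_rendering()`, and the new mode mostly has to pack the two into one observation.

```python
import numpy as np

# Hedged sketch of an "rgbd" observation: RGB and depth frames from the same
# camera, stacked into one (H, W, 4) float32 array.
def pack_rgbd(rgb, depth):
    """rgb: (H, W, 3) uint8; depth: (H, W) float32 meters -> (H, W, 4) float32."""
    assert rgb.shape[:2] == depth.shape
    return np.dstack([rgb.astype(np.float32), depth.astype(np.float32)])

def render_rgbd(model, data, camera, height=240, width=320):
    """Render both frames with the official bindings (requires a GL context)."""
    import mujoco
    renderer = mujoco.Renderer(model, height=height, width=width)
    renderer.update_scene(data, camera=camera)
    rgb = renderer.render()                  # (H, W, 3) uint8
    renderer.enable_depth_rendering()
    renderer.update_scene(data, camera=camera)
    depth = renderer.render()                # (H, W) float32, meters
    renderer.disable_depth_rendering()
    return pack_rgbd(rgb, depth)

obs = pack_rgbd(np.zeros((240, 320, 3), np.uint8), np.ones((240, 320), np.float32))
print(obs.shape)  # (240, 320, 4)
```

Keeping depth in meters (rather than the raw OpenGL depth buffer) is what downstream point-cloud code expects.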
-
Can I use this package with RGBD images? My custom model is a two-stream network that takes the RGB image in one stream and the depth image in the other. How can I specify `input_tensor` in such a case? Kindly ad…
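For context, a minimal sketch of the two-stream pattern being described (class and layer names here are illustrative, not this package's API): the model's forward pass receives two tensors, one per modality, rather than a single `input_tensor`, and fuses the stream features late by concatenation.

```python
import torch
import torch.nn as nn

# Hedged sketch of a two-stream RGBD classifier with late fusion.
class TwoStreamRGBD(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        def stream(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rgb_stream = stream(3)     # consumes (B, 3, H, W)
        self.depth_stream = stream(1)   # consumes (B, 1, H, W)
        self.head = nn.Linear(16 + 16, num_classes)  # concat-then-classify

    def forward(self, rgb, depth):
        feats = torch.cat([self.rgb_stream(rgb), self.depth_stream(depth)], dim=1)
        return self.head(feats)

model = TwoStreamRGBD()
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 10])
```

A package whose API assumes a single input tensor will need either two separate calls (one per stream) or a wrapper that splits a stacked 4-channel input back into its RGB and depth parts.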
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussion…
-
I am trying to run this on two images taken from a RealSense.
Is there code or documentation I can look at that explains how to input these into the model?
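A sketch of the capture side, using `pyrealsense2` (the official RealSense Python wrapper); the function names I define here are hypothetical, and the capture path needs a connected camera, so only the unit-conversion helper runs standalone:

```python
import numpy as np

# Hedged sketch: grab one color frame and one depth frame from a RealSense,
# align depth into the color viewpoint, and convert both to numpy arrays.
def capture_aligned_rgbd():
    import pyrealsense2 as rs            # requires a connected camera
    pipeline = rs.pipeline()
    profile = pipeline.start()           # default config: color + depth streams
    align = rs.align(rs.stream.color)    # reproject depth into the color frame
    try:
        frames = align.process(pipeline.wait_for_frames())
        color = np.asanyarray(frames.get_color_frame().get_data())      # (H, W, 3) uint8
        depth_raw = np.asanyarray(frames.get_depth_frame().get_data())  # (H, W) uint16
        scale = profile.get_device().first_depth_sensor().get_depth_scale()
        return color, depth_to_meters(depth_raw, scale)
    finally:
        pipeline.stop()

def depth_to_meters(depth_raw_u16, depth_scale=0.001):
    """RealSense depth frames are uint16 device units; multiply by the device scale."""
    return depth_raw_u16.astype(np.float32) * depth_scale

meters = depth_to_meters(np.array([[1000, 2500]], np.uint16))
print(meters)
```

Aligning depth to color before feeding the model matters: the two sensors have different viewpoints, so unaligned pixels do not correspond.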
-
Thanks for sharing this amazing work!
I found that the simulation can also set up a front-view stereo camera. Could you please let me know how to record these images?
-
I've set render_mode to full-render as in:
https://github.com/CMU-TBD/SocNavBench/issues/9#issuecomment-885727674
But I'm not sure how to get and use the RGBD data during the robot's "sense" step.…
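Once the RGBD arrays come out of the sense step, turning them into something usable is framework-agnostic; here is a sketch of back-projecting depth into a camera-frame point cloud via the pinhole model (the intrinsics `fx, fy, cx, cy` are assumptions to be taken from the simulator's camera config):

```python
import numpy as np

# Hedged sketch: back-project a depth image into 3D points in the camera frame
# using pinhole intrinsics. fx, fy are focal lengths in pixels; cx, cy is the
# principal point.
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: (H, W) meters -> (H*W, 3) XYZ points in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((2, 2), 2.0)                       # toy 2x2 depth image, all 2 m
pts = depth_to_point_cloud(depth, fx=100.0, fy=100.0, cx=1.0, cy=1.0)
print(pts.shape)  # (4, 3)
```

The RGB image can be flattened the same way and concatenated column-wise to color the points.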
-
Hi! Thanks for your great work! I am curious about how to get multi-view object images in the "Object Caption" step of your annotation pipeline. It seems that only a 3D point cloud and object bounding…
-
I am trying to run MonoGS on a custom dataset. It is not very straightforward, so I have some questions.
I collected the dataset using my iPhone: I recorded a video and extracted frames from it. If I run…