hlwang1124 / SNE-RoadSeg

SNE-RoadSeg for Freespace Detection in PyTorch, ECCV 2020
https://sites.google.com/view/sne-roadseg
MIT License

Inference on own data #56

Open · jb455 opened 1 year ago

jb455 commented 1 year ago

Hi, I'm trying to run inference with the pretrained model on my own data, captured with a RealSense camera. I've updated run_example.py as suggested, setting the rgb and depth paths and the camera parameters using the intrinsics obtained from the librealsense API, but the output normal.png is not as expected:

[attached image: normal]
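For reference, here's roughly how I'm reading the intrinsics out of librealsense (a sketch; the stream resolution here, and mapping fx/fy/ppx/ppy onto the camera-parameter matrix in run_example.py, are my assumptions):

```python
import pyrealsense2 as rs

# Start a depth stream and read its intrinsics.
# Resolution/framerate are assumptions; match your recording config.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
profile = pipeline.start(config)

intrin = (profile.get_stream(rs.stream.depth)
                 .as_video_stream_profile()
                 .get_intrinsics())
pipeline.stop()

# I plug these into the 3x3 intrinsic matrix that run_example.py
# passes to the SNE module (fx, fy, principal point u0/v0).
fx, fy = intrin.fx, intrin.fy
u0, v0 = intrin.ppx, intrin.ppy
print(fx, fy, u0, v0)
```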

I have checked my depth data; when I deproject to a point cloud it looks fine.
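For context, my deprojection check is just the standard pinhole back-projection; a minimal sketch, assuming uint16 depth in millimetres and the intrinsics above:

```python
import numpy as np

def deproject(depth_mm, fx, fy, u0, v0):
    """Back-project a depth image (uint16, millimetres) to an Nx3 point cloud in metres."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth_mm.astype(np.float32) / 1000.0         # mm -> m
    x = (u - u0) * z / fx
    y = (v - v0) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                        # drop invalid (zero-depth) pixels
```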

My question:

I save my depth data as raw uint16 values in a binary file, then read it back into a NumPy array in place of your cv2.imread call, i.e. depth_image = np.fromfile("depthdata.bin", dtype=np.uint16).reshape(height, width). Are raw depth values like this OK, or should I convert to a disparity map (or apply some other preprocessing) before passing the data to SNE? There's no such step for your sample data, but I don't know what steps you took to create the sample depth image from your raw depth data.
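For completeness, this is roughly how I read the raw file back; round-tripping it through a 16-bit PNG should also let me keep your cv2.imread path untouched (the file names, resolution, and the IMREAD_ANYDEPTH flag are my assumptions about the example loader):

```python
import numpy as np
import cv2

height, width = 480, 640  # my RealSense depth resolution

# Read the raw little-endian uint16 depth values back into an image.
depth_image = np.fromfile("depthdata.bin", dtype=np.uint16).reshape(height, width)

# Alternatively, save as a 16-bit PNG so a cv2.imread(...,
# cv2.IMREAD_ANYDEPTH) loading path works unchanged.
cv2.imwrite("depthdata.png", depth_image)
roundtrip = cv2.imread("depthdata.png", cv2.IMREAD_ANYDEPTH)
assert (roundtrip == depth_image).all()
```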

Thanks for sharing this project and for any help you can offer :)

jb455 commented 1 year ago

Another thought: how noisy can the depth data be? If the model is trained on synthetic data, does it assume the depth values are 'perfect', or are bumps and holes forgiven by the model?

PatrickLowin commented 1 year ago

Did you find a solution to your problem?