ziweiWWANG / SHEF

Code and dataset for the paper "Stereo Hybrid Event-Frame (SHEF) Cameras for 3D Perception", IROS 2021: a large-scale stereo event-and-frame dataset with depth ground truth, plus a baseline algorithm.

Request for the code of disparity estimation. #2

Closed xhchen10 closed 1 year ago

xhchen10 commented 2 years ago

Hi, thank you for the impressive work. I am wondering if there is any plan to publicly release the code of disparity estimation.

ziweiWWANG commented 2 years ago

Hi there! Thanks for your interest! I am happy to release the code if that can help you, but I am travelling abroad now. I will be back at uni next month. Thank you :)

xhchen10 commented 2 years ago

Thank you so much for the quick reply. Enjoy your tour :)

xhchen10 commented 1 year ago

Hi Ziwei! Could you describe the detailed structure of your DCNet, or share a code fragment of it? Thanks a lot :)

ziweiWWANG commented 1 year ago

The network has two parts. The first part is based on the paper "AutoDispNet: Improving disparity estimation with AutoML". We use the image + event frame to estimate a new disparity map; let's call it D_new. Notably, the encoders for the image and the event frame are different: you can use a popular downsampling image encoder, but we suggest adding some more conv layers for event-frame encoding. The second part is a U-Net-based fusion block. Its inputs are D_new and the pre-estimated sparse disparity map D_p from the paper. The fusion block fuses the two disparity maps to generate the completed disparity.
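A rough PyTorch sketch of this two-part layout (layer widths, names, and the toy U-Net depth here are placeholders for illustration, not the exact DCNet configuration):

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, stride=1):
    # 3x3 conv + ReLU, optionally downsampling with stride 2
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
        nn.ReLU(inplace=True),
    )

class DCNetSketch(nn.Module):
    """Illustrative two-part layout: disparity estimator + fusion block."""
    def __init__(self, base=16):
        super().__init__()
        # Part 1a: image encoder (plain downsampling conv stack)
        self.img_enc = nn.Sequential(
            conv_block(3, base, stride=2),
            conv_block(base, 2 * base, stride=2),
        )
        # Part 1b: event-frame encoder -- same downsampling factor,
        # but with extra conv layers, as suggested above
        self.evt_enc = nn.Sequential(
            conv_block(1, base, stride=2),
            conv_block(base, base),           # extra layer
            conv_block(base, 2 * base, stride=2),
            conv_block(2 * base, 2 * base),   # extra layer
        )
        # Part 1c: decoder regressing the dense disparity D_new
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(4 * base, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, 1, 4, stride=2, padding=1),
        )
        # Part 2: tiny U-Net-style fusion block over (D_new, D_p)
        self.fuse_down = conv_block(2, base, stride=2)
        self.fuse_up = nn.ConvTranspose2d(base, base, 4, stride=2, padding=1)
        self.fuse_out = nn.Conv2d(base + 2, 1, 3, padding=1)

    def forward(self, image, events, d_p):
        # Part 1: fuse encoded image + event features, decode D_new
        feats = torch.cat([self.img_enc(image), self.evt_enc(events)], dim=1)
        d_new = self.dec(feats)
        # Part 2: fuse dense D_new with sparse D_p (skip connection
        # keeps the raw inputs available at the output resolution)
        x = torch.cat([d_new, d_p], dim=1)
        skip = x
        x = self.fuse_up(self.fuse_down(x))
        return self.fuse_out(torch.cat([x, skip], dim=1))
```

A real U-Net fusion block would have more down/up levels with skip connections at each scale; the single level here is just to show the data flow from (image, events, D_p) to the completed disparity.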

Hope it helps!

Cheers, Ziwei