Open duda1202 opened 4 years ago
Hi,
The benchmark for the ZED depth is mostly for benchmarking the depth on the Jetsons for our own record, so that we can quickly see what the best configuration is for our robot at any time. If you want to use depth, you will have to modify the parser to output 4 channels, and the architecture backbone to take 4 channels as well.
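The 4-channel change above usually also means adapting the pretrained first-conv weights. A minimal sketch of that idea (plain NumPy, not Bonnetal's actual code; the mean-initialization of the depth channel is just one common heuristic):

```python
import numpy as np

# Hypothetical sketch: "inflate" a pretrained first-conv weight tensor from
# 3 input channels (RGB) to 4 (RGB-D), so depth can be fed as a 4th channel
# without discarding the pretrained RGB filters.
rng = np.random.default_rng(0)
rgb_weight = rng.standard_normal((64, 3, 7, 7))  # (out_ch, in_ch, kH, kW)

rgbd_weight = np.zeros((64, 4, 7, 7))
rgbd_weight[:, :3] = rgb_weight              # keep the RGB filters as-is
rgbd_weight[:, 3] = rgb_weight.mean(axis=1)  # init depth channel with the RGB mean

print(rgbd_weight.shape)  # (64, 4, 7, 7)
```

The same copy-and-average trick works on the real checkpoint tensor; the rest of the backbone is unchanged since only the first layer sees the input channels.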
Do you believe that this will produce good results? From what I have read about including depth, most approaches use fusion or different architectures.
I recommend having a look at this paper. The early fusion approach I mentioned is definitely suboptimal, but it's a start. As soon as you have that working, you can split your backbone into two backbones and do all sorts of fusion, still inside the one-backbone structure. Bonnetal allows this to happen. Let me know if you need help more specifically for this. If you want to create your own fork and then move the RGBD stuff here as a branch or a module, let me know and we can figure it out.
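To make the "split into two backbones" idea concrete, here is a minimal PyTorch sketch (my own illustration, not Bonnetal's actual classes): an RGB encoder and a depth encoder run in parallel, and depth features are fused into the RGB stream by element-wise addition, FuseNet-style. The stage sizes and fusion points are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of mid-level RGB-D fusion with two parallel encoders.
class RGBDFusionBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        def stage(in_ch, out_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
        # Two streams: RGB (3 channels) and depth (1 channel).
        self.rgb_stage1, self.rgb_stage2 = stage(3, 32), stage(32, 64)
        self.d_stage1, self.d_stage2 = stage(1, 32), stage(32, 64)

    def forward(self, rgb, depth):
        r = self.rgb_stage1(rgb)
        d = self.d_stage1(depth)
        r = r + d                  # fuse depth features into the RGB stream
        r = self.rgb_stage2(r)
        d = self.d_stage2(d)
        return r + d               # fused feature map, ready for a decoder

net = RGBDFusionBackbone().eval()
rgb = torch.randn(1, 3, 64, 64)
depth = torch.randn(1, 1, 64, 64)
with torch.no_grad():
    feat = net(rgb, depth)
print(feat.shape)  # torch.Size([1, 64, 16, 16])
```

Because the two streams live inside one `nn.Module`, the fused output can be handed to any existing decoder unchanged, which is what lets this fit the one-backbone structure.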
Hi,
That's actually the paper I was thinking about using as the approach. So you believe it should not be too complicated to include their fusion network in Bonnetal? Do you believe there will be a conflict because they use TensorFlow? Thanks for the help! When I make the modifications for RGBD information, I will let you know and maybe include them here as a branch.
Hi,
I am currently thinking of using Bonnetal for a project, and I saw that there is a benchmark using the ZED depth. Which encoder-decoder was used? Do you have any tips on how to include RGB+depth with the currently available backbones and decoders in Bonnetal, or would I need to create my own in this case? Thanks!