niwhsa9 closed this issue 1 year ago
Yeah I'll write something up this evening
@niwhsa9 When trying to accurately recreate the ZED 2i in ROS, I came across a few points preventing a clear path forward:
Thanks for that great summary and set of options, very nice work.
I've written out some of my thoughts below. I apologize in advance because this got very long-winded and some of it is not directly related to the issue at hand, but I promise I have some practical points in here.
Let's start with the tangent:
I think fundamentally there are two classes of things we are interested in simulating accurately when it comes to depth cameras:
When I assigned you this issue I had not really considered that second class of features at all. I also did not consider it when we purchased the ZED over the Realsense. Unfortunately, dealing with this is where we get burned for purchasing a proprietary stereo solution. We don't really know anything about the matching algorithm or what kind of artifacts it produces other than what we can qualitatively observe, and even then we don't know the root cause, so it's hard to reproduce ourselves. Given that this is the case, I vote that we entirely ignore class 2 features for now. That is a long-term goal for the simulation team. Ignoring these will mean our simulation is quite idealistic, so people should be aware of that when testing against sim. On the other hand, I think we should be able to nearly perfectly match class 1 conditions in simulation, and that's what I'm really after with this feature request.
Okay, back to the actual problem. Let me address your proposed solutions.
Of these two choices, I like option 2 much better. We just need to work around the caveat of the single focal length parameter. For starters, don't worry too much about the fact that you have a separate set of focal lengths for each eye. The ZED has a relatively small stereo baseline, so I think it's totally fine to just pick one eye's set. Now the question is what to do about the non-identity aspect ratio.
There is a simple, conservative solution that might allow us to proceed quickly: just pick fy. This will make the vertical FOV faithful to reality and the horizontal FOV smaller than reality. Hopefully this isn't completely detrimental to our performance. If we are still seeing good navigation behavior with this limit, then we can punt this issue down the road for our simulation people to deal with more adequately in a custom plugin implementation.
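To put some rough numbers on the "pick fy" trade-off, here's a quick pinhole-model sketch. The intrinsics below are illustrative placeholders (not from the ZED 2i datasheet); the point is just how much horizontal FOV we give up when a single focal length parameter forces fy onto both axes:

```python
import math

# Hypothetical ZED-2i-like intrinsics; illustrative only, fy > fx so the
# single-focal-length workaround narrows the horizontal FOV.
fx, fy = 525.0, 535.0   # focal lengths in pixels (non-identity aspect ratio)
w, h = 1280, 720        # image resolution in pixels

def fov_deg(size_px, focal_px):
    """Pinhole FOV along one axis: 2 * atan(size / (2 * f))."""
    return math.degrees(2.0 * math.atan(size_px / (2.0 * focal_px)))

# True FOVs from the per-axis focal lengths
h_fov_true = fov_deg(w, fx)
v_fov_true = fov_deg(h, fy)

# With fy used for both axes, the vertical FOV stays faithful but the
# horizontal FOV comes out narrower than reality.
h_fov_sim = fov_deg(w, fy)

print(f"true HFOV {h_fov_true:.1f} deg, sim HFOV {h_fov_sim:.1f} deg, "
      f"VFOV {v_fov_true:.1f} deg")
```

With these placeholder values the horizontal FOV only shrinks by about a degree, which is why this feels like an acceptable stopgap until a custom plugin handles fx and fy independently.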
tl;dr: go with option 2
@arschallwig can you summarize what you found out so far? I remember that you were having trouble finding a way to actually simulate the point cloud parameters for a stereo cam due to how the library represents the parameters.