r7vme closed this issue 5 years ago.
@jelfring could you answer this?
I was able to estimate orientation by making the patch above (it seems more changes are needed to support both cases) and using PositionEstimator for "orientation". So far I can't say how well it works (I need to do more testing).
Another question (seeking advice). I'm trying to fuse data from multiple cameras that are looking at the same area from different positions. Object detection and orientation estimation are done separately for each camera and then reprojected to global coordinates. The signal is very noisy; an object can disappear for a second or so. wire tries to match the same object from the two different cameras. Thanks in advance.
You are right, I have removed the not.
In general, fusing information from multiple sensors makes sense: more information should lead to better tracking accuracy. I understand you perform tracking for each camera and then want to fuse the reprojected global coordinates? You should be careful when tuning the covariances involved to avoid 'overconfidence'.
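To illustrate the overconfidence warning above: if the two cameras' errors are correlated (e.g. a shared calibration or reprojection offset), naively multiplying their Gaussian estimates shrinks the covariance more than is justified. Covariance intersection is one conservative alternative. This is a minimal 1-D sketch with made-up numbers, not wire's actual fusion code:

```python
def naive_fusion(x1, p1, x2, p2):
    # Kalman-style product of two Gaussians; assumes independent errors.
    k = p1 / (p1 + p2)
    return x1 + k * (x2 - x1), (1 - k) * p1

def covariance_intersection(x1, p1, x2, p2, w=0.5):
    # Conservative fusion that remains consistent even when the
    # cross-correlation between the two estimates is unknown.
    p = 1.0 / (w / p1 + (1 - w) / p2)
    x = p * (w * x1 / p1 + (1 - w) * x2 / p2)
    return x, p

# Two cameras reporting the same object coordinate (1-D, made-up numbers):
x_cam1, var_cam1 = 2.0, 0.25
x_cam2, var_cam2 = 2.2, 0.25

x_n, p_n = naive_fusion(x_cam1, var_cam1, x_cam2, var_cam2)
x_ci, p_ci = covariance_intersection(x_cam1, var_cam1, x_cam2, var_cam2)
# Naive fusion halves the variance (0.125); covariance intersection
# keeps it at 0.25, which is safer if the errors are correlated.
```

Both give the same mean here, but the naive variance is optimistic whenever the camera errors are not truly independent.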
Whenever you would like to track orientation, it should not matter how many sensors are involved, as long as you use the same frame of reference. You can track the orientation independently of the position; however, it is common to combine tracking position and orientation.
I understand you perform tracking for each camera and then want to fuse the reprojected global coordinates?
Yep. I actually ended up using a separate PositionEstimator for orientation (3 dimensions, because I'm using Euler angles). It works more or less fine; tuning the signal variance makes it quite stable.
<behavior_model attribute="rotation" model="wire_state_estimators/PositionEstimator">
<pnew type="uniform" dimensions="3" density="0.0001" />
<pclutter type="uniform" dimensions="3" density="0.0001" />
<param name="max_acceleration" value="0.0" />
<param name="kalman_timeout" value="0" />
<!-- we will ignore fixed objects by checking cov -->
<param name="fixed_pdf_cov" value="99.0" />
</behavior_model>
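One practical pitfall of feeding Euler angles into a position-style estimator is the wrap-around at ±π: a jump from +π to -π looks like a huge motion to the filter. The sketch below shows a quaternion-to-Euler conversion plus an unwrap step; `quaternion_to_euler` and `unwrap` are hypothetical helper names, not part of the wire API:

```python
import math

def quaternion_to_euler(qx, qy, qz, qw):
    """Convert a unit quaternion to roll/pitch/yaw (ZYX convention)."""
    roll = math.atan2(2 * (qw * qx + qy * qz), 1 - 2 * (qx * qx + qy * qy))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (qw * qy - qz * qx))))
    yaw = math.atan2(2 * (qw * qz + qx * qy), 1 - 2 * (qy * qy + qz * qz))
    return roll, pitch, yaw

def unwrap(prev, new):
    """Shift `new` by multiples of 2*pi so it is continuous with `prev`.
    Without this, crossing the +pi/-pi boundary looks like a large
    'jump' to an estimator that treats the angle as a coordinate."""
    while new - prev > math.pi:
        new -= 2 * math.pi
    while new - prev < -math.pi:
        new += 2 * math.pi
    return new

# Identity rotation -> all angles zero:
r, p, y = quaternion_to_euler(0, 0, 0, 1)
# Unwrap example: previous yaw 3.0 rad, new measurement -3.1 rad
# is really a small step forward to ~3.18 rad, not a 6.1 rad jump.
y_cont = unwrap(3.0, -3.1)
```

Unwrapping each angle against its previous tracked value before passing it as a "position" measurement should keep the estimator from chasing phantom jumps near the boundary.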
Can you please shed some light on how I can "combine tracking position and orientation"? Should I just extend the "position" property with additional dimensions for orientation? Should the PositionEstimator still be used, or the MultiModelEstimator?
Thanks, Roma
Yep. I actually ended up using a separate PositionEstimator for orientation (3 dimensions, because I'm using Euler angles). It works more or less fine; tuning the signal variance makes it quite stable.
Glad to hear that.
Can you please shed some light on how I can "combine tracking position and orientation"? Should I just extend the "position" property with additional dimensions for orientation? Should the PositionEstimator still be used, or the MultiModelEstimator?
That is indeed what I was referring to. This is typically done for moving objects where the position changes depend on the orientation (e.g., a car that moves in the direction of its yaw angle). In that case, knowledge of the orientation can improve the predicted position and thereby improve the overall accuracy of the estimator. It would, however, require implementing your own estimator (or at least updating the one you are currently using).
If you are considering static objects where position and orientation are independent (or if you are happy with your current results), your solution probably is the best option.
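To make the moving-object case concrete: in a combined estimator, the prediction step uses the yaw angle inside the motion model, so an orientation update sharpens the position prediction. This is a generic EKF prediction sketch with a state [x, y, yaw, v] and made-up noise magnitudes, not code from wire:

```python
import numpy as np

def ekf_predict(state, P, dt, q_accel=0.5, q_yaw=0.1):
    """One EKF prediction step for the state [x, y, yaw, v].
    The position update depends on the yaw angle, which is what
    couples orientation and position estimation."""
    x, y, yaw, v = state
    # Nonlinear motion model: move in the direction of the yaw angle.
    state_pred = np.array([
        x + v * np.cos(yaw) * dt,
        y + v * np.sin(yaw) * dt,
        yaw,
        v,
    ])
    # Jacobian of the motion model w.r.t. the state.
    F = np.array([
        [1, 0, -v * np.sin(yaw) * dt, np.cos(yaw) * dt],
        [0, 1,  v * np.cos(yaw) * dt, np.sin(yaw) * dt],
        [0, 0, 1, 0],
        [0, 0, 0, 1],
    ])
    # Process noise on yaw and velocity (magnitudes are illustrative).
    Q = np.diag([0.0, 0.0, q_yaw * dt, q_accel * dt])
    return state_pred, F @ P @ F.T + Q

state = np.array([0.0, 0.0, np.pi / 2, 1.0])  # facing +y, moving 1 m/s
P = np.eye(4) * 0.1
state, P = ekf_predict(state, P, dt=1.0)
# The predicted position moves along +y because the yaw says so.
```

Note the off-diagonal Jacobian terms: they propagate yaw uncertainty into position uncertainty, which is exactly the coupling a separate position-only PositionEstimator cannot exploit.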
Hello,
I'm trying to find a way to estimate orientation. In this tutorial I see that orientation should be of type "mixture", but I can not find any world_object_models.xml where orientation is present. I assume I have to use MultiModelEstimator for orientation, right?
Also, it seems this line has a bug; it should not contain not:
https://github.com/tue-robotics/wire/blob/ebcb59b313bc787099188e119ce9357af1df4d5f/wire_core/src/WorldModelROS.cpp#L277