tue-robotics / wire

BSD 2-Clause "Simplified" License

How to estimate orientation? #7

Closed r7vme closed 5 years ago

r7vme commented 5 years ago

Hello,

I'm trying to find a way to estimate orientation. In this tutorial I see that orientation should be of type "mixture", but I cannot find any world_object_models.xml where orientation is present.

I assume I have to use the MultiModelEstimator for orientation, right?
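By analogy with the existing position models, I would expect the entry to look something like this (the model name is just my guess from above and the parameters are placeholders on my part, not taken from any shipped world_object_models.xml):

```xml
<behavior_model attribute="orientation" model="wire_state_estimators/MultiModelEstimator">
    <pnew type="uniform" dimensions="3" density="0.0001" />
    <pclutter type="uniform" dimensions="3" density="0.0001" />
    <!-- model-specific parameters would go here -->
</behavior_model>
```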

Also, it seems this line has a bug: it should not contain the not.

https://github.com/tue-robotics/wire/blob/ebcb59b313bc787099188e119ce9357af1df4d5f/wire_core/src/WorldModelROS.cpp#L277

MatthijsBurgh commented 5 years ago

@jelfring could you answer this?

r7vme commented 5 years ago

I was able to estimate orientation by applying the patch above (it seems more changes are needed to support both cases) and using the PositionEstimator for "orientation". I can't yet say how well it works (I need to do more testing).

Another question (seeking advice): I'm trying to fuse data from multiple cameras that look at the same area from different positions. Object detection and orientation estimation are done separately for each camera and then reprojected to global coordinates. The signal is very noisy; an object can disappear for a second or so.
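What I have in mind is roughly per-attribute filtering like this (a toy sketch, not wire code; the noise values and dropout rate are made up to illustrate the setup):

```python
import numpy as np

def kalman_step(x, P, z, R, Q):
    """One predict+update cycle of a constant-position Kalman filter.

    x, P : state estimate and its variance (scalar state for clarity)
    z    : measurement, or None if the object dropped out this frame
    R    : measurement noise variance, Q : process noise variance
    """
    # Predict: constant-position model, so only the uncertainty grows.
    P = P + Q
    if z is None:            # dropout: keep the prediction, no update
        return x, P
    # Update: standard Kalman gain for a scalar state.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P

# Two cameras observe the same global x-coordinate with different noise
# levels; the occasional None models the object disappearing for a frame.
true_x = 2.0
rng = np.random.default_rng(0)
x, P = 0.0, 10.0
for _ in range(50):
    for R in (0.2, 0.5):     # per-camera measurement variances (assumed)
        z = true_x + rng.normal(0.0, np.sqrt(R)) if rng.random() > 0.2 else None
        x, P = kalman_step(x, P, z, R, Q=0.01)
print(round(x, 1))
```

The dropouts are handled by running only the predict step, so the covariance grows until the object reappears.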

Thanks in advance

jelfring commented 5 years ago

You are right, I have removed the not.

r7vme commented 5 years ago

> I understand you perform tracking for each camera and then want to fuse the reprojected global coordinates?

Yep. I actually ended up using a separate PositionEstimator for orientation (3 dimensions, because I'm using Euler angles). It works more or less fine; tuning the signal variance makes it quite stable.

        <behavior_model attribute="rotation" model="wire_state_estimators/PositionEstimator">
            <pnew type="uniform" dimensions="3" density="0.0001" />
            <pclutter type="uniform" dimensions="3" density="0.0001" />
            <param name="max_acceleration" value="0.0" />
            <param name="kalman_timeout" value="0" />
            <!-- we will ignore fixed objects by checking cov -->
            <param name="fixed_pdf_cov" value="99.0" />
        </behavior_model>
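One caveat with treating Euler angles as plain position coordinates is wrap-around at ±π: a linear estimator can blend +3.1 rad and −3.1 rad to something near 0 instead of near ±π. A toy illustration of wrapping the innovation (plain Python, not wire code):

```python
import math

def wrap_angle(a):
    """Map an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def angle_update(est, meas, gain):
    """Blend a yaw estimate with a measurement, respecting wrap-around.

    Without wrapping, an estimate of +3.0 rad and a measurement of
    -3.0 rad would be averaged toward 0 instead of toward +/-pi.
    """
    innovation = wrap_angle(meas - est)
    return wrap_angle(est + gain * innovation)

print(round(angle_update(3.0, -3.0, 0.25), 3))  # → 3.071 (near pi, not near 0)
```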

Can you please shed some light on how I can "combine tracking position and orientation"? Should I just extend the "position" property with additional dimensions for orientation? Should the PositionEstimator still be used, or the MultiModel one?

Thanks, Roma

jelfring commented 5 years ago

> Yep. I actually ended up using a separate PositionEstimator for orientation (3 dimensions, because I'm using Euler angles). It works more or less fine; tuning the signal variance makes it quite stable.

Glad to hear that.

> Can you please shed some light on how I can "combine tracking position and orientation"? Should I just extend the "position" property with additional dimensions for orientation? Should the PositionEstimator still be used, or the MultiModel one?

That is indeed what I was referring to. This is typically done for moving objects whose position changes depend on the orientation (e.g. a car that moves in the direction of its yaw angle). In that case, knowledge of the orientation can improve the predicted position and thereby the overall accuracy of the estimator. It would, however, require implementing your own estimator (or at least updating the one you are currently using).
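To make the idea concrete, here is a toy predict step (plain Python, not wire code) in which the position update depends on the yaw, as in the car example:

```python
import math

def predict_unicycle(x, y, yaw, speed, yaw_rate, dt):
    """Predict step of a coupled position/orientation motion model.

    Unlike independent per-axis estimators, the position update here
    depends on the current yaw: the object moves along its heading,
    e.g. a car driving in the direction it is pointing.
    """
    x += speed * math.cos(yaw) * dt
    y += speed * math.sin(yaw) * dt
    yaw += yaw_rate * dt
    return x, y, yaw

# Drive straight along the +y axis for one second in ten small steps.
state = (0.0, 0.0, math.pi / 2)
for _ in range(10):
    state = predict_unicycle(*state, speed=1.0, yaw_rate=0.0, dt=0.1)
print(tuple(round(v, 3) for v in state))  # → (0.0, 1.0, 1.571)
```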

If you are dealing with static objects where position and orientation are independent (or if you are happy with your current results), your current solution is probably the best option.