Hi,

I'm posting this issue to ask for some help with using the Tubex library on a robotics problem.
I'm trying to perform range-only localization of an agent relative to three beacons. Because the beacons are very close together, the computed position of the agent is very sensitive to measurement error. That's why I'd like to use constraint programming to combine multiple measurements taken at different times, in order to compute a hopefully more precise trajectory.
Looking at the examples and documentation, there are programs working with range-only measurements, but they rely on an estimate of the agent's speed, which I don't have access to. What I need instead is to combine high-frequency range data to estimate both speed and position. Is there a way to do this with constraint programming as implemented in Tubex? Or perhaps you have some educated suggestions on how to approach this?
My other question is about using Tubex for online computation. The examples build the entire trajectory and all measurements at the beginning of the code before contracting the trajectory, so I have trouble seeing how to use Tubex for online localization. Basically, how can I asynchronously add a new measurement to refine the estimate obtained from the last few measurements?
Thanks a lot for your help. Feel free to ask for any further information. I have high hopes that this library can solve my precision issue.
To be more precise, the only assumptions I can make about the speed are that it is continuous, that it lies in the range [-2, +2] in both dimensions, and that the acceleration is capped.
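To make the kind of constraint propagation I have in mind concrete, here is a minimal plain-Python sketch (deliberately not using the Tubex API, since I'm unsure of the right calls): a 1-D toy where each measurement gives an interval enclosure of the position, and the speed bound |v| ≤ v_max lets consecutive samples contract each other through a forward and backward pass. All names and numbers are illustrative.

```python
# Toy 1-D illustration of contracting interval measurements with a speed
# bound: consecutive positions must satisfy |x[k+1] - x[k]| <= v_max * dt.
# This is the behaviour I would hope to get from Tubex tubes/contractors.

def contract(intervals, v_max, dt):
    """One forward + backward propagation pass of the constraint
    x[k+1] in x[k] + [-v_max*dt, +v_max*dt], intersecting each
    measurement interval with what its neighbours allow."""
    ivs = [list(iv) for iv in intervals]
    r = v_max * dt
    for k in range(1, len(ivs)):            # forward pass
        ivs[k][0] = max(ivs[k][0], ivs[k - 1][0] - r)
        ivs[k][1] = min(ivs[k][1], ivs[k - 1][1] + r)
    for k in range(len(ivs) - 2, -1, -1):   # backward pass
        ivs[k][0] = max(ivs[k][0], ivs[k + 1][0] - r)
        ivs[k][1] = min(ivs[k][1], ivs[k + 1][1] + r)
    return [tuple(iv) for iv in ivs]

# Noisy range-derived position enclosures sampled at 10 Hz:
meas = [(0.0, 1.0), (0.1, 1.1), (0.2, 1.2), (1.0, 2.0)]
print(contract(meas, v_max=2.0, dt=0.1))
# Each interval shrinks: e.g. the last one contracts to about (1.0, 1.4).
```

In 2-D with range constraints to the beacons, I imagine the same idea applies, with the range measurements contracting position boxes and the derivative constraint linking them over time; my question is essentially what the idiomatic Tubex way of expressing this is when no speed measurement is available.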