higerra / ridi_imu

MIT License
207 stars 67 forks

non tango phone data: irregular trajectory #10

Open arpitg1304 opened 5 years ago

arpitg1304 commented 5 years ago

I used your app (https://github.com/higerra/AndroidIMURecorder) to collect data from my phone and then preprocessed it with the Python script. When I run ./IMULocalization_cli on this data, I don't get a regular trajectory: it shows movement in all three dimensions, unlike the trajectories generated on your data, which are flat (see attached images).

Do you have any idea why this might be happening?

Ador2 commented 5 years ago

what type is your phone?

arpitg1304 commented 5 years ago

I tried with two phones:

  - Samsung Galaxy S9
  - Honor 6X

I got 3D trajectories in both cases (unlike the 2D trajectories generated for the dataset provided with RIDI). Have you used RIDI with a non-Tango phone?

higerra commented 5 years ago

Hi, we tested on the Lenovo Phab 2 Pro and the Google Pixel XL. We simply ignore the 3rd axis to get a 2D trajectory.
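Dropping the 3rd axis amounts to keeping only the first two columns of the pose array. A minimal sketch (the pose values here are made up for illustration):

```python
import numpy as np

# Hypothetical trajectory: (N, 3) array of x, y, z positions in meters.
poses = np.array([[0.0, 0.0, 0.02],
                  [1.0, 0.5, -0.01],
                  [2.0, 1.0, 0.03]])

# Ignore the 3rd (vertical) axis to obtain a 2D trajectory.
traj_2d = poses[:, :2]
print(traj_2d.shape)  # (3, 2)
```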

arpitg1304 commented 5 years ago

Ok, are the trajectories accurate enough to get the distance traveled?

higerra commented 5 years ago

It largely depends on the training data and motions. Since the number of people whose data was used to train the model is limited, I don't expect high accuracy from the pretrained model.

Ador2 commented 5 years ago

Well, the Honor 6X doesn't have a gyro, and as @higerra said, the model needs to be trained on a large amount of data. I've tried the algorithm on a non-Tango device; it showed a little movement on the 3rd axis, and that's completely fine.

arpitg1304 commented 5 years ago

The Honor 6X does have a gyro, and I also tested with a Galaxy S9. Even when the trajectory is treated as 2D, the distance traveled (norm of poses) is very inaccurate. I held my phone and walked about 11 meters while recording the data.

Then I ran ./IMULocalization_cli on that data; the norm between the first and last pose is only 4.5 meters.
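Note that the norm between the first and last pose measures straight-line displacement, which is a lower bound on distance traveled; if the walk was not perfectly straight, the total path length is the better comparison. A sketch with made-up planar poses:

```python
import numpy as np

# Hypothetical trajectory: (N, 2) planar positions in meters.
traj = np.array([[0.0, 0.0],
                 [1.5, 0.5],
                 [3.0, 3.0],
                 [4.5, 0.0]])

# Straight-line displacement: norm between first and last pose.
displacement = np.linalg.norm(traj[-1] - traj[0])

# Total path length: sum of per-segment norms (closer to "distance traveled").
path_length = np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()
print(displacement, path_length)  # path_length >= displacement
```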

@higerra Is the pre-trained model different from the one you used to generate the results in your videos and paper?

higerra commented 5 years ago

It's the same, but it was trained specifically for one person. We used a traditional machine learning model (SVR), so it might not do well on a large dataset.
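For reference, an SVR-based velocity regressor along these lines can be sketched with scikit-learn. The window size, channel count, and random data below are placeholders, not the actual RIDI configuration; note that `SVR` is single-output, so one model is fit per velocity axis:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Placeholder features: flattened IMU windows (e.g. 50 samples x 6 channels).
X = rng.normal(size=(200, 50 * 6))
# Placeholder targets: 2D velocities.
y = rng.normal(size=(200, 2))

# One SVR per output axis, since SVR regresses a single scalar.
models = [SVR(kernel="rbf").fit(X, y[:, k]) for k in range(y.shape[1])]
pred = np.column_stack([m.predict(X) for m in models])
print(pred.shape)  # (200, 2)
```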

If you understand the paper and are doing research on this, I would recommend trying:

  1. Ditch the "stabilized-IMU frame" formulation. We found later that this formulation has a singularity issue. Instead, simply regress velocities on the 3 local axes.
  2. Use a deep-learning-based model, for example a ResNet or an LSTM. You can still use a large portion of the RIDI codebase as a starting point for your implementation. It should work well on the provided dataset.
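The regression target in item 1 can be sketched as follows. This assumes you have ground-truth positions, device orientations as rotation matrices, and timestamps; `local_frame_velocities` is a hypothetical helper name, not part of the RIDI codebase:

```python
import numpy as np

def local_frame_velocities(pos, R, t):
    """Velocities expressed on the device's 3 local axes.

    pos: (N, 3) ground-truth positions, R: (N, 3, 3) device-to-world
    rotation matrices, t: (N,) timestamps. All inputs are assumptions
    about the available ground truth, not a fixed RIDI interface.
    """
    # World-frame velocity by numerical differentiation.
    v_world = np.gradient(pos, t, axis=0)
    # Rotate into the local frame: v_local[n] = R[n].T @ v_world[n].
    return np.einsum("nij,ni->nj", R, v_world)
```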

Then, if you have the resources, try collecting a lot more training data. This is very effective with deep learning models.