Open jtoerber opened 2 days ago
Issue #114: `fill_large_gaps_with = 'last_value' # 'last_value', 'nan', or 'zeros'` — could the (only) value from the other camera be taken? Or, in the case of 2 cameras, is the 3D information then missing?
Hi,
You need at least 2 cameras to triangulate the coordinates. When there is a hole for a few frames you can interpolate it, but if a point is not seen for the whole sequence, it is not going to work.
Here are a few ideas, by order of hackiness:
fill_large_gaps_with = 'zeros'
(if it does not work, replace the columns with zeros in pandas or a spreadsheet editor). Please tell me how it goes!
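For the pandas route, here is a minimal sketch of replacing all-NaN marker columns with zeros. The column names and values are made up for illustration; a real trc file also has a header block and X/Y/Z triplets per marker that you would need to preserve:

```python
import numpy as np
import pandas as pd

# Toy coordinate table: "RSmallToe_X" (hypothetical column name) was never
# triangulated, so it is NaN for the whole capture.
df = pd.DataFrame({
    "RHeel_X": [0.1, 0.2, 0.3],
    "RSmallToe_X": [np.nan, np.nan, np.nan],
})

# Find columns that are NaN everywhere and replace them with zeros,
# so downstream tools (e.g. OpenSim IK) can still read the file.
all_nan_cols = df.columns[df.isna().all()]
df[all_nan_cols] = 0.0
print(df)
```

Combined with disabling (or down-weighting) the corresponding `IKMarkerTask`, OpenSim should then ignore that marker.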
Hi, I guess you mean:
<IKMarkerTask name="RSmallToe">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<!-- <apply>true</apply> -->
<apply>false</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>1</weight>
</IKMarkerTask>
in IK_SetupPose2Sim
With `fill_large_gaps_with = 'zeros'` I get a lot of 0.0 values, not only for the NaNs. With `fill_large_gaps_with = 'last_value'` and manually replacing the NaNs with 0.0, running "kinematics" becomes possible.
In OpenSim there are some issues with it: maybe the number of cameras is not enough, or the two cameras are not properly aligned. The person is not really standing, nor doing what they did in the video. In OpenCap there were no problems with two cameras. (If interested, the last_value, zeros, and manual versions are in the attached zip file.) Demo_SinglePerson.zip
There are definitely issues in your triangulated trc files. How did you calibrate?
Hi, I did Pose2Sim.calibration() with a 3x5 (45mm) checkerboard.
You did not specify which issues there are in the trc files. And by the way: the synchronization does not seem to be deterministic, as I usually get a 3-frame difference in this case, but once got 166 frames, which is about 10 seconds here. I guess setting a frame range may help.
Check this thread starting from the linked comment maybe. It seems like if you download synchronized videos from OpenCap, they are exported at half the resolution, which leads to a faulty calibration https://github.com/perfanalytics/pose2sim/issues/142#issuecomment-2454283931
EDIT: Sorry I had not seen your last message, let me read it
The issues are that the trc files don't look like a human being doing anything 😅
Synchronization is likely to be another issue indeed. Check the appropriate doc section to make sure it is correct: https://github.com/perfanalytics/pose2sim/blob/main/README.md#synchronization
If results are not satisfying, edit your Config.toml file: all keypoints can be taken into account, or a subset of them. The whole capture can be used for synchronization, or you can choose a time range when the participant is roughly horizontally static but with a clear vertical motion (set `approx_time_maxspeed` and `time_range_around_maxspeed` accordingly).
N.B.: Works best when:
- the participant does not move towards or away from the cameras
- they perform a clear vertical movement
- the capture lasts at least 5 seconds, so that there is enough data to synchronize on
- the capture lasts a few minutes maximum, so that cameras are less likely to drift with time
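For reference, a sketch of what this could look like in Config.toml. The parameter names are those quoted above; the section name and value formats are assumptions, so check them against the Config.toml shipped with your pose2sim version:

```toml
[synchronization]
# restrict synchronization to a clearly visible keypoint, e.g. the raised right wrist
keypoints_to_consider = ['RWrist']
# approximate time (in seconds, one value per camera) of the fastest vertical motion
approx_time_maxspeed = [2.0, 2.1]
# search window (in seconds) around that time
time_range_around_maxspeed = 2.0
```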
I think that in calibration.py you should check the resolution and frames per second of all the videos. Your advantage/disadvantage compared to OpenCap is that you allow all kinds of cameras. This may even lead to inhomogeneous cameras or camera settings during one recording.
Got a new camera. I'll try my final setup tomorrow.
Synchronization: I first ran calibration and pose estimation. Then I ran everything except calibration and pose estimation, i.e. on the same videos and pose estimations, and once got a 166-frame difference instead of 3 frames, despite a clear right-wrist raise in the first seconds and videos captured in one thread to be (relatively) safe/close.
Calibration does check the resolution of videos; I just tried to make a guess based on the information you provided, but I went in the wrong direction. To be clearer: if you capture with OpenCap, convert its calibration files, and then export the wrong videos, you will have a discrepancy between the resolutions in the calibration file and in the videos. But that's not what you did.
The framerate is also detected automatically, see Config.toml:
frame_rate = 'auto' # fps # int or 'auto'. If 'auto', finds from video (or defaults to 60 fps if you work with images)
Synchronization should be deterministic if you run on the same videos with the same parameters. If the right wrist clearly raises only at one time, it should not give you such different results, unless there is a problem in the parameters of your Config.toml file.
If you want to send some of your data to contact@david-pagnon.com, I can look at it and search for the bug.
Hi, I have 2 cameras. One of the cameras could see my right small toe all of the time, the other could not. So after filtering I have NaN in the trc file for the corresponding landmark/keypoint. When I do inverse kinematics to get the angles, I get an error message:
Lowering the thresholds did not work. I am not able to work with more than two cameras, as they need high FPS (>=100) in my case. In this specific case I am able to adjust the positions of the cameras. But in another case I want to track the arm and fingers, and there will be gaps even larger than the (large) interp_if_gap_smaller_than parameter, especially for landmarks/keypoints that are not of interest in that case. How can I get inverse kinematics running? Could I replace NaN with 0.0? Could I (at least try to) remove the uninteresting landmark(s)? Any recommendations? Interestingly, the filtering has a problem with this, not the triangulation, which could decide to take the landmark/keypoint from the camera that is able to see it.
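One way to handle such gaps outside of Pose2Sim is to post-process the coordinates yourself: interpolate only short NaN runs and explicitly decide what to do with the long ones (leave as NaN, zero them, or drop the marker). A sketch in pandas; the function name and data are illustrative, not Pose2Sim API:

```python
import numpy as np
import pandas as pd

def interpolate_small_gaps(s: pd.Series, max_gap: int) -> pd.Series:
    """Linearly interpolate NaN runs shorter than `max_gap` frames;
    longer runs are left as NaN to be handled separately."""
    # interpolate everything between the first and last valid values
    filled = s.interpolate(method="linear", limit_area="inside")
    # measure the length of each NaN run in the original series
    is_nan = s.isna()
    run_id = (is_nan != is_nan.shift()).cumsum()
    run_len = is_nan.groupby(run_id).transform("sum")
    # restore NaN where the original run was too long
    filled[is_nan & (run_len >= max_gap)] = np.nan
    return filled

# one short gap (1 frame) and one long gap (4 frames)
s = pd.Series([1.0, np.nan, 3.0, np.nan, np.nan, np.nan, np.nan, 8.0])
out = interpolate_small_gaps(s, max_gap=3)
# the 1-frame gap is filled; the 4-frame gap stays NaN
```

You could then zero only the long-gap frames of uninteresting markers, while keeping interesting markers untouched.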