dattalab / keypoint-moseq

https://keypoint-moseq.readthedocs.io

Loading my own keypoint tracking data #109

Closed liwei4932 closed 7 months ago

liwei4932 commented 8 months ago

I want to use facemap (https://github.com/MouseLand/FaceMap) to generate movement trajectories for keypoints of the face ('eye', 'lowerlip', 'mouth', 'nose', 'paw', 'whisker'). The data format is h5. Could you help me write a loader for my data? Thank you very much. face2023-10-28T19_13_25 00_00_00-00_03_00_FacemapPose.zip

liwei4932 commented 8 months ago

[screenshot: 3D keypoint array]

calebweinreb commented 8 months ago

Yes, I'm happy to write a loader. One thing I noticed is that the h5 file you sent doesn't contain a 3D array as shown in your screenshot. Rather, it contains a group called "Facemap" which has keys "eye(back)", "eye(bottom)", etc. That's totally fine of course, but can you confirm that this is the standard format exported by FaceMap? Also, can you let me know how exactly you exported these data from Facemap? This info will be useful for writing the docs.
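For reference, the group/dataset layout of an h5 file can be checked directly with h5py. A minimal sketch (the "Facemap" group name comes from the attached file, not from Facemap's documentation):

```python
import h5py

def list_h5_contents(path):
    """Return (name, shape) pairs for every group and dataset in an h5 file.

    Groups get shape None; datasets get their array shape.
    """
    contents = []

    def visit(name, obj):
        shape = obj.shape if isinstance(obj, h5py.Dataset) else None
        contents.append((name, shape))

    with h5py.File(path, "r") as f:
        f.visititems(visit)
    return contents
```

Running this on the attached file should reveal the "Facemap" group with one entry per keypoint, which is what a loader needs to know.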

romainligneul commented 8 months ago

Regarding this, I had two related questions.

  1. Is there a function for adding custom keypoints/confidences to an already-loaded keypoints/confidences list?
  2. Is there a strong reason against including "non-positional" features that help define the animal's posture? For example, the overall area occupied by the animal, or the major and minor axes of an ellipse fitted to its body, might add useful information.

calebweinreb commented 8 months ago

Hello,

(1) No, there isn't. You'd have to merge the old and new keypoints ahead of time. Alternatively, you could merge them after loading with a few lines of code, e.g.:

import numpy as np

# dictionaries mapping recording names to arrays of shape
# (n_timepoints, n_old_keypoints, 2) and (n_timepoints, n_new_keypoints, 2)
old_coordinates = ...
new_coordinates = ...

# merge coordinates along the keypoint axis
coordinates = {k: np.concatenate([old_coordinates[k], new_coordinates[k]], axis=1)
               for k in new_coordinates}
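As a quick sanity check, the merge pattern above can be exercised with dummy arrays (the recording names and shapes below are arbitrary, chosen just for illustration):

```python
import numpy as np

# two fake recordings: 100 timepoints, 5 old and 3 new keypoints, in 2D
old_coordinates = {r: np.zeros((100, 5, 2)) for r in ["rec1", "rec2"]}
new_coordinates = {r: np.ones((100, 3, 2)) for r in ["rec1", "rec2"]}

# concatenate along the keypoint axis (axis=1)
coordinates = {
    k: np.concatenate([old_coordinates[k], new_coordinates[k]], axis=1)
    for k in new_coordinates
}

print(coordinates["rec1"].shape)  # -> (100, 8, 2)
```

The merged arrays keep the old keypoints first, followed by the new ones, so the corresponding bodypart-name lists would need to be concatenated in the same order.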

(2) In principle, one could fit an AR-HMM to a combination of keypoint and non-keypoint information. In practice, this can't easily be done with the existing code since there are many steps that assume the input data has the form of keypoints. For example, the keypoints are centered and rotated to maintain a constant location and heading direction. It's also worth noting that the variables you mentioned (area, length, width) are probably inferable from the keypoints anyway (assuming you tracked enough), so the model effectively has access to them whether or not they are included explicitly.
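To illustrate the last point, quantities like length and width can be recovered from the keypoints themselves, e.g. from the eigenvalues of the coordinate covariance. A hedged sketch (this is a generic PCA-based estimate, not part of keypoint-moseq):

```python
import numpy as np

def body_axes(keypoints):
    """Estimate major/minor axis lengths from an (n_keypoints, 2) array.

    Uses PCA: axis lengths are taken as 2*sqrt of the eigenvalues of
    the covariance of the centered coordinates.
    """
    centered = keypoints - keypoints.mean(axis=0)
    cov = centered.T @ centered / len(keypoints)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending
    major, minor = 2 * np.sqrt(eigvals)
    return major, minor
```

With enough keypoints tracked along the body, these estimates track the animal's elongation and compression, which is why the model effectively has access to them already.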

romainligneul commented 8 months ago

Thanks for the answer. Regarding (2), that's what I suspected. I might expand my DLC model accordingly.

liwei4932 commented 8 months ago

The file face2023-10-28T19_13_25 00_00_00-00_03_00_FacemapPose.zip contains the output data from the Facemap GUI. I'm not very familiar with this field. Could you get the keypoint h5 data format information from https://github.com/MouseLand/facemap/blob/main/docs/outputs.rst#keypoints-processing?

calebweinreb commented 8 months ago

I added a facemap loader that you can access by installing the dev branch of keypoint-moseq:

pip install -U git+https://github.com/dattalab/keypoint-moseq.git@dev

and then use as follows:

load_keypoints(YOUR PATH, 'facemap')
liwei4932 commented 7 months ago

Okay, thank you very much. I'll try it again.