dattalab / keypoint-moseq

https://keypoint-moseq.readthedocs.io

Importing tracking data from LiveMouseTracker into MoSeq? #102

Closed dhuzard closed 11 months ago

dhuzard commented 1 year ago

Hi! I'm new to MoSeq and haven't tried the classic version yet, but I'm wondering if it would be possible to apply the MoSeq pipeline to LiveMouseTracker (LMT website link) data. LMT uses a Kinect v2 and exports (in an sqlite db) the masks and coordinates of the animals, so it may be possible to use those as an input to MoSeq? I can provide more details, the db, or other info if needed! Thanks in advance for your feedback.

Damien

calebweinreb commented 1 year ago

Hello,

It would be a challenge to use depth MoSeq without a lot of engineering on your end. Keypoint-MoSeq would be easier to apply, provided you are able to track keypoints on each mouse using a tool such as DLC or SLEAP.
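
For reference, getting DLC/SLEAP output into keypoint-MoSeq is only a couple of calls. This is a rough sketch paraphrasing the keypoint-MoSeq tutorial; the paths are placeholders and exact argument names may differ between versions:

```python
import keypoint_moseq as kpms

project_dir = 'lmt_project'  # placeholder

# copies bodypart/skeleton info from an existing DLC project (placeholder path)
kpms.setup_project(project_dir, deeplabcut_config='dlc_project/config.yaml')
config = lambda: kpms.load_config(project_dir)

# load keypoint detections; use 'sleap' instead for SLEAP exports
coordinates, confidences, bodyparts = kpms.load_keypoints(
    'dlc_results/*.h5', 'deeplabcut')

# reshape into the model's input format
data, metadata = kpms.format_data(coordinates, confidences, **config())
```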

dhuzard commented 1 year ago

Thank you for your fast reply! Indeed, I guess Keypoint-MoSeq would be the best fit. However, I'm not sure it is feasible to apply DLC/SLEAP to the acquired videos (they are low quality, and some permanent marks appear on them). But for every frame we do have the (x, y, z) positions of a Nose, Center, and Back point, in addition to the 'Mask' of the animal. Maybe these could somehow be used as an input to Keypoint-MoSeq (or as a base for running DLC/SLEAP on the masks)? Here is an example of one mouse's data at frame t=32:

```xml
<root>
  <DATA back_x="162.02" back_y="70.99" back_z="19.17"
        front_x="216.56" front_y="62.11" front_z="31.18"
        isLookingDown="false" isLookingUp="false" isRearing="true"
        mass_x="189.29" mass_y="66.55" mass_z="81.2" t="32"/>
  <ROI>
    <classname>plugins.fab.livemousetracker.ROI2DAreaX</classname>
    <id>172</id>
    <name>seg ok</name>
    <selected>false</selected>
    <readOnly>false</readOnly>
    <properties/>
    <color>-1</color>
    <stroke>2</stroke>
    <opacity>0.3</opacity>
    <showName>false</showName>
    <z>-1</z>
    <t>-1</t>
    <c>-1</c>
    <boundsX>138</boundsX>
    <boundsY>54</boundsY>
    <boundsW>81</boundsW>
    <boundsH>34</boundsH>
    <boolMaskData>78:5e:cd:d4:c9:2:83:20:c:45:d1:f8:ff:3f:ad:28:4e:64:ba:d1:85:ba:6a:3:39:be:50:ab:8:bd:a6:e3:a2:1d:f1:be:d3:bb:7c:12:69:5f:e2:46:7b:d5:e4:5e:44:8e:bd:7a:c6:cc:ab:8e:d:bc:52:48:e4:55:42:52:90:a6:e4:1e:13:2b:1e:9a:fb:6b:30:fd:db:14:3:a6:c7:58:f6:92:84:75:2f:0:1f:60:d1:af:fc:88:b3:df:54:4b:75:d7:da:6:2c:bb:5a:27:8e:75:26:fa:dc:e6:dd:d6:9:e9:82:6b:f3:b8:9a:8a:63:c3:6d:3c:6b:31:11:ad:96:56:33:d3:f5:cd:e:e9:51:6b:dd:1a:f6:6c:18:44:51:e7:a2:6c:e3:ec:86:3d:9b:d9:ee:ac:9a:75:21:8e:77:9:a:a2:f5:71:f5:4d:5e:54:f2:71:6b:f8:df:3d:78:cc:78:68:f4:10:60:d:3e:54:1f:7a:32:3:63:49:3:da</boolMaskData>
  </ROI>
</root>
```
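
For what it's worth, the DATA attributes above are easy to pull into the (frames, keypoints, 2) coordinate array that keypoint-based tools expect. A minimal sketch; the sqlite query that fetches the per-frame XML records is LMT-version-specific so it's left out, and the uniform confidence placeholder is an assumption (LMT reports no detection scores):

```python
import numpy as np
import xml.etree.ElementTree as ET

# bodypart names taken from the LMT XML attributes shown above
BODYPARTS = ['front', 'mass', 'back']

def parse_lmt_frame(xml_record):
    """Extract (x, y) for each bodypart from one LMT detection record."""
    data = ET.fromstring(xml_record).find('DATA')
    return [(float(data.get(bp + '_x')), float(data.get(bp + '_y')))
            for bp in BODYPARTS]

def to_keypoint_array(xml_records):
    """Stack per-frame records into a (frames, keypoints, 2) array."""
    coords = np.array([parse_lmt_frame(r) for r in xml_records])
    # LMT provides no detection confidences, so use a uniform placeholder
    confidences = np.ones(coords.shape[:2])
    return coords, confidences
```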

calebweinreb commented 1 year ago

It might be possible to use those inputs -- you would somehow need to derive additional keypoints from the masks -- but with such sparse input the model might not learn many more syllables than what LMT already provides. More promising would be to include a higher-res camera and/or better lighting.
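
If you do go the mask route, one cheap way to get an extra keypoint is to take the mask pixel farthest from the nose, which tends to land near the tail. A sketch with loud caveats: the boolMaskData field looks like colon-separated hex bytes, and its 78:5e prefix suggests zlib compression, but the row-major one-byte-per-pixel layout assumed below should be verified against the ROI2DAreaX plugin source before relying on it:

```python
import zlib
import numpy as np

def decode_lmt_mask(bool_mask_data, bounds_w, bounds_h):
    """Decode an LMT boolMaskData string into a boolean mask.
    Assumes zlib compression (suggested by the 78:5e prefix) and a
    row-major, one-byte-per-pixel layout -- check the plugin source."""
    raw = zlib.decompress(bytes(int(tok, 16) for tok in bool_mask_data.split(':')))
    return np.frombuffer(raw, dtype=np.uint8).reshape(bounds_h, bounds_w) != 0

def farthest_mask_point(mask, ref_xy):
    """Candidate extra keypoint: the mask pixel farthest from a reference
    point such as the nose (a crude proxy for the tail end)."""
    ys, xs = np.nonzero(mask)
    d2 = (xs - ref_xy[0]) ** 2 + (ys - ref_xy[1]) ** 2
    i = int(np.argmax(d2))
    # coordinates are mask-local; add boundsX/boundsY for arena coordinates
    return int(xs[i]), int(ys[i])
```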