KumarLabJax / deep-hrnet-mouse


Is there a detailed user guide? #3

Open longddd-blink opened 1 year ago

longddd-blink commented 1 year ago

Which of the tools/ scripts should I run to generate pose_est_v2 files for JABS?

SkepticRaven commented 1 year ago

The command to run a trained model on a video should look like this:

python3 -u /pose-est-env/deep-hres-net/tools/infermousepose.py --model-file "/pose-est-env/pose-est-model.pth" "/pose-est-env/pose-est-conf.yaml" "[Input_video.avi]" "[Output_pose_est_v2.h5]"

This is the command when using our singularity image, which includes this code repository and our trained model inside it. If you aren't using the singularity image, deep-hres-net/tools/infermousepose.py corresponds to tools/infermousepose.py in this repository.
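
If you have a whole directory of videos, a simple shell loop over the same command works. This is only a sketch assuming the singularity image paths above; the input directory and output naming are placeholders:

# Sketch only: run the pose network on every .avi in a directory, writing
# a matching *_pose_est_v2.h5 next to each video. Adjust paths as needed.
for vid in /path/to/videos/*.avi; do
    python3 -u /pose-est-env/deep-hres-net/tools/infermousepose.py \
        --model-file "/pose-est-env/pose-est-model.pth" \
        "/pose-est-env/pose-est-conf.yaml" \
        "$vid" \
        "${vid%.avi}_pose_est_v2.h5"
done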

JJjjxx commented 1 year ago

If I have prepared a mouse gait video, test.avi, how can I use this model to start behavioral prediction? Are there any specific steps?

SkepticRaven commented 1 year ago

@jacobbeierle has been working on a STAR Protocols report for going from video to gait, which we will link in this repository when it's ready.

The major steps are:

  1. video -> pose (the command in the comment above)
  2. pose -> gait, using this repository: python3 gengaitstats.py --batch-file ~/my-experiment-batch.txt --root-dir ~/my-experiment-pose-dir --out-file ~/my-experiment-gait.h5
  3. gait -> gait summaries (same repository as in step 2): python summarizegaitcsv.py --gait-h5 ~/my-experiment-gait.h5 --speed speed-bin > ~/filename.csv

There are a number of other minor steps to make sure your videos work well with our pose network, alongside a variety of tools related to the gait analysis. Right now all of the options are documented behind python <script.py> --help. I'll be sure to ping this thread when the STAR Protocols document becomes public.
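
For step 2, one way to assemble the batch file is to list the pose files under the root directory. The exact format gengaitstats.py expects is something to confirm via its --help output; this sketch assumes a plain-text list of pose file paths, one per line, relative to --root-dir:

# Sketch: build my-experiment-batch.txt as a plain-text list of pose
# files relative to --root-dir. Confirm the expected format with
# python3 gengaitstats.py --help before relying on this.
cd ~/my-experiment-pose-dir
find . -name '*_pose_est_v2.h5' | sed 's|^\./||' > ~/my-experiment-batch.txt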

JJjjxx commented 1 year ago

Thanks for the instructions. So in the first step I need to use the deep-hrnet-mouse repository to generate Output_pose_est_v2.h5 from Input_video.avi. I used the command from your first comment, but I ran into a dimension-mismatch error. What pre-processing should be done on my input videos?

SkepticRaven commented 1 year ago

I wasn't the original author of this repository, so I'm not as experienced with troubleshooting the errors it produces. The video data we feed into it is 480x480, though I don't know whether that is a hard requirement of the network.

We use ffmpeg to pre-process the video, but any other software that can crop and scale should work:

ffmpeg -i <input.avi> -c:v mpeg4 -q 0 -vf 'scale=480:480' <output.avi>

Note that if your input video is not square, that command will stretch it. To avoid that, add a crop operation before the scale (e.g. -vf 'crop=out_w:out_h:x:y,scale=480:480').
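
As a concrete illustration (the 640x480 input resolution here is just an example), a centered square crop followed by the scale would be:

# Example only: 640x480 input, centered 480x480 crop.
# x offset = (640 - 480) / 2 = 80, y offset = 0.
# The trailing scale is a no-op here but keeps the filter chain general.
ffmpeg -i input_640x480.avi -c:v mpeg4 -q 0 -vf 'crop=480:480:80:0,scale=480:480' output_480x480.avi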

If that doesn't help, please post the exact command and the error log, along with the pixel resolution of the video you're trying to use.

JJjjxx commented 1 year ago

Thank you. It's very helpful!