ZexinChen / AlphaTracker

AlphaTracker is a computer vision pipeline with practical, real-time advantages: it has minimal hardware requirements and produces reliable tracking of multiple unmarked animals. An easy-to-use interface further enables manual inspection and curation of results.

Get NaN after step 9 (tracking / pose estimation) in Colab #21

Open YIDAIYAO opened 1 year ago

YIDAIYAO commented 1 year ago

```
/content/drive/My Drive/AlphaTracker/Tracking/AlphaTracker
Frame will be saved in /gdrive/result_folder/oriFrameFromVideo//trainvideo1/frame_folder/
extracting frames from video...
processing /gdrive/Sample_Data/trainvideo1.mp4
100% 1814/1814 [03:35<00:00, 8.41it/s]
getting demo image:
CUDA_VISIBLE_DEVICES='0' python3 demo.py \
    --nClasses 4 \
    --indir /gdrive/result_folder/oriFrameFromVideo//trainvideo1/frame_folder/ \
    --outdir /gdrive/result_folder \
    --yolo_model_path /gdrive/AlphaTracker/Tracking/AlphaTracker/train_yolo/darknet//backup/demo/yolov3-mice_final.weights \
    --yolo_model_cfg /gdrive/AlphaTracker/Tracking/AlphaTracker/train_yolo/darknet//cfg/yolov3-mice.cfg \
    --pose_model_path /gdrive/AlphaTracker/Tracking/AlphaTracker/train_sppe/exp/coco/demo/model_10.pkl \
    --use_boxGT 0
Loading YOLO model..
not using ground truth box to do the eval.
Loading pose model from /gdrive/AlphaTracker/Tracking/AlphaTracker/train_sppe/exp/coco/demo/model_10.pkl
0% 0/1814 [00:00<?, ?it/s]
/usr/local/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448278899/work/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
100% 1814/1814 [02:14<00:00, 13.45it/s]
===========================> Finish Model Running.
```

Tracking pose:

```
python ./PoseFlow/tracker-general-fixNum-newSelect-noOrb.py \
    --imgdir /gdrive/result_folder/oriFrameFromVideo//trainvideo1/frame_folder/ \
    --in_json /gdrive/result_folder/alphapose-results.json \
    --out_json /gdrive/result_folder/alphapose-results-forvis-tracked.json \
    --visdir /gdrive/result_folder/pose_track_vis/ --vis 1 \
    --image_format %s.png --max_pid_id_setting 2 --match 0 --weights 0 6 0 0 0 0 \
    --out_video_path /gdrive/result_folder/demo_2_0_060000.mp4
```

```
Start loading json file...
remove extract persons...
100% 26/26 [00:00<00:00, 773417.76it/s]
0% 0/26 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "./PoseFlow/tracker-general-fixNum-newSelect-noOrb.py", line 247, in <module>
    track[img_name][bid+1]['box_pos'] = [ int(notrack[img_name][bid]['box'][0]),\
ValueError: cannot convert float NaN to integer
```
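One way to narrow this down is to scan the intermediate results file for NaN values before running the tracker. Below is a minimal sketch, assuming the AlphaPose-style output named in the `--in_json` argument above (either a flat list of detections with `image_id`, `box`, and `keypoints` fields, or a dict keyed by image name; your layout may differ):

```python
import json
import math

# Path taken from the --in_json argument above; adjust for your run.
RESULTS_JSON = "/gdrive/result_folder/alphapose-results.json"

with open(RESULTS_JSON) as f:
    results = json.load(f)  # Python's json parses bare NaN tokens as float('nan')

# Normalize both layouts (list of detections, or dict keyed by image
# name) into one flat list of detection dicts.
if isinstance(results, list):
    entries = results
else:
    entries = [det for dets in results.values() for det in dets]

bad = 0
for entry in entries:
    # Collect every numeric field that the tracker later casts to int.
    values = list(entry.get("box", [])) + list(entry.get("keypoints", []))
    if any(isinstance(v, float) and math.isnan(v) for v in values):
        bad += 1
        print("NaN in entry for image:", entry.get("image_id", "<unknown>"))

print(f"{bad} of {len(entries)} entries contain NaN values")
```

If this reports NaN entries, the problem is upstream of the tracker, in the detection/pose step rather than in PoseFlow itself.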

laurelrr commented 1 year ago

Hello!

Thank you for your interest in AlphaTracker! Are you attempting to track on the demo video we provided or a different video? Could you first verify that the tracking works on our sample demo video: https://drive.google.com/file/d/1N0JjazqW6JmBheLrn6RoDTSRXSPp1t4K

Thank you!

Hizafa-Nadeem commented 1 year ago

Hi,

I am facing the same error on my own annotated data. I tried your sample data and sample video, and tracking worked. However, when I annotated your mice frames myself using the annotation tool, I received the same error during tracking. Could you please advise what I am doing wrong?

(screenshots attached)

laurelrr commented 1 year ago

I think this issue could be caused by not labeling all of the expected body points. For example, say you set the number of body parts to 4: you must then annotate 4 body points for both mice in sloth. You cannot have only 3 body-point annotations in the json for that image. A quick way to check your annotation file is sketched below.
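This is a minimal sketch, assuming a sloth-style annotation JSON (a list of per-image records whose `annotations` hold one box entry per animal plus one `point` entry per body part); the path, class names, and counts below are placeholders to adjust for your own setup:

```python
import json

# Placeholder values -- set these to match your own annotation setup.
ANNOTATION_JSON = "data/annotations.json"   # the sloth output file
NUM_ANIMALS = 2                             # mice expected per frame
NUM_BODY_PARTS = 4                          # body points expected per animal

with open(ANNOTATION_JSON) as f:
    images = json.load(f)

for record in images:
    annos = record.get("annotations", [])
    # Assuming one "point" entry per body part and one box entry per
    # animal; the class names may differ in your sloth template.
    points = [a for a in annos if a.get("class") == "point"]
    boxes = [a for a in annos if a.get("class") != "point"]
    expected_points = NUM_ANIMALS * NUM_BODY_PARTS
    if len(boxes) != NUM_ANIMALS or len(points) != expected_points:
        print(f"{record.get('filename', '<unknown>')}: "
              f"{len(boxes)} boxes, {len(points)} points "
              f"(expected {NUM_ANIMALS} and {expected_points})")
```

Any image this prints is a candidate for the missing-annotation problem described above; re-annotating those frames should let the tracking step run without producing NaN.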