tusharsangam / TransVisDrone

MIT License

Setup and inference #11

Closed yumurtaci closed 2 weeks ago

yumurtaci commented 3 months ago

Dear developers, thanks a lot for your contribution. I need some help with the setup.

Could you please tell me which Ubuntu and Python versions are required / tested? Once the setup is complete, how can I run inference.py on a custom video?

Thanks!

tusharsangam commented 3 months ago

Hi @yumurtaci, we appreciate your interest in our work. We developed our system on a Slurm cluster, which typically runs a Linux-based OS, so it is hard to say exactly which distribution it was. We therefore recommend installing via Anaconda: take a look at pytorch-ampere.yml, use it to create your environment, and install the remaining dependencies with `pip install -r requirements.txt`.

Preprocess To test your custom video/dataset, preprocess it the same way as the NPS dataset (see NPS dataset.yml). You can extract frames and store their labels using the NPS preprocessing script; if you don't have labels, generate a random or fixed label for each frame (and ignore mAP and similar metrics in that case). Note that the preprocessing files linked above live on a different branch. Match the directory structure of the NPS dataset.
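If you have frames but no ground truth, the "fixed label" workaround could look like the sketch below. It is only an illustration, not the repo's preprocessing script: the YOLO-style one-box-per-line format and the function name `write_fixed_labels` are assumptions, and the fixed box is a dummy whose only purpose is to satisfy the data loader.

```python
import os

def write_fixed_labels(frames_dir, labels_dir, fixed_line="0 0.5 0.5 0.1 0.1"):
    """Write one placeholder YOLO-style label file per frame.

    The fixed box (class 0, centered, 10% of image size) is a dummy:
    with no real ground truth, mAP-style metrics are meaningless.
    """
    os.makedirs(labels_dir, exist_ok=True)
    for frame in sorted(os.listdir(frames_dir)):
        stem, ext = os.path.splitext(frame)
        if ext.lower() not in {".jpg", ".jpeg", ".png"}:
            continue  # skip anything that is not an image frame
        with open(os.path.join(labels_dir, stem + ".txt"), "w") as f:
            f.write(fixed_line + "\n")
```

For example, calling `write_fixed_labels("val/frames", "val/annotations")` produces `000001.txt`, `000002.txt`, … next to nothing else, one per extracted frame.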

Val.py Run val.py as shown in submit-test.slurm, passing your custom dataset .yml as the data argument.

Weights Choosing the right weights: since these models were trained on three different datasets, we provide three checkpoints; please try all three to find the one that works best for you. You can also train the model from scratch on your own dataset, or fine-tune from a provided checkpoint (useful if you have little data).
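Trying all three checkpoints can be scripted as a simple loop that rebuilds the val.py invocation per weight file. The checkpoint filenames below are placeholders (substitute the actual released weights), and the exact flag set should be copied from submit-test.slurm; this just shows the pattern.

```python
import shlex

# Placeholder names -- replace with the three released checkpoints.
CHECKPOINTS = ["NPS_best.pt", "FLDrones_best.pt", "AOT_best.pt"]

def build_val_command(weights, data_yaml="NPS_custom.yaml", img=1280, frames=5):
    """Assemble a val.py command line, mirroring submit-test.slurm's style."""
    return [
        "python", "val.py",
        "--data", data_yaml,
        "--weights", weights,
        "--img", str(img),
        "--num-frames", str(frames),
    ]

for ckpt in CHECKPOINTS:
    # Print each command; pass the list to subprocess.run to execute it.
    print(shlex.join(build_val_command(ckpt)))
```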

yumurtaci commented 3 months ago

Dear @tusharsangam thank you very much for the explanation. I managed to create a virtual environment with the right dependencies and also generated the images from the videos using the scripts in the reproduce folder.

However, I couldn't figure out exactly what the folder structure should look like to run inference on the NPS dataset. Here is my folder structure:

When I run the following command

python inference.py --data /home/user/TransVisDrone/data/NPS_custom.yaml \
--weights /home/user/best.pt \
--batch-size 2 --img 1280 --num-frames 5 \
--project ./runs/inference/NPS/image_size_1280_temporal_YOLO5l_5_frames_NPS_end_to_end_skip_0 \
--name best_augment_full_save \
--task inference --exist-ok --save-aot-predictions

with the following yaml file:

# Train/val/test sets as 1) dir: path/to/imgs, 
# 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
#path: /home/tu666280/NPS-Data-Uncompressed/AllFrames  # dataset root dir
path: /home/user/DroneDataset/AllFrames
train: train  # train images (relative to 'path')  6471 images
val: val  # val images (relative to 'path')  548 images
test: test  # test images (optional)  1610 images
inference: val
#annotation_path: /home/tu666280/NPSvisdroneStyle
annotation_path: /home/user/NPSvisdroneStyle
annotation_train: train/annotations
annotation_val: val/annotations
annotation_test: test/annotations
#video_root_path: /home/tu666280/NPS/Videos
video_root_path: /home/user/DroneDataset/Videos
video_root_path_train: train
video_root_path_val: val
video_root_path_test: test
video_root_path_inference: val

# Classes
nc: 1  # number of classes
names: ['drone']
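To rule out path problems before debugging the script itself, the directories implied by the yaml can be sanity-checked. The key names below come from the yaml above, but how val.py actually joins them is an assumption, so treat this as a layout check, not a reproduction of the loader.

```python
import os

def expected_paths(cfg, split="val"):
    """Resolve frame, annotation, and video dirs for one split from a
    dataset yaml that has already been parsed into a dict."""
    return {
        "frames": os.path.join(cfg["path"], cfg[split]),
        "annotations": os.path.join(cfg["annotation_path"],
                                    cfg[f"annotation_{split}"]),
        "videos": os.path.join(cfg["video_root_path"],
                               cfg[f"video_root_path_{split}"]),
    }

cfg = {
    "path": "/home/user/DroneDataset/AllFrames",
    "val": "val",
    "annotation_path": "/home/user/NPSvisdroneStyle",
    "annotation_val": "val/annotations",
    "video_root_path": "/home/user/DroneDataset/Videos",
    "video_root_path_val": "val",
}
for name, path in expected_paths(cfg).items():
    # isdir() tells you immediately which expected directory is missing.
    print(name, path, os.path.isdir(path))
```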

Instead of running inference, the script falls back into a training run that eventually fails.

(screenshot: TransVisDrone_inference_error)

I really appreciate your help. Thanks in advance!