junxnone opened 5 years ago
Setup the Environment

. venv/bin/activate
Downloads

cd data/bitvehicle
unrar x BITVehicle_Dataset.rar
mv BITVehicle_Dataset images
Pre-processing dataset

python ./tools/downscale_images.py -target_size 512 ./data/bitvehicle/images
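The downscale step resizes each image so that its longer side is at most the target size. A minimal sketch of that size computation (the helper name and the exact behaviour of downscale_images.py are assumptions, not the tool's actual code):

```python
def downscaled_size(width, height, target_size=512):
    """Compute new (width, height) so the longer side equals target_size,
    preserving aspect ratio. Images already small enough are left as-is."""
    longer = max(width, height)
    if longer <= target_size:
        return width, height
    scale = target_size / longer
    return round(width * scale), round(height * scale)

# Example: a 1920x1080 frame becomes 512x288.
print(downscaled_size(1920, 1080))  # (512, 288)
```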
Modify config.py to specify the GPU to use.
vi vlp/config.py
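The exact option name depends on the toolbox version; one common way to pin a GPU in a TensorFlow config script is the CUDA_VISIBLE_DEVICES environment variable. The snippet below is a sketch of that approach, not the toolbox's actual config keys:

```python
import os

# Restrict TensorFlow to the first GPU; use "" to force CPU-only training.
# (Hypothetical placement -- check vlp/config.py for the toolbox's own GPU field.)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```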
Training

python3 train.py vlp/config.py
The checkpoint will be saved to training_toolbox/ssd_detector/vlp/model/model.ckpt.
Evaluation

python3 eval.py vlp/config.py
Inference

python3 infer.py --json --input=../../data/bitvehicle/bitvehicle_test.json --show vlp/config.py
This displays the inference results on the test images.
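The exact schema of bitvehicle_test.json isn't shown here; assuming a COCO-style layout with `images` and `annotations` arrays (a labeled assumption, the real keys may differ), a sketch of grouping the ground-truth boxes by image:

```python
# Hypothetical COCO-style structure -- the real bitvehicle_test.json schema
# may differ; adjust the keys accordingly.
sample = {
    "images": [{"id": 1, "file_name": "vehicle_0000001.jpg"}],
    "annotations": [{"image_id": 1, "category_id": 1,
                     "bbox": [100, 80, 200, 150]}],  # x, y, width, height
}

def boxes_per_image(dataset):
    """Group annotation bboxes by image file name."""
    names = {img["id"]: img["file_name"] for img in dataset["images"]}
    grouped = {}
    for ann in dataset["annotations"]:
        grouped.setdefault(names[ann["image_id"]], []).append(ann["bbox"])
    return grouped

print(boxes_per_image(sample))  # {'vehicle_0000001.jpg': [[100, 80, 200, 150]]}
```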
Export OpenVINO format model

python3 export.py vlp/config.py /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py
$ tree vlp/model/ie_model/
vlp/model/ie_model/
├── checkpoint
├── graph.bin
├── graph.ckpt.data-00000-of-00001
├── graph.ckpt.index
├── graph.mapping
├── graph.pb
├── graph.pb.frozen
├── graph.pbtxt
├── graph.tfmo.json
└── graph.xml
Inference with OpenVINO

python3 object_detection_demo_ssd_async.py -m vlp/model/ie_model/graph.xml -l /opt/intel/computer_vision_sdk/inference_engine/lib/ubuntu_16.04/intel64/libcpu_extension_avx2.so -i xxx.mp4
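The demo draws boxes from the SSD DetectionOutput blob, which is conventionally shaped [1, 1, N, 7] with each row holding [image_id, label, confidence, x_min, y_min, x_max, y_max] in normalized coordinates. A plain-Python sketch of the confidence filtering such a demo performs (the threshold value and function name are assumptions):

```python
def filter_detections(detections, frame_w, frame_h, conf_threshold=0.5):
    """Keep detections above the threshold and scale the normalized
    box corners to pixel coordinates."""
    results = []
    for image_id, label, conf, xmin, ymin, xmax, ymax in detections:
        if conf < conf_threshold:
            continue
        results.append((int(label), conf,
                        int(xmin * frame_w), int(ymin * frame_h),
                        int(xmax * frame_w), int(ymax * frame_h)))
    return results

# Two raw rows: one confident vehicle detection, one low-confidence noise row.
raw = [(0, 1, 0.92, 0.10, 0.20, 0.50, 0.80),
       (0, 2, 0.03, 0.00, 0.00, 0.10, 0.10)]
print(filter_detections(raw, 1920, 1080))  # [(1, 0.92, 192, 216, 960, 864)]
```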