HSqure / ultralytics-pt-yolov3-vitis-ai-edge

This demo is only intended for inference testing with Vitis AI v1.4 and for DPU quantization and compilation. It is compatible with training results from ultralytics YOLOv3 v9.5.0 (the model must be saved using the PyTorch 1.4-style saving method).
GNU General Public License v3.0

compile the model #3

Open shoayi opened 2 years ago

shoayi commented 2 years ago

Hi! When I run `python quant.py --quant_mode test --subset_len 1 --batch_size 1 --deploy`, I get this error:

```
[VAIQ_NOTE]: =>Quantizable module is generated.(quantize_result/Model.py)

[VAIQ_NOTE]: =>Get module with quantization.
200
  0%|          | 0/1 [00:00<?, ?it/s]
/opt/vitis_ai/conda/envs/vitis-ai-pytorch/lib/python3.7/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1639180518675/work/aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
100%|████████████████████| 1/1 [00:00<00:00, 3.40it/s]
Scanning 'data/downloaded/val.cache' images and labels... 0 found, 200 missing, 0 empty, 0 corrupted: 100%|████████████████████| 200/200 [00:00<?, ?it/s]
               Class      Images     Targets           P           R         mAP          F1
Computing mAP:   0%|          | 0/13 [00:00<?, ?it/s]

[VAIQ_WARN]: The tensor type of Model::input_0 is torch.uint8. Only support float32/double quantization.

[VAIQ_WARN]: The tensor type of Model::Model/Focus[model]/Focus[0]/9546 is torch.uint8. Only support float32/double quantization.

[VAIQ_WARN]: The tensor type of Model::Model/Focus[model]/Focus[0]/9556 is torch.uint8. Only support float32/double quantization.

[VAIQ_WARN]: The tensor type of Model::Model/Focus[model]/Focus[0]/9566 is torch.uint8. Only support float32/double quantization.

[VAIQ_WARN]: The tensor type of Model::Model/Focus[model]/Focus[0]/9576 is torch.uint8. Only support float32/double quantization.

[VAIQ_WARN]: The tensor type of Model::Model/Focus[model]/Focus[0]/input.1 is torch.uint8. Only support float32/double quantization.
Computing mAP:   0%|          | 0/13 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "quant.py", line 505, in <module>
    file_path=file_path)
  File "quant.py", line 473, in quantization
    register_buffers=register_buffers)
  File "quant.py", line 123, in test
    inf_out, train_out = model_with_post_precess(imgs, model, data_cfg, register_buffers)  # inference and training outputs
  File "quant.py", line 354, in model_with_post_precess
    for output in model(images):
  File "/opt/vitis_ai/conda/envs/vitis-ai-pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "quantize_result/Model.py", line 186, in forward
    output_module_1 = self.module_6(output_module_1)
  File "/opt/vitis_ai/conda/envs/vitis-ai-pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl
    result = forward_call(*input, **kwargs)
  File "/opt/vitis_ai/conda/envs/vitis-ai-pytorch/lib/python3.7/site-packages/pytorch_nndct/nn/modules/conv.py", line 115, in forward
    groups = self.groups)
RuntimeError: expected scalar type Byte but found Float
```

Can you help me? Thank you!

HSqure commented 1 year ago

> Hi! When I run `python quant.py --quant_mode test --subset_len 1 --batch_size 1 --deploy`, I get this error: [...] `RuntimeError: expected scalar type Byte but found Float` Can you help me? Thank you!

Hi @shoayi! Sorry, it's been a long time, so I've forgotten some details about this program. Here I take some parameters directly from the model file, which may cause compatibility issues.
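Judging from the log, the `[VAIQ_WARN]` messages and the final `RuntimeError` both point at the model input (`Model::input_0`) being `torch.uint8`, while the Vitis AI quantizer only supports float32/double. A common workaround is to cast the image batch to float32 before the forward pass. This is only a hedged sketch of that idea; `to_float_batch` is a hypothetical helper, not part of this repo's `quant.py`:

```python
import torch

def to_float_batch(imgs: torch.Tensor) -> torch.Tensor:
    """Convert a uint8 image batch (values 0-255) to float32 in [0, 1].

    Hypothetical preprocessing fix: the quantized model's conv layers
    expect float inputs, so a raw uint8 batch triggers
    "RuntimeError: expected scalar type Byte but found Float".
    """
    if imgs.dtype == torch.uint8:
        imgs = imgs.float() / 255.0  # cast to float32 and normalize
    return imgs

# Example: a fake 640x640 RGB batch as the dataloader might yield it.
batch = torch.randint(0, 256, (1, 3, 640, 640), dtype=torch.uint8)
batch = to_float_batch(batch)
print(batch.dtype)  # torch.float32
```

If the dataloader in `quant.py` yields uint8 tensors, applying a cast like this right before `model(images)` (around the line shown in the traceback) should clear both the warnings and the dtype mismatch.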