marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

Reshaping failed for tensor #352

Closed. HoangTienDuc closed this issue 1 year ago

HoangTienDuc commented 1 year ago

• Hardware Platform (Jetson / GPU): both
• DeepStream Version: 5.1 on GPU, 6.2 on Jetson
• JetPack Version (valid for Jetson only): 5.1

Hi @marcoslucianops, thank you for your great work. After re-parameterizing my YOLOv7 model and exporting it to ONNX, I tried to run it on DeepStream. However, I got the error below. Please help me check it.

WARNING: [TRT]: onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
ERROR: [TRT]: 4: [shapeCompiler.cpp::evaluateShapeChecks::832] Error Code 4: Internal Error (kOPT values for profile 0 violate shape constraints: reshape would change volume. IShuffleLayer /0/model.105/Reshape: reshaping failed for tensor: /0/model.105/m.0/Conv_output_0)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1119 Build engine failed from config file
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:811 failed to build trt engine.
0:00:04.085174714 21333 0x7fa9142cfad0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:04.103121539 21333 0x7fa9142cfad0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:04.103153087 21333 0x7fa9142cfad0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:04.103186529 21333 0x7fa9142cfad0 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
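
The "reshape would change volume" message usually means a Reshape in the detection head has a hard-coded target shape (class count, grid size, or batch) that does not match the tensor TensorRT actually sees for the kOPT dimensions of the optimization profile. A minimal inspection sketch, assuming the onnx Python package is installed and that yolov7.onnx is the exported file (both are placeholders, not names from this thread):

    from onnx import load, numpy_helper, shape_inference

    m = shape_inference.infer_shapes(load('yolov7.onnx'))

    # inferred shapes of the tensors around the detection head (model.105)
    for vi in list(m.graph.value_info) + list(m.graph.output):
        if 'model.105' in vi.name:
            dims = [d.dim_value if d.dim_value else d.dim_param
                    for d in vi.type.tensor_type.shape.dim]
            print(vi.name, dims)

    # constant target shapes of the head Reshape nodes (only covers shapes stored
    # as initializers; shapes produced by Constant nodes are not printed here)
    consts = {init.name: init for init in m.graph.initializer}
    for node in m.graph.node:
        if node.op_type == 'Reshape' and 'model.105' in node.name:
            shape_input = node.input[1]
            if shape_input in consts:
                print(node.name, '->', numpy_helper.to_array(consts[shape_input]))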
marcoslucianops commented 1 year ago

The current version of DeepStream-Yolo doesn't work on DeepStream 5.1.

Are you using the export_yoloV7.py from the utils folder?

HoangTienDuc commented 1 year ago

Yes. I followed your documentation.

marcoslucianops commented 1 year ago

Can you send your pytorch model to my email for testing?

HoangTienDuc commented 1 year ago

Hi @marcoslucianops, thanks for your reply. The .pt model was sent to your email marcoslucianops@gmail.com.

marcoslucianops commented 1 year ago

I think you are using the wrong export file. On my side, the export and the engine build worked normally.
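
One way to separate an ONNX problem from a DeepStream config problem is to run the engine build directly with the TensorRT Python API and see where it stops. A rough sketch, assuming TensorRT 8.x (as shipped with DeepStream 6.2); yolov7.onnx and the 640x640 profile shapes are placeholders that should match the actual export:

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open('yolov7.onnx', 'rb') as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise SystemExit('ONNX parse failed')

    config = builder.create_builder_config()
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # TRT >= 8.4

    # a dynamic batch axis needs an optimization profile; these are the kMIN/kOPT/kMAX
    # values referenced by the "kOPT values for profile 0" error above
    inp = network.get_input(0)
    dims = [inp.shape[i] for i in range(len(inp.shape))]
    if -1 in dims:
        profile = builder.create_optimization_profile()
        profile.set_shape(inp.name, (1, 3, 640, 640), (1, 3, 640, 640), (1, 3, 640, 640))
        config.add_optimization_profile(profile)

    serialized = builder.build_serialized_network(network, config)
    print('engine built' if serialized else 'engine build failed (see log above)')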

HoangTienDuc commented 1 year ago

Hi @marcoslucianops, I really appreciate your help. The ONNX models and the environment details were sent to you.

1. After getting the .pt model, I re-parameterize it (a quick shape check for the result is sketched after this list):

    from copy import deepcopy
    from models.yolo import Model
    import torch
    from utils.torch_utils import select_device, is_parallel
    import yaml

    device = select_device('0', batch_size=1)

    # model trained by cfg/training/*.yaml
    ckpt = torch.load('yolov7.pt', map_location=device)

    # reparameterized model in cfg/deploy/*.yaml
    model = Model('cfg/deploy/yolov7.yaml', ch=3, nc=5).to(device)

    with open('cfg/deploy/yolov7.yaml') as f:
        yml = yaml.load(f, Loader=yaml.SafeLoader)
    anchors = len(yml['anchors'][0]) // 2

    # copy intersect weights
    state_dict = ckpt['model'].float().state_dict()
    exclude = []
    intersect_state_dict = {k: v for k, v in state_dict.items()
                            if k in model.state_dict()
                            and not any(x in k for x in exclude)
                            and v.shape == model.state_dict()[k].shape}
    model.load_state_dict(intersect_state_dict, strict=False)
    model.names = ckpt['model'].names
    model.nc = ckpt['model'].nc

    # reparameterized YOLOR (fold the implicit layers into the detection convs)
    for i in range((model.nc + 5) * anchors):
        model.state_dict()['model.105.m.0.weight'].data[i, :, :, :] *= state_dict['model.105.im.0.implicit'].data[:, i, ::].squeeze()
        model.state_dict()['model.105.m.1.weight'].data[i, :, :, :] *= state_dict['model.105.im.1.implicit'].data[:, i, ::].squeeze()
        model.state_dict()['model.105.m.2.weight'].data[i, :, :, :] *= state_dict['model.105.im.2.implicit'].data[:, i, ::].squeeze()
    model.state_dict()['model.105.m.0.bias'].data += state_dict['model.105.m.0.weight'].mul(state_dict['model.105.ia.0.implicit']).sum(1).squeeze()
    model.state_dict()['model.105.m.1.bias'].data += state_dict['model.105.m.1.weight'].mul(state_dict['model.105.ia.1.implicit']).sum(1).squeeze()
    model.state_dict()['model.105.m.2.bias'].data += state_dict['model.105.m.2.weight'].mul(state_dict['model.105.ia.2.implicit']).sum(1).squeeze()
    model.state_dict()['model.105.m.0.bias'].data *= state_dict['model.105.im.0.implicit'].data.squeeze()
    model.state_dict()['model.105.m.1.bias'].data *= state_dict['model.105.im.1.implicit'].data.squeeze()
    model.state_dict()['model.105.m.2.bias'].data *= state_dict['model.105.im.2.implicit'].data.squeeze()

    # model to be saved
    ckpt = {'model': deepcopy(model.module if is_parallel(model) else model).half(),
            'optimizer': None,
            'training_results': None,
            'epoch': -1}

    # save reparameterized model
    torch.save(ckpt, 'yolov7_reparameter.pt')

2. I convert the re-parameterized .pt model to ONNX using export_yoloV7.py.
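
Not part of the original steps, but a quick sanity check that can be run between steps 1 and 2 (this is the shape check referenced above): confirm that the reparameterized checkpoint's detection-head convolutions have (nc + 5) * anchors output channels before exporting to ONNX, since a mismatch there is one common way the model.105 Reshape ends up failing at engine-build time. A minimal sketch; it assumes the yolov7_reparameter.pt file name from step 1 and must be run from inside the YOLOv7 repo so the pickled model classes resolve:

    import torch

    # load the checkpoint saved by the reparameterization snippet above
    ckpt = torch.load('yolov7_reparameter.pt', map_location='cpu')
    model = ckpt['model'].float()

    # 3 anchors per scale for the standard YOLOv7 head (assumption, check your yaml)
    expected = (model.nc + 5) * 3
    for name, w in model.state_dict().items():
        if name.startswith('model.105.m.') and name.endswith('.weight'):
            # Conv weights are (out_channels, in_channels, kH, kW)
            print(name, tuple(w.shape), 'expected out_channels:', expected)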
HoangTienDuc commented 1 year ago

Thank you so much @marcoslucianops, this solved the problem.