grimoire / torch2trt_dynamic

A PyTorch-to-TensorRT converter with dynamic shape support

Is it possible to disable dynamic_shape feature? #9

Closed 111qqz closed 3 years ago

111qqz commented 3 years ago

Hi, thanks to this repo for saving me a lot of time running mmdet on the Nano. The problem is that I don't want to use TensorRT's dynamic_shape feature. I commented out the code below, but the output dims still come out like [-1, 1000].


        config = builder.create_builder_config()
        config.max_workspace_size = max_workspace_size
        # profile = builder.create_optimization_profile()

        # if input_names is None:
        #     input_names = ['input_%d' % i for i in range(len(inputs))]
        # for input_index, input_tensor in enumerate(inputs):
        #     if opt_shape_param is not None:
        #         min_shape = tuple(opt_shape_param[input_index][0][:])
        #         opt_shape = tuple(opt_shape_param[input_index][1][:])
        #         max_shape = tuple(opt_shape_param[input_index][2][:])
        #     else:
        #         opt_shape = tuple(input_tensor.shape)
        #         min_shape = opt_shape
        #         max_shape = opt_shape
        #     profile.set_shape(
        #         input_names[input_index], min_shape, opt_shape, max_shape)
        # config.add_optimization_profile(profile)

Is there any global flag to control this behavior? Thanks
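
For context, if the network was exported with dynamic input dimensions, TensorRT's explicit-batch API still requires an optimization profile, so removing it does not make the engine static. A common workaround is to keep the profile but pin min = opt = max to a single shape. A minimal sketch, assuming one input named `input_0` and a placeholder shape; the function name and arguments are illustrative, not this repo's API:

    def build_static_engine(builder, network, input_name='input_0',
                            shape=(1, 3, 224, 224),
                            max_workspace_size=1 << 25):
        # builder: tensorrt.Builder, network: tensorrt.INetworkDefinition
        config = builder.create_builder_config()
        config.max_workspace_size = max_workspace_size

        # Keep the optimization profile, but make min == opt == max so the
        # engine only accepts this one input shape.
        profile = builder.create_optimization_profile()
        profile.set_shape(input_name, shape, shape, shape)
        config.add_optimization_profile(profile)

        return builder.build_engine(network, config)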

grimoire commented 3 years ago

Hi. For now, the answer is no. I made this repo as a complement to the official torch2trt; if you don't need dynamic shapes, the official one should be enough. I will add this to my TODO list. Thanks for your suggestion.
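
A minimal sketch of the static-shape path with the official torch2trt, using a torchvision model purely as an example; the engine is built for the shape of the example input:

    import torch
    from torch2trt import torch2trt
    from torchvision.models import alexnet

    # Any eager-mode model plus one representative input tensor.
    model = alexnet(pretrained=True).eval().cuda()
    x = torch.ones((1, 3, 224, 224)).cuda()

    # The official converter builds a fixed-shape engine from the example input.
    model_trt = torch2trt(model, [x])

    y_trt = model_trt(x)  # same call signature as the original model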

111qqz commented 3 years ago

Thanks for your reply.