czg1225 / SlimSAM

SlimSAM: 0.1% Data Makes Segment Anything Slim
Apache License 2.0

AttributeError: 'DataParallel' object has no attribute 'img_size' #20

Closed: huangshilong911 closed this 1 month ago

huangshilong911 commented 1 month ago

Hi, I'm using torch2trt for model conversion, and I'm getting the following error when converting a .pth file to a .engine file. Converting another network's .pth worked fine previously. Could this be caused by a network-structure or parameter mismatch introduced when I trained the model?

I should also mention that the problematic .pth file was pruned. Could the pruning operation have left parameters missing or null, causing the error? The trained .pth file behaves normally during inference; the problem appears only during model conversion. Could it be that training and conversion place different requirements on the parameters and other contents of the .pth file?

```
Traceback (most recent call last):
  File "convert-sam-trt.py", line 90, in <module>
    model_trt = torch2trt(model, [batched_input, multimask_output], fp16_mode=True, strict_type_constraints=True)
  File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch2trt-0.5.0-py3.8.egg/torch2trt/torch2trt.py", line 558, in torch2trt
    outputs = module(*inputs)
  File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1111, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/jetson/Workspace/aicam/ircamera/segment_anything_kd/modeling/sam.py", line 97, in forward
    input_images = torch.stack([self.preprocess(x["image"]) for x in batched_input], dim=0)
  File "/home/jetson/Workspace/aicam/ircamera/segment_anything_kd/modeling/sam.py", line 97, in <listcomp>
    input_images = torch.stack([self.preprocess(x["image"]) for x in batched_input], dim=0)
  File "/home/jetson/Workspace/aicam/ircamera/segment_anything_kd/modeling/sam.py", line 171, in preprocess
    padh = self.image_encoder.img_size - h
  File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'DataParallel' object has no attribute 'img_size'
```
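For context on the failure mode: `nn.DataParallel` wraps a module and only forwards parameters, buffers, and submodules, not plain Python attributes, so attributes like `img_size` defined on the wrapped module must be reached through `.module`. A minimal sketch (the `Encoder` class here is a hypothetical stand-in, not SlimSAM's actual encoder):

```python
import torch.nn as nn

# A tiny module with a plain attribute, standing in for SlimSAM's image encoder.
class Encoder(nn.Module):
    def __init__(self, img_size=1024):
        super().__init__()
        self.img_size = img_size

encoder = Encoder()
wrapped = nn.DataParallel(encoder)

print(wrapped.module.img_size)  # access via .module works

try:
    wrapped.img_size  # plain attributes are not forwarded by the wrapper
except AttributeError as e:
    print(e)  # same error as in the traceback above
```

This is why the checkpoint works for inference (its own code calls into the wrapped module correctly) but fails during conversion, where `preprocess` reads `self.image_encoder.img_size` directly on the wrapper.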

czg1225 commented 1 month ago

Hi @huangshilong911, I'm not very familiar with TensorRT conversion, but it seems you didn't unwrap the model from DataParallel back to a normal module. Using the following code may solve the problem:

```python
SlimSAM_model = torch.load(<model_path>)
SlimSAM_model.image_encoder = SlimSAM_model.image_encoder.module
```
huangshilong911 commented 1 month ago

Thanks for the answer; the problem was solved after following your advice.