import torch
import ultralytics
from aimet_torch.model_validator.model_validator import ModelValidator

model = ultralytics.YOLO('yolov5nu.pt')  # same result with any Ultralytics YOLO model
dummy_input = torch.rand(1, 3, 640, 640).cuda()
ModelValidator.validate_model(model.model.cuda(), model_input=dummy_input)
I get:
2024-06-14 14:31:56,134 - Utils - INFO - Running validator check <function validate_for_reused_modules at 0x72bd5a507370>
2024-06-14 14:31:56,153 - Utils - ERROR - The following modules are used more than once in the model: ['model.0.act', 'model.9.m']
AIMET features are not designed to work with reused modules. Please redefine your model to use distinct modules for each instance.
2024-06-14 14:31:56,154 - Utils - INFO - Running validator check <function validate_for_missing_modules at 0x72bd5a507400>
2024-06-14 14:31:56,594 - Utils - ERROR - A connected graph failed to be built. This may prevent from AIMET features from being able to run on the model. Please address the errors shown.
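The reused-modules error means the same `nn.Module` instance is called at more than one point in `forward` (here `model.0.act` and `model.9.m`). A minimal sketch of the kind of rewrite AIMET asks for, using a hypothetical block rather than the actual YOLO source:

```python
import torch
import torch.nn as nn

# Hypothetical block (not the real YOLO code): one activation instance
# is called twice in forward, which AIMET's validator flags as "reused".
class ReusedBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3, padding=1)
        self.conv2 = nn.Conv2d(8, 8, 3, padding=1)
        self.act = nn.SiLU()                 # single instance ...

    def forward(self, x):
        x = self.act(self.conv1(x))          # ... used here
        return self.act(self.conv2(x))       # ... and reused here

# Fixed version: a distinct module instance per call site, so AIMET can
# attach per-instance quantization state to each one.
class DistinctBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3, padding=1)
        self.conv2 = nn.Conv2d(8, 8, 3, padding=1)
        self.act1 = nn.SiLU()
        self.act2 = nn.SiLU()

    def forward(self, x):
        x = self.act1(self.conv1(x))
        return self.act2(self.conv2(x))
```

For the YOLO model itself you would apply the same pattern to the modules named in the log, either by editing the model definition or by programmatically replacing the shared instances with fresh copies before calling the validator.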
And when preparing the model, I get:
TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors
The model preparer issue comes from Torch FX tracing: symbolic tracing cannot iterate a Proxy, so any code path that loops over, unpacks, or splats (`*args`/`**kwargs`) a traced tensor must be rewritten before the model can be prepared.