Open dungng27 opened 1 year ago

Hi, I'm trying to quantize and compile a PyTorch model with some Aten operations that are not yet supported by Vitis-AI. Specifically, I'm deploying the model (quantizing it in test mode), but some errors occurred.
Here is the script I run to quantize and deploy the model:
with the command:
I'm using Vitis-AI 2.5 for stability. I read the docs about Register Custom Operation, but I don't know how to apply that workflow to register these custom Aten ops. Could someone show me how? Many thanks.
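For reference, a typical pytorch_nndct test-mode flow looks roughly like this (a generic sketch, not the script from this post; the model, input shape, and export options are placeholders):

```python
import torch
from pytorch_nndct.apis import torch_quantizer

device = torch.device("cpu")
# Placeholder model; substitute the actual network being quantized.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # placeholder input shape

# quant_mode="test" evaluates the quantized model and allows xmodel export.
quantizer = torch_quantizer("test", model, (dummy_input,), device=device)
quant_model = quantizer.quant_model

quant_model(dummy_input)  # at least one forward pass before export
quantizer.export_xmodel(deploy_check=False)
```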
It turns out that I can rewrite these ops using torch functions rather than tensor methods, and register those. The problem is solved!
This is just a temporary fix and not applicable to other ops. I wonder if there is a better work around.
@dungng27
It turns out that I can rewrite these ops using torch functions rather than tensor methods, and register those. The problem is solved!
Can you elaborate a little more on this with a dummy example? I'd really appreciate it.
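Presumably the workaround looks something like the following (a sketch, not dungng27's actual code; the tensor shape and split sizes are made-up placeholders):

```python
import torch

x = torch.randn(1, 84, 8400)  # placeholder shape

# Tensor-method form, traced as aten::split_with_sizes:
# boxes, scores = x.split([4, 80], dim=1)

# Equivalent torch-function form, which can then be wrapped in a
# small module and registered as a custom op for the quantizer:
boxes, scores = torch.split(x, [4, 80], dim=1)
```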
I'm also stumped on this, with the specific operators:
aten::meshgrid
aten::split_with_sizes
Did you manage to add validation for the quantized model from MMEngine? Can you share it, please?
I'm also stumped on this, with the specific operators:
aten::meshgrid
aten::split_with_sizes
Hello, I've also encountered the same problem as yours. Have you solved it? Looking forward to your reply!
Hi, you have to replace these operations in your model definition with supported ones.
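For instance, a torch.meshgrid call can be rebuilt from view/expand, which lower to widely supported ops (a sketch assuming the default "ij" indexing):

```python
import torch

def meshgrid_ij(a, b):
    # Drop-in replacement for torch.meshgrid(a, b) with "ij" indexing,
    # using only view/expand.
    grid_a = a.view(-1, 1).expand(a.numel(), b.numel())
    grid_b = b.view(1, -1).expand(a.numel(), b.numel())
    return grid_a, grid_b

ys, xs = torch.arange(4.0), torch.arange(3.0)
gy, gx = meshgrid_ij(ys, xs)  # both (4, 3), matching torch.meshgrid(ys, xs)
```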
Thanks for your kind reply! But I still have a problem: I found meshgrid in the make_anchors function, but I can't work out where split_with_sizes is. Could you tell me where aten::split_with_sizes is used?
Thanks.
Well, I'm not sure which model and repository version you are using. Perhaps try PyCharm's "Find in Files" option with just "split_with_sizes". In my case, I used RTMDet-Ins in MMDetection, but "split_with_sizes" occurred in post-processing, which the quantizer doesn't put in the model anyway. So everything after the model's forward function you'd have to implement yourself.
Thanks a lot! Actually, I am using Vitis-AI to quantize a YOLOv8 model; I will try to find the op in post-processing.
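For what it's worth, YOLOv8-style post-processing typically separates box and class predictions with a call like preds.split((4, nc), dim=1); plain slicing produces the same result without tracing to aten::split_with_sizes (a sketch; the sizes 4 and 80 are placeholders for a COCO-style model, and the exact sizes depend on where in the pipeline the split happens):

```python
import torch

preds = torch.randn(1, 84, 8400)  # placeholder: 4 box dims + 80 classes

# Traced as aten::split_with_sizes:
# boxes, scores = preds.split((4, 80), dim=1)

# Same result with plain slicing:
boxes = preds[:, :4, :]
scores = preds[:, 4:, :]
```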