Closed ConradoMateu closed 1 year ago
How is this different from #853?
As I've explained before, this issue cannot be fixed in the coremltools repository. Submit this issue using the Feedback Assistant.
I opened this issue to give some visibility and provide a way to reproduce it, documented step by step.
I already submitted it using Feedback Assistant, but this bug has been happening for a long time. If there is a way you can give some visibility to the issue, that would be really helpful.
Here is the feedback ID: FB12180874 (CoreML: conv_transpose2d Cannot be traced: Input weight must be const at compile time.)
@TobyRoseman
Thanks in advance.
Thanks for the feedback id. I've looked up the internal issue and am now following it. I've also added further details to it. I'll do what I can to get that MIL op extended so the weight parameter doesn't need to be a constant.
I'm going to close this issue as a duplicate. #853 has much more concise code to reproduce the issue.
🐞 Describing the bug
Stack Trace
Tuple detected at graph output. This will be flattened in the converted model.
Converting PyTorch Frontend ==> MIL Ops:  84%|███████████████████████████████ | 518/619 [00:00<00:00, 3731.02 ops/s]
Error during Core ML conversion: ('Op "706" (op_type: conv_transpose) Input weight must be const at compile time', 'weight', 'wi_center')
converter.py
System environment:
Additional context
Here is the `ContextualAttention` class in `network.py` that calls `conv_transpose2d` and triggers the bug, in this line: `yi = F.conv_transpose2d(yi, wi_center, stride=self.rate, padding=1) / 4.`
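The failing pattern can be reproduced without the full network: any traced model whose transposed-convolution weight is computed from the input (rather than stored as a parameter) produces a graph where the weight is not a compile-time constant, which the Core ML `conv_transpose` MIL op rejects. Below is a minimal, hypothetical sketch of that pattern (the module and shapes are illustrative, not the actual `ContextualAttention` code):

```python
import torch
import torch.nn.functional as F

class DynamicDeconv(torch.nn.Module):
    """Hypothetical minimal module: the conv_transpose2d weight is derived
    from the input at runtime, so after tracing it is not a constant."""
    def forward(self, x):
        # Build a (in_ch=1, out_ch=1, 3, 3) kernel from the input itself,
        # mirroring how wi_center is computed dynamically in the real model.
        wi_center = x[:, :, :3, :3].reshape(1, 1, 3, 3)
        return F.conv_transpose2d(x, wi_center, stride=2, padding=1) / 4.0

x = torch.randn(1, 1, 8, 8)
traced = torch.jit.trace(DynamicDeconv(), x)
out = traced(x)
print(out.shape)  # torch.Size([1, 1, 15, 15])
```

Tracing itself succeeds; the failure only appears later, when `coremltools.convert(traced, ...)` reaches the `conv_transpose` op and finds a non-constant `weight` input, producing the "Input weight must be const at compile time" error from the stack trace above.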