AustinStarnes opened 1 year ago
`upsample_bicubic2d` is now supported.

@TobyRoseman The commit you refer to is for `upsample_bilinear2d`; the op name in torch is `upsample_bicubic2d`.
I got the same error with coremltools==7 and torch==2.0.0 when using interpolate with bicubic mode:

```
PyTorch convert function for op 'upsample_bicubic2d' not implemented
```
@darrenxyli - you're correct. Sorry for the confusion. Reopening this issue.
Any update on this? It would be very useful, especially `upsample_bicubic2d_aa` (bicubic with anti-aliasing), as it would be the closest to what PIL does.
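For context on what the missing op computes: bicubic resampling is commonly implemented with the Keys cubic convolution kernel, where PyTorch and OpenCV are generally understood to use a = -0.75 while PIL's BICUBIC uses a = -0.5 (the anti-aliased variant additionally widens the kernel support when downscaling). A minimal pure-Python sketch of that kernel, under those assumptions:

```python
def cubic_kernel(x, a=-0.75):
    """Keys cubic convolution kernel. a=-0.75 is believed to match
    PyTorch/OpenCV; a=-0.5 matches PIL's BICUBIC filter."""
    x = abs(x)
    if x < 1.0:
        return (a + 2.0) * x**3 - (a + 3.0) * x**2 + 1.0
    if x < 2.0:
        return a * x**3 - 5.0 * a * x**2 + 8.0 * a * x - 4.0 * a
    return 0.0


def bicubic_weights(t, a=-0.75):
    """Weights for the 4 source taps around a fractional position t in [0, 1)."""
    return [
        cubic_kernel(t + 1.0, a),
        cubic_kernel(t, a),
        cubic_kernel(1.0 - t, a),
        cubic_kernel(2.0 - t, a),
    ]


# The kernel interpolates exactly at integer offsets, and the four tap
# weights always sum to 1 (partition of unity), for any a.
print(bicubic_weights(0.5), sum(bicubic_weights(0.5)))
```

This is only illustrative of the math; it is not how coremltools or PyTorch structure their implementations.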
An update to the code you provided to include anti-aliasing:

```python
import torch
from torchvision.transforms import InterpolationMode, Resize

import coremltools as ct


class Net(torch.nn.Module):
    def forward(self, img):
        return Resize((336, 336), InterpolationMode.BICUBIC, antialias=True)(img)


model = Net()
model.eval()

example_input = torch.rand(1, 3, 112, 112)
traced_model = torch.jit.trace(model, example_input)
out = traced_model(example_input)

model = ct.convert(
    traced_model,
    inputs=[ct.TensorType(shape=example_input.shape)]
)
```
And the error it produces:

```
PyTorch convert function for op '_upsample_bicubic2d_aa' not implemented.
```
Does anyone know when coremltools will support `upsample_bicubic2d`? @TobyRoseman, @zhengweix
Another upvote for this feature! I am experimenting with wuerstchen and cascade models that depend on decent up/down sampling with anti-aliasing. All related tickets are being closed; is this still in the feature bin? Perhaps there exists an alternative conversion method?
@TobyRoseman @zhengweix also upvoting! I'm stuck converting a DINOv2-based model because of the same error:
```
RuntimeError                              Traceback (most recent call last)
Cell In[10], line 1
----> 1 mlmodel = ct.convert(
      2     traceable_model,
      3     inputs=[ct.ImageType(name="input", shape=input_tensor.shape)],
      4 )

File ~/miniconda3/envs/coreml-conversions/lib/python3.11/site-packages/coremltools/converters/_converters_entry.py:581, in convert(model, source, inputs, outputs, classifier_config, minimum_deployment_target, convert_to, compute_precision, skip_model_load, compute_units, package_dir, debug, pass_pipeline)
    573 specification_version = _set_default_specification_version(exact_target)
    575 use_default_fp16_io = (
    576     specification_version is not None
    577     and specification_version >= AvailableTarget.iOS16
    578     and need_fp16_cast_pass
    579 )
--> 581 mlmodel = mil_convert(
    582     model,
    583     convert_from=exact_source,
    584     convert_to=exact_target,
    585     inputs=inputs,
    586     outputs=outputs_as_tensor_or_image_types,  # None or list[ct.ImageType/ct.TensorType]
    587     classifier_config=classifier_config,
    588     skip_model_load=skip_model_load,
    589     compute_units=compute_units,
    590     package_dir=package_dir,
...
    114 )
    116 logger.info("Converting op {} : {}".format(node.name, op_lookup))
    118 scopes = []

RuntimeError: PyTorch convert function for op 'upsample_bicubic2d' not implemented.
```
I don't know how to overcome this! Are there ways to work around it while we wait?
Has anybody had success converting Depth Anything V2 to Core ML? Hugging Face has a Core ML model, but only for the smallest variant.
`upsample_bicubic2d`

`coremltools` aspires to the same feature set as PyTorch, which implements functionality from common image processing libraries (such as `pillow`; that is probably why PyTorch implemented this op to begin with, i.e. `PIL.Image.BICUBIC`). I can't recommend prioritizing this over other missing torch ops, but I figured I could create a ticket to track discussion of this layer type.
Here is a minimal environment you could create to reproduce:
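Based on the versions mentioned elsewhere in this thread (coremltools 7 and torch 2.0.0), a reproduction environment might look like the following; the exact version pins, and the torchvision pairing, are assumptions:

```shell
# Fresh virtualenv; torchvision 0.15.1 is assumed to pair with torch 2.0.0
python -m venv bicubic-repro
source bicubic-repro/bin/activate
pip install "torch==2.0.0" "torchvision==0.15.1" "coremltools==7.0"
```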
And here is a minimal script that triggers the error noting that the op is unimplemented: