google-ai-edge / ai-edge-torch

Supporting PyTorch models with the Google AI Edge TFLite runtime.
Apache License 2.0
349 stars · 48 forks

Failed to convert model with bilinear resizing. #158

Closed hgaiser closed 2 months ago

hgaiser commented 2 months ago

Description of the bug:

I am trying to convert a model from segmentation_models.pytorch. It uses one or more UpsamplingBilinear2d layers, but the conversion of this layer seems to be handled incorrectly.

The conversion itself succeeds without issue, but when trying to load this model the following error is printed:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/tmp/2024-08-19-16-36-23/venv/lib/python3.11/site-packages/tflite_runtime/interpreter.py", line 531, in allocate_tensors
    return self._interpreter.AllocateTensors()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: If half_pixel_centers is True, align_corners must be False.Node number 219 (RESIZE_BILINEAR) failed to prepare.Failed to apply the default TensorFlow Lite delegate indexed at 0.

Actual vs expected behavior:

I expect the converted model to load in tflite_runtime, but I am getting an exception.

Any other information you'd like to share?

The UpsamplingBilinear2d layer sets align_corners=True and mode='bilinear'.

TensorFlow seems to set half_pixel_centers=True for bilinear interpolation (link). The combination of the two seems to be invalid, though I haven't looked into why that is.

I'm not sure what the correct behavior should be in this case.
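For context, the two flags correspond to two different ways of mapping an output pixel back to a source coordinate, so they cannot both be honored at once. A minimal pure-Python sketch of the two conventions (my own illustration, not code from either library):

```python
def src_align_corners(dst, in_size, out_size):
    # align_corners=True: the corner pixels of input and output map
    # exactly onto each other, so the endpoints coincide.
    return dst * (in_size - 1) / (out_size - 1)

def src_half_pixel(dst, in_size, out_size):
    # half_pixel_centers=True: pixel *centers* are aligned instead,
    # which can produce source coordinates outside [0, in_size - 1].
    return (dst + 0.5) * in_size / out_size - 0.5

# Upsampling 2 -> 4: the two conventions pick different source coordinates
for dst in range(4):
    print(dst, src_align_corners(dst, 2, 4), src_half_pixel(dst, 2, 4))
```

With align_corners the coordinates run 0, 1/3, 2/3, 1; with half-pixel centers they run -0.25, 0.25, 0.75, 1.25, which is presumably why the runtime refuses the combination outright.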

pkgoogle commented 2 months ago

Hi @hgaiser, would you happen to have your conversion script handy, or could you provide an example model from https://github.com/qubvel-org/segmentation_models.pytorch that exhibits this behavior? Thanks.

hgaiser commented 2 months ago

> Hi @hgaiser, would you happen to have your conversion script handy? or provide an example model from https://github.com/qubvel-org/segmentation_models.pytorch which exhibits/shows this behavior? Thanks.

Yeah absolutely. I wanted to do that yesterday but was in a hurry :).

import os

import torch
import tflite_runtime.interpreter as tflite
import ai_edge_torch

class SomeModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.resize = torch.nn.UpsamplingBilinear2d((100, 100))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.resize(x)

model = SomeModel()

os.environ['PJRT_DEVICE'] = 'CPU'
sample_inputs = (torch.randn(1, 3, 50, 50),)
edge_model = ai_edge_torch.convert(model.eval(), sample_inputs)
edge_model.export('model.tflite')

interpreter = tflite.Interpreter(model_path='./model.tflite')
interpreter.allocate_tensors()

pkgoogle commented 2 months ago

I was able to replicate exactly as above on nightly.

majiddadashi commented 2 months ago

Hi @hgaiser ,

Thanks for reporting this and providing the reproducer.

The tflite-runtime package is old (the last release was Oct 3rd, 2023).

You can try installing tf-nightly if you want to use TFLite directly to run the converted model:

import tensorflow as tf

interpreter = tf.lite.Interpreter(...)
...

Or, if you are only interested in running the model in Python, the edge_model object is callable:

sample_inputs = (torch.randn(1, 3, 50, 50),)
edge_model = ai_edge_torch.convert(model.eval(), sample_inputs)
results = edge_model(*sample_inputs)

hgaiser commented 2 months ago

Hey @majiddadashi, thanks, that seems to work. I am using a delegate that produces the same error (no error on CPU, but the same error when the delegate is loaded), though I guess that's a problem for another repo.
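For anyone else hitting this: switching the model to align_corners=False (e.g. torch.nn.Upsample with mode='bilinear') should sidestep the unsupported flag combination, at the cost of slightly different values near the borders. A pure-Python 1-D sketch of linear resizing under both conventions (my own illustration, not the actual TFLite kernel) shows how small that difference is:

```python
def resize_linear_1d(src, out_size, align_corners):
    # Minimal 1-D linear resize illustrating the two coordinate conventions.
    in_size = len(src)
    out = []
    for d in range(out_size):
        if align_corners:
            # endpoints of input and output coincide
            x = d * (in_size - 1) / (out_size - 1) if out_size > 1 else 0.0
        else:
            # half-pixel centers; edge coordinates can fall outside the input
            x = (d + 0.5) * in_size / out_size - 0.5
        x = min(max(x, 0.0), in_size - 1)  # clamp to the valid range
        lo = int(x)
        hi = min(lo + 1, in_size - 1)
        frac = x - lo
        out.append(src[lo] * (1 - frac) + src[hi] * frac)
    return out

print(resize_linear_1d([0.0, 10.0], 4, align_corners=True))
print(resize_linear_1d([0.0, 10.0], 4, align_corners=False))
```

With align_corners=True both endpoints are hit exactly; with align_corners=False the out-of-range edge coordinates are clamped, so only the interior samples differ between the two.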

Any idea why tflite_runtime isn't released anymore? Is it no longer recommended? I compiled tflite_runtime from the tensorflow sources, which also seems to work.