isl-org / MiDaS

Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"

RuntimeError encountered when converting midas_v21_small_256.pt to onnx model #198

Open lam-ys opened 1 year ago

lam-ys commented 1 year ago

I tried to convert midas_v21_small_256.pt to an ONNX model using my own custom export script, but encountered the error "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument other in method wrapper__equal)". I'm not sure what caused this error, as I've already moved both the model and the input tensor to the GPU using .to(device). I suspect the error is caused by MidasNet_small, but I still can't find the problem after going through the code. I've attached the full console log below:

Device: cuda
Loading weights:  ./weights/midas_v21_small_256.pt
Using cache found in /home/jetson/.cache/torch/hub/rwightman_gen-efficientnet-pytorch_master
Starting ONNX export...
Warning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied.
(the warning above is printed 15 times in total)
Traceback (most recent call last):
  File "export.py", line 40, in <module>
    torch.onnx.export(model,
  File "/home/jetson/.local/lib/python3.8/site-packages/torch/onnx/__init__.py", line 319, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "/home/jetson/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 113, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "/home/jetson/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 734, in _export
    params_dict = torch._C._jit_pass_onnx_deduplicate_initializers(graph, params_dict,
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument other in method wrapper__equal)
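
As a sanity check, listing the devices of every parameter and buffer of the model can confirm whether anything is still left on the CPU (a minimal sketch; model here is the MidasNet_small instance built in the script further below):

# Sketch: check which devices the model's parameters and buffers live on
param_devices = {p.device for p in model.parameters()}
buffer_devices = {b.device for b in model.buffers()}
print("Parameter devices:", param_devices)
print("Buffer devices:", buffer_devices)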

Below is the custom export script I wrote:

import os
import glob
import torch
import numpy as np
import onnx
from midas.dpt_depth import DPTDepthModel
from midas.midas_net import MidasNet
from midas.midas_net_custom import MidasNet_small
from midas.transforms import Resize, NormalizeImage, PrepareForNet

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Device: %s" % device)

model_path = './weights/midas_v21_small_256.pt'

# Build MidasNet_small with the efficientnet_lite3 backbone and load weights from model_path
model = MidasNet_small(model_path, features=64, backbone="efficientnet_lite3", exportable=True,
    non_negative=True, blocks={'expand': True})

model.eval().to(device)

# Dummy 3x256x256 input used to trace the model for export
x = np.zeros((3, 256, 256), np.float32)

with torch.no_grad():
    x = torch.from_numpy(x).unsqueeze(0)
    tensor_x = x.to(device)
    prediction = model.forward(tensor_x)
    prediction = (
        torch.nn.functional.interpolate(
            prediction.unsqueeze(1),
            size=(256, 256),
            mode="bicubic",
            align_corners=False,
        )
        .squeeze()
        .cpu()
        .numpy()
    )
    print("Starting ONNX export...")
    torch.onnx.export(model,
        tensor_x,
        "./weights/midas_v21_small_256.onnx",
        opset_version=11,
        input_names=['input'],
        output_names=['output'],
    )
    print("Done!")

# Load and validate the exported model (path matches the file written by torch.onnx.export above)
onnx_model = onnx.load("./weights/midas_v21_small_256.onnx")
onnx.checker.check_model(onnx_model)

My environment uses torch == 1.12.0 and onnx == 1.12.0. I've been stuck on this issue for quite a while; any help or guidance is appreciated. Thanks!
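
One workaround I have not yet verified is to run the export step entirely on the CPU, since torch.onnx.export itself does not require CUDA; that should keep every tensor the deduplication pass compares on the same device. A rough sketch of what I mean, reusing the model and settings from the script above:

# Sketch: run the ONNX export on the CPU so all tensors compared during export share one device
export_model = model.cpu().eval()
dummy_input = torch.zeros(1, 3, 256, 256, dtype=torch.float32)

with torch.no_grad():
    torch.onnx.export(
        export_model,
        dummy_input,
        "./weights/midas_v21_small_256.onnx",
        opset_version=11,
        input_names=['input'],
        output_names=['output'],
    )

The GPU would still be available for inference afterwards; only the export itself runs on the CPU.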

foemre commented 1 year ago

Any progress? I'm experiencing the same issue.