Closed lyysl closed 2 years ago
I tried to convert the .pth file to a .onnx file. I followed this onnxruntime tutorial. At the end I get a NumPy array of shape (5, 1, 1, 256, 256). Should I be getting a 3-dimensional array rather than a 5-dimensional one? Thank you.
```python
import onnxruntime as ort
import numpy as np
import cv2
import matplotlib.pyplot as plt

img_path = "0.jpg"
input_img = cv2.imread(img_path)

mean = np.array([0.485, 0.456, 0.406]) * 255.0
scale = 1 / 255.0
std = [0.229, 0.224, 0.225]

input_blob = cv2.dnn.blobFromImage(
    image=input_img,
    scalefactor=scale,
    size=(256, 256),  # img target size
    mean=mean,
    swapRB=True,      # BGR -> RGB
    crop=False        # center crop
)
input_blob[0] /= np.asarray(std, dtype=np.float32).reshape(3, 1, 1)

ort_sess = ort.InferenceSession("pidinet_tiny_converted_new.onnx")
outputs = ort_sess.run(None, {'input': input_blob})
outputs = np.asarray(outputs)
print(outputs.shape)
```
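For what it's worth, `InferenceSession.run` returns a Python list with one entry per model output, and wrapping that list in `np.asarray` stacks them, which is presumably where the leading 5 comes from (five exported output heads, likely PiDiNet's side edge maps plus a fused map). A minimal sketch of how to recover a 2-D map from that structure, using dummy arrays in place of the real model outputs:

```python
import numpy as np

# Stand-in for ort_sess.run(None, ...): a list of 5 output
# tensors, each of shape (1, 1, 256, 256) = (batch, channel, H, W).
outputs = [np.random.rand(1, 1, 256, 256).astype(np.float32) for _ in range(5)]

# np.asarray stacks the list into (5, 1, 1, 256, 256); the leading
# 5 is the number of model outputs, not an image dimension.
stacked = np.asarray(outputs)
print(stacked.shape)  # (5, 1, 1, 256, 256)

# Take one output (here the last, assuming it is the fused map)
# and drop the singleton batch/channel axes to get a plain
# (256, 256) array suitable for plt.imshow.
edge_map = outputs[-1].squeeze()
print(edge_map.shape)  # (256, 256)
```

So the extra dimensions are expected; indexing into the output list and squeezing yields the 2-D result.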
There were some problems when I converted the model. I have solved the problem, thank you.