**XEssence** opened 5 months ago

The demo in `deploy` is not applicable now.
I ran into the same problem. Is there any other way to run this ONNX model?
Hi @tomgotjack and @XEssence, I can now provide you with some simple demo code. The scripts to run the ONNX demo are not ready yet.
1. `import libs`

```python
import onnx
import onnxruntime as ort
from PIL import Image, ImageOps
import numpy as np
import supervision as sv
import matplotlib.pyplot as plt

BOUNDING_BOX_ANNOTATOR = sv.BoundingBoxAnnotator()
LABEL_ANNOTATOR = sv.LabelAnnotator()
MASK_ANNOTATOR = sv.MaskAnnotator()
```
2. `load data`
```python
def load_image(image_path):
    image = Image.open(image_path).convert('RGB')
    # Get sample input data as a numpy array in a method of your choosing.
    img_width, img_height = image.size
    size = max(img_width, img_height)
    # Pad to a square, then resize to the 640x640 model input.
    image = ImageOps.pad(image, (size, size), method=Image.BILINEAR)
    image = image.resize((640, 640), Image.BILINEAR)
    tensor_image = np.asarray(image).astype(np.float32)
    tensor_image /= 255.0
    tensor_image = np.expand_dims(tensor_image, axis=0)
    return tensor_image, (img_width, img_height, size)
```
3. `simple visualization`

```python
def visualize(results, img):
    # Outputs are ordered as requested in `ort_session.run`:
    # labels, scores, boxes.
    bboxes = results[2][0]
    scores = results[1][0]
    labels = results[0][0]
    # Filter out invalid detections (negative labels).
    bboxes = bboxes[labels >= 0]
    scores = scores[labels >= 0]
    labels = labels[labels >= 0]
    print(bboxes.shape)
    detections = sv.Detections(xyxy=bboxes, class_id=labels, confidence=scores)
    labels = [
        f"{texts[class_id][0]} {confidence:0.2f}"
        for class_id, confidence in zip(detections.class_id, detections.confidence)
    ]
    # label images
    image = (img * 255).astype(np.uint8)
    anno_image = image.copy()
    image = BOUNDING_BOX_ANNOTATOR.annotate(image, detections)
    image = LABEL_ANNOTATOR.annotate(image, detections, labels=labels)
    return image
```
4. `load ONNX runtime model`

```python
ort_session = ort.InferenceSession(
    onnx_file_name, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
provider_options = ort_session.get_provider_options()
```
5. `run a sample`

```python
img, meta_info = load_image(image_path)
# NHWC -> NCHW for the model input.
input_ort = ort.OrtValue.ortvalue_from_numpy(img.transpose((0, 3, 1, 2)))
results = ort_session.run(["labels", "scores", "boxes"], {"images": input_ort})
img_out = visualize(results, img[0])
plt.imshow(img_out)
```
NOTE: You need to initialize `texts` according to your needs.
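The demo itself never defines `texts`. A minimal sketch of initializing it: either hard-code the prompts or load a class-texts JSON (the class names below are placeholders, and the JSON format is assumed to be a list of lists of prompt strings, matching the `texts[class_id][0]` lookup above):

```python
import json

# Option 1: hard-code the prompts (placeholder classes).
texts = [["person"], ["bus"], ["bicycle"]]

# Option 2: load the same class-texts JSON used when exporting the model.
with open("data/texts/coco_class_texts.json") as f:
    texts = json.load(f)
```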
I ran this code and got the following error:

```
Traceback (most recent call last):
  File "E:\YOLO\YOLO-World\onnxdemo.py", line 61, in <module>
    results = ort_session.run(["labels", "scores", "boxes"], {"images": input_ort})
  File "D:\miniconda3\envs\yolo\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running NonMaxSuppression node. Name:'/NonMaxSuppression' Status Message: non_max_suppression.cc:87 onnxruntime::NonMaxSuppressionBase::PrepareCompute boxes and scores should have same spatial_dimension.
```

The model is the ONNX exported directly from the demo on Hugging Face. Where is the problem?
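To check whether the exported file itself is inconsistent, the model's declared input and output shapes can be printed with a short script. A minimal sketch, where `yolo_world.onnx` is a placeholder for the exported file:

```python
import onnx
import onnxruntime as ort

onnx_file_name = "yolo_world.onnx"  # placeholder path

# Validate the graph structure first.
onnx.checker.check_model(onnx.load(onnx_file_name))

# Then print the shapes the runtime sees; a mismatch between the boxes and
# scores tensors feeding NonMaxSuppression would explain the error above.
session = ort.InferenceSession(onnx_file_name, providers=['CPUExecutionProvider'])
for inp in session.get_inputs():
    print("input:", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)
```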
I regenerated the ONNX model locally and replaced the one exported directly from the Hugging Face demo, and now it runs correctly. The Hugging Face demo seems to be broken today: no matter what I input, it reports an error. So the ONNX model I originally exported from Hugging Face was already broken from the start.
Hi @tomgotjack, @XEssence, the official code for the ONNX demo has been released. You can check it at `deploy/onnx_demo.py`.
I use `deploy/export_onnx.py` to export the ONNX model, with the following command:

```bash
python deploy/export_onnx.py configs/finetune_coco/yolo_world_v2_m_vlpan_bn_2e-4_80e_8gpus_mask-refine_finetune_coco.py work_dirs/epoch_80.pth --custom-text data/texts/coco_class_texts.json --model-only --opset 12
```

and then:

```bash
python deploy/image-demo.py ./test_images/ configs/finetune_coco/yolo_world_v2_m_vlpan_bn_2e-4_80e_8gpus_mask-refine_finetune_coco.py work_dirs/epoch_80.onnx
```

which produced an error. What causes this?
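One way to narrow down whether the problem is in the exported file or in the demo script is to run `work_dirs/epoch_80.onnx` directly with ONNX Runtime on a dummy input; a sketch along the lines of the demo code earlier in this thread:

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("work_dirs/epoch_80.onnx",
                               providers=['CPUExecutionProvider'])
# Random 640x640 RGB image in NCHW layout, matching the export resolution.
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = session.run(None, {session.get_inputs()[0].name: dummy})
for info, out in zip(session.get_outputs(), outputs):
    print(info.name, out.shape)
```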