Open louisfacun opened 3 months ago
@louisfacun please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.
@microsoft-github-policy-service agree [company="{your company}"]
Options:
- (default - no company specified) I have sole ownership of intellectual property rights to my Submissions and I am not making Submissions in the course of work for my employer.
  @microsoft-github-policy-service agree
- (when company given) I am making Submissions in the course of work for my employer (or my employer has intellectual property rights in my Submissions by contract or applicable law). I have permission from my employer to make Submissions and enter into this Agreement on behalf of my employer. By signing below, the defined term "You" includes me and my employer.
  @microsoft-github-policy-service agree company="Microsoft"
Also, based on this: https://onnxruntime.ai/docs/tutorials/mobile/pose-detection.html, if we didn't modify `yolo_detection()`, how can it return two values (rawImage and boxOutput)? It looks like the `yolo_detection()` used from `yolo_e2e.py` is ahead of what I used. Any ideas?
I'm not quite sure what you're asking here and whether it's about the pose model or the object detection model. Each has different output. And can you clarify what 'rawImage' equates to?
/azp run onnxruntime-extensions.CI,licence/cla
Sorry, it's for the object detection model, and the rawImage I am referring to is the detected image with the bounding boxes drawn on it.
You can add an Identity step at the end to produce an additional model output from the ScaleBoundingBoxes step.
For example, after the last post-processing step, add an Identity step like the one below:
```python
# Encode to jpg/png
ConvertBGRToImage(image_format=output_format),
# also return bounding boxes
(Identity(name="bounding_boxes"),
 [utils.IoMapEntry("ScaleBoundingBoxes", producer_idx=0, consumer_idx=0)]),
```
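For context, here is a rough sketch of how the end of the post-processing list could look with that extra output. The import path and the surrounding steps are assumptions based on the ONNX Runtime mobile object-detection tutorial, not the exact contents of `yolo_e2e.py`:

```python
# Sketch only: import path and surrounding steps are assumed from the ONNX Runtime
# mobile object-detection tutorial; they may not match yolo_e2e.py exactly.
from onnxruntime_extensions.tools.pre_post_processing import ConvertBGRToImage, Identity, utils

post_processing = [
    # ... the existing NMS / ScaleBoundingBoxes / DrawBoundingBoxes steps ...

    # Encode the image (with the boxes already drawn on it) back to jpg/png bytes;
    # this remains the first model output.
    ConvertBGRToImage(image_format="jpg"),

    # Second model output: pass the scaled boxes straight through an Identity step.
    # The IoMapEntry connects output 0 of the "ScaleBoundingBoxes" step to input 0
    # of this Identity step, so the boxes are surfaced as an extra graph output.
    (Identity(name="bounding_boxes"),
     [utils.IoMapEntry("ScaleBoundingBoxes", producer_idx=0, consumer_idx=0)]),
]
```

Because the Identity step is named, the updated model exposes a second output ("bounding_boxes") alongside the encoded image, which is exactly the two-value result asked about above.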
Implemented an option for users to choose between two output formats: the encoded image bytes with bounding boxes drawn, or a list of bounding boxes for further processing.
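For anyone consuming the updated model, here is a minimal sketch of running it and reading both outputs. The model filename, the assumption that the first input takes raw image bytes, and the output order are illustrative, not taken from this PR:

```python
# Minimal sketch: model path, input feeding, and output order are assumptions.
import numpy as np
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

so = ort.SessionOptions()
so.register_custom_ops_library(get_library_path())  # image encode/decode custom ops

sess = ort.InferenceSession("yolo.with_pre_post_processing.onnx", so)

with open("input.jpg", "rb") as f:
    image_bytes = np.frombuffer(f.read(), dtype=np.uint8)

# Assumed output order: jpg/png bytes with the boxes drawn, then the raw bounding boxes.
encoded_image, bounding_boxes = sess.run(None, {sess.get_inputs()[0].name: image_bytes})

with open("output.jpg", "wb") as f:
    f.write(encoded_image.tobytes())
print(bounding_boxes)
```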