Closed: tonyreina closed this issue 1 month ago
👋 Hello @tonyreina, thank you for raising an issue about Ultralytics HUB 🚀! Please visit our HUB Docs to learn more.
If this is a 🐛 Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix.
If this is a ❓ Question, please provide as much information as possible, including dataset, model, environment details etc. so that we might provide the most helpful response.
We try to respond to all issues as promptly as possible. Thank you for your patience!
Search before asking
Question
I've got images that are 8K and larger. I'd like to feed the entire image to the YOLO model, but of course that leads to memory and latency issues at inference time.
Is anyone aware of YOLO inference that employs model parallelism across, say, 4 GPUs (NVLinked on the same node) to handle 4 sections of the image (or 4 sections of the model) simultaneously and then stitch the results back together at the end?
Is there any way to do something like this with Ultralytics?
Thanks. Best. -Tony
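One common approach here is data parallelism over image tiles rather than model parallelism: split the 8K image into overlapping tiles, run an independent YOLO instance per tile (each could be dispatched to a different GPU, e.g. via `model(tile, device="cuda:0")` in separate processes), then shift each tile's detections back into full-image coordinates and merge. Below is a minimal sketch of just the tiling and coordinate-remapping logic, using only NumPy; the function names and the 2×2/overlap parameters are illustrative, not part of the Ultralytics API:

```python
import numpy as np

def make_tiles(h, w, rows=2, cols=2, overlap=128):
    """Return (y0, y1, x0, x1) windows covering an h x w image.

    Each tile is padded by `overlap` pixels on interior edges so that
    objects straddling a seam appear whole in at least one tile.
    """
    tiles = []
    th, tw = h // rows, w // cols
    for r in range(rows):
        for c in range(cols):
            y0 = max(r * th - overlap, 0)
            y1 = min((r + 1) * th + overlap, h)
            x0 = max(c * tw - overlap, 0)
            x1 = min((c + 1) * tw + overlap, w)
            tiles.append((y0, y1, x0, x1))
    return tiles

def to_global(boxes_xyxy, y0, x0):
    """Shift tile-local xyxy boxes back into full-image coordinates."""
    boxes = np.asarray(boxes_xyxy, dtype=float).copy()
    boxes[:, [0, 2]] += x0  # x1, x2 columns
    boxes[:, [1, 3]] += y0  # y1, y2 columns
    return boxes
```

After remapping, detections from all tiles would typically be concatenated and deduplicated with a global NMS pass, since the overlap region can produce the same object twice. This sliced-inference pattern is what libraries such as SAHI implement for exactly this large-image use case.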
Additional
No response