Judging by the issues in this repo, the Yoloworld ESAM node should probably allow loading YOLO models from disk rather than via Roboflow. There's really no reason to depend on a third-party service when the relevant models are freely available on huggingface.co.
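A minimal sketch of what local loading could look like: prefer a weights file on disk and only fall back to the remote fetch when it's missing. The directory layout and function names here are hypothetical, not the node's actual code.

```python
import os

# Assumed weights directory; the real node would use ComfyUI's models folder.
MODEL_DIR = os.path.join("models", "yolo_world")

def resolve_local_weights(model_name: str):
    """Return the path to a local .pt file if it exists, else None.

    A None result would signal the caller to fall back to the current
    Roboflow/inference download path.
    """
    candidate = os.path.join(MODEL_DIR, model_name + ".pt")
    return candidate if os.path.isfile(candidate) else None
```

With this in place, users could just drop weights downloaded from huggingface.co into the models folder and skip the third-party service entirely.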
Should probably also add a warning to the README to stop people from installing both onnxruntime and onnxruntime-gpu (which somehow isn't blocked when installing inference).
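Beyond a README warning, the node could detect the conflict at import time. A hedged sketch (the helper name is made up; it just checks a set of installed package names, which in practice would come from `importlib.metadata.distributions()`):

```python
def has_onnxruntime_conflict(installed_names) -> bool:
    """Return True if both onnxruntime and onnxruntime-gpu appear installed.

    Having both wheels present is a known source of silent CPU fallback
    or import errors, so the node could log a warning in this case.
    """
    names = {n.lower() for n in installed_names}
    return {"onnxruntime", "onnxruntime-gpu"} <= names
```

The caller would pass in the names gathered from the environment and print a clear warning telling the user to uninstall one of the two packages.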
I was just looking into how I can load a custom YOLOv8 model.
+1 on this request. This tool could become very powerful if you could load custom models you trained yourself.