warhammercasey opened this issue 1 month ago (Open)
Does yolov8n.pt work?
Yes, yolov8n.pt works fine, and so do the ONNX and TFLite formats. This is only an issue with the NCNN format.
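(For context, a sketch of the comparison being described here, with the export calls assumed from the rest of this thread:)
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="onnx")    # exported ONNX model runs fine
model.export(format="tflite")  # exported TFLite model runs fine
model.export(format="ncnn")    # only the NCNN export crashes at inference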
Does NCNN work with yolov8n.pt?
Do you mean something like:
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
model.export(format='ncnn')  # creates 'yolov8n_ncnn_model'
model = YOLO('yolov8n_ncnn_model')
# inference
?
No. That's what I have been trying to do, and that's what's not working.
It seems like the NCNN export might be causing issues on your device. Please ensure your environment meets all NCNN requirements and consider testing with a smaller model like yolov8n to see if the problem persists.
In your question, you're using the yolov8s model, so I was wondering whether yolov8n works with NCNN since it is smaller and should consume less memory.
Oh, my bad, I should have clarified. I've tried both the yolov8n and yolov8s models, as well as the yolov8m model.
What are the NCNN requirements? I don't believe it's running out of memory, considering it runs the PyTorch and ONNX models without issue.
It could be an issue with ncnn itself. You can try running this and see if it causes a reboot.
import ncnn as pyncnn
import numpy as np
from pathlib import Path

w = "yolov8n_ncnn_model"
net = pyncnn.Net()
net.opt.use_vulkan_compute = False
w = Path(w)
if not w.is_file():  # if not *.param, locate the .param file inside the model directory
    w = next(w.glob("*.param"))
net.load_param(str(w))
net.load_model(str(w.with_suffix(".bin")))
im = np.random.rand(1, 3, 640, 640)
mat_in = pyncnn.Mat(im[0])
for i in range(30):  # repeat inference to make intermittent crashes show up
    with net.create_extractor() as ex:
        ex.input(net.input_names()[0], mat_in)
        y = [np.array(ex.extract(x)[1])[None] for x in sorted(net.output_names())]
That errors on the y = [np.array(ex.extract(x)[1])[None] for x in sorted(net.output_names())] line.
On my x86 machine it runs into this error when doing np.array(ex.extract('out0')[1]):
terminate called after throwing an instance of 'std::runtime_error'
what(): Convert ncnn.Mat to numpy.ndarray. Support only elemsize 1, 2, 4; but given 8
Aborted
On the pi clone it segfaults when running ex.extract('out0').
So both error but for different reasons.
It seems like the issue might be related to the data type conversion in NCNN. You could try checking the model's output layer configurations or consider using a different model format that works on your devices.
Try using im = np.random.rand(1, 3, 640, 640).astype(np.float32). np.random.rand returns float64 (8-byte elements), which is the unsupported elemsize 8 in your error; the ncnn.Mat-to-numpy conversion only supports elemsize 1, 2, and 4.
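As a quick sanity check of that explanation, here is a minimal sketch (assuming only numpy and the ncnn Python bindings already used in the script above):
import numpy as np
import ncnn as pyncnn

im = np.random.rand(1, 3, 640, 640)  # np.random.rand returns float64: 8-byte elements
print(im.dtype, im.itemsize)         # float64 8, the unsupported elemsize from the error
im = im.astype(np.float32)           # cast down to 4-byte floats
print(im.dtype, im.itemsize)         # float32 4, within the supported elemsizes 1, 2, 4
mat_in = pyncnn.Mat(im[0])           # now safe to wrap as an ncnn.Mat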
Search before asking
Ultralytics YOLO Component
Predict
Bug
I'm trying to run inference on a Le Potato (Raspberry Pi clone) using the yolov8s model exported as an NCNN model, but I seem to be running into some kind of memory issue. Running predict with the model causes the entire system to reboot immediately roughly 70% of the time. The other ~30% of the time seems to be a toss-up between working properly or getting a segfault.
The model was exported using
model = YOLO("yolov8s.pt")
and
model.export(format='ncnn')
so it's nothing custom.
To rule out platform/environment issues, I tried running the same thing on my desktop machine in WSL2, which appeared to run fine, except that after running it roughly 5 times something caused explorer.exe and a few other processes visible in Event Viewer to crash. That's possibly a coincidence, so after I post this I'm going to see if it's repeatable, but I want to post this bug before I potentially lose my work if my browser also crashes.
If I load the PyTorch model alone (i.e.
model = YOLO("yolov8s.pt")
rather than
model = YOLO("yolov8s_ncnn_model")
) it works perfectly fine, indicating it likely has to do with the NCNN model specifically.
Both tests were run in a Python 3.11 venv with only ultralytics (and dependencies) installed using pip install ultralytics.
I'm trying to see if I can get any more information on what's causing this, and I'll update this thread if I find anything else, but given that the issue is intermittent crashing on WSL2 and "the entire system resets" on the Le Potato, it's a little hard to debug anything specific.
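For reference, the flow described above amounts to something like this (a sketch assembled from the snippets in this thread; the test image path is hypothetical):
from ultralytics import YOLO

# Export once, as described above; this produces the 'yolov8s_ncnn_model' directory
model = YOLO("yolov8s.pt")
model.export(format="ncnn")

# Load the exported NCNN model and run predict; per the report above, this is the
# step that reboots the Le Potato roughly 70% of the time
model = YOLO("yolov8s_ncnn_model")
results = model.predict("test.jpg")  # hypothetical test image path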
Environment
Output of yolo checks on Le Potato:
Output of yolo checks on desktop:
Minimal Reproducible Example
Code on le potato:
Code on WSL2:
Additional
No response
Are you willing to submit a PR?