BMW-InnovationLab / BMW-YOLOv4-Inference-API-GPU

This is a repository for a no-code object detection inference API using the YOLOv3 and YOLOv4 Darknet framework.

Inference engine not found: yolov4_darknet_detection #6

Closed: Classsic closed this issue 3 years ago

Classsic commented 4 years ago

Hi, could you improve the docs about the models folder structure? This is my config:

/root/BMW-YOLOv4-Inference-API-GPU/models/
/root/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection
/root/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/yolov4.weights
/root/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/obj.data
/root/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/config.json
/root/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/yolov4.cfg
/root/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/obj.names
/root/BMW-YOLOv4-Inference-API-GPU/models/.gitignore

And this is the content of the files:

/root/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/obj.data:

classes=80
names=/models/yolov4/obj.names

/root/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/config.json:

{
  "inference_engine_name": "yolov4_darknet_detection",
  "detection_threshold": 0.6,
  "nms_threshold": 0.45,
  "hier_threshold": 0.5,
  "framework": "yolo",
  "type": "detection",
  "network": "network_name"
}

/root/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/yolov4.cfg:

[net]
batch=64
subdivisions=8
# Training
#width=512
#height=512
width=608
height=608
channels=3
momentum=0.949
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.0013
burn_in=1000
max_batches = 500500
policy=steps
steps=400000,450000
scales=.1,.1

#cutmix=1
mosaic=1

#:104x104 54:52x52 85:26x26 104:13x13 for 416

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=mish

# Downsample

[convolutional]
batch_normalize=1
filters=64
size=3
stride=2
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[route]
layers = -2

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=32
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[route]
layers = -1,-7

[convolutional]
.....

/root/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/obj.names: the standard 80 COCO class names, one per line (person, bicycle, car, ..., toothbrush; the full list is reproduced later in this thread).

And this is the response when I try to load from http://..docs#/default/load_custom_load_get:

{
  "data": null,
  "error": "Inference engine not found: yolov4_darknet_detection",
  "success": false
}

Thanks.

hadikoub commented 4 years ago

Hello, please can you try to rename yolov4.weights to yolo-obj.weights and yolov4.cfg to yolo-obj.cfg, and please change the location of the names file inside obj.data to:

names=/models/yolov4_darknet_detection/obj.names

https://github.com/BMW-InnovationLab/BMW-YOLOv4-Inference-API-GPU#model-structure

This should fix the problem.
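As a concrete sketch of those steps, assuming the /root/... paths from the first comment:

cd /root/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection
mv yolov4.weights yolo-obj.weights
mv yolov4.cfg yolo-obj.cfg
# obj.data must point at the names file by its path inside the container
printf 'classes=80\nnames=/models/yolov4_darknet_detection/obj.names\n' > obj.data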

Classsic commented 4 years ago

Hi, I followed all your steps, without success. I think there is a bug when it tries to read the folders/subfolders from /models. I will try to trace the process, but I'm not a Python programmer, so if somebody can test it with YOLOv4, that would be great. I see one of the latest commits adds the YOLOv4 inference model; maybe there is a bug there. By the way, great project, keep it up.

hadikoub commented 4 years ago

Have you tried pulling the latest version from GitHub? And, if you can, please delete the Docker image and re-build it.

Classsic commented 4 years ago

I built from scratch 2 days ago, and there have been no commits since then, so I'm testing the latest version. I will try to get more debug info, thanks.

hadikoub commented 4 years ago

I will try to reproduce the issue and check whether there is a problem. Are you using the official YOLOv4 weights and cfg?

Classsic commented 4 years ago

I downloaded the .weights from here: https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights, the .cfg from here: https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4.cfg, and obj.names from here: https://raw.githubusercontent.com/pjreddie/darknet/master/data/coco.names. The config.json I copied from this repo (as you pointed me to), and finally I filled obj.data with:

classes=80
names=/models/yolov4_darknet_detection/obj.names

Is there something I'm missing?

hadikoub commented 4 years ago

I tried to reproduce the problem with your config and did not have any issue. PS: I used the correction I suggested regarding names=/models/yolov4_darknet_detection/obj.names. This is the structure of the folder:

/home/user/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/ that contains the following:

/home/user/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/config.json
/home/user/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/obj.data
/home/user/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/obj.names
/home/user/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/yolo-obj.cfg
/home/user/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/yolo-obj.weights

/home/user/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/config.json

{
"inference_engine_name": "yolov4_darknet_detection",
"detection_threshold": 0.6,
"nms_threshold": 0.45,
"hier_threshold": 0.5,
"framework": "yolo",
"type": "detection",
"network": "network_name"
}

/home/user/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/obj.data

classes=80
names=/models/yolov4_darknet_detection/obj.names

/home/user/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/obj.names

person
bicycle
car
motorbike
aeroplane
bus
train
truck
boat
traffic light
fire hydrant
stop sign
parking meter
bench
bird
cat
dog
horse
sheep
cow
elephant
bear
zebra
giraffe
backpack
umbrella
handbag
tie
suitcase
frisbee
skis
snowboard
sports ball
kite
baseball bat
baseball glove
skateboard
surfboard
tennis racket
bottle
wine glass
cup
fork
knife
spoon
bowl
banana
apple
sandwich
orange
broccoli
carrot
hot dog
pizza
donut
cake
chair
sofa
pottedplant
bed
diningtable
toilet
tvmonitor
laptop
mouse
remote
keyboard
cell phone
microwave
oven
toaster
sink
refrigerator
book
clock
vase
scissors
teddy bear
hair drier
toothbrush

/home/user/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/yolo-obj.cfg

[net]
batch=32
subdivisions=16
# Training
#width=512
#height=512
width=608
height=608
channels=3
momentum=0.949
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.0013
burn_in=1000
max_batches = 500500
policy=steps
steps=400000,450000
scales=.1,.1

#cutmix=1
mosaic=1

#:104x104 54:52x52 85:26x26 104:13x13 for 416

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=mish

# Downsample

[convolutional]
batch_normalize=1
filters=64
size=3
stride=2
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[route]
layers = -2

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=32
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[route]
layers = -1,-7

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

# Downsample

[convolutional]
batch_normalize=1
filters=128
size=3
stride=2
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[route]
layers = -2

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[route]
layers = -1,-10

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

# Downsample

[convolutional]
batch_normalize=1
filters=256
size=3
stride=2
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[route]
layers = -2

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[route]
layers = -1,-28

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

# Downsample

[convolutional]
batch_normalize=1
filters=512
size=3
stride=2
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[route]
layers = -2

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[route]
layers = -1,-28

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

# Downsample

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=2
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[route]
layers = -2

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[route]
layers = -1,-16

[convolutional]
batch_normalize=1
filters=1024
size=1
stride=1
pad=1
activation=mish

##########################

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

### SPP ###
[maxpool]
stride=1
size=5

[route]
layers=-2

[maxpool]
stride=1
size=9

[route]
layers=-4

[maxpool]
stride=1
size=13

[route]
layers=-1,-3,-5,-6
### End SPP ###

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = 85

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[route]
layers = -1, -3

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = 54

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[route]
layers = -1, -3

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

##########################

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear

[yolo]
mask = 0,1,2
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
classes=80
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
scale_x_y = 1.2
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6
max_delta=5

[route]
layers = -4

[convolutional]
batch_normalize=1
size=3
stride=2
pad=1
filters=256
activation=leaky

[route]
layers = -1, -16

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear

[yolo]
mask = 3,4,5
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
classes=80
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
scale_x_y = 1.1
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6
max_delta=5

[route]
layers = -4

[convolutional]
batch_normalize=1
size=3
stride=2
pad=1
filters=512
activation=leaky

[route]
layers = -1, -37

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear

[yolo]
mask = 6,7,8
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
classes=80
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
scale_x_y = 1.05
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6
max_delta=5

I downloaded /home/user/BMW-YOLOv4-Inference-API-GPU/models/yolov4_darknet_detection/yolo-obj.weights from https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights

and the output of my terminal is:

[Screenshot from 2020-07-28 10-33-15]

The output of the Swagger UI is:

[Screenshot from 2020-07-28 10-33-28]

If this doesn't solve your problem, please delete the Docker image and the folder, clone the repo again, and rebuild the image from scratch.

Also, please make sure that you have the following file:

<location of your Repo>/BMW-YOLOv4-Inference-API-GPU/src/main/darknet_v4/libdarknetv4.so

Classsic commented 4 years ago

There is a change: I rebuilt the image, and when I try to run it I get this error:

.../BMW-YOLOv4-Inference-API-GPU# NV_GPU=0 docker run -itv $(pwd)/models:/models --name api-yolov -p 7777:1234 yolov4_inference_api_gpu_new
Traceback (most recent call last):
  File "/usr/local/bin/uvicorn", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.6/dist-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.6/dist-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.6/dist-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/uvicorn/main.py", line 339, in main
    run(**kwargs)
  File "/usr/local/lib/python3.6/dist-packages/uvicorn/main.py", line 362, in run
    server.run()
  File "/usr/local/lib/python3.6/dist-packages/uvicorn/main.py", line 390, in run
    loop.run_until_complete(self.serve(sockets=sockets))
  File "uvloop/loop.pyx", line 1456, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.6/dist-packages/uvicorn/main.py", line 397, in serve
    config.load()
  File "/usr/local/lib/python3.6/dist-packages/uvicorn/config.py", line 278, in load
    self.loaded_app = import_from_string(self.app)
  File "/usr/local/lib/python3.6/dist-packages/uvicorn/importer.py", line 20, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "./start.py", line 17, in <module>
    dl_service = DeepLearningService()
  File "./deep_learning_service.py", line 29, in __init__
    with open('/models_hash/model_hash.json', 'w'):
FileNotFoundError: [Errno 2] No such file or directory: '/models_hash/model_hash.json'

Is there supposed to be a 'models_hash/' path?
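The traceback shows deep_learning_service.py opening /models_hash/model_hash.json for writing before the directory exists. A minimal sketch of the kind of guard that avoids this crash (hypothetical code; the repo's actual fix came in a later commit together with a mounted models_hash volume, as described below):

import json
import os

HASH_DIR = '/models_hash'
HASH_FILE = os.path.join(HASH_DIR, 'model_hash.json')

# create the directory first so open() cannot fail with FileNotFoundError
os.makedirs(HASH_DIR, exist_ok=True)
if not os.path.isfile(HASH_FILE):
    with open(HASH_FILE, 'w') as f:
        json.dump({}, f)  # start with an empty model-hash registry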

Classsic commented 4 years ago

I see you added a new folder mount when running Docker, and I get the same error running:

NV_GPU=0 docker run -itv $(pwd)/models:/models -v $(pwd)/models_hash:/models_hash --name api-yolov -p 7777:1234 yolov4_inference_api_gpu_new

curl -X GET "http://localhost:7777/load" -H "accept: application/json"

result: {"data":null,"error":"Inference engine not found: yolov4_darknet_detection","success":false}

Maybe you mount something else. Sorry for the insistence, but this project is irreplaceable for us. Thanks.

hadikoub commented 4 years ago

We made some changes regarding the model hashing, so there is a new commit. Please make sure to pull it and check that you have a models_hash/ folder containing model_hash.json. After that, please rebuild the image and run it as described in the new README, where we have added the folder volume.

If this doesn't solve your problem, please don't hesitate to contact us again.
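A sketch of that sequence, reusing the image tag, ports, and volumes from the commands earlier in this thread (the exact build invocation may differ; see the repo README):

cd /root/BMW-YOLOv4-Inference-API-GPU
git pull
docker build -t yolov4_inference_api_gpu_new .
NV_GPU=0 docker run -itv $(pwd)/models:/models -v $(pwd)/models_hash:/models_hash -p 7777:1234 yolov4_inference_api_gpu_new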

Classsic commented 3 years ago

Hi, I can now run the load successfully, but I get this error:

INFO:     Started server process [1]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:1234 (Press CTRL+C to quit)
Loaded - names_list: /models/yolov4_darknet_detection/obj.names, classes = 80 
 Try to load cfg: /models/yolov4_darknet_detection/yolo-obj.cfg, clear = 0 
 0 : compute_capability = 610, cudnn_half = 0, GPU: GeForce GTX 1050 
net.optimized_memory = 0 
mini_batch = 1, batch = 64, time_steps = 1, train = 1 
   layer   filters  size/strd(dil)      input                output
   0 conv     32       3 x 3/ 1    608 x 608 x   3 ->  608 x 608 x  32 0.639 BF
   1 conv     64       3 x 3/ 2    608 x 608 x  32 ->  304 x 304 x  64 3.407 BF
   2 conv     64       1 x 1/ 1    304 x 304 x  64 ->  304 x 304 x  64 0.757 BF
   3 route  1                                  ->  304 x 304 x  64 
   4 conv     64       1 x 1/ 1    304 x 304 x  64 ->  304 x 304 x  64 0.757 BF
   5 conv     32       1 x 1/ 1    304 x 304 x  64 ->  304 x 304 x  32 0.379 BF
   6 conv     64       3 x 3/ 1    304 x 304 x  32 ->  304 x 304 x  64 3.407 BF
   7 Shortcut Layer: 4,  wt = 0, wn = 0, outputs: 304 x 304 x  64 0.006 BF
   8 conv     64       1 x 1/ 1    304 x 304 x  64 ->  304 x 304 x  64 0.757 BF
   9 route  8 2                                ->  304 x 304 x 128 
  10 conv     64       1 x 1/ 1    304 x 304 x 128 ->  304 x 304 x  64 1.514 BF
  11 conv    128       3 x 3/ 2    304 x 304 x  64 ->  152 x 152 x 128 3.407 BF
  12 conv     64       1 x 1/ 1    152 x 152 x 128 ->  152 x 152 x  64 0.379 BF
  13 route  11                                 ->  152 x 152 x 128 
  14 conv     64       1 x 1/ 1    152 x 152 x 128 ->  152 x 152 x  64 0.379 BF
  15 conv     64       1 x 1/ 1    152 x 152 x  64 ->  152 x 152 x  64 0.189 BF
  16 conv     64       3 x 3/ 1    152 x 152 x  64 ->  152 x 152 x  64 1.703 BF
  17 Shortcut Layer: 14,  wt = 0, wn = 0, outputs: 152 x 152 x  64 0.001 BF
  18 conv     64       1 x 1/ 1    152 x 152 x  64 ->  152 x 152 x  64 0.189 BF
  19 conv     64       3 x 3/ 1    152 x 152 x  64 ->  152 x 152 x  64 1.703 BF
  20 Shortcut Layer: 17,  wt = 0, wn = 0, outputs: 152 x 152 x  64 0.001 BF
  21 conv     64       1 x 1/ 1    152 x 152 x  64 ->  152 x 152 x  64 0.189 BF
  22 route  21 12                              ->  152 x 152 x 128 
  23 conv    128       1 x 1/ 1    152 x 152 x 128 ->  152 x 152 x 128 0.757 BF
  24 conv    256       3 x 3/ 2    152 x 152 x 128 ->   76 x  76 x 256 3.407 BF
  25 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  26 route  24                                 ->   76 x  76 x 256 
  27 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  28 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  29 conv    128       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 128 1.703 BF
  30 Shortcut Layer: 27,  wt = 0, wn = 0, outputs:  76 x  76 x 128 0.001 BF
  31 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  32 conv    128       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 128 1.703 BF
  33 Shortcut Layer: 30,  wt = 0, wn = 0, outputs:  76 x  76 x 128 0.001 BF
  34 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  35 conv    128       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 128 1.703 BF
  36 Shortcut Layer: 33,  wt = 0, wn = 0, outputs:  76 x  76 x 128 0.001 BF
  37 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  38 conv    128       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 128 1.703 BF
  39 Shortcut Layer: 36,  wt = 0, wn = 0, outputs:  76 x  76 x 128 0.001 BF
  40 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  41 conv    128       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 128 1.703 BF
  42 Shortcut Layer: 39,  wt = 0, wn = 0, outputs:  76 x  76 x 128 0.001 BF
  43 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  44 conv    128       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 128 1.703 BF
  45 Shortcut Layer: 42,  wt = 0, wn = 0, outputs:  76 x  76 x 128 0.001 BF
  46 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  47 conv    128       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 128 1.703 BF
  48 Shortcut Layer: 45,  wt = 0, wn = 0, outputs:  76 x  76 x 128 0.001 BF
  49 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  50 conv    128       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 128 1.703 BF
  51 Shortcut Layer: 48,  wt = 0, wn = 0, outputs:  76 x  76 x 128 0.001 BF
  52 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  53 route  52 25                              ->   76 x  76 x 256 
  54 conv    256       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 256 0.757 BF
  55  Try to set subdivisions=64 in your cfg-file. 
CUDA status Error: file: ./src/dark_cuda.c : () : line: 373 : build time: Jun 30 2020 - 16:13:28 

 CUDA Error: out of memory
/usr/bin/python3: : Unknown error -1632497565

I tried changing to subdivisions=64 as suggested, and I get the same error. I'm testing on an NVIDIA 1050 with only 2 GB; is it possible to run on it? Thanks.

hadikoub commented 3 years ago

Please try setting subdivisions=128, or lower the batch size to suit your GPU's memory.
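Since this API only runs inference, it may also help to shrink the per-pass workload directly in the [net] section of yolo-obj.cfg. A sketch with illustrative values (width/height must stay multiples of 32, and a lower resolution trades some accuracy for memory):

[net]
# inference only: no need for a large training batch
batch=1
subdivisions=1
# lower input resolution further reduces GPU memory use
width=416
height=416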

Classsic commented 3 years ago

It works now! What about performance? I see uvicorn is single-threaded. I will try it and open another issue specifically about performance. Thanks!

hadikoub commented 3 years ago

Please note that uvicorn is just an interface to the YOLO inference core; the main operations are done by the YOLO core, which uses the GPU to run inference.
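For what it's worth, the HTTP layer can be scaled separately from the inference core, e.g. by running uvicorn with more worker processes. A sketch only: the start:app module and attribute names are assumptions based on the traceback earlier in this thread, and note that each worker process would load its own copy of the model into GPU memory, which matters on a 2 GB card:

uvicorn start:app --host 0.0.0.0 --port 1234 --workers 2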