ultralytics / yolov3

YOLOv3 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Understanding My results #413

Closed FeatheryW closed 5 years ago

FeatheryW commented 5 years ago

[results training plot attached]

Just how good or bad is this, lol? Thanks in advance!

glenn-jocher commented 5 years ago

@FeatheryW well, you've reached 1.0 mAP, so it seems you're perfect now. You can pack it up and go on vacation, there's nothing left to do.

FeatheryW commented 5 years ago

@glenn-jocher Wow lol, in only 20 epochs? It seems so unlikely. A few other questions:

  1. Anyway, I have figured out how to run detect.py with the webcam; however, it does not identify any objects and is very slow. Is there any way I can speed it up?

  2. What about identifying my custom object on my iPhone? How would I go about doing this?

  3. Is it possible to use my phone as a webcam? I plan on using my custom object for future A.I. projects.

I need to know if any of this is possible. Thanks again for the great tutorial and the help.

glenn-jocher commented 5 years ago

We just updated the webcam functionality. To detect with webcam, git pull and run:

python3 detect.py --webcam

To run a faster model, you could try:

python3 detect.py --webcam --weights weights/yolov3-tiny.weights --cfg cfg/yolov3-tiny.cfg

Or, to speed things up further, you can reduce the inference size to 320 or smaller:

python3 detect.py --webcam --weights weights/yolov3-tiny.weights --cfg cfg/yolov3-tiny.cfg --img-size 320
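
As a rough illustration of why a smaller --img-size helps (inference cost grows roughly with input area), here is a minimal timing sketch. It uses a stand-in torchvision CNN rather than this repo's Darknet model, and the sizes and iteration count are arbitrary assumptions:

import time
import torch
import torchvision

# Stand-in CNN (NOT this repo's model), used only to compare latency at two input sizes.
model = torchvision.models.mobilenet_v2(pretrained=False).eval()

for size in (416, 320):
    x = torch.zeros(1, 3, size, size)  # dummy NCHW image batch
    with torch.no_grad():
        model(x)  # warm-up pass
        t0 = time.time()
        for _ in range(10):
            model(x)
    print(f"{size}x{size}: {(time.time() - t0) / 10:.3f}s per forward pass")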

The iPhone app allows the use of YOLOv3-SPP and YOLOv3-tiny, trained on COCO 2014 for 80 object classes. The iOS app is not currently open source, though we offer consulting services if you aim to create something similar: https://www.ultralytics.com/store
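
For reference only (this is not the Ultralytics iOS pipeline), the usual first step toward running a PyTorch model on an iPhone is an ONNX export, roughly as in the sketch below; the subsequent CoreML conversion with coremltools is not shown. The placeholder model, input size, and file name are assumptions:

import torch
import torch.nn as nn

# Placeholder model standing in for a trained detector (NOT the repo's Darknet class).
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()).eval()
dummy_input = torch.zeros(1, 3, 416, 416)  # NCHW input used to trace the export

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",            # placeholder output file name
    input_names=["image"],
    output_names=["output"],
)
# A CoreML *.mlmodel could then be produced from model.onnx with a converter
# such as coremltools / onnx-coreml (not shown here).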

glenn-jocher commented 5 years ago

@FeatheryW ah, I forgot: when you specify weights in detect.py or test.py, you also need to specify the corresponding *.cfg file, so the correct commands would be:

python3 detect.py --webcam --weights weights/yolov3-tiny.weights --cfg cfg/yolov3-tiny.cfg
python3 detect.py --webcam --weights weights/yolov3-tiny.weights --cfg cfg/yolov3-tiny.cfg --img-size 320

FeatheryW commented 5 years ago

@glenn-jocher After the git pull, the webcam is no longer working on macOS.

I get an error message stating the following:
usage: detect.py [-h] [--cfg CFG] [--data DATA] [--weights WEIGHTS]
                 [--images IMAGES] [--img-size IMG_SIZE]
                 [--conf-thres CONF_THRES] [--nms-thres NMS_THRES]
                 [--fourcc FOURCC] [--output OUTPUT]
detect.py: error: unrecognized arguments: --webcam

And how would I go about training with yolov3-tiny.cfg? Every time I try, I get the following error message:

Namespace(accumulate=4, batch_size=16, bucket='', cfg='cfg/yolov3-tiny.cfg', data='data/coco.data', epochs=100, evolve=False, img_size=416, multi_scale=False, nosave=False, notest=False, num_workers=4, rect=False, resume=False, transfer=False, var=0, xywh=False)
Using CUDA device0 _CudaDeviceProperties(name='Tesla T4', total_memory=15079MB)

Traceback (most recent call last):
  File "train.py", line 359, in <module>
    accumulate=opt.accumulate)
  File "train.py", line 143, in train
    cutoff = load_darknet_weights(model, weights + 'yolov3-tiny.conv.15')
  File "/content/drive/My Drive/yolov3-master/models.py", line 289, in load_darknet_weights
    bn_rm = torch.from_numpy(weights[ptr:ptr + num_b]).view_as(bn_layer.running_mean)
RuntimeError: shape '[16]' is invalid for input of size 5

glenn-jocher commented 5 years ago

It works fine on our iMac. Maybe you need a clean git clone:

(yolov3) Glenns-iMac:yolov3 glennjocher$ python3 detect.py --webcam
Namespace(cfg='cfg/yolov3-spp.cfg', conf_thres=0.5, data='data/coco.data', fourcc='mp4v', half=False, images='data/samples', img_size=416, nms_thres=0.5, output='output', webcam=True, weights='weights/yolov3-spp.weights')
Using CPU

webcam 0: 256x416 1 persons, Done. (0.358s)
webcam 1: 256x416 1 persons, Done. (0.390s)
webcam 2: 256x416 1 persons, Done. (0.363s)
webcam 3: 256x416 1 persons, Done. (0.347s)
webcam 4: 256x416 1 persons, Done. (0.378s)

To train with yolov3-tiny you need to download the backbone first from our Google Drive folder (it's unavailable on the pjreddie server). I see your error message; this occurred because curl downloaded the 404 'file not found' HTML page and saved it as the tiny backbone. We've updated the code to streamline this reporting in the future. If you git pull, it will now direct you to the appropriate folder: AssertionError: weights/yolov3-tiny.conv.15 missing, download from https://drive.google.com/drive/folders/1uxgUBemJVw9wZsdpboYbzUN4bcRhsuAI
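
As an aside, one quick way to catch that failure mode (an HTML 404 page saved in place of the weights) is to sanity-check the downloaded file before training. The snippet below is only an illustrative sketch, not part of this repo:

import os

def looks_like_html(path, min_size=1024):
    # Flag files that are missing, suspiciously small, or start with HTML markup,
    # which is what happens when a downloader saves a 404 page as the backbone.
    if not os.path.isfile(path) or os.path.getsize(path) < min_size:
        return True
    with open(path, "rb") as f:
        head = f.read(64).lstrip().lower()
    return head.startswith(b"<!doctype") or head.startswith(b"<html")

path = "weights/yolov3-tiny.conv.15"
if looks_like_html(path):
    print(f"{path} is missing or looks like an HTML error page; re-download it.")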

Download the tiny backbone, place it in your /weights folder, and then run:

(yolov3) Glenns-iMac:yolov3 glennjocher$ python3 train.py --cfg cfg/yolov3-tiny.cfg
Namespace(accumulate=2, batch_size=32, bucket='', cfg='cfg/yolov3-tiny.cfg', data='data/coco.data', epochs=273, evolve=False, img_size=416, img_weights=False, multi_scale=False, nosave=False, notest=False, num_workers=4, rect=False, resume=False, transfer=False, xywh=False)
Using CPU

Model Summary: 37 layers, 8.85237e+06 parameters, 8.85237e+06 gradients

     Epoch   gpu_mem   GIoU/xy        wh       obj       cls     total   targets  img_size
     0/272        0G     0.994         0      7.16      8.83        17       222       416:   0%|      | 4/3665 [00:21<5:38:29,  5.55s/it]