Open deepkshikha opened 6 years ago
RNN, GRU, LSTM, ... are for sequence prediction, not for detection. They are used for:
- Motion prediction - Object tracking
- Price prediction - Trading robots
- Text prediction - Text generation: https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/rnn_tolstoy.cmd
- ...

./darknet rnn generate cfg/rnn.cfg tolstoy.weights -srand 2 -seed Chapter

Just download: https://pjreddie.com/media/files/tolstoy.weights
More info: https://pjreddie.com/darknet/rnns-in-darknet/
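The idea behind `rnn generate` — sampling characters one at a time conditioned on the preceding text, starting from a seed — can be sketched like this (a toy character-bigram sampler rather than a real RNN; all names here are illustrative, and `Random(2)` stands in for `-srand 2`):

```python
import random

def train_bigram(text):
    # record, for each character, which characters followed it in the corpus
    model = {}
    for a, b in zip(text, text[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, seed, length, rng):
    # start from the seed (like -seed Chapter) and sample one char at a time
    out = list(seed)
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return "".join(out)

rng = random.Random(2)  # fixed seed for reproducible sampling
model = train_bigram("chapter one. the chapter begins. ")
print(generate(model, "ch", 20, rng))
```

A real RNN conditions on the whole history rather than just the previous character, which is why it can produce Tolstoy-like prose instead of bigram gibberish.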
https://github.com/tensorflow/models/tree/master/research/object_detection
A selection of trainable detection models, including: Single Shot Multibox Detector (SSD) with MobileNet, SSD with Inception V2, Region-Based Fully Convolutional Networks (R-FCN) with Resnet 101, Faster RCNN with Resnet 101, Faster RCNN with Inception Resnet v2
As you can see, it supports SSD, R-FCN, and Faster RCNN with ResNet-101 / Inception ResNet v2. You can compare these networks with YOLOv3:
As you can see, SSD, R-FCN, Faster RCNN ResNet-101 / Inception ResNet v2, and RetinaNet are all slower and less accurate than YOLOv3.
Thanks
There is one more question. I have trained a model over multiple GPUs and it is very fast; even with a multi-core CPU it is faster. How does the code divide the work over multiple GPUs batch-wise, i.e. how does it get distributed?
> I have trained model over multi GPU and it is very fast even when we use multicore CPU it is faster
What do you mean?
If you train using multi-GPU then it doesn't matter whether you compile with OPENMP=1 or OPENMP=0.
> how the code divides over multi GPU batch wise or how it get distributed ?
For example, you use 4 GPUs, so ngpus=4.

If in the yolov3.cfg there are:

batch=64 subdivisions=8

then after parsing this becomes batch=4 subdivisions=8, so per iteration the loader fetches

imgs = batch*subdivisions*ngpus = 4*8*4 = 128

images, and each GPU processes batch*subdivisions = 64 images as usual (as for 1 GPU):
https://github.com/AlexeyAB/darknet/blob/cda8171feb76bcb405350fd8341d42a0300e2f4b/src/parser.c#L602
https://github.com/AlexeyAB/darknet/blob/cda8171feb76bcb405350fd8341d42a0300e2f4b/src/parser.c#L608
https://github.com/AlexeyAB/darknet/blob/cda8171feb76bcb405350fd8341d42a0300e2f4b/src/detector.c#L69
https://github.com/AlexeyAB/darknet/blob/cda8171feb76bcb405350fd8341d42a0300e2f4b/src/data.c#L863
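The arithmetic above can be sketched as a small helper (a minimal sketch; the function name is illustrative, mirroring `imgs = net.batch * net.subdivisions * ngpus` in detector.c):

```python
def images_per_iteration(batch, subdivisions, ngpus):
    # total images loaded per training iteration across all GPUs,
    # mirroring imgs = net.batch * net.subdivisions * ngpus in detector.c
    return batch * subdivisions * ngpus

# with the parsed mini-batch of 4, subdivisions=8, and 4 GPUs:
print(images_per_iteration(4, 8, 4))  # → 128
```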
I wanted to ask only about the speed, which increases over multi-GPU or multi-core CPU, and how the work is distributed over multiple GPUs or CPU cores. I understand now, thanks for the fast reply.
multi-GPU - different mini-batches will be processed on different GPUs, and the GPUs will be synchronized every 100 iterations
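That periodic synchronization can be sketched as averaging the per-GPU weight copies (a minimal sketch of the general idea, not darknet's actual implementation; all names are illustrative):

```python
def sync_weights(gpu_weights):
    # average corresponding weights across the per-GPU model copies
    n = len(gpu_weights)
    return [sum(ws) / n for ws in zip(*gpu_weights)]

def maybe_sync(gpu_weights, iteration, sync_every=100):
    # each GPU trains on its own mini-batches; copies are merged
    # only every sync_every iterations
    if iteration % sync_every == 0:
        merged = sync_weights(gpu_weights)
        gpu_weights = [list(merged) for _ in gpu_weights]
    return gpu_weights

print(sync_weights([[1.0, 2.0], [3.0, 4.0]]))  # → [2.0, 3.0]
```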
multicore-CPU - different rows in the GEMM function will be processed on different CPU cores
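Splitting GEMM rows across cores can be sketched like this (a minimal Python sketch using plain nested lists; darknet itself does this in C with OpenMP):

```python
from concurrent.futures import ThreadPoolExecutor

def matmul_row(row, B):
    # one output row of C = A @ B: dot the row of A with each column of B
    return [sum(a * b for a, b in zip(row, col)) for col in zip(*B)]

def parallel_gemm(A, B, workers=4):
    # different output rows are handled by different workers,
    # analogous to an OpenMP parallel-for over the outer GEMM loop
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda row: matmul_row(row, B), A))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(parallel_gemm(A, B))  # → [[19, 22], [43, 50]]
```

Since each output row depends only on one row of A and all of B, the rows can be computed fully independently, which is why this loop parallelizes so well over cores.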
Thanks
@AlexeyAB I have seen gru.cfg and rnn.cfg in the cfg folder. If I use these files instead of v3, will it give higher accuracy? And if I want to add more layers in v3, is that possible? And how does it differ from this repository: https://github.com/tensorflow/models/tree/master/research/object_detection