-
The Chainer implementation of Justin Johnson's paper "Perceptual Losses for Real-Time Style Transfer and Super-Resolution" uses a script that reduces the model to just the convolutional layers, creating a mod…
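(Not the Chainer script itself, but a minimal sketch of the same idea, assuming PyTorch/torchvision: keep only the convolutional feature extractor of VGG-16 and drop the classifier head.)

```python
# Sketch (assumption: PyTorch + torchvision, not the original Chainer script):
# keep only the conv/ReLU/pool part of VGG-16, discarding the fully connected layers.
import torch
import torchvision

vgg = torchvision.models.vgg16(pretrained=True)
conv_only = vgg.features  # convolutional feature extractor only

conv_only.eval()
with torch.no_grad():
    feats = conv_only(torch.randn(1, 3, 224, 224))
print(feats.shape)  # (1, 512, 7, 7) feature map, no classification head
```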
-
I trained a VGG-16 on CIFAR-10 (32x32 pixels). When I run netdissect for different conv layers, e.g. conv1 and conv13, I get the same results. Should it be like this?
I got the same results for both: …
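(A quick sanity check one can run outside netdissect; a sketch assuming a torchvision VGG-16, with layer indices chosen for illustration only: confirm that conv1 and conv13 really produce different activations.)

```python
# Sketch (assumptions: PyTorch + torchvision VGG-16; indices 0 and 28 are the first
# and thirteenth conv layers in torchvision's layout, used here only for illustration).
import torch
import torchvision

model = torchvision.models.vgg16(pretrained=True).eval()
acts = {}

def hook(name):
    def fn(module, inp, out):
        acts[name] = out.detach()
    return fn

model.features[0].register_forward_hook(hook("conv1"))    # first conv layer
model.features[28].register_forward_hook(hook("conv13"))  # thirteenth conv layer

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))

print(acts["conv1"].shape, acts["conv13"].shape)  # e.g. (1, 64, 224, 224) vs (1, 512, 14, 14)
```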
-
Hello, I am using my own training data. When I run the script ./experiments/scripts/faster_rcnn_end2end.sh 0 VGG16 rrpn, I get the error:
./experiments/scripts/faster_rcnn_end2end.sh: line 78: 38172 Float…
-
Hey @snwagh.
I am trying to test AlexNet and VGG16, which take the **CIFAR-10** and **ImageNet** datasets as training data. Is there a data loader for these two datasets?
And I notice that the input s…
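(Not an answer to whether this repo ships its own loader, but a minimal sketch of a standard CIFAR-10 loader, assuming PyTorch/torchvision; the batch size and normalization statistics are illustrative.)

```python
# Sketch (assumption: PyTorch + torchvision; not this repository's own data loader).
import torch
import torchvision
import torchvision.transforms as T

transform = T.Compose([
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),  # common CIFAR-10 stats
])

train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                          download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128,
                                           shuffle=True, num_workers=2)

images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([128, 3, 32, 32])
```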
-
Hello, Li,
First, congratulations on your excellent work, and thank you very much for sharing the code. It's really helpful for people like me who are starting to work on mammography.
But when I ran a simple …
-
I have trained a VGG16 Faster R-CNN model on a custom dataset using ImageNet-pretrained weights. Evaluating it with the PASCAL VOC 2010 metric I get 76% mAP. When I tried to obtain the COCO evaluation me…
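(For reference, COCO-style metrics are usually computed with pycocotools. A minimal sketch, assuming the detections have already been exported to a COCO-format JSON; the file paths here are hypothetical.)

```python
# Sketch (assumptions: pycocotools installed; annotation and detection file names are hypothetical).
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val.json")   # ground-truth annotations in COCO format
coco_dt = coco_gt.loadRes("detections_val.json")   # detections exported by the trained model

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP@[.50:.95], AP@.50 (roughly the VOC-style metric), AP@.75, etc.
```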
-
Training on the VG dataset using this repo on 4x P40 GPUs, I got an mAP close to 0 (0.0006/0.0004/0).
Using the given model (vgg16 in vg), which should reach 4.4 mAP, I got 0.4 (0.0004) after testing the model …
-
I want to convert the GluonCV SSD model to Caffe, but I got an error:
Traceback (most recent call last):
File "convert.py", line 18, in
text_net, binary_weights = convert_ssd_model(model, input_shape=(1,3,shape,shape), to_bgr=Tr…
-
Hi, first of all, I really appreciate your impressive work.
I just followed your [command](https://github.com/balancap/SSD-Tensorflow#fine-tuning-a-network-trained-on-imagenet), which guides how to …
-
How can I obtain VGG16: flops = [3.1, 57.8, 14.1, 28.9, 7.0, 14.5, 14.5, 3.5, 7.2, 7.2, 1.8, 1.8, 1.8, 1.8] according to FLOPs = 2 * H * W * (C_in * K^2 + 1) * C_out?
For example, the no.1 conv layer has FLOPs(0) = 3.1, while FLOPs = 2 * H * W * (C…
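(As a worked example of the formula itself, using my own arithmetic rather than the paper's numbers: for VGG-16 conv1_1 with H = W = 224, C_in = 3, K = 3, C_out = 64.)

```python
# Worked example of FLOPs = 2 * H * W * (C_in * K^2 + 1) * C_out for VGG-16 conv1_1.
# (Illustrative only; it does not reproduce the 3.1 value asked about above.)
H = W = 224
C_in, K, C_out = 3, 3, 64

flops = 2 * H * W * (C_in * K**2 + 1) * C_out
print(flops)  # 179830784 ~ 1.8e8 FLOPs (about 0.18 GFLOPs)
```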