-
Hi, besides VGG-16, have you tried any other networks, such as ResNet, DenseNet, or DualPathNet?
I have tried resnet101 and densenet92 by replacing the corresponding layers in vgg16. It's a pity these…
dkjsh updated
6 years ago
-
Hello, @AlexeyAB. I want to classify the direction pedestrians are facing.
The training data contains a total of 8 directions. I used a darknet-reference network and a vgg-16 network. However, the valida…
-
Hi, I need to convert the model VGG_FACE.t7 (I downloaded this model from https://www.robots.ox.ac.uk/~vgg/software/vgg_face/) to a PyTorch model. When I run the command "python convert_torch.py -m VG…
-
Please add the same features as in https://www.robots.ox.ac.uk/~vgg/software/via/via.html.
Detail:
![image](https://user-images.githubusercontent.com/48760748/150496080-d54480a9-d9d0-40fe-…
-
Hi, dear author, how can I get the trained weights file "Saved_Model.h5"?
-
Hi!
Thanks for your contribution! But when I run the command line you mentioned in the README file
"python attack.py --model_name vgg_16 --attack_method FIAPIM --layer_name vgg_16/conv3/conv3_3/Relu --e…
-
I run out of GPU memory when loading a [128,64,224,224] tensor; my GPU has about 11 GB of memory available. Is this tensor part of VGG, and how should I adjust things?
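As a rough sanity check (assuming float32; the shape and the 11 GB figure are taken from the question above), a single copy of that tensor already occupies about 1.5 GiB, and a VGG forward/backward pass keeps many activation tensors of similar size plus gradients, so batch 128 at 224×224 can easily exceed 11 GB. Reducing the batch size is the usual first adjustment:

```python
# Rough memory estimate for one float32 tensor of shape [128, 64, 224, 224].
# Shape and GPU size are taken from the question; this is only a back-of-the-
# envelope check, not a full accounting of VGG's activation memory.
shape = (128, 64, 224, 224)

num_elements = 1
for dim in shape:
    num_elements *= dim

bytes_fp32 = num_elements * 4        # 4 bytes per float32 element
gib = bytes_fp32 / (1024 ** 3)
print(f"{gib:.2f} GiB for one copy")
```

Since training also stores intermediate activations for backprop, the practical fix is usually a smaller batch (e.g. 32 or 16), which scales this footprint linearly.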
-
I tried to run demo_two_stream.py after downloading vgg16.ckpt, but I was not able to load the model.
-
Hi, I'm really new to deep learning. While running disguiseNet.py in python, I keep getting the error
File "disguiseNet.py", line 97
print vgg_model.summary()
^
SyntaxError…
-
I am training with instance_norm and the arch c9s1-16,d32,d64,R64,R64,R64,R64,R64,u32,u16,C9S1-3. In train.lua, the default settings for the style and content layers are
style_layers = {4, 9, 16,…
ghost updated
7 years ago