-
Thank you for your great contribution.
I noticed that the dense layer could be incorporated into VGG successfully, but I did not find the implementation of the CONV layer. I would like to know if the CON…
-
Hi, besides VGG-16, have you tried any other networks, such as ResNet, DenseNet, or DualPathNet?
I have tried resnet101 and densenet92 by replacing the corresponding layers in vgg16. It's a pity these…
dkjsh updated 5 years ago
-
Hello, @AlexeyAB I want to classify the direction of the pedestrian.
The training data contains a total of 8 directions. I used the darknet-reference network and the VGG-16 network. However, the valida…
-
As pointed out in the README, I don't see any link for the dataset or feature vectors. Where can I download them from?
-
Hi,
I am trying to run the l1-norm-pruning code on a Windows machine, but I am getting an error due to a multiprocessing failure:
RuntimeError:
An attempt has been made to start a ne…
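This RuntimeError is the standard Windows symptom of unguarded worker-spawning code: Windows uses the "spawn" start method, so each worker re-imports the main script, and without an entry-point guard the import itself tries to start workers again. A minimal sketch of the usual fix (this is a generic illustration, not code from the repository):

```python
# Sketch of the standard fix for the "An attempt has been made to
# start a new process..." RuntimeError on Windows: any code that
# spawns worker processes (a multiprocessing Pool, or a PyTorch
# DataLoader with num_workers > 0) must run only under the
# __main__ guard, never at module import time.
import multiprocessing as mp

def square(x):
    return x * x

def main():
    # Worker-spawning code lives inside a function called from the
    # guard below, so re-imports in spawned workers stay side-effect free.
    with mp.Pool(2) as pool:
        results = pool.map(square, [1, 2, 3])
    return results

if __name__ == "__main__":  # required on Windows ("spawn" start method)
    print(main())
```

In a training script, the same pattern means moving the DataLoader creation and the training loop into a `main()` function invoked from the guard.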
-
# Key Idea
- Propose a ***feed-forward network*** that applies the style of a ***painting to a sketch***
- Explain why U-Net training might ***fail***, and propose a remedy using two ***guide decode…
-
This one is a situation that I've come across before with Lazarus (it was fixed for FPC).
Torch will optionally download models from a centralized repository if required. When it does this it uses …
-
Hi,
I'm sorry for creating an issue just for a question. Could you tell me how to train the model (VGG or Inception) with my own dataset? I tried the trained models with anime style, but the output images …
-
Hi there, sorry to bother you. It seems the trained model .pth file is not linked in the README. Is there a link to download the trained model with the VGG backbone trained on the COCO dataset? Thanks a lot…
-
I annotated my own dataset using the VGG annotator and exported the annotations as JSON.
When training, I am getting this error:
![image](https://user-images.githubusercontent.com/61691413/108746894-8e9…