QDucasse / nn_benchmark

🧠 Benchmark facility to train networks on different datasets for PyTorch/Brevitas

quantVGG16 model for input size (1,3,224,224) #23

Open simplelins opened 3 years ago

simplelins commented 3 years ago

Hi, I tried to train the quantvgg16 model on the ILSVRC2012 dataset, but I could not get it to converge. Have you tried this? For ILSVRC2012, the classifier is as below:

# VGG16 classifier head, built with the repo's quantized layer helpers
self.classifier = nn.Sequential(
            make_quant_linear(512 * 7 * 7, 4096, bias=True, bit_width=bit_width),
            make_quant_relu(bit_width),
            nn.Dropout(),
            make_quant_linear(4096, 4096, bias=True, bit_width=bit_width),
            make_quant_relu(bit_width),
            nn.Dropout(),
            make_quant_linear(4096, num_classes, bias=False, bit_width=bit_width,
                              weight_scaling_per_output_channel=False),
        )
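For reference, here is a minimal sketch of what an equivalent classifier could look like written directly against brevitas.nn, assuming the make_quant_linear/make_quant_relu helpers wrap QuantLinear/QuantReLU; the bit width and class count below are illustrative, not taken from the repo:

    # Hypothetical sketch of the same head using Brevitas layers directly.
    import torch.nn as nn
    from brevitas.nn import QuantLinear, QuantReLU

    bit_width = 4        # illustrative precision
    num_classes = 1000   # ILSVRC2012

    classifier = nn.Sequential(
        QuantLinear(512 * 7 * 7, 4096, bias=True, weight_bit_width=bit_width),
        QuantReLU(bit_width=bit_width),
        nn.Dropout(),
        QuantLinear(4096, 4096, bias=True, weight_bit_width=bit_width),
        QuantReLU(bit_width=bit_width),
        nn.Dropout(),
        QuantLinear(4096, num_classes, bias=False, weight_bit_width=bit_width),
    )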
I also tried your code on MNIST, and it quickly reached 100% accuracy!
Thanks!
QDucasse commented 3 years ago

Hi, I will put an announcement in the README, but I'm sorry, I don't plan on maintaining this repo. I wrote a simple wrapper around Brevitas to run simple experiments with the base MLP examples FINN provided, and it is not meant to replace the examples themselves (https://github.com/Xilinx/finn-examples). I would advise you to ask your questions on the Gitter channels or directly in the FINN/Brevitas repositories. As for your case in particular, I remember I could not get VGG/MobileNet to work and planned on porting them later, which eventually never happened. In conclusion, sorry, but I cannot help you! ☹️