-
### Environment info
iOS
### steps:
1. I train Inception v1 (Slim) on a subset of the ImageNet dataset (269 of the 1000 classes)
2. Convert the .ckpt into a .pb with freeze_graph
3. Convert the .pb file to 8-bit precision
4. …
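For reference, steps 2 and 3 can be sketched roughly as below. The file names and checkpoint path are assumptions (not taken from this report); the output node name is the usual one for Slim's Inception v1, and the `transform_graph` tool is one common way to get 8-bit precision in this TF-1.x workflow.

```shell
# Step 2: freeze the trained checkpoint into the inference graph
# (paths are assumptions -- adjust to your own files).
python freeze_graph.py \
  --input_graph=inception_v1_inf_graph.pb \
  --input_checkpoint=model.ckpt \
  --output_graph=frozen_inception_v1.pb \
  --output_node_names=InceptionV1/Logits/Predictions/Reshape_1

# Step 3: quantize the frozen graph to 8-bit with graph_transforms.
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
  --in_graph=frozen_inception_v1.pb \
  --out_graph=quantized_inception_v1.pb \
  --inputs=input \
  --outputs=InceptionV1/Logits/Predictions/Reshape_1 \
  --transforms='quantize_weights quantize_nodes'
```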
-
## Description
Hello. I ran into a problem with the model described in https://mxnet.incubator.apache.org/tutorials/python/predict_image.html
I tried to infer the shapes of the model inputs but encountered an …
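In MXNet, shapes are normally propagated with `sym.infer_shape(data=(N, C, H, W))` once a single input shape is supplied. As a minimal, self-contained illustration of the arithmetic behind such inference (this is not the tutorial's code), the spatial output size of one convolution layer can be computed directly:

```python
def conv2d_out_shape(h, w, kernel, stride=1, pad=0):
    """Output spatial size of a 2-D convolution (floor convention)."""
    out_h = (h + 2 * pad - kernel) // stride + 1
    out_w = (w + 2 * pad - kernel) // stride + 1
    return out_h, out_w

# A 224x224 input through a 7x7/2 convolution with padding 3
# (the Inception stem) halves the spatial resolution:
print(conv2d_out_shape(224, 224, kernel=7, stride=2, pad=3))  # (112, 112)
```

Shape inference in the framework simply chains this kind of rule layer by layer, which is why it fails as soon as one input shape is missing or inconsistent.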
-
I transformed the flowers dataset to TFRecords as your GitHub repo shows, and the training works correctly.
However, when I change the dataset to another one (17flowers), with the following structure:
flowers\
…
-
Hi,
I am running some tests on Slim's ImageNet training using Inception-ResNet-v2. The training is done on AWS EC2 instances (p2.xlarge and p2.8xlarge). Here are the specs for both:
1) p2.xlarge…
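For comparing the single-GPU p2.xlarge against the 8-GPU p2.8xlarge, Slim's trainer can place graph clones on multiple GPUs via `--num_clones`. A sketch of the 8-GPU launch; the dataset path is an assumption:

```shell
# 8-GPU run on p2.8xlarge (dataset_dir is an assumed path).
python train_image_classifier.py \
  --model_name=inception_resnet_v2 \
  --dataset_name=imagenet \
  --dataset_dir=/data/imagenet-tfrecords \
  --num_clones=8
```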
-
@soeaver have you by any chance done any training of ILSVRC12 on your inception v4 / inception_resnet2? I would be interested in ILSVRC12 pre-trained models for both architectures, therefore I have…
-
I am trying to replace the provided inception5h with the latest inception-resnet-V2. I have downloaded inception-resnet-V2 by entering
`wget http://download.tensorflow.org/models/inception_resnet_v2_…
-
If I remember the paper correctly, Inception-ResNet-v2 has comparable accuracy but trains faster.
BTW I think the paper URL formatting in README needs some fixing.
jrao1 updated 7 years ago
-
Finally, I see someone training Inception-v3 on Caffe :)
Referring to Google's Inception-v3 paper (Figure 2): at epoch 53, the test top-1 accuracy is above 0.7 (2×10^6 iterations). Do you have any idea ab…
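As a rough sanity check on those numbers (the batch size here is an assumption; the ImageNet-1k training-set size is the standard ~1.28M images), 53 epochs is indeed on the order of 2×10^6 iterations:

```python
# Assumed numbers: ImageNet-1k training set size, batch size of 32.
num_images = 1_281_167
batch_size = 32
iters_per_epoch = num_images // batch_size
total_iters = 53 * iters_per_epoch
print(iters_per_epoch)  # 40036
print(total_iters)      # 2121908, i.e. roughly 2*10^6
```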