forresti / SqueezeNet

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters
BSD 2-Clause "Simplified" License

add deploy #2

Closed · terrychenism closed this 8 years ago

terrychenism commented 8 years ago

deploy.prototxt added

forresti commented 8 years ago

Thanks for your contribution! We will run some tests and let you know if anything needs to be changed. :)

forresti commented 8 years ago

A couple of notes:

  1. We use the default values of lr_mult, decay_mult, and bias_filler during training, so there's no need to include them in the prototxt. (Also, lr_mult, decay_mult, and bias_filler govern weight initialization and weight updates during training, but I don't think they have any effect during inference.)
  2. For inference, if you want to keep it simple, you could also remove the xavier weight-initialization lines (see the sketch after this list).
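For illustration, here is a minimal sketch of the kind of simplification meant above. The layer name and sizes are hypothetical, not copied from the actual SqueezeNet prototxt; the point is just which fields a deploy prototxt can drop.

```
# Training-style convolution layer (illustrative; name and sizes are hypothetical):
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 1 decay_mult: 1 }   # weight learning-rate/decay multipliers (training only)
  param { lr_mult: 2 decay_mult: 0 }   # bias learning-rate/decay multipliers (training only)
  convolution_param {
    num_output: 96
    kernel_size: 7
    stride: 2
    weight_filler { type: "xavier" }   # weight initialization, used only at training time
    bias_filler { type: "constant" }   # bias initialization, used only at training time
  }
}

# The same layer in a deploy prototxt, with the training-only fields removed:
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 96
    kernel_size: 7
    stride: 2
  }
}
```

At inference time the weights come from the trained .caffemodel, so the fillers and learning-rate multipliers are simply ignored; removing them just keeps the deploy file short.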
terrychenism commented 8 years ago

Good suggestions. I have updated the deploy file.

forresti commented 8 years ago

Merged as https://github.com/DeepScale/SqueezeNet/commit/445b1d97f8cfd106727e658d17222b24cfddf17d