
squeezenet #68

Open changtimwu opened 7 years ago

changtimwu commented 7 years ago

SqueezeNet 1.1 has a lower computation requirement (its README reports roughly 2.4x less computation than v1.0 at comparable accuracy). https://github.com/DeepScale/SqueezeNet/tree/master/SqueezeNet_v1.1

There are many options. Can we combine them all?

Few SqueezeNet implementations include Deep Compression. Only songhan himself implements it, and it's Caffe only.

TensorFlow has general quantization techniques, and songhan has left positive comments on them. By utilizing these, maybe we can build Deep Compression + SqueezeNet in TensorFlow.
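As a rough illustration of what such quantization buys us (this is neither songhan's Deep Compression pipeline nor TensorFlow's own tooling, just a minimal NumPy sketch with illustrative names):

```python
import numpy as np

def quantize_dequantize(w, num_bits=8):
    """Linearly quantize a weight tensor to num_bits integers, then map back to floats.

    Simulates the storage side of post-training quantization: weights are kept
    as small integers plus one scale/offset per tensor. Assumes num_bits <= 8.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / (qmax - qmin) or 1.0   # guard against constant tensors
    q = np.clip(np.round((w - w_min) / scale), qmin, qmax).astype(np.uint8)
    return q.astype(np.float32) * scale + w_min      # de-quantized approximation

# Example: quantize one 3x3 convolution kernel and check the error it introduces.
kernel = np.random.randn(3, 3, 64, 64).astype(np.float32)
approx = quantize_dequantize(kernel)
print("max abs error:", float(np.abs(kernel - approx).max()))
```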

changtimwu commented 7 years ago

Some Chinese-language docs.

reading group: http://www.kdnuggets.com/2016/09/deep-learning-reading-group-squeezenet.html

changtimwu commented 7 years ago

Darknet's Tiny Darknet could be smaller than SqueezeNet.
https://pjreddie.com/darknet/tiny-darknet/

changtimwu commented 7 years ago

FPGA implementation: https://github.com/dgschwend/zynqnet

FINN: BNN on FPGA https://forums.xilinx.com/t5/Xcell-Daily-Blog/Zynq-PYNQ-Python-BNNs-Machine-inference-does-not-get-any-easier/ba-p/754705

changtimwu commented 7 years ago

implementation:

changtimwu commented 7 years ago

The same author also proposes SqueezeDet to show that SqueezeNet can be applied to KITTI tasks: https://arxiv.org/abs/1612.01051

changtimwu commented 7 years ago

Regarding quantization, DoReFa-Net does excellent work (a sketch of its weight quantizer follows below):

  1. It is proven on ResNet.
  2. Its precision is flexible (1, 4, or 32 bits).
  3. Its second author created tensorpack, a good framework, for this paper.

Trained Ternary Quantization is derived work.
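For reference, a NumPy sketch of the paper's k-bit weight quantizer (forward pass only; the straight-through gradient trick used during training is omitted, and the paper handles the 1-bit case separately with sign and mean scaling):

```python
import numpy as np

def quantize_k(x, k):
    """Round values in [0, 1] to the nearest multiple of 1/(2^k - 1), i.e. 2^k levels."""
    n = float(2 ** k - 1)
    return np.round(x * n) / n

def dorefa_quantize_weights(w, k):
    """DoReFa-Net k-bit weight quantization (forward pass only).

    tanh plus rescaling maps the weights into [0, 1], quantize_k snaps them
    to k-bit levels, and the final affine map brings them back to [-1, 1].
    """
    if k == 32:                      # 32 bits means full-precision weights
        return w
    w = np.tanh(w)
    w = w / (2.0 * np.max(np.abs(w))) + 0.5
    return 2.0 * quantize_k(w, k) - 1.0

# 4-bit example (the paper special-cases k=1 as sign(w) scaled by the mean |w|)
print(dorefa_quantize_weights(np.array([-0.7, -0.1, 0.2, 1.3]), k=4))
```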

changtimwu commented 7 years ago

Let's compare https://github.com/songhan/SqueezeNet-DSD-Training with the original SqueezeNet to tell whether DSD is reproducible.

changtimwu commented 7 years ago

google mobilenet https://arxiv.org/abs/1704.04861

Speed/accuracy trade-offs for modern convolutional object detectors https://arxiv.org/abs/1611.10012

Another detailed discussion of two different approaches to minimizing model size, network architecture vs. precision: https://arxiv.org/pdf/1605.06402.pdf

changtimwu commented 7 years ago

Quite a good net comparison in Keras.

changtimwu commented 7 years ago

Really nice to read. The Keras author demonstrates a typical image-classification workflow.

https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html gist: https://gist.github.com/fchollet/f35fbc80e066a49d65f1688a7e99f069
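A condensed sketch of what that post does, written against the tf.keras API (the blog's actual network is a bit deeper and it fixes specific step counts; the directory layout and hyperparameters here are assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augment the small training set on the fly; only rescale the validation set.
train_gen = ImageDataGenerator(rescale=1. / 255, shear_range=0.2,
                               zoom_range=0.2, horizontal_flip=True)
val_gen = ImageDataGenerator(rescale=1. / 255)

train_data = train_gen.flow_from_directory('data/train', target_size=(150, 150),
                                           batch_size=32, class_mode='binary')
val_data = val_gen.flow_from_directory('data/validation', target_size=(150, 150),
                                       batch_size=32, class_mode='binary')

# A small convnet trained from scratch on the augmented data.
model = models.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(150, 150, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid'),   # binary: dog vs. cat
])
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(train_data, validation_data=val_data, epochs=30)
```

The point of the post is that heavy augmentation lets a small convnet learn from little data, before moving on to features from a pretrained model.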

changtimwu commented 7 years ago

We should follow this guy; all his repos are closely related to what we're doing. https://github.com/Zehaos?tab=repositories

mysterious company
http://www.dt42.io https://github.com/DT42

changtimwu commented 7 years ago

Looks good, and it has nothing to do with Uber: UberNet: Training a ‘Universal’ Convolutional Neural Network for Low-, Mid-, and High-Level Vision using Diverse Datasets and Limited Memory.

changtimwu commented 7 years ago

https://github.com/allanzelener/YAD2K YOLO9000's tensorflow/keras implementation

changtimwu commented 7 years ago

I want to try SqueezeNet 1.1. rcmalli implements it, but his input/output handling is a mess. chasingbob's dogs-vs-cats example uses DT42's SqueezeNet 1.0 implementation.
So I should start from chasingbob's example and swap his model for SqueezeNet 1.1 (rough sketch below).
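Roughly what I have in mind, as a sketch only: `attach_binary_head` is mine, and the `squeezenet_v11` backbone in the usage comment is a hypothetical stand-in for whichever Keras SqueezeNet 1.1 implementation ends up being used.

```python
from tensorflow.keras import layers, models

def attach_binary_head(backbone):
    """Reuse a convolutional backbone as a frozen feature extractor and add a dogs-vs-cats head."""
    backbone.trainable = False                       # freeze the pretrained features first
    x = layers.GlobalAveragePooling2D()(backbone.output)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(1, activation='sigmoid')(x)   # binary dog/cat output
    model = models.Model(backbone.input, out)
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

# Usage (hypothetical): `squeezenet_v11` would be a Keras SqueezeNet 1.1 model built
# without its ImageNet classifier on top, e.g.
#   model = attach_binary_head(squeezenet_v11)
```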

changtimwu commented 7 years ago

ARM Compute Library and SqueezeNet: https://arxiv.org/abs/1704.03751

changtimwu commented 7 years ago

Let's trace this one properly, no hand-waving: https://github.com/mtmd/Mobile_ConvNet

changtimwu commented 7 years ago

SqueezeNet just reduces the number of parameters; it doesn't save computation. (Comparison table clipped from the MobileNet paper.)
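A back-of-the-envelope check of why: convolution parameters scale with k*k*Cin*Cout, while multiply-accumulates additionally scale with the output spatial size, so a net that only trims parameters can still be expensive on large feature maps. A rough sketch (the layer shapes are illustrative, not an exact SqueezeNet trace):

```python
def conv_cost(k, c_in, c_out, h_out, w_out):
    """Return (parameter count, multiply-accumulate count) for a k x k convolution."""
    params = k * k * c_in * c_out
    macs = params * h_out * w_out          # every output pixel reuses the same weights
    return params, macs

# Same 3x3, 64->64 convolution applied at two different feature-map sizes.
for name, (k, ci, co, h, w) in {
    "late 3x3 conv, 56x56 map": (3, 64, 64, 56, 56),
    "early 3x3 conv, 112x112 map": (3, 64, 64, 112, 112),
}.items():
    p, m = conv_cost(k, ci, co, h, w)
    print(f"{name}: {p:,} params, {m:,} MACs")
# The parameter count is identical, but the early layer costs 4x the MACs:
# cutting parameters alone does not cut computation.
```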