NHZlX opened this issue 7 years ago:

If I implemented direct convolution with ordinary multithreading, what performance gap would I see compared to the SkimCaffe implementation?
Sorry about the late reply. What do you mean by "ordinary multithreading"? Do you mean pthreads? Whether you use pthreads or OpenMP shouldn't make a meaningful difference in performance. Please let me know if this was not what you were asking.
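For context, here is a minimal sketch in the spirit of the paper's direct sparse convolution, assuming stride 1 and a pre-padded input; all names and the CSR layout details are illustrative, not SkimCaffe's actual kernel. The threading model is orthogonal to the algorithm: the `#pragma omp` line below could just as well be a pthreads work partition over output channels.

```cpp
#include <vector>

// Weights for one layer in CSR format, one row per output channel.
// Column indices are pre-flattened (input channel, kernel y, kernel x)
// offsets into the padded input tensor, so the inner loop is a plain
// sparse-dense dot product.
struct SparseWeights {
  std::vector<int> rowptr;   // size = out_channels + 1
  std::vector<int> offset;   // flattened offsets into the padded input
  std::vector<float> value;  // nonzero weight values
};

void direct_sparse_conv(const SparseWeights& w, const float* in_padded,
                        int in_w_padded, float* out, int out_channels,
                        int out_h, int out_w) {
  #pragma omp parallel for  // a pthreads version would split rows the same way
  for (int oc = 0; oc < out_channels; ++oc) {
    for (int y = 0; y < out_h; ++y) {
      for (int x = 0; x < out_w; ++x) {
        // Base of the input window for this output position (stride 1).
        const float* base = in_padded + y * in_w_padded + x;
        float sum = 0.f;
        // Accumulate only over the nonzero weights of this output channel.
        for (int j = w.rowptr[oc]; j < w.rowptr[oc + 1]; ++j)
          sum += w.value[j] * base[w.offset[j]];
        out[(oc * out_h + y) * out_w + x] = sum;
      }
    }
  }
}
```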
@jspark1105 Yes, I mean pthreads. Thanks for your answer. Hmm, I have another question: does this code only implement inference? I didn't find the code for the pruning training.
@jspark1105 The paper "Faster CNNs with Direct Sparse Convolutions and Guided Pruning" mentions that one can use DNS to do the pruning with GSL. I have read the paper; the sparsity of each layer depends heavily on that layer's hyper-parameters, and in my experiments these hyper-parameters were hard to set. How do you guarantee that the sparsity of these layers lands between the lower bound and upper bound with this approach?
Regarding inference vs. training: we only speed up inference using sparsity. It's possible to also speed up pruning training, but it would be more challenging because the sparsity keeps changing during pruning training. If you're asking about the code for pruning training, please look at https://github.com/IntelLabs/SkimCaffe/blob/intel_scnn/src/caffe/solvers/sgd_solver.cpp to see how thresholding is applied when the regularization type is L1.
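As a rough illustration of what that solver code does, here is a hedged sketch of L1 regularization followed by magnitude thresholding; the function name and the explicit `threshold` parameter are mine for illustration, not SkimCaffe's actual interface:

```cpp
#include <cmath>
#include <cstddef>

// Sketch: add the L1 penalty to the gradient, then clamp small weights
// to exactly zero so the resulting tensor is truly sparse.
void l1_regularize_and_threshold(float* weight, float* diff, std::size_t n,
                                 float weight_decay, float threshold) {
  for (std::size_t i = 0; i < n; ++i) {
    // L1 regularization contributes weight_decay * sign(w) to the gradient.
    float sign = (weight[i] > 0.f) ? 1.f : ((weight[i] < 0.f) ? -1.f : 0.f);
    diff[i] += weight_decay * sign;
  }
  for (std::size_t i = 0; i < n; ++i) {
    // Thresholding: weights driven near zero by the L1 penalty are zeroed out.
    if (std::fabs(weight[i]) < threshold) weight[i] = 0.f;
  }
}
```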
Regarding DNS: the authors of the paper helped me with the hyper-parameters, so for the most accurate information I'd recommend contacting them directly. In SkimCaffe, sparsity is mostly controlled by weight_decay. This approach doesn't guarantee any lower or upper bound on sparsity, but we found weight_decay=5e-5 works well for AlexNet, GoogLeNetV1, and ResNet-50.
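For reference, these knobs sit in the solver prototxt. This is an illustrative fragment using standard Caffe solver fields; the net path and learning rate are placeholders, only the weight_decay value comes from the comment above:

```
net: "models/bvlc_alexnet/train_val.prototxt"  # placeholder path
base_lr: 0.001                # placeholder value
regularization_type: "L1"     # enables the L1 penalty/thresholding path
weight_decay: 5e-5            # the value reported to work well above
```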