Closed JSnobody closed 5 years ago
Sorry, I do not fully understand your question.
These include channel pruning, weight sparsification, weight quantization, network distillation, multi-GPU training, and hyper-parameter optimization.
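As a rough illustration of one of the listed methods, weight quantization maps full-precision weights onto a small set of discrete levels. The sketch below is a hypothetical NumPy helper (`quantize_uniform` is not part of PocketFlow's API), shown only to make the idea concrete:

```python
import numpy as np

def quantize_uniform(w, num_bits=8):
    """Uniform affine quantization of a weight tensor (illustrative sketch).

    Maps weights to num_bits integer levels, then de-quantizes back to
    floats so the rounding error introduced by quantization is visible.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin) or 1.0  # avoid zero scale
    zero_point = qmin - w.min() / scale
    q = np.clip(np.round(w / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale  # de-quantized weights
```

With 8 bits, the round-trip error per weight is bounded by roughly half the quantization step, which is why 8-bit quantization usually costs little accuracy.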
TensorFlow is a framework for both training and inference, while ncnn is only for inference on mobile devices. That's why we choose TensorFlow instead of ncnn to implement these methods, since they are all training algorithms for model compression.
Do PocketFlow's compression methods apply to other training frameworks like PyTorch or Keras? Why choose TensorFlow?
@3dimensions I also want to ask the same question: "Does the optimization apply to other DL frameworks?" By "optimization" I mean PocketFlow's compression methods.
@3dimensions @JSnobody We chose TensorFlow since it is one of the most popular DL frameworks at the moment. It is possible to extend PocketFlow to support other frameworks, e.g. using PyTorch as another back-end for the model compression methods, but this would require a lot of work. We consider this a future feature, but currently we do not have a clear timetable for it. On the other hand, these model compression methods are not limited to TensorFlow; you can certainly implement them with PyTorch or other DL frameworks.
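To illustrate the point that these methods are framework-agnostic, magnitude-based weight sparsification can be sketched in plain NumPy, independent of TensorFlow or PyTorch. This is a minimal sketch, not PocketFlow's actual implementation:

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (illustrative sketch).

    Any DL framework can apply the same idea: compute a magnitude
    threshold, build a binary mask, and multiply it into the weights.
    """
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    # Threshold at the k-th smallest absolute value.
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    mask = np.abs(w) > threshold
    return w * mask
```

The same mask-and-multiply pattern translates directly to `tf.Variable` or `torch.Tensor` weights; only the tensor API changes, not the algorithm.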
OK, I got it. Thanks very much!
Closing this issue. Reopen it if there are any further questions.
I see that PocketFlow uses TensorFlow. Is there a strong correlation between your optimization and TensorFlow? Does the optimization apply to other DL frameworks? Why not use ncnn? Looking forward to your reply! Thanks very much!