Closed: msubedar closed this pull request 2 years ago
I think we should assign a value to kl even when it would otherwise be None, so the change doesn't break existing PyTorch models and training scripts when integrated.
Also, maybe we should add unit tests, rebase this branch on them, and check that nothing breaks.
What do you think @ranganathkrishnan ?
Hi @piEsposito , I agree on including unit tests, but that can be done on top of this PR. @msubedar and I have validated all the existing training and evaluation scripts in the examples folder, so the new feature does not break older model definitions (i.e. Bayesian models defined without dnn_to_bnn()). With these changes, kl can be computed in the kl_loss() function (https://github.com/IntelLabs/bayesian-torch/pull/11/files#diff-a588ca678816135810210bacf4645009ff6ecd3f16626713fd576db4aa603b07R126). For example, see https://github.com/IntelLabs/bayesian-torch/pull/11/files#diff-ade00648d59ef89627b50d68cb88f47ffc6d4832dd4c366902285fce80fec709R157 . Appreciate your interest in contributing the unit tests. Thanks!
This PR adds automatic DNN-to-BNN conversion. One can build a torch.nn DNN model and call dnn_to_bnn() to convert it in place to a BNN model. An example script, examples/main_bayesian_cifar_dnn2bnn.py, is provided.