In the `update_dict` method in https://github.com/cwlkr/torchvahadane/blob/main/torchvahadane/dict_learning.py#L100, when an atom's norm is too small, the corresponding atom is re-initialized with `Tensor.normal_()`.
However, if all of the drawn values happen to be negative, the subsequent in-place clamp yields a zero vector, and the in-place normalization
`dictionary[:, k] /= dictionary[:, k].norm()` then divides by zero, producing a NaN vector that invalidates the whole computation.
Herein, two layers of protection are added (sketched below):
(1) make the re-initialized atom non-negative by adding `abs()`;
(2) add a small eps, `torch.finfo(torch.float32).eps`, to the norm so the denominator is always non-zero.
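
For illustration, a minimal sketch of the guarded re-initialization, assuming the surrounding `update_dict` loop indexes atoms by `k` (the helper name `reinit_degenerate_atom` is hypothetical, not the repo's actual code):

```python
import torch

def reinit_degenerate_atom(dictionary: torch.Tensor, k: int) -> None:
    """Redraw atom k safely; dictionary[:, k] is a view, so in-place ops persist."""
    eps = torch.finfo(torch.float32).eps
    dictionary[:, k].normal_()       # re-initialize the degenerate atom
    dictionary[:, k].abs_()          # (1) non-negative draw: the clamp can no longer zero it
    dictionary[:, k].clamp_(min=0)   # existing non-negativity constraint (now a no-op)
    dictionary[:, k] /= dictionary[:, k].norm() + eps  # (2) denominator stays > 0, no NaN
```

Even in the worst case where every drawn value is negative, `abs_()` flips the signs, and the added eps keeps the division finite.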