cwlkr / torchvahadane

GPU-accelerated Vahadane stain normalization for digital pathology workflows.
MIT License

Fix a potential random-generation-related NaN issue caused by 0/0 #1

Closed. CielAl closed this issue 9 months ago.

CielAl commented 11 months ago

In the update_dict method at https://github.com/cwlkr/torchvahadane/blob/main/torchvahadane/dict_learning.py#L100, when an atom's norm is too small, the atom is re-initialized with tensor.normal_. However, if all of the drawn values happen to be negative, the subsequent in-place clamp yields a zero vector, and the in-place normalization dictionary[:, k] /= dictionary[:, k].norm() then divides 0 by 0, producing a NaN vector that invalidates the whole computation.
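A minimal sketch of the failure mode (not the library's exact code; the all-negative draw is simulated with a constant tensor):

```python
import torch

# Stand-in for an atom whose re-initialization via tensor.normal_()
# happened to draw only negative values.
atom = torch.full((4,), -1.0)
atom.clamp_(min=0)   # in-place clamp -> all-zero vector
atom /= atom.norm()  # norm is 0, so 0/0 -> NaN in every entry
print(atom)          # tensor([nan, nan, nan, nan])
```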

Herein, two layers of protection are added: (1) make the re-initialized atom non-negative by applying abs(), so the clamp can never zero it out, and (2) add a small eps (torch.finfo(torch.float32).eps) to the norm so the denominator is always non-zero. A sketch of the patched re-initialization follows.
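A hypothetical sketch of the proposed fix (the function name reinit_atom and the explicit clamp are illustrative, not the repository's exact code):

```python
import torch

def reinit_atom(dictionary: torch.Tensor, k: int) -> None:
    """Re-initialize atom k of the dictionary with both protections.

    (1) abs_() keeps the freshly drawn values non-negative, so a
        non-negativity clamp cannot produce an all-zero atom;
    (2) eps added to the norm keeps the denominator strictly positive
        even in degenerate cases.
    """
    dictionary[:, k].normal_().abs_()   # non-negative re-initialization
    dictionary[:, k].clamp_(min=0)      # now a no-op, kept for safety
    eps = torch.finfo(torch.float32).eps
    dictionary[:, k] /= dictionary[:, k].norm() + eps
```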