❓ Questions and Help
Hi, I recently ran into an issue where my gradient becomes NaN. I checked the code and found that some of my inputs contain infinity. On the one hand, I think this makes sense in theory. In practice, however, the infinity usually serves a purpose: in my case, I just want my Gaussian kernel to return 0 by passing infinity as x. The reason I did not remove the infinite inputs is to keep all of my input arrays the same shape. For now, I have switched to a loop that skips the infinities.
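To make the issue concrete, here is a minimal sketch of what I mean (my own reconstruction, assuming a Gaussian kernel of the form `exp(-x**2)`): the forward pass at infinity is 0 as intended, but the backward pass multiplies `inf` by `0` and produces NaN.

```python
import torch

# Gaussian kernel exp(-x**2), evaluated at a finite point and at infinity.
# Forward: exp(-inf**2) = exp(-inf) = 0, which is the value I want.
# Backward: d/dx exp(-x**2) = -2*x*exp(-x**2) = (-inf) * 0 = nan at x = inf.
x = torch.tensor([1.0, float("inf")], requires_grad=True)
y = torch.exp(-x ** 2)
y.sum().backward()

print(y)       # second entry is 0.0, as intended
print(x.grad)  # second entry is nan
```

So the NaN appears only in the gradient, not in the output, which is why it is easy to miss.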
So I was wondering whether PyTorch could emit a warning, or handle infinity in some way, when computing gradients. I understand that in theory people should know about this, but in practice, mistakes happen... Thanks!
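For anyone finding this later: PyTorch does already ship a mechanism along these lines, `torch.autograd.set_detect_anomaly`, which checks backward outputs and raises an error naming the operation that produced the NaN (with some runtime overhead, so it is meant for debugging). A sketch applied to the kernel above:

```python
import torch

# Anomaly detection makes the backward pass raise as soon as an op
# returns nan, instead of silently propagating it into the gradients.
torch.autograd.set_detect_anomaly(True)

x = torch.tensor([1.0, float("inf")], requires_grad=True)
y = torch.exp(-x ** 2)
try:
    y.sum().backward()
    err = None
except RuntimeError as e:
    err = e  # e.g. "Function 'ExpBackward0' returned nan values ..."

print(err)
```

This turns the silent NaN into a hard error at the exact op, which is close to the warning I was asking for.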