I am providing the (x, y) locations of important landmarks as additional inputs for an image classification problem I am solving, but when these two coordinate dimensions are passed in for each image, the loss becomes NaN (along with the accuracy) instead of decreasing. Am I expected to normalise the location pixels?
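For reference, here is one common fix, as a minimal sketch: it assumes the landmarks are raw (x, y) pixel coordinates and the image dimensions are known, and scales them into [0, 1] so their magnitude matches typical normalised image features.

```python
import numpy as np

def normalize_landmarks(landmarks, img_width, img_height):
    """Scale raw (x, y) pixel coordinates into the [0, 1] range.

    landmarks: array-like of shape (n_points, 2) holding (x, y) in pixels.
    Large unscaled pixel values (e.g. 0..1024) fed next to features that
    live in [0, 1] can inflate gradients and drive the loss to NaN.
    """
    landmarks = np.asarray(landmarks, dtype=np.float32)
    # Divide x by width and y by height (broadcast over all points).
    return landmarks / np.array([img_width, img_height], dtype=np.float32)

# Example: two landmarks in a 640x480 image.
pts = [[320, 240], [64, 48]]
scaled = normalize_landmarks(pts, 640, 480)
```

In the example above, (320, 240) maps to (0.5, 0.5) and (64, 48) to (0.1, 0.1), so every coordinate ends up on the same scale as the other inputs.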