Why is the ReLU activation function being used as the final activation in the lastconv layers? Shouldn't it be sigmoid, since the values of our depth map need to be in the range 0-255? ReLU is an unbounded function, so wouldn't sigmoid be a better activation for depth map generation?
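
For context, here's a minimal sketch of the two output heads I'm comparing (assuming PyTorch; the `DepthHead` class, the `in_channels` parameter, and the 0-255 target range are illustrative assumptions on my part, not taken from the actual repo code):

```python
import torch
import torch.nn as nn

class DepthHead(nn.Module):
    """Hypothetical final layer contrasting a bounded vs. unbounded output."""

    def __init__(self, in_channels: int, use_sigmoid: bool = True):
        super().__init__()
        # Illustrative stand-in for the repo's lastconv layer.
        self.lastconv = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)
        self.use_sigmoid = use_sigmoid

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.lastconv(x)
        if self.use_sigmoid:
            # Sigmoid bounds the output to (0, 1); scaling by 255
            # keeps predictions inside the depth map's value range.
            return torch.sigmoid(x) * 255.0
        # ReLU only clips negatives; the upper end is unbounded,
        # so nothing stops predictions from exceeding 255.
        return torch.relu(x)

# Quick check of both variants on a dummy feature map
feat = torch.randn(1, 64, 32, 32)
bounded = DepthHead(64, use_sigmoid=True)(feat)   # values in (0, 255)
unbounded = DepthHead(64, use_sigmoid=False)(feat)  # values in [0, inf)
```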