Open ZahraDehghani99 opened 2 years ago
Why do we need bias in neural networks? Bias is an additional constant parameter that gives our model flexibility. Without bias terms, every neuron's output is forced through the origin, so the model can only reach a restricted part of the solution space. Adding a bias lets each neuron shift its activation, which helps the model fit the data set better.
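A minimal sketch of the point above (the `neuron` helper is hypothetical, just for illustration): without a bias term, a single neuron's pre-activation at the zero input is always zero, no matter what the weight is, so the fitted line is pinned to the origin.

```python
def neuron(x, w, b=0.0):
    # Weighted input plus bias; the activation is omitted for clarity.
    return w * x + b

# Without a bias term, the output at x = 0 is 0 for every weight:
# the fit is forced through the origin.
outputs_no_bias = [neuron(0.0, w) for w in (-3.0, 0.5, 10.0)]
print(outputs_no_bias)  # [0.0, 0.0, 0.0]

# With a bias, the neuron can shift its output freely.
print(neuron(0.0, w=2.0, b=1.5))  # 1.5
```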
Can activation functions in neural networks be linear? Why? Linear activation functions are rarely used. They do appear, for example, in the output layer of regression models that predict a continuous value, but they should not be used in hidden layers: if they are, the output becomes a linear combination of the input. In other words, a network with linear activations in its hidden layers is equivalent to a single linear layer, no matter how deep the network is.
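The collapse described above can be checked numerically. In this sketch (with hypothetical example weights), stacking two linear layers W2(W1 x) gives exactly the same output as the single collapsed layer (W2 W1) x:

```python
def matmul(A, B):
    # Multiply two matrices given as lists of rows.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def matvec(A, x):
    # Apply a matrix (one linear layer, identity activation) to a vector.
    return [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(A))]

W1 = [[1.0, 2.0], [0.0, -1.0]]   # first "hidden" layer, linear activation
W2 = [[3.0, 0.5], [-2.0, 4.0]]   # second layer
x = [1.0, -1.0]

deep = matvec(W2, matvec(W1, x))       # layer-by-layer forward pass
shallow = matvec(matmul(W2, W1), x)    # single equivalent layer W2 @ W1
print(deep == shallow)  # True
```

With a nonlinearity (e.g. ReLU) between the layers, this equivalence breaks, which is exactly why hidden layers need nonlinear activations.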