Mohammad-Rahmdel / FGSM-Tensorflow2-Caltech101

Fast Gradient Sign Method attack on Caltech101 Classifier using Tensorflow 2

asdf #1

Open lilinilili opened 3 years ago

lilinilili commented 3 years ago

Sir, I'm wondering why the softmax layer is added after compiling the model. Would it make any difference to add this layer when building the model instead?

Mohammad-Rahmdel commented 2 years ago

Sorry for my very late response! By writing this line

loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) 

TensorFlow applies the softmax itself as part of the loss computation. We are using cross-entropy loss, and the values it operates on must be between 0 and 1 (look at the cross-entropy formula), so `from_logits=True` tells it to convert the raw logits with a softmax internally.
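For context, here is a minimal sketch of that compilation pattern. The architecture below is just a placeholder, not necessarily the exact model used in this repo; the point is that the final Dense layer outputs raw logits and the loss handles the softmax:

```python
import tensorflow as tf

# Hypothetical classifier head: the last Dense layer has no activation,
# so it outputs raw logits, and the loss is told to apply softmax internally.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(224, 224, 3)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(101)  # 101 Caltech101 classes, raw logits
])

model.compile(
    optimizer='adam',
    # from_logits=True: the softmax is fused into the loss computation,
    # which is more numerically stable than a separate softmax layer.
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy']
)
```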

After compiling and training the model, we want to see the output of the network in terms of probabilities. The last layer of the network is Dense, so it produces unbounded numbers (logits). By adding a softmax here for prediction we are actually just rescaling values from (-inf, +inf) to (0, 1)!
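In practice that can look like the sketch below (assuming a trained `model` like the placeholder above): the softmax is wrapped around the model only for inference, while training still works on raw logits.

```python
# Wrap the trained logits model with a Softmax layer for inference only.
probability_model = tf.keras.Sequential([
    model,
    tf.keras.layers.Softmax()
])

# Placeholder input batch; outputs now sum to 1 and read as class probabilities.
images = tf.random.uniform((1, 224, 224, 3))
probs = probability_model(images)
print(tf.reduce_sum(probs, axis=-1))  # ~1.0 per example
print(tf.argmax(probs, axis=-1))      # predicted class index
```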