Hi @erinmgraham, I have been thinking about this and wondered if we could keep on record a way of going about Opt 2 (perhaps after the first trial run?):
from tensorflow import keras
from tensorflow.keras.utils import to_categorical

# load CIFAR-10; the labels come back as integer class ids
(train_images, train_labels), (val_images, val_labels) = keras.datasets.cifar10.load_data()
print('train_labels before one hot encoding')
print(train_labels)
# convert the integer labels to one-hot vectors
train_labels = to_categorical(train_labels)
print()
print('train_labels after one hot encoding')
print(train_labels)
train_labels before one hot encoding
[[6]
[9]
[9]
...
[9]
[1]
[1]]
train_labels after one hot encoding
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 1.]
[0. 0. 0. ... 0. 0. 1.]
...
[0. 0. 0. ... 0. 0. 1.]
[0. 1. 0. ... 0. 0. 0.]
[0. 1. 0. ... 0. 0. 0.]]
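As a quick sanity check on what the encoding does (a sketch, assuming the variables above; CIFAR-10 has 10 classes and 50000 training images):

import numpy as np

# shapes before/after one-hot encoding: (50000, 1) integer labels become a (50000, 10) array
print(train_labels.shape)          # (50000, 10)
print(np.argmax(train_labels[0]))  # 6, the original integer label of the first image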
Made changes for one-hot encoding across the lesson and added this content to episode 2.
In ep 01 we fit the intro model with the default activation (none, so Keras uses a linear output), which means the output prediction values are raw scores, not probabilities. We should be fitting with softmax, but then I think the labels need to be one-hot encoded, and keras.datasets.cifar10.load_data() does not return them that way.
Opt 1: in ep 01 keep the intro as is, noting that the output is raw; in ep 02 one-hot encode the CIFAR dataset and create a test set (from CINIC?); in ep 03 change the activation.
Opt 2: change the activation from the get-go and one-hot encode in ep 01 (sketched below).
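Roughly what Opt 2 could look like in ep 01 (a sketch only, not the lesson's actual model; the layer sizes and epoch count are placeholders):

from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.utils import to_categorical

# load CIFAR-10 and one-hot encode the labels from the start (Opt 2)
(train_images, train_labels), (val_images, val_labels) = keras.datasets.cifar10.load_data()
train_images = train_images / 255.0
val_images = val_images / 255.0
train_labels = to_categorical(train_labels)
val_labels = to_categorical(val_labels)

# minimal intro model with an explicit softmax output layer,
# so predictions are class probabilities rather than raw scores
model = keras.Sequential([
    layers.Flatten(input_shape=(32, 32, 3)),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax'),
])

# categorical_crossentropy expects the one-hot encoded labels
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(train_images, train_labels, epochs=10,
          validation_data=(val_images, val_labels))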