False
That's right! We need to train 🏋️ the model before we test 🧪 it.
Now that the model is trained on our training data, we can test it against our test data:
loss_value, accuracy_value = model.evaluate(test_data, test_labels)
print(f'Our test accuracy was {accuracy_value}')
>>> Our test accuracy was 0.980132
model.evaluate will evaluate how strong our model is with the test data, and report that in the form of a loss value and an accuracy value (since we specified accuracy in the metrics argument when we compiled the model). We'll just focus on our accuracy for now. With an accuracy of ~98%, it's not perfect, but it's very accurate.
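As a quick aside, the same call also hands back the loss value, and printing it alongside accuracy can be handy when comparing tweaks later. A one-line sketch using the variables from the evaluate call above:

print(f'Our test loss was {loss_value}')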
We can also use our model to predict specific Pokémon, or at least have it tell us which status a Pokémon is most likely to have, with model.predict. All it needs to predict a Pokémon is the data for that Pokémon itself. We provide that by selecting a certain index of test_data:
import numpy as np  # if not already imported earlier in the course

def predictor(test_data, test_labels, index):
    # Predict across the whole test set, then check the one we care about
    prediction = model.predict(test_data)
    if np.argmax(prediction[index]) == test_labels[index]:
        print(f'This was correctly predicted to be a "{test_labels[index]}"!')
    else:
        print(f'This was incorrectly predicted to be a "{np.argmax(prediction[index])}". It was actually a "{test_labels[index]}".')
    return prediction
Let's look at one of the more well-known legendary Pokémon: Mewtwo. He's number 150 in the list of Pokémon, so we'll look at index 149:
predictor(test_data, test_labels, 149)
>>> This was correctly predicted to be a "1"!
Nice! It accurately predicted Mewtwo was a legendary Pokémon.
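To see how confident the model was, you can look at the raw scores that predictor returns. A minimal sketch, assuming (as predictor's use of np.argmax already implies) that the model outputs one score per status class:

prediction = predictor(test_data, test_labels, 149)
# prediction[149] holds one score per class; np.argmax picked the
# highest one, so printing the row shows how confident the model was
print(prediction[149])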
Close this issue when you have a working predict function.
Congratulations! 👏 You finished the course!
TensorFlow really thrives with large datasets, ones much larger than a couple hundred Pokémon entries. That being said, it was pretty accurate here! There's always room for improvement, though. There are some tweaks we can make, such as changing the layers the model is built on and how the model is compiled. Try tweaking those parameters and see if you can improve! One example follows below.
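For instance, here's a minimal sketch of one such tweak: swapping the SGD optimizer for Adam. 'adam' is a built-in Keras optimizer name; everything else stays the same as in this course:

# Hypothetical tweak: recompile with the Adam optimizer instead of SGD,
# then retrain and re-evaluate to see whether accuracy improves
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(train_data, train_labels, epochs=400)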
Once we have decided on the specifics of our model, we need to do two things: compile the model and fit the training data to it.
We can compile the model like so:
model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
Here we're just feeding three parameters to model.compile. We pick an optimizer, which determines how the model is updated as it gains information; a loss function, which measures how accurate the model is as it trains; and metrics, which specifies what information the model reports so we can analyze it. The optimizer we're using is the Stochastic Gradient Descent (SGD) optimization algorithm, but there are others available. For our loss we're using sparse_categorical_crossentropy, which expects integer labels. If our labels were one-hot encoded, we would want to use categorical_crossentropy instead.
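As a sketch of that last point: Keras ships a to_categorical utility that one-hot encodes integer labels, after which categorical_crossentropy becomes the matching loss. This uses the course's variable names and is just an illustration, not a step in the course:

from tensorflow.keras.utils import to_categorical

# One-hot encode the integer labels (e.g., 1 -> [0, 1])
train_labels_onehot = to_categorical(train_labels)

# With one-hot labels, swap in categorical_crossentropy
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_data, train_labels_onehot, epochs=400)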
Then we have the model fit our training data:
model.fit(train_data, train_labels, epochs=400)
The three parameters model.fit needs are our training data, our training labels, and the number of epochs. One epoch is when the model has iterated over every sample once; essentially, the number of epochs is the number of times we want to cycle through the data. Starting with just 1 epoch and then raising the count shows how more passes through the data improve the results, which is why we use 400 above. A sketch of how to watch that improvement follows after the question below.

True or False: We fit our model using test_data and test_labels.
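As a closing sketch, model.fit returns a History object you can use to watch the accuracy change across epochs. This assumes TF2-style Keras, where the metric key matches the 'accuracy' name from model.compile:

history = model.fit(train_data, train_labels, epochs=400)
# history.history maps each metric name to a per-epoch list
# (on older TensorFlow versions the key may be 'acc' instead)
print(history.history['accuracy'][0])   # accuracy after the first epoch
print(history.history['accuracy'][-1])  # accuracy after the last epoch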