KamelliaRe / GitHub-Project-Management


Evaluating the performance using accuracy and confusion matrix #4

Open KamelliaRe opened 1 year ago

KamelliaRe commented 1 year ago

1. **Split the Data:** Split the dataset into a training set and a test set. The training set will be used to train the model, while the test set will be used to evaluate its performance.
2. **Make Predictions:** Use the trained model to make predictions on the test set.
3. **Calculate Accuracy:** Calculate the accuracy of the model by comparing the predicted labels with the actual labels in the test set. The accuracy is the number of correct predictions divided by the total number of predictions.
4. **Create Confusion Matrix:** Create a confusion matrix to visualize the performance of the model. A confusion matrix is a table that shows the number of true positives, true negatives, false positives, and false negatives.
5. **Calculate Metrics:** Calculate additional evaluation metrics such as precision, recall, and F1 score from the confusion matrix. These metrics provide more detailed information about the performance of the model.
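The steps above can be sketched in a few lines of scikit-learn. This is a minimal illustration, not the project's actual code: the dataset (Iris) and the classifier (logistic regression) are placeholder choices; swap in your own data and model as needed.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

# Placeholder dataset; replace with the project's own features and labels.
X, y = load_iris(return_X_y=True)

# Step 1: split into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Train a placeholder model, then make predictions on the test set (step 2).
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

# Step 3: accuracy = correct predictions / total predictions.
acc = accuracy_score(y_test, y_pred)
print(f"Accuracy: {acc:.3f}")

# Step 4: confusion matrix (rows = true labels, columns = predicted labels).
cm = confusion_matrix(y_test, y_pred)
print(cm)

# Step 5: precision, recall, and F1 score derived from the confusion matrix.
print(classification_report(y_test, y_pred))
```

`classification_report` prints per-class precision, recall, and F1 in one call; the same numbers can also be computed individually with `precision_score`, `recall_score`, and `f1_score`.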

We implemented the following code for this task: https://github.com/mGalarnyk/DSGO_IntroductionScikitLearn/blob/main/notebooks/TrainTestSplit.ipynb