Closed ijyliu closed 5 months ago
@ashutoshtiwari13 I suggest adding color coding to the matrix for each classifier (blue for logistic regression, red for XGBoost, green for SVM, etc.)
You could make a dictionary that maps the classifier name input to a color, or something similar.
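A minimal sketch of that mapping (the dictionary and function names here are hypothetical, not from the repo):

```python
# Map each classifier name to a fixed color so plots are consistent across runs.
CLASSIFIER_COLORS = {
    "Logistic Regression": "blue",
    "XGBoost": "red",
    "SVM": "green",
}

def get_classifier_color(classifier_name, default="gray"):
    # Fall back to a neutral color for classifiers not in the mapping.
    return CLASSIFIER_COLORS.get(classifier_name, default)
```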
@ashutoshtiwari13 please continue work on this. It's now in Code/Evaluation. Rename the confusion matrix files to something like Evaluate_Classifiers and Evaluate_Classifier_Functions. We actually do have to turn in code with the report, and they are grading it on how well commented it is, etc.
We really need to see the misclassified images! It's important for the report to understand where the classifiers are going wrong.
Also, everything should be saved to Output/Classifier Evaluation, in a subfolder for each classifier.
@ashutoshtiwari13 please move the output to a subfolder for each classifier in https://github.com/ijyliu/computer-vision-project/tree/main/Output/Classifier%20Evaluation
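One way to build the per-classifier subfolder path (a sketch; the helper name is hypothetical):

```python
from pathlib import Path

def evaluation_output_dir(classifier_name, base="Output/Classifier Evaluation"):
    # Each classifier gets its own subfolder under Output/Classifier Evaluation.
    # Call .mkdir(parents=True, exist_ok=True) on the Path before saving files.
    return (Path(base) / classifier_name).as_posix()
```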
Also, when you do your misclassified images, set the seed in Python so we get the same images every time we run.
Done!
I'm still getting different misclassified images every time the script is run. You need to set a seed for the random module before calling random.sample.
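A minimal sketch of reproducible sampling (function name and defaults are hypothetical):

```python
import random

def sample_misclassified(image_paths, n=9, seed=42):
    # Use a dedicated Random instance seeded once, so repeated runs
    # of the script select the same misclassified images.
    rng = random.Random(seed)
    return rng.sample(image_paths, min(n, len(image_paths)))
```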
Also, it's not clear to me which specific feature set the misclassified images figure is from. Please add that to the filename. We should have one saved for "Logistic_Regression_All_Data", one saved for "Logistic_Regression_All_Features_PCA", etc.
Actually, never mind about the seed; that part is correct. But please fix the misclassified images.
I fixed the paths for you; you should edit this file.
You should also remove all underscores from the plot titles for all plots. Everything should be formatted neatly because they care about presentation.
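Cleaning the titles is a one-liner; a sketch (helper name hypothetical):

```python
def clean_title(name):
    # "Logistic_Regression_All_Features_PCA" -> "Logistic Regression All Features PCA"
    return name.replace("_", " ")
```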
@ashutoshtiwari13
Completed by @ijyliu after @ashutoshtiwari13 defaulted on the task.
Write a function in a .py file that we can use for evaluation. Then write evaluation notebooks for each classifier that call this.
Input: Filepath for Classifier Predictions Excel File, Classifier Name
Output: