Closed vladimirkovacevic closed 2 months ago
Hello, I've implemented the feature as requested. Now, each model's performance metrics (precision, recall, and F1-score) are stored in a dictionary, and I've added functionality to sort these entries based on the F1-score. This allows us to automatically determine which model configuration performs best.
Here's the key part of the implementation:
# Assuming 'scores_by_model' maps each model name to its averaged
# metrics in the order [precision, recall, f1]
best_model_by_f1 = sorted(scores_by_model.items(), key=lambda x: x[1][2], reverse=True)[0][0]
This approach ensures that the model with the best F1 score can be easily identified, supporting efficient model selection. Please let me know if there are any additional changes or details you would like to see implemented.
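For illustration, here is a minimal self-contained sketch of the approach, with hypothetical model names and metric values (everything except `scores_by_model` and the sort key is an assumption for the example):

```python
# Hypothetical data: each model name maps to its averaged
# metrics in the order [precision, recall, f1].
scores_by_model = {
    "logreg": [0.81, 0.78, 0.79],
    "svm":    [0.85, 0.80, 0.82],
    "rf":     [0.83, 0.84, 0.83],
}

# Build the table sorted by F1 score (index 2), best model first.
table = sorted(scores_by_model.items(), key=lambda row: row[1][2], reverse=True)

# The first row of the sorted table is the best configuration.
best_model = table[0][0]
print(best_model)  # -> rf
```

Sorting the full table (rather than only extracting the maximum) keeps the complete ranking available, which matches the request to present one row per model configuration.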
Best regards, Raša Stojanović
Great! Thank you!
Put the average metrics (precision, recall, F-score) for every model configuration as one row of a table and sort the table by F-score, so the best model can be identified automatically.