jbarns14 opened 7 months ago
Please provide more detailed feedback here on what was done particularly well, and what could be improved. It is especially important to elaborate on items that you were not able to check off in the list above.
The report is very well-structured and explains the model in detail. However, it could be more comprehensible to readers with a non-technical background if you used less technical terminology and explained the results more simply. For example, in the Results and Discussion section, instead of presenting the detailed cross-validation table for the various models, you could summarize the findings in sentences, point out the key takeaways, and present only the final score for the best model.
In the git repository, I noticed that the scripts are currently placed under the src directory. To further enhance the project structure, consider moving all scripts into a separate scripts folder instead.
All the models, figures, and tables are placed in one results folder, which is very nice. However, it may be better to create separate 'tables', 'figures', and 'models' subfolders under the results folder to further enhance the project structure.
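The two reorganization suggestions above could be carried out with a few shell commands. This is only a sketch run from the repo root; the file extensions (`.py`, `.png`, `.pickle`, `.csv`) are assumptions about what the results folder contains, and inside a git repo you would use `git mv` instead of `mv` to preserve file history:

```shell
# Create the proposed folder layout.
mkdir -p scripts results/figures results/models results/tables

# Move scripts out of src/ (use `git mv` inside the actual repo).
mv src/*.py scripts/ 2>/dev/null || true

# Sort existing results outputs into subfolders by type
# (extensions are assumed; adjust to the actual output files).
mv results/*.png results/figures/ 2>/dev/null || true
mv results/*.pickle results/models/ 2>/dev/null || true
mv results/*.csv results/tables/ 2>/dev/null || true
```

Remember to update any hard-coded paths in the scripts and Makefile after moving files.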
This was derived from the JOSE review checklist and the ROpenSci review checklist.
Overall, great work!
The report is well-organized, offering context and relevance to the topic without an excessive use of images. Specifically regarding Figure 3, it might be worth considering leaving it out completely and instead summarizing the findings from hyperparameter optimization in a few sentences (ask yourself: is the table truly of great relevance to the reader, or is only the best result from hyperparameter optimization relevant?).
It would be beneficial to include a 'Community' section in the README, outlining how external contributors can participate in the project. This section could provide guidelines for contributing, including procedures to follow if issues or errors are identified in the current analysis.
Concerning the repository's structure, there are currently 16 branches. I recommend removing any unused branches to improve the overall organization.
Currently, your scripts and functions are both in the 'src' folder, which is totally fine. But in my opinion, it would enhance the structure of the project if you placed the scripts in a separate 'scripts' folder.
All script outputs are displayed in the results folder. You could enhance organization within the results folder by creating subfolders such as 'models' and 'figures' for further differentiation.
I personally find it preferable (though I acknowledge that Tiff might not include it in her repository either) to have the link to the rendered HTML also included in the 'About' section. That way, you don't have to search for it in the README first; this is probably just my personal preference, though.
But again, these are minor issues, great job, guys!
This was derived from the JOSE review checklist and the ROpenSci review checklist.
Submitting authors: @srfrew @meretelutz @jbarns14 @WaleedMahmood1
Repository: https://github.com/UBC-MDS/fifa-potential Report link: https://ubc-mds.github.io/fifa-potential/high-potential-fifa-prediction-report.html Abstract/executive summary: We attempt to construct a classification model using an RBF SVM classifier algorithm which uses FIFA22 player attribute ratings to classify players' potential with target classes "Low", "Medium", "Good", and "Great". The classes are split on the quartiles of the distribution of the FIFA22 potential ratings. Our model performed reasonably well on the test data with an accuracy score of 0.809, with hyperparameters C: 100.0 & Gamma: 0.010. However, we believe there is still significant room for improvement before the model is ready to be utilized by soccer clubs and coaching staffs to predict the potential of players on the field instead of on the screen.
Editor: @ttimbers Reviewer: Karan Khubdikar, Sandra Gross, Nicole Tu, and Jordan Cairns