jjadeb opened 7 months ago
Initially the Creative Commons license (for the project report) was not included. This was pointed out to us in TA feedback for Milestone 1. This was fixed in the following commit: 08384703a4d9821f9002deb325523cf4e387f1e6
Initially, our environment.yml file was missing versions for the make and jupyterlab packages. This issue was pointed out to us in TA feedback for Milestone 1, and the missing versions persisted when we switched over to using a Dockerfile. This was fixed in the following commit: 7e98db3e3d5be4510c81378a8e64eb2e5198facd and pull request 9411a8cd98cb856f864646385b4cdafe378cdcb5
Initially, our use of "==" and "=" for specifying package versions in our environment.yml file was inconsistent. This issue was pointed out to us in TA feedback for Milestone 1, and it was still present when we transferred over to using a Dockerfile. This was fixed in the following commit: 2addbe210f66e39d4cc3d0346511dd5c3de8ac26
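For context on why the inconsistency mattered: conda treats `=` as a fuzzy match (any patch release of that version) and `==` as an exact match, so mixing the two styles makes the pins ambiguous. A minimal illustrative fragment, with placeholder versions (not the project's actual pins), using a single consistent style:

```yaml
# Illustrative environment.yml fragment; versions below are placeholders.
name: credit-risk
channels:
  - conda-forge
dependencies:
  - python=3.11      # "=" pins the 3.11.x series (fuzzy match)
  - jupyterlab=4.0   # same single-"=" style used throughout
  - make=4.3
```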
Our analysis scripts had in-line documentation, but they were missing the documentation at the beginning of each script describing what it does and how to run it. This was pointed out to us in the feedback for Milestone 2. The issue was fixed in the following commits:
We only mentioned ethical considerations and possible biases. Based on feedback we received from lesleymai during peer review, we expanded this section to further discuss how biases can be reduced and how ethical concerns may influence the model.
Commit: f49f9a5649ecf433031e51c5bf3ba45a1a305b1d Pull request: https://github.com/DSCI-310-2024/DSCI310_Group-12_Credit-Risk-Classification/pull/107
Evaluation metrics were chosen and evaluated, but we did not highlight the significance of the metric selection. After receiving this feedback from lesleymai as part of peer review, we included a more in-depth explanation of the selection of evaluation metrics, as well as the consequences of false positives and false negatives in credit risk.
Commit: f49f9a5649ecf433031e51c5bf3ba45a1a305b1d Pull request: use_package
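To make the false-positive / false-negative trade-off concrete, here is a minimal self-contained sketch (toy labels and plain Python, not the project's actual code) of the confusion counts behind metrics like precision and recall, where 1 marks a bad-credit case:

```python
def confusion_counts(y_true, y_pred):
    """Count TP/FP/FN/TN, treating 1 as the 'bad credit' class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

# Toy example: a false negative (fn) is a bad-credit applicant approved;
# a false positive (fp) is a good-credit applicant turned away.
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 0]
tp, fp, fn, tn = confusion_counts(y_true, y_pred)
recall = tp / (tp + fn)     # share of bad-credit cases the model caught
precision = tp / (tp + fp)  # share of flagged cases that are truly bad
```

Which metric to prioritize depends on which error is costlier for the lender, which is exactly the discussion the feedback asked us to add.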
Addressed Milestone 1 TA feedback: the workflow was fixed to choose the best model from the analysis and then run evaluation metrics on the test set only once, to avoid breaking the golden rule. The ipynb report was also updated to adhere to these changes.
Commit: b1c1c9c187b14541ef5495ecd46cba0771177eda Pull Request: 811db5189d2d68b42913e8359ade8ea18ed56ff1
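The workflow described above can be sketched generically (hypothetical helper names and toy scores, not the project's actual code): candidates are ranked using cross-validated scores on the training data only, and the held-out test set is touched exactly once, by the winner.

```python
def select_and_evaluate(candidates, cv_score, test_score):
    """Pick the best model by cross-validated score on the training data,
    then evaluate that single winner on the held-out test set once."""
    best = max(candidates, key=cv_score)   # uses training data only
    return best, test_score(best)          # the one and only test-set touch

# Toy stand-ins for real models and scorers:
cv = {"logreg": 0.71, "rf": 0.83, "knn": 0.64}    # cross-validation scores
test = {"logreg": 0.69, "rf": 0.80, "knn": 0.61}  # held-out test scores
best, final_score = select_and_evaluate(cv.keys(), cv.get, test.get)
```

The point of the structure is that `test_score` is called exactly once, so no modelling decision can leak information from the test set back into model selection.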
Adjusted the Quarto file to address the peer review feedback regarding why the models, especially the random forest model, were chosen for the analysis. Commit: b1c1c9c187b14541ef5495ecd46cba0771177eda Pull Request: 811db5189d2d68b42913e8359ade8ea18ed56ff1
Initially, in our report introduction, our features were showcased as a long list, which was a less effective and visually straining format. To fix this, we created a table in our Quarto file so that it renders as a table in our PDF and HTML files, and we added the table to our original ipynb analysis file as well.
Commit: https://github.com/DSCI-310-2024/DSCI310_Group-12_Credit-Risk-Classification/commit/22328d5f6fc73b036c801896c5cbe3eb743776bc Pull request: https://github.com/DSCI-310-2024/DSCI310_Group-12_Credit-Risk-Classification/pull/111
We received feedback to define good and bad credit before diving into the analysis, to level the playing field of understanding. We implemented this at the beginning of our PDF and HTML reports through the Quarto file, as well as in our ipynb analysis file. Commit: https://github.com/DSCI-310-2024/DSCI310_Group-12_Credit-Risk-Classification/commit/22328d5f6fc73b036c801896c5cbe3eb743776bc Pull request: https://github.com/DSCI-310-2024/DSCI310_Group-12_Credit-Risk-Classification/pull/111
Another piece of feedback was to use direct quotations as opposed to paraphrasing. We think this makes sense, as quotations can be more credible and prevent accidentally paraphrasing the wrong meaning, so we made those changes as well. Commit: https://github.com/DSCI-310-2024/DSCI310_Group-12_Credit-Risk-Classification/commit/22328d5f6fc73b036c801896c5cbe3eb743776bc Pull request: https://github.com/DSCI-310-2024/DSCI310_Group-12_Credit-Risk-Classification/pull/111
We received feedback to reference the data when mentioning it in the introduction, so we embedded a link in our PDF and HTML reports through the Quarto file, as well as in our ipynb analysis file. Commit: https://github.com/DSCI-310-2024/DSCI310_Group-12_Credit-Risk-Classification/commit/22328d5f6fc73b036c801896c5cbe3eb743776bc Pull request: https://github.com/DSCI-310-2024/DSCI310_Group-12_Credit-Risk-Classification/pull/111
Here we describe improvements we made to the project based on feedback, and point to evidence of these improvements. We may provide URLs to reference specific lines of code, commit messages, pull requests, etc. We also add some narration when sharing these URLs so that it is easy for the reader to identify which changes to our work addressed which pieces of feedback.