hbaniecki opened 2 years ago
Hi @hbaniecki!
It's been forever, I'm so sorry! I've moved to Switzerland and started working as a postdoctoral researcher here, and it was a huge change in my life.
I've addressed most of the points you highlighted (commit 61c8419); by the way, THANKS for your valuable advice. Below I'll address each of your points:
Comments
Summary
Statement of need Yes, I totally agree; in the initial version I didn't include it due to the space limit (in fact, the paper exceeded the 1000-word limit). I'll address this point, together with the ones you raised in the following item, in this section. The idea is to briefly discuss interpretability and explainability, cite the papers you suggested, and then add the "gap phrase", e.g. "However, little attention has been paid ...", focusing on the need for interpretable models (not just explainable, but interpretable, i.e. self-explainable). What do you think?
State of the field These references and this discussion will be added above.
Implementation
Illustrative examples
Dataset.load_from_url()
function.
Conclusions I've changed the conclusion, removing "novel" and adding an extra sentence.
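For readers following along, the `Dataset.load_from_url()` helper discussed here might look roughly like the sketch below. This is purely a hypothetical illustration of the common fetch-and-parse pattern — the actual class in the reviewed package, its fields, and its signature may well differ.

```python
import csv
import io
import urllib.request


class Dataset:
    """Minimal tabular-data container (hypothetical sketch; the real
    Dataset class in the reviewed package may differ)."""

    def __init__(self, header, rows):
        self.header = header
        self.rows = rows

    @classmethod
    def load_from_url(cls, url):
        # Fetch the raw bytes and decode them as UTF-8 text.
        with urllib.request.urlopen(url) as resp:
            text = resp.read().decode("utf-8")
        # Parse as CSV: first line is the header, the rest are data rows.
        reader = csv.reader(io.StringIO(text))
        header = next(reader)
        rows = list(reader)
        return cls(header, rows)


# Usage with a data: URL, so the example needs no network access.
ds = Dataset.load_from_url("data:text/plain,a,b%0A1,2%0A3,4")
print(ds.header)  # ['a', 'b']
print(ds.rows)    # [['1', '2'], ['3', '4']]
```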
I'm still working on the changes regarding the "Statement of need"; I'll let you know as soon as I finish. Again, thank you so much for your review work, and apologies for the delay.
https://github.com/openjournals/joss-reviews/issues/3934
Hi, I hope these comments help in improving the paper.
Comments
1.0 of the package (on GitHub, PyPI) and mark that in the paper, e.g. in the Summary section.
Summary
Statement of need This part discusses mainly the need for an open-source implementation of the machine learning models. However, as I see it, the significant contributions of the software/paper, distinguishing it from previous work, are the Live_Test/Evaluation tools allowing for visual explanation and hyperparameter optimization. This could be further underlined.
State of the field The paper lacks a brief discussion of packages in the field of interpretable and explainable machine learning. To that end, I suggest the authors reference/compare to the following software related to interactive explainability:
Other possibly missing/useful references:
Implementation
scikit-learn
Illustrative examples
Dataset.load_from_url()
function.
Conclusions Again, I have doubts about calling the machine learning model "novel", as it has been previously published. It might be misunderstood as "introducing a novel machine learning model".