Closed by lshandross 5 months ago
Summary of discussion starting here on reproducibility:
@elray1 suggested removing a decent amount of code from the case study about scoring and evaluation, given that 1) these topics are not the main point of the paper, 2) the code shown is a temporary fix because the `hubEvals` package is not yet up and running, and 3) the code shown consists of abbreviated examples illustrating how these functions could be used; it is not responsible for generating any tables or figures in the paper. Instead, we both agree that a short section on reproducibility makes more sense.
I'm on board with @elray1's suggested path forward on reproducibility. Maybe we could just state explicitly in the paper what code we have and have not included, with a justification?
Original comment from @nickreich in #23:
Additional code has been added to the case study section of the manuscript, with additional scripts stored in the `inst` directory of the repo (any functions are stored in the `R` directory). This issue is being split off from Issue #23 in case there are additional comments to be made specific to this point.