jmanitz opened this issue 2 years ago (Open)
Follow-up to my question above: can people share any examples where this process caught errors during the assessment?
For NVS, there have been several high-risk packages that were requested by users who, when confronted with the task of writing PQ tests, opted to withdraw their request. This happens for a few packages in almost every release.
Similarly, our internal tool, which automates large parts of this process, usually undergoes several changes after each release based on lessons learned.
For Roche, we had to reject a few CRAN packages that our users requested for use in a validated environment: multgee, ggpp, and blscrapeR. During the evaluation of those packages, we found that they did not meet some of the requirements we set out for R packages at Roche. In these cases, the failed requirements were: test coverage below 80%, and some exported functions not covered by any unit test. Additionally, the packages are not widely adopted: all three have fewer than 100,000 total CRAN downloads, and they are not associated with any major software development entity. Based on that, we couldn't validate them, as we don't have enough evidence that the code works as expected and is actively maintained.
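To make those criteria concrete, here is a rough sketch (not our actual internal tooling) of how the three checks could be automated with the covr and cranlogs packages. The check_package_risk helper, the example path, and the hard-coded thresholds are purely illustrative.

```r
# Illustrative sketch only -- not Roche's actual assessment tooling.
# Assumes the package is installed locally (for getNamespaceExports) and
# that its source tree is available at pkg_source_dir.
library(covr)      # unit test coverage measurement
library(cranlogs)  # CRAN download statistics (RStudio mirror logs)

check_package_risk <- function(pkg_source_dir, pkg_name,
                               min_coverage = 80, min_downloads = 1e5) {
  # 1. Overall unit test coverage of the package source
  cov <- covr::package_coverage(pkg_source_dir)
  total_coverage <- covr::percent_coverage(cov)

  # 2. Exported functions not hit by any unit test
  per_line <- covr::tally_coverage(cov, by = "line")
  per_fun  <- aggregate(value ~ functions, data = per_line, FUN = sum)
  untested_exports <- intersect(per_fun$functions[per_fun$value == 0],
                                getNamespaceExports(pkg_name))

  # 3. Total CRAN downloads (RStudio mirror logs start 2012-10-01)
  dl <- cranlogs::cran_downloads(pkg_name,
                                 from = "2012-10-01", to = "last-day")
  total_downloads <- sum(dl$count)

  list(
    coverage_ok      = total_coverage >= min_coverage,
    untested_exports = untested_exports,
    downloads_ok     = total_downloads >= min_downloads
  )
}

# e.g. check_package_risk("~/src/multgee", "multgee")  # path is hypothetical
```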
EDIT: fixed typo
hey @maksymiuks, what do you mean by "downloads number below 100%"? Thanks!
@bizzinho Brain lag, I think; I meant 100,000 downloads :) Thank you for pointing this out!
One topic area I would love for any groups to share is which specific packages have been rejected while going through the quality process. It would be interesting to see whether packages are being rejected by some orgs but not others, and which parts of their risk criteria cause that differentiation.