docsteveharris / 2022-adversarial-penguin


reviewer-3-comment-4 #3

Closed by docsteveharris 2 years ago

docsteveharris commented 2 years ago
  1. There is a quickly growing literature on fail-safes in clinical ML, and the authors should point readers to it. This idea has been referred to as selective prediction (Chow, 1970; Tortorella, 2000; Bartlett and Wegkamp, 2008; El-Yaniv and Wiener, 2010; Feng, Sondhi, Perry, and Simon, 2021). The idea of deploying ensembles with fail-safes is promising as well, and has been described previously in Feng (2021).

@waty: you've already covered this in

Model fail-safes

Fail-safes should be designed into support systems to pre-empt and mitigate model misbehaviour. The European Commission High-Level Expert Group on AI presented the Ethics Guidelines for Trustworthy Artificial Intelligence in April 2019, with recommendations for AI-supported systems that maintain human agency via human-in-the-loop oversight. Prediction models that map patient data to medically meaningful classes are forced to predict within a predetermined set of classes, with no option to flag to users that the model is unsure of an answer. To address this problem, there is good evidence that methods such as Bayesian deep learning and various uncertainty estimates [@abdar2021] can provide promising ways to detect data samples with a high probability of misprediction and refer them for human expert review [@Leibig2017; @Filos2019; @Ghoshal2020]. This may even permit less interpretable models to operate when implemented in conjunction with an effective fail-safe system.
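The referral mechanism this paragraph describes can be sketched as a simple confidence-threshold rule (a minimal illustration only: the function names and the threshold value are placeholders, and a real deployment would use calibrated probabilities or richer uncertainty estimates, tuning the threshold for a target coverage/risk trade-off):

```python
import numpy as np

def selective_predict(probs, threshold=0.8):
    """Selective prediction fail-safe: return the predicted class index
    when the model's top-class probability clears the threshold,
    otherwise return None to signal referral for human expert review.

    `probs` is a 1-D array of class probabilities for one sample.
    """
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        return None  # fail-safe triggered: refer to a human reviewer
    return top

# Confident sample: the model's answer is accepted.
assert selective_predict(np.array([0.05, 0.90, 0.05])) == 1
# Uncertain sample: no class dominates, so the case is referred.
assert selective_predict(np.array([0.40, 0.35, 0.25])) is None
```

The same gate generalises to ensembles or Bayesian posteriors by replacing the top-class probability with, e.g., predictive entropy or mutual information across posterior samples.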

I assume we just need to acknowledge this and add the requested references?

Waty-Lilaonitkul commented 2 years ago

Hi Steve, just sent you an email on the fail-safe section with

  1. edited text for the LaTeX doc
  2. an added reference for the LaTeX doc