Closed by bheames 4 weeks ago
Hi Brennen,
Thanks for reaching out!
To clarify, those experiments were meant to quantify the extent to which certain risk factors are predictable from the CT scans; risk factor prediction was not part of training Sybil itself (which is also why it's not in the codebase).
The dataset used is the `NLST_Risk_Factor_Task` class. In that case, we use the same architecture as Sybil (`RiskFactorPredictor`), but train it only to predict those risk factors. This model is trained independently end-to-end, so there is no sharing or freezing of weights between this task and lung cancer risk prediction. Otherwise, similar hyperparameters were used.
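To illustrate what "independently end-to-end" means here, a minimal PyTorch sketch is below. The tiny MLP, label shapes, and names are illustrative stand-ins, not the actual Sybil / `RiskFactorPredictor` code; the point is only that every parameter, backbone included, receives gradients from the risk-factor loss, with no weights shared with or frozen from another task:

```python
import torch
import torch.nn as nn

class ToyRiskFactorPredictor(nn.Module):
    """Stand-in for a CT encoder plus per-factor heads (illustrative only)."""
    def __init__(self, in_dim=16, num_factors=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU())
        # One binary logit per risk factor (e.g. a smoking-status flag).
        self.heads = nn.Linear(32, num_factors)

    def forward(self, x):
        return self.heads(self.backbone(x))

def train_step(model, opt, x, y):
    """One end-to-end step: backbone and heads are both updated; nothing is frozen."""
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()

torch.manual_seed(0)
model = ToyRiskFactorPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 16)                    # fake "CT features"
y = torch.randint(0, 2, (8, 3)).float()   # fake binary risk-factor labels
losses = [train_step(model, opt, x, y) for _ in range(50)]
```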
Hope this helps!
Many thanks for the explanation, that's very helpful!
Thanks for making Sybil available!
I'm interested in reproducing Fig. A2 from the manuscript, but I can't see from the codebase or data supplement whether the exact training setup for the `RiskFactorPredictor` is specified anywhere. For now I've assumed freezing the backbone and training the `RiskFactorPredictor` for, e.g., a few epochs using the default hyperparameters in `sybil/parsing.py` and the loss from `sybil/utils/losses.get_risk_factor_loss`. Would you be able to clarify how this was done, or point me to more details on the optimisation in case I missed them anywhere?
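Concretely, the frozen-backbone assumption above amounts to something like the following sketch (toy modules with hypothetical names, not the real Sybil API or `get_risk_factor_loss`): only the risk-factor head receives gradient updates.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the pretrained Sybil encoder and a new head.
backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
risk_factor_head = nn.Linear(32, 3)

# Freeze the backbone: its parameters get no gradients and are never updated.
for p in backbone.parameters():
    p.requires_grad = False

# Optimiser sees only the head's parameters.
opt = torch.optim.Adam(risk_factor_head.parameters(), lr=1e-3)

x = torch.randn(4, 16)                    # fake backbone inputs
y = torch.randint(0, 2, (4, 3)).float()   # fake binary risk-factor labels
loss = nn.functional.binary_cross_entropy_with_logits(
    risk_factor_head(backbone(x)), y)
opt.zero_grad()
loss.backward()
opt.step()
```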
Best wishes, Brennen