The only place `positive_example_penalty` appears in the codebase is https://github.com/PPPLDeepLearning/plasma-python/blob/c82ba61e339882a5af10b1052edc0348e16119f4/plasma/conf_parser.py#L86-L102, where it is loaded only for an unused method of the `MaxHingeTarget` class, noted here: https://github.com/PPPLDeepLearning/plasma-python/blob/7986f468e43a56a5ae845dd1b88cf9fca048ac5a/plasma/models/targets.py#L153. We need to either extend it to the other target functions, remove it as a parameter, or document it more thoroughly. This is less important for DIII-D datasets than for our JET datasets, for which the non-/disruptive classes are more imbalanced.
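For reference, a positive-class penalty of this kind typically enters a hinge-style loss as a per-sample weight. A minimal sketch, assuming labels in {-1, +1} and reusing the `positive_example_penalty` name only as a stand-in for the config value (this is illustrative, not the repository's actual implementation):

```python
import numpy as np

def weighted_hinge_loss(y_true, y_pred, positive_example_penalty=1.0):
    """Hinge loss with the positive (disruptive) class upweighted.

    y_true: labels in {-1, +1}; y_pred: raw model scores.
    Hypothetical sketch of how a positive-example penalty could be
    applied; not the code in plasma-python.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # Upweight positive (disruptive) examples relative to negatives.
    weights = np.where(y_true > 0, positive_example_penalty, 1.0)
    losses = np.maximum(0.0, 1.0 - y_true * y_pred)
    return float(np.mean(weights * losses))
```

With a penalty greater than 1, misclassified disruptive shots contribute proportionally more to the gradient, which is the usual remedy when the disruptive class is rare.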
- [ ] Also, `conf['training']['ranking_difficulty_fac']: 1.0 # how much to upweight incorrectly classified shots during training` appears to play a closely related role, but within `loader.py`, `mpi_runner.py`, and `performance.py` instead.
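Going by its config comment, a `ranking_difficulty_fac`-style parameter would multiply the sampling or loss weight of shots the model currently gets wrong, boosting-style, between epochs. A hedged sketch of that idea (the function name and renormalization are assumptions, not the repository's code):

```python
import numpy as np

def update_shot_weights(weights, misclassified, ranking_difficulty_fac=1.0):
    """Upweight shots that the model currently misclassifies.

    Hypothetical illustration of what a 'ranking_difficulty_fac' style
    parameter could do between training epochs; not plasma-python's code.
    """
    weights = np.asarray(weights, dtype=float).copy()
    # Multiply the weight of every misclassified shot by the factor.
    weights[np.asarray(misclassified, dtype=bool)] *= ranking_difficulty_fac
    # Renormalize so the weights keep a mean of 1 across shots.
    return weights / weights.mean()
```

If the two parameters really do overlap like this, documenting which stage (loss vs. data loading) each one affects would resolve most of the confusion above.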