Closed tallamjr closed 3 years ago
And then in astronet/t2/train.py the model.compile( .. ) would need something like:
model.compile(
    loss="categorical_crossentropy",
    optimizer=optimizers.Adam(lr=lr),
    metrics=[plasticc_log_loss],
)
Should really be:
loss = plasticc_log_loss
...
model.compile(
    loss=loss,
    optimizer=optimizers.Adam(lr=lr),
    metrics=["auc"],
)
Whilst the model is now optimising the "categorical_crossentropy" loss function as of 3b7abb85787574ee0f9, a "plasticc" implementation is not used just yet, as it requires an implementation in TensorFlow mathematical operations rather than NumPy ones, due to this error:
Error:
NotImplementedError: Cannot convert a symbolic Tensor (IteratorGetNext:1) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
See 7312f7849ffb264b94eb2.
Whilst this is possible with tf-nightly, there is currently a bug preventing its use. Leaving this issue open until that is fixed and a full PLAsTiCC-TensorFlow version of LogLoss is implemented.
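As a sketch of the direction this could take, a weighted log loss written purely in TensorFlow operations (so it can operate on symbolic tensors inside `model.compile(..)`, avoiding the NumPy error above) might look as follows. The class weights here are hypothetical placeholders, not the official PLAsTiCC values, and the per-class averages are computed batch-wise, which only approximates the full-dataset metric:

```python
import tensorflow as tf

# Hypothetical per-class weights -- placeholders, NOT the official
# PLAsTiCC weights from Malz et al.
CLASS_WEIGHTS = tf.constant([1.0, 2.0, 1.0])

def plasticc_log_loss(y_true, y_pred):
    """Weighted multi-class log loss in pure TensorFlow ops.

    y_true: one-hot labels, shape (batch, num_classes)
    y_pred: predicted probabilities, same shape
    """
    eps = 1e-15
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    # Summed -ln(p) over the samples of each class in the batch.
    class_sums = -tf.reduce_sum(y_true * tf.math.log(y_pred), axis=0)
    # Number of samples of each class present in the batch.
    class_counts = tf.reduce_sum(y_true, axis=0)
    # Per-class mean log loss (classes absent from the batch contribute 0).
    per_class = class_sums / tf.maximum(class_counts, 1.0)
    # Weighted average across classes.
    return tf.reduce_sum(CLASS_WEIGHTS * per_class) / tf.reduce_sum(CLASS_WEIGHTS)
```

Because every operation is a `tf.*` call, this can be passed directly as `loss=plasticc_log_loss` without triggering the symbolic-tensor-to-NumPy conversion error.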
Previous models have been optimised using the following line in
astronet/t2/opt/hypertrain.py:118
However, it would be desirable to optimise this using the weighted Log-Loss metric defined by A. I. Malz et al.
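For reference, the weighted log loss in Malz et al. takes (as I recall the metric paper) the form:

```latex
\mathrm{LogLoss} = -\frac{\sum_{i=1}^{M} w_i \cdot \frac{1}{N_i} \sum_{j=1}^{N_i} \ln p_{ij}}{\sum_{i=1}^{M} w_i}
```

where $M$ is the number of classes, $N_i$ the number of objects of true class $i$, $p_{ij}$ the predicted probability that object $j$ of class $i$ belongs to class $i$, and $w_i$ the per-class weight.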
An example implementation of this can be found in astrorapids, or perhaps with sklearn.metrics.log_loss
akin to:

There, the above code in astronet/t2/opt/hypertrain.py:118 would be something like follows:
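As a sketch only (not the repository's actual code), a weighted log loss built on `sklearn.metrics.log_loss` could look like the following, weighting each sample by its true class via the `sample_weight` argument. The class weights are hypothetical placeholders rather than the official PLAsTiCC values:

```python
import numpy as np
from sklearn.metrics import log_loss

# Hypothetical per-class weights -- placeholders, NOT the official
# PLAsTiCC weights from Malz et al.
CLASS_WEIGHTS = {0: 1.0, 1: 2.0, 2: 1.0}

def weighted_log_loss(y_true, y_probs, labels=(0, 1, 2)):
    """Log loss where each sample is weighted by its true class's weight.

    y_true: integer class labels, shape (n_samples,)
    y_probs: predicted probabilities, shape (n_samples, n_classes)
    """
    sample_weight = np.array([CLASS_WEIGHTS[c] for c in y_true])
    return log_loss(y_true, y_probs, sample_weight=sample_weight, labels=list(labels))
```

The hyperparameter objective around astronet/t2/opt/hypertrain.py:118 could then minimise this value in place of the unweighted loss, though being NumPy-based it would still need the TensorFlow rewrite discussed above before it could serve as the training loss itself.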