Hello, can you explain the design principle of the loss function?
I am currently editing the How to Use guide; once I finish the introduction of the loss part, I will let you know in this issue.
But I see that the implementation of the loss function in your code has been changing; which implementation is the correct one?
Currently FA only supports torch losses. You can add a torch loss to the `name_dict`, and the `get_loss` function will retrieve it when you give the corresponding loss name in the config file.
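Roughly, that registry mechanism could look like the sketch below. The entries and the `get_loss` signature here are only assumptions for illustration, not the repo's actual code:

```python
import torch.nn as nn

# Hypothetical registry mapping config loss names to torch loss classes.
# The actual name_dict in the repo may contain different entries.
name_dict = {
    "ce": nn.CrossEntropyLoss,
    "bce": nn.BCEWithLogitsLoss,
    "l1": nn.L1Loss,
    "mse": nn.MSELoss,
}

def get_loss(name, params=None):
    """Look up a loss class by its config name and instantiate it with its params."""
    if name not in name_dict:
        raise KeyError(f"Unknown loss name: {name}")
    params = params or {}
    return name_dict[name](**params)
```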
In the config file, take CrossEntropy loss as an example (shown below): the loss name is `ce`, `weight` sets how much this loss contributes to the total loss, and `params` sets the necessary parameters of `nn.CrossEntropyLoss`. With these settings you can use the torch losses.
```yaml
losses:
  ce:
    weight: 0.5
    params: # ~ means None type, the initial params of loss could be identified here
      ignore_index: 255
      label_one_hot: False
```
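As a rough sketch (not the repo's actual wiring), the config above amounts to something like this; `label_one_hot` is presumably handled by FA's own wrapper rather than passed to torch:

```python
import torch.nn as nn

# Instantiate the configured torch loss with its params.
ce = nn.CrossEntropyLoss(ignore_index=255)

def total_loss(pred, target):
    # weight: 0.5 scales this term's contribution to the combined loss
    return 0.5 * ce(pred, target)
```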
If you want to define your own loss, I recommend creating it in loss.py: implement the `__init__` and `forward` functions, then add it to the `name_dict`. A sketch is shown below. You are welcome to submit a PR for a customized loss.
The loss part of the How to Use guide has been updated.