For imbalanced datasets, a weighted loss function often works better than either oversampling or undersampling. Could this feature be added to Turing.jl and AdvancedHMC's NUTS? The log-likelihood would need a weight term for each class: we would scale each class's log-likelihood term by the inverse of the number of training samples belonging to that class. I would be happy to contribute, but I can't find where this is located in the source code — could someone please help?
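For reference, here is a minimal sketch of the scheme described above, assuming binary 0/1 labels and an illustrative logistic model (the model name, data, and prior are made up for this example). Instead of writing `y[i] ~ Bernoulli(p)`, the weighted log-likelihood term is added manually with `Turing.@addlogprob!`, which lets you scale each observation's contribution by its class weight without any changes to AdvancedHMC:

```julia
using Turing

# Toy imbalanced labels (assumption: binary classification with 0/1 labels)
y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
x = collect(1.0:10.0)

# Per-class weights: inverse of the class sample count, as described above
counts = [count(==(c), y) for c in 0:1]
w = 1 ./ counts            # w[1] weights class 0, w[2] weights class 1

# Hypothetical model: names and priors are illustrative, not Turing internals
@model function weighted_model(x, y, w)
    β ~ Normal(0, 1)
    for i in eachindex(y)
        p = 1 / (1 + exp(-β * x[i]))
        # Manually add the weighted log-likelihood term in place of
        # `y[i] ~ Bernoulli(p)`, scaling by the class weight.
        Turing.@addlogprob! w[y[i] + 1] * logpdf(Bernoulli(p), y[i])
    end
end

# Sampling works unchanged, e.g.:
# chain = sample(weighted_model(x, y, w), NUTS(), 1000)
```

Because the weighting happens inside the model's log-density, NUTS (or any other sampler) sees the reweighted target directly, so no sampler-side change is required.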