The goal of this issue is to understand better whether there are any bugs in the code or problems in the algorithm. You can save the results wherever is convenient, for example at the end of the thesis draft, on GitHub, or on Google Drive.
- [x] add an additional layer so that you can distinguish between the latent spaces and the two components of the total decision
- [x] fix alpha = 1, beta = 1; plot the total loss and the three component losses against epochs
- [x] split the data into training, validation, and testing sets and plot the training and validation losses
- [x] run a baseline model that predicts the decision from the dataset based on both sensitive and permissible attributes. You can keep the same neural-network structure as in the desired model, but the loss is just the cross entropy between predictions and decisions. Calculate the fairness metric on the training and testing sets
- [x] plot the three losses as a function of beta: the conservatism loss (you could call it the constancy loss, by the way), the accuracy loss, and the fairness loss. Also plot the mean value of the predictions to see what is going on.
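As a reference for the loss-tracking items above, here is a minimal numpy-only sketch of training with the combined objective (alpha = beta = 1) on a train/validation split, logging all three losses plus the mean prediction per epoch. The exact forms of the fairness and constancy losses are assumptions for illustration (a squared demographic-parity gap and a squared prediction–decision deviation, respectively); the data is synthetic and the linear model stands in for the real network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: s = sensitive attribute, x = permissible attribute,
# y = observed decision. These are placeholders, not the thesis dataset.
n = 400
s = rng.integers(0, 2, n)
x = rng.normal(size=n) + 0.5 * s
y = (rng.random(n) < 1 / (1 + np.exp(-(x + s)))).astype(float)
X = np.column_stack([np.ones(n), x, s])

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def losses(w, X, y, s):
    p = sigmoid(X @ w)
    eps = 1e-9
    acc = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    fair = (p[s == 1].mean() - p[s == 0].mean()) ** 2   # assumed fairness term
    const = np.mean((p - y) ** 2)                        # assumed constancy term
    return acc, fair, const

alpha, beta = 1.0, 1.0

def total(w, X, y, s):
    a, f, c = losses(w, X, y, s)
    return a + alpha * f + beta * c

# Train/validation split (a separate test set would be held out the same way).
idx = rng.permutation(n)
tr, va = idx[:300], idx[300:]

w = np.zeros(3)
history = []
lr, h = 0.5, 1e-5
for epoch in range(200):
    # Finite-difference gradient of the total loss keeps the sketch autograd-free;
    # the real model would use backprop instead.
    g = np.array([
        (total(w + h * e, X[tr], y[tr], s[tr]) - total(w - h * e, X[tr], y[tr], s[tr])) / (2 * h)
        for e in np.eye(3)
    ])
    w -= lr * g
    p_tr = sigmoid(X[tr] @ w)
    history.append((total(w, X[tr], y[tr], s[tr]),   # train total loss
                    total(w, X[va], y[va], s[va]),   # validation total loss
                    *losses(w, X[tr], y[tr], s[tr]), # the three component losses
                    p_tr.mean()))                    # mean prediction per epoch

print("final train total loss:", history[-1][0])
```

The `history` list holds everything needed for the requested plots (total and per-component losses against epochs, training vs validation curves, and the mean prediction); sweeping `beta` over a grid and re-running gives the losses-versus-beta plot.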
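For the baseline item, a minimal sketch of a model trained with plain cross entropy on both sensitive and permissible attributes, then evaluated with a fairness metric on train and test splits. The metric here is an assumed demographic-parity gap (absolute difference in mean predictions between groups), the data is synthetic, and a logistic model stands in for the network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data, as above: s = sensitive, x = permissible, y = decision.
n = 500
s = rng.integers(0, 2, n)
x = rng.normal(size=n) + 0.5 * s
y = (rng.random(n) < 1 / (1 + np.exp(-(x + s)))).astype(float)
X = np.column_stack([np.ones(n), x, s])  # baseline sees BOTH attribute types

idx = rng.permutation(n)
tr, te = idx[:350], idx[350:]

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Plain cross-entropy training: gradient of logistic loss is X^T (p - y) / n.
w = np.zeros(3)
for _ in range(500):
    p = sigmoid(X[tr] @ w)
    w -= 0.5 * X[tr].T @ (p - y[tr]) / len(tr)

def dp_gap(w, X, s):
    """Assumed fairness metric: demographic-parity gap in mean predictions."""
    p = sigmoid(X @ w)
    return abs(p[s == 1].mean() - p[s == 0].mean())

print("train fairness gap:", dp_gap(w, X[tr], s[tr]))
print("test  fairness gap:", dp_gap(w, X[te], s[te]))
```

Comparing these gaps with the fairness-regularized model's gaps shows how much the extra loss terms actually buy over an unconstrained baseline.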
Let me know if you have any difficulties!