Closed. negar-hassanpour closed this issue 4 years ago.
Thanks for your interest!
The difference in code between dragonnet and tarnet is the line `t_predictions = ...`. In dragonnet, its input is the shared representation layer; in tarnet, its input is the input to the function (i.e., the data). This line just computes a logistic regression for convenience (e.g., so we can compute the TMLE for the baselines).

The two-stage version of the training is 'nednet', described in the paper. Dragonnet trains end-to-end, as in the code.
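To make the architectural point concrete, here is a minimal numpy sketch of the forward pass. The names (`W_rep`, `forward`, etc.) are illustrative, not the repo's actual identifiers, and the single-layer representation stands in for the deeper encoder used in the paper; the point is only that the two variants differ in one line, the input to the treatment head.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5  # number of covariates; representation width kept equal to d so one W_t fits both variants

# hypothetical stand-in parameters for the shared encoder, two outcome heads, and the treatment head
params = {
    "W_rep": rng.normal(size=(d, d)),
    "W_y0": rng.normal(size=(d, 1)),
    "W_y1": rng.normal(size=(d, 1)),
    "W_t": rng.normal(size=(d, 1)),
}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, params, variant):
    phi = np.tanh(x @ params["W_rep"])   # shared representation
    y0 = phi @ params["W_y0"]            # outcome head for t = 0
    y1 = phi @ params["W_y1"]            # outcome head for t = 1
    # the single line that differs between the two models:
    # dragonnet feeds the shared representation; tarnet feeds the raw covariates
    t_input = phi if variant == "dragonnet" else x
    t_predictions = sigmoid(t_input @ params["W_t"])
    return y0, y1, t_predictions

x = rng.normal(size=(3, d))
y0_d, y1_d, t_d = forward(x, params, "dragonnet")
y0_t, y1_t, t_t = forward(x, params, "tarnet")

# the outcome heads are identical across variants; only the propensity head changes
assert np.allclose(y0_d, y0_t) and np.allclose(y1_d, y1_t)
assert not np.allclose(t_d, t_t)
```

This also shows why the two builders can share almost all of their code: swapping `t_input` is the entire difference.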
Thank you for your prompt response! This resolves all my concerns!
Thanks, awesome work!
Great, glad it got sorted :)
Thank you for sharing your codebase publicly. The idea presented in the paper is interesting. There are, however, several disparities between this codebase and the paper; these include:
Not only do `make_tarnet` and `make_dragonnet` share the same code, but the same objective function is also used to learn the parameters of TARnet and DRAGONnet; therefore, the results must be the same.

It is mentioned in the paper that:

However, the code is implemented such that both the outcome loss and the cross-entropy loss are optimized in the same objective function.
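The joint objective being discussed can be sketched as follows. This is a minimal numpy illustration, not the repo's loss function; the name `joint_loss` and the `alpha` weight are assumptions made for the example, and the squared error / binary cross-entropy pairing assumes a continuous outcome and binary treatment.

```python
import numpy as np

def joint_loss(y, t, y0_pred, y1_pred, t_pred, alpha=1.0):
    """Single scalar combining outcome loss and treatment cross entropy (illustrative)."""
    # factual outcome loss: score the head that matches the observed treatment
    y_pred = np.where(t == 1, y1_pred, y0_pred)
    outcome_loss = np.mean((y - y_pred) ** 2)
    # binary cross entropy on the treatment (propensity) head
    eps = 1e-7
    t_pred = np.clip(t_pred, eps, 1 - eps)
    ce_loss = -np.mean(t * np.log(t_pred) + (1 - t) * np.log(1 - t_pred))
    # one objective: both terms are minimized together, end-to-end
    return outcome_loss + alpha * ce_loss

# toy example
y = np.array([1.0, 0.0, 2.0])
t = np.array([1, 0, 1])
y0_pred = np.array([0.5, 0.1, 1.0])
y1_pred = np.array([0.9, 0.2, 1.8])
t_pred = np.array([0.8, 0.3, 0.7])
loss = joint_loss(y, t, y0_pred, y1_pred, t_pred)
assert np.isfinite(loss) and loss > 0
```

Because both terms feed one scalar, a single optimizer step updates the shared representation, the outcome heads, and the propensity head at once, which is the end-to-end training the maintainer describes above (as opposed to the two-stage 'nednet' variant).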