Closed ghost closed 2 years ago
Hi, your understanding is absolutely correct. There should be a factor of 0.5 in f_tf, coming from sigma = sqrt(2). Thanks for pointing it out. The bug may have been introduced when I refactored the code. I will double-check and fix it.
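In the thread's notation, with z = \sigma^T \nabla u and \sigma = \sqrt{2}\, I, the factor follows in one line:

-\lambda||\nabla u||^2 = -\lambda||z/\sqrt{2}||^2 = -(\lambda/2)||z||^2,

which is exactly the 0.5 that belongs in f_tf.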
Thank you very much for your reply!
Hello Prof. Han,
I read your DeepBSDE paper with great interest, focusing especially on understanding the LQG control example. I enjoyed studying your paper and learned a lot from it. Thank you for making such a nice contribution and for writing the paper so clearly.
I also appreciate the fact that you provided your code. It is so helpful! I just wanted to double-check something with you about the code in the equation.py file. Inside the HJBLQ class, I understand that the f function for that case should be -\lambda||\nabla u||^2. So inside the code, you have written -self.lambd * tf.reduce_sum(tf.square(z), 1, keepdims=True), where z is used to denote \nabla u.
However, inside solver.py and also in the paper you use z to denote \sigma^T \nabla u. It seems that z is used to denote both \nabla u and \sigma^T \nabla u in that case. Is this accurate? If so, to resolve this issue, should we multiply the expression inside f_tf for the HJBLQ class by 0.5 and write it as -self.lambd * tf.reduce_sum(tf.square(z), 1, keepdims=True) * 0.5?
I just wanted to double-check whether I am confused or there is a typo in the code. Would you mind letting me know? Thank you very much!
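To sanity-check the factor numerically, here is a minimal NumPy sketch (not the repo's TensorFlow code; the names lambd and z mirror equation.py, and the batch axis mimics tf.reduce_sum(..., 1, keepdims=True)):

```python
import numpy as np

rng = np.random.default_rng(42)
lambd = 1.0
grad_u = rng.standard_normal((4, 100))   # a batch of \nabla u samples
z = np.sqrt(2.0) * grad_u                # z = sigma^T grad u, with sigma = sqrt(2) * I

# f written directly in terms of grad u: -lambda * ||grad u||^2
f_ref = -lambd * np.sum(grad_u**2, axis=1, keepdims=True)
# f_tf as currently in the repo (no 0.5), but with z instead of grad u:
f_old = -lambd * np.sum(z**2, axis=1, keepdims=True)
# proposed fix with the extra factor of 0.5:
f_new = -lambd * np.sum(z**2, axis=1, keepdims=True) * 0.5

print(np.allclose(f_ref, f_new))  # True: the 0.5 restores agreement
print(np.allclose(f_ref, f_old))  # False: without it, f is off by a factor of 2
```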