frankhan91 / DeepBSDE

Deep BSDE solver in TensorFlow
MIT License

An issue inside HJBLQ class #7

Closed · ghost closed this issue 2 years ago

ghost commented 2 years ago

Hello Prof. Han,

I read your DeepBSDE paper with great interest and focused especially on understanding the LQGC example. I enjoyed studying your paper and learned a lot from it. Thank you for making such a nice contribution and for writing the paper so clearly.

I also appreciate the fact that you provided your code. It is so helpful! I just wanted to double-check something with you about the code in the equation.py file. Inside the HJBLQ class, I understand that the f function for that case should be -\lambda ||\nabla u||^2. So inside the code, you have written `-self.lambd * tf.reduce_sum(tf.square(z), 1, keepdims=True)`, where z is used to denote \nabla u.

However, inside solver.py and also in the paper you use z to denote \sigma^T \nabla u. It seems that z is used to denote both \nabla u and \sigma^T \nabla u in that case. Is this accurate? If so, to resolve this issue, should we multiply the expression inside f_tf for the HJBLQ class by 0.5 and write it as `-self.lambd * tf.reduce_sum(tf.square(z), 1, keepdims=True) * 0.5`?
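
Spelling out the reasoning I have in mind, assuming \sigma = \sqrt{2}\, I as in the LQ example of the paper:

```latex
z = \sigma^{\top} \nabla u = \sqrt{2}\, \nabla u
\quad\Longrightarrow\quad
\|\nabla u\|^{2} = \tfrac{1}{2}\, \|z\|^{2}
\quad\Longrightarrow\quad
f = -\lambda \|\nabla u\|^{2} = -\tfrac{\lambda}{2}\, \|z\|^{2}.
```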

I just wanted to double-check whether I am confused or whether there is a typo in the code. Would you mind letting me know? Thank you very much!

frankhan91 commented 2 years ago

Hi, your understanding is absolutely correct. There should be a factor of 0.5 in f_tf, coming from sigma = sqrt(2). Thanks for pointing it out. The bug may have been introduced when I refactored the code. I will double-check and fix it.
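
For anyone reading along, here is a rough sketch of what the corrected generator looks like. The standalone wrapper and the sanity check are only for illustration; the actual fix belongs in HJBLQ.f_tf inside equation.py.

```python
import numpy as np
import tensorflow as tf

# Corrected generator for the HJB LQ example, written as a standalone
# function purely for illustration; in the repo this is HJBLQ.f_tf.
# Here z stands for sigma^T grad_u with sigma = sqrt(2) * I, so
# ||grad_u||^2 = ||z||^2 / 2 and f = -lambd * ||z||^2 / 2.
def f_tf(lambd, t, x, y, z):
    return -lambd * tf.reduce_sum(tf.square(z), 1, keepdims=True) * 0.5

# Quick sanity check: for grad_u = ones(d) and z = sqrt(2) * grad_u,
# f should come out as -lambd * d.
d, lambd = 100, 1.0
grad_u = tf.ones([1, d])
z = np.sqrt(2.0) * grad_u
print(f_tf(lambd, None, None, None, z))  # shape (1, 1), value close to -100
```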

ghost commented 2 years ago

Thank you very much for your reply!