Hi @luweizheng
AFAIK, double precision is the standard. I think there is research on utilizing mixed precision to speed up calculations (see, e.g., here). At the time the paper you mention was released, TPUs did not support double precision, but as of today they do (the last time I checked there was a double precision emulator). The paper demonstrates the advantage of TPUs in terms of speed and cost, but you'd still have to be very careful about managing the error. I am not aware of a rule of thumb for single-precision calculations (check out the reference above). I can imagine there are cases where single precision can be used, for example model calibration, where you can verify the calibrated model by testing against market data.
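As a quick way to get a feel for the error, you can run the same pricer in both precisions and compare. Here is a minimal sketch in plain TensorFlow (not the TFF Monte Carlo tooling; all market parameters are made up) that prices a European call by Monte Carlo in float32 and float64 and compares each against the closed-form Black-Scholes value:

```python
import math

import tensorflow as tf


def bs_call(spot, strike, rate, vol, expiry, dtype):
  """Closed-form Black-Scholes call price, used as the reference value."""
  spot, strike, rate, vol, expiry = [
      tf.constant(x, dtype=dtype) for x in (spot, strike, rate, vol, expiry)]
  sqrt_t = tf.sqrt(expiry)
  d1 = (tf.math.log(spot / strike)
        + (rate + 0.5 * vol ** 2) * expiry) / (vol * sqrt_t)
  d2 = d1 - vol * sqrt_t
  norm_cdf = lambda x: 0.5 * (1.0 + tf.math.erf(x / math.sqrt(2.0)))
  return spot * norm_cdf(d1) - strike * tf.exp(-rate * expiry) * norm_cdf(d2)


def mc_call(spot, strike, rate, vol, expiry, num_paths, dtype):
  """One-step Monte Carlo estimate under geometric Brownian motion."""
  z = tf.random.stateless_normal([num_paths], seed=[42, 0], dtype=dtype)
  terminal = spot * tf.exp(
      (rate - 0.5 * vol ** 2) * expiry + vol * (expiry ** 0.5) * z)
  payoff = tf.maximum(terminal - strike, 0.0)
  return math.exp(-rate * expiry) * tf.reduce_mean(payoff)


params = dict(spot=100.0, strike=105.0, rate=0.02, vol=0.3, expiry=1.0)
for dtype in (tf.float32, tf.float64):
  analytic = bs_call(dtype=dtype, **params)
  estimate = mc_call(num_paths=1_000_000, dtype=dtype, **params)
  print(dtype.name, float(analytic), float(estimate),
        abs(float(analytic) - float(estimate)))
```

In a toy case like this the float32 analytic price typically agrees with the float64 one to several significant digits, and the Monte Carlo error is dominated by sampling noise rather than rounding. For long-horizon or path-dependent computations the rounding error can accumulate much faster, which is why checking against a float64 (or analytic) reference is worth doing.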
@cyrilchim Thanks a lot!
I'm happy to hear further discussion.
Hi tff team,
Thank you so much for developing this project. My question may be vague. Which precision should I use when doing derivative pricing? Are there any industry standards? Is float32 enough for most cases? Or does it depend case by case?
Other quant libraries that run on CPU, for example QuantLib and QuantLib.jl, use float64 for almost all scenarios (analytical methods, Monte Carlo, or PDE). CPUs usually have float64 support.
I find TFF usually uses `tf.float64` when choosing `dtype`. TFF can run on accelerators: GPUs or TPUs. NVIDIA GPUs have float64 CUDA cores and are capable of handling float64, but TPUs may not have float64 support. However, I found a paper from Google about doing Monte Carlo on TPUs, which says that in most cases TPUs can handle Monte Carlo. On the hardware side, lower precision usually means faster computation, and other academic papers run experiments on mixed precision.

Is there a simple or quick guide on precision? For example: if the right precision depends on the case, how can I measure whether a given precision is enough for my case (see the sketch below)?
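To show what I mean, here is roughly the kind of comparison I have in mind, adapted from the README example (I am assuming `tff.black_scholes.option_price` also accepts `dtype=tf.float32`, please correct me if that is wrong):

```python
import tensorflow as tf
import tf_quant_finance as tff

# A small batch of vanilla call options (made-up numbers).
volatilities = [0.1, 0.2, 0.3]
forwards = [100.0, 100.0, 100.0]
strikes = [95.0, 100.0, 105.0]
expiries = 1.0


def price(dtype):
  # Same inputs, only the dtype changes.
  return tff.black_scholes.option_price(
      volatilities=volatilities,
      strikes=strikes,
      expiries=expiries,
      forwards=forwards,
      dtype=dtype)


p64 = price(tf.float64)
p32 = price(tf.float32)
# Absolute difference between the two precisions for the same inputs.
print(tf.abs(p64 - tf.cast(p32, tf.float64)).numpy())
```

Would comparing results like this against a float64 run (or against analytic prices, when available) be a reasonable way to decide whether float32 is good enough, or is there a better approach?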
Thanks!