puelon closed this issue 1 year ago.
Did you enable the determinism feature somewhere in your code?
https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_op_determinism
Since we don't support determinism for QAT at the moment, you may need to turn off this feature. Thanks!
Hi Xhark, thank you very much for replying. I was able to run it by turning off determinism.
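For reference, op determinism in TensorFlow is opt-in, so "turning it off" usually just means removing the enabling call. A minimal sketch (the seed value 42 is only an illustration):

```python
import tensorflow as tf

# Op determinism is opt-in (TF >= 2.9); QAT currently fails when it is enabled.
# To reproduce the failure reported above, uncomment the next line:
# tf.config.experimental.enable_op_determinism()

# Seeding alone does NOT enable op determinism, so it is safe to use with QAT,
# even though results may still vary slightly across runs on GPU:
tf.keras.utils.set_random_seed(42)
```

This gives seeded (but not bit-exact) runs while keeping QAT working.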
@Xhark So is there no way to achieve reproducible results?
I'm trying to run Quantization Aware Training (QAT) on TensorFlow with GPU support on my local machine, but I keep running into the following error:
I am currently trying to run it on an RTX 3090, but it is not working. As a test I ran it on Google Colab, where it works without problems. However, I would rather run it on my local machine because I have more RAM available. I am unsure what is causing this error; the package versions I am currently using are as follows:
- TensorFlow version: 2.10.0 (with GPU support)
- CUDA version: 64_112
- cuDNN version: 64_8
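When a model runs on Colab but not locally, it helps to confirm that the local interpreter actually sees the GPU build. A small check (nothing here is from the original report):

```python
import tensorflow as tf

# Confirm the TF build and that the local GPU (e.g. the RTX 3090) is visible.
print("TF version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPUs:", tf.config.list_physical_devices("GPU"))
```

An empty GPU list here would point to a CUDA/cuDNN installation problem rather than a QAT bug.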
Model layers:
QAT code used:
See the error logs it triggers below: