Hello, since this is a big project, I do not know which part is responsible for the problem, so I am posting the issue here.
Using the rocm/tensorflow:latest container pulled yesterday, as soon as my model has three or more convolution/deconvolution + merge layers, training hangs forever (even after 6 hours of non-stop running) at:
tensorflow/core/kernels/conv_grad_input_ops.cc:981] running auto-tune for Backward-Data
Here is a screenshot:
And here is the model summary from Keras:
I am using a custom loss function (a combination of DSSIM, MSE and MAE), but it didn't cause any problem with the same model without the third convolution layer, nor with a model with more convolutions but no merge layers. Could there be some kind of loop, or is it a bug?
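For reference, here is roughly how the loss combines the three terms. This is a minimal numpy sketch, not my actual Keras implementation (which uses backend tensors); the weights `w_dssim`, `w_mse`, `w_mae` are placeholders, and the SSIM here is a simplified global version rather than the windowed one:

```python
import numpy as np

def dssim_mse_mae_loss(y_true, y_pred, w_dssim=1.0, w_mse=1.0, w_mae=1.0):
    """Sketch of a DSSIM + MSE + MAE loss; weights are hypothetical."""
    mse = np.mean((y_true - y_pred) ** 2)
    mae = np.mean(np.abs(y_true - y_pred))
    # Simplified global SSIM (no sliding window), with the usual
    # constants for data scaled to [0, 1]: c1 = 0.01^2, c2 = 0.03^2.
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    ssim = ((2 * mu_t * mu_p + c1) * (2 * cov + c2)) / (
        (mu_t ** 2 + mu_p ** 2 + c1) * (var_t + var_p + c2)
    )
    dssim = (1.0 - ssim) / 2.0  # DSSIM = (1 - SSIM) / 2, in [0, 1]
    return w_dssim * dssim + w_mse * mse + w_mae * mae
```

For identical inputs the loss is 0 (SSIM is 1, MSE and MAE are 0), and it grows as the prediction diverges from the target.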
Thank you :smile:
PS: if this issue should not be posted here, please tell me where, so I can close it here and open it there.