aquilaadrian opened 3 years ago
Longer training session
ResNet50 with tensorflow 2 cpu:
ResNet50 with tensorflow 1.15 gpu:
I would not say it's expected or normal to see this kind of difference. We've fixed a number of bugs since the last package of tensorflow-directml on pypi.org, and we still have more to go through, so it's possible this will be resolved in the next release (hopefully quite soon, i.e. weeks). It would be great if you could try this again when we have the new release, which also has many performance improvements, since this type of issue can be quite challenging to debug without the same data you're using.
We're also ramping up our own internal conformance testing to hopefully catch more issues like this.
@jstoecker thanks for the answer, really appreciate it. Unfortunately I cannot share the dataset because of restrictions. I also tried it with tensorflow-rocm and got results closer to CPU TensorFlow. I'll try this again once the new release is up and give a follow-up.
Thanks @aquilaadrian. It's really helpful that you file these types of issues regardless, as it's something we'll try to look at (with our own data) at some point. :)
Hi, I am running tensorflow-directml==1.15.7 (Windows) on my RX6600XT 8GB
I use Xception to train a dogs-vs-cats classifier (train:test = 9:1, split from the Kaggle train dataset).
The validation accuracy stays at 0.5 for the first two epochs,
but val_acc starts going up from the third epoch onward...
It seems like the model used for validation is not the same as the training model, since train_acc is already above 0.6 within the first epoch.
My GPU driver is AMD Software Adrenalin Edition==22.10.01.03 (2022/5/5)
Any suggestion?
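For reference, this is roughly what my setup looks like (a simplified sketch, not my exact script; the folder path, batch size, and optimizer are placeholders, and the input is shown here as plain 3-channel images):

```python
import tensorflow as tf

# Kaggle dogs-vs-cats train folder, pre-split into class subfolders,
# 9:1 train/validation via validation_split (paths are placeholders).
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255, validation_split=0.1)
train_gen = datagen.flow_from_directory(
    "dogs-vs-cats/train", target_size=(299, 299), batch_size=16,
    class_mode="sparse", subset="training")
val_gen = datagen.flow_from_directory(
    "dogs-vs-cats/train", target_size=(299, 299), batch_size=16,
    class_mode="sparse", subset="validation")

# Xception trained from scratch (weights=None) with its built-in classification top.
model = tf.keras.applications.Xception(weights=None, classes=2)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# On tensorflow-directml 1.15, val_acc sits at 0.5 for the first two epochs
# even though train_acc is already above 0.6 in epoch 1.
model.fit_generator(train_gen, epochs=10, validation_data=val_gen)
```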
Hi @CardLin,
Is this a problem that you were having with tensorflow-directml==1.15.5, or only 1.15.7?
I have fixed this issue by adding GAP and Dense layers before Xception...
But it is strange: why is the result different from tensorflow-gpu==2.8.0 with CUDA running on an RTX 2070?
I use 299x299x15 as input, so I have to use weights=None (random initialization).
Maybe the random initialization of the weights is different?
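Roughly the change I mean, as a sketch (this is my reading of the fix, with the GlobalAveragePooling2D + Dense head attached to the Xception base; layer sizes are placeholders, not my exact code):

```python
import tensorflow as tf

# My data is 15-channel, so ImageNet weights can't be loaded: weights=None.
base = tf.keras.applications.Xception(
    weights=None, include_top=False, input_shape=(299, 299, 15))

# Explicit GAP + Dense classification head on the Xception base.
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)
model = tf.keras.Model(inputs=base.input, outputs=outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```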
System Information:
Windows 10 Build/Version: 20H2 (OS Build 19042.906), native Windows
Python Version: Python 3.7 via Anaconda virtual env
TensorFlow-DirectML Version: 1.15.4.dev201216
Graphics card driver version: Radeon Adrenalin 21.2.3
Graphics card: Radeon RX Vega 64 8 GB
Repro:
Hey, I did transfer learning for image classification with ResNet50 on my own dataset, based on https://github.com/krishnaik06/Tomato-Leaf-Disease-Prediction and its "Transfer Learning Resnet 50.ipynb" script.
I compared two results: one from CPU TensorFlow 2 and one from DirectML TensorFlow 1.15 running on a Vega 64. However, the results of the two runs are drastically different.
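The training code follows the linked notebook's pattern; a minimal sketch of it (the dataset path, image size, batch size, epoch count, and class count below are placeholders, not my actual values):

```python
import tensorflow as tf

IMAGE_SIZE = (224, 224)
NUM_CLASSES = 10  # placeholder: the number of classes in my own dataset

# ImageNet-pretrained ResNet50 base with the classification top removed.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=IMAGE_SIZE + (3,))
for layer in base.layers:
    layer.trainable = False  # freeze the pretrained weights

# Flatten + softmax head on top of the frozen base.
x = tf.keras.layers.Flatten()(base.output)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs=base.input, outputs=outputs)

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# 80/20 train/validation split from one folder of class subdirectories.
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255, validation_split=0.2)
train_gen = datagen.flow_from_directory(
    "dataset/", target_size=IMAGE_SIZE, batch_size=32,
    class_mode="categorical", subset="training")
val_gen = datagen.flow_from_directory(
    "dataset/", target_size=IMAGE_SIZE, batch_size=32,
    class_mode="categorical", subset="validation")

# fit_generator on tensorflow-directml 1.15; model.fit(...) works the same way on newer TF 2.x.
model.fit_generator(train_gen, epochs=5, validation_data=val_gen)
```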
Five epochs, CPU TensorFlow 2:
Five epochs, GPU DirectML TensorFlow 1.15:
I also tried to run the code on Google Colab, but only for one epoch because of the long training time:
epoch 1/10 323/323 [==============================] - 13357s 41s/step - loss: 0.7214 - accuracy: 0.8985 - val_loss: 0.1454 - val_accuracy: 0.9617
Is this normal because of the different TensorFlow versions? Execution time for CPU TensorFlow is about ±20 min per epoch; execution time for GPU DirectML TensorFlow is about ±16 min per epoch.