[Closed] Alimarashli closed this issue 2 years ago.
Thank you for your post. We noticed you have not filled out the following fields in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks.

- What is the top-level directory of the model you are using
- Have I written custom code
- OS Platform and Distribution
- TensorFlow installed from
- TensorFlow version
- Bazel version
- CUDA/cuDNN version
- GPU model and memory
- Exact command to reproduce
Hi, this is a Keras question. Could you ask it in the Keras repo? https://github.com/keras-team/keras
@Alimarashli I was able to run your code without any issues with TF 2.8.2. Here is the gist for our reference. Please try with a recent TF version and let us know if this resolves the issue for you.
If you want to use more data, try following the guide Better performance with the tf.data API. Thanks!
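To make the suggestion above concrete, here is a minimal sketch of an input pipeline built along the lines of that guide. The array shapes below are hypothetical stand-ins for the notebook's data, which is not reproduced here:

```python
import numpy as np
import tensorflow as tf

# Hypothetical data standing in for the notebook's training set:
# 256 sequences of 128 timesteps with 1 channel each.
x = np.random.rand(256, 128, 1).astype("float32")

# tf.data pipeline: cache() keeps the data in memory after the first
# epoch, prefetch() overlaps data preparation with model execution.
ds = (
    tf.data.Dataset.from_tensor_slices((x, x))  # autoencoder: target == input
    .shuffle(256)
    .batch(32)
    .cache()
    .prefetch(tf.data.AUTOTUNE)
)
```

The resulting dataset can be passed directly to `model.fit(ds, epochs=...)` instead of raw NumPy arrays.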
Please close the issue if this was resolved for you.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.
Closing as stale. Please reopen if you'd like to work on this further.
Hi,
I am training a Conv1D autoencoder, but when I call model.fit() it gets stuck at epoch 1/2 regardless of how small the batch size is. When running on Colab with random data of the same size, it runs out of memory and disconnects (I am not sure whether it is GPU RAM or system RAM). The code runs for smaller data, and I also tried it on my personal workstation with an RTX 3090 and 128 GB of memory. I am not sure what I can do to fix the issue: the data size is only 4 MB, while GPU memory is 24 GB and system memory is 128 GB, but even with a dataset of 2 samples and a batch size of 2 it still gets stuck.

Code and Colab link: https://colab.research.google.com/drive/1KL3tYnJc8rNn-5eqIPtdQrheogfwic0h?usp=sharing

Note: I come from a physics background, so sorry if the answer is obvious and I did something wrong.
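For readers without access to the notebook, here is a minimal sketch of a Conv1D autoencoder of the kind described, with hypothetical layer sizes and tiny random data; it is not the reporter's actual code, only a small self-contained setup that should fit easily in memory:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical shapes; the notebook's real data is not reproduced here.
timesteps, channels = 128, 1
x = np.random.rand(8, timesteps, channels).astype("float32")

# Encoder: Conv1D + pooling halves the sequence length (128 -> 64).
inputs = keras.Input(shape=(timesteps, channels))
h = layers.Conv1D(16, 3, activation="relu", padding="same")(inputs)
h = layers.MaxPooling1D(2, padding="same")(h)
# Decoder: upsampling restores the original length (64 -> 128).
h = layers.Conv1D(8, 3, activation="relu", padding="same")(h)
h = layers.UpSampling1D(2)(h)
outputs = layers.Conv1D(channels, 3, activation="linear", padding="same")(h)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Train the model on itself (autoencoder target == input).
autoencoder.fit(x, x, epochs=1, batch_size=2, verbose=0)
```

If a setup like this runs but the notebook's version hangs, the difference is likely in the data pipeline or layer sizes rather than in model.fit() itself.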