Closed jessicametzger closed 3 years ago
Actually, I was able to avoid both this issue and the GPU memory issue by setting `os.environ["CUDA_VISIBLE_DEVICES"] = "-1"` to switch to the CPU, and `"0"` to switch to the GPU, instead of using `tf.device()`. I still have no idea why one works and the other does not for something like compiling the model, but this works just as well for me.
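For anyone hitting the same thing, a minimal sketch of the workaround described above (the key detail is that `CUDA_VISIBLE_DEVICES` is read when CUDA initializes, so it must be set before TensorFlow is imported):

```python
import os

# Hide all GPUs from CUDA *before* importing TensorFlow; the variable is
# read once at CUDA initialization, so setting it after the import has
# no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"   # "-1" = CPU only
# os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # "0"  = first GPU

# The TensorFlow import must come after the assignment above:
# import tensorflow as tf
```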
I am trying to train the ssd300 model, but it throws an error when I compile it with the ssd_loss function. I am calling functions I wrote, which construct the model, load the weights, and compile it, from a Jupyter notebook. All model parameters have stayed the same.
System information:
Here is the full stack trace:
where the file test_model_compilation.py can be found here: test_model_compilation.zip. Running the code from a Jupyter notebook should reproduce this error.
Strangely, running it from a Python file throws a different error first: when `tf.device('CPU:0')` is executed, CUDA throws a memory error. That isn't ssd_keras-specific, so I won't include it here, but I can't say whether the above error would also be thrown outside Jupyter notebooks because of this. I am at a loss as to why this is happening, and any help would be greatly appreciated. Thanks.
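One thing worth checking (a hedged guess, not a confirmed cause of the error above): `tf.device()` returns a context manager, so a bare call like `tf.device('CPU:0')` constructs the manager but never enters it and pins nothing. A minimal sketch of the intended usage, with a fallback so it runs even without TensorFlow installed:

```python
import os
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "-1")  # keep this sketch CPU-only

try:
    import tensorflow as tf

    # tf.device is meant to be used as a context manager: ops created
    # inside the `with` block are placed on the named device. A bare
    # call like tf.device('CPU:0') builds the context manager but never
    # enters it, so it places nothing.
    with tf.device("/CPU:0"):
        x = tf.constant([1.0, 2.0])
        vals = (x * 2.0).numpy().tolist()
except ImportError:
    # TensorFlow not installed; fall back so the sketch still runs.
    vals = [2.0, 4.0]

print(vals)
```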