You can initialize the GPU devices for TensorFlow with dynamic memory allocation (memory growth):
import tensorflow as tf

# Enable memory growth so TensorFlow allocates GPU memory on demand
# instead of reserving all of it up front.
gpus = tf.config.experimental.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
Then you can specify the device for BLEURT loading:
with tf.device(f"GPU:{rank}"):
    self.scorer = score.BleurtScorer(args.bleurt_ckpt)
Note that TensorFlow initialization touches every visible device, so you have to enable dynamic memory allocation on all visible devices.
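Putting the two snippets together, a minimal self-contained sketch might look like the following; the checkpoint path, the `rank` value, and the example sentences are placeholders for whatever your own code uses:

import tensorflow as tf
from bleurt import score

# Enable dynamic memory allocation on every visible GPU before any
# TensorFlow op runs; otherwise the default allocator claims all memory.
for gpu in tf.config.experimental.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

rank = 0  # placeholder: index of the GPU you want BLEURT on
bleurt_ckpt = "BLEURT-20"  # placeholder: path to your downloaded checkpoint

# Pin the BLEURT scorer to that single GPU.
with tf.device(f"GPU:{rank}"):
    scorer = score.BleurtScorer(bleurt_ckpt)

print(scorer.score(references=["a cat sat"], candidates=["a cat sat"]))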
Closing as issue not relevant anymore - sorry and thanks!
Not quite sure what's happening here - running CUDA 11.6 and TensorFlow 2.10.0. No matter what checkpoint I use, all available GPU memory is consumed. Minimal reproducible example here:
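(The original snippet isn't shown; a hypothetical sketch of the kind of code that reproduces this, with the checkpoint path as a placeholder:)

from bleurt import score

# Loading any BLEURT checkpoint without memory growth configured:
# TensorFlow's default allocator reserves essentially all GPU memory.
scorer = score.BleurtScorer("BLEURT-20")  # placeholder checkpoint path
print(scorer.score(references=["hello world"], candidates=["hello there"]))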
nvidia-smi results after loading (right before, it shows only 4 MiB in use):