I have checked the training script hf-training-example.py; by default it trains the model on the CPU. I have two GPUs, but if I enable the GPU in the above code, I get a CUDA out-of-memory error. How can I limit the memory usage, just like in the inference example you provided for CUDA?