IIUC, @karpathy used an A100 80GB but you seem to have 40GB. Have you tried reducing the batch size B to, say, 16 or 32? https://github.com/karpathy/build-nanogpt/blob/master/train_gpt2.py#L325
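Roughly something like this (just a sketch; the variable names `B`, `T`, `total_batch_size`, and `ddp_world_size` are the ones the repo uses, and the reduced `B = 16` is only an example value):

```python
# train_gpt2.py, batch-size configuration (sketch). Lowering the micro batch
# size B reduces per-step activation memory on a 40 GB GPU, while gradient
# accumulation keeps the effective batch of total_batch_size tokens the same.
ddp_world_size = 1          # set by the DDP setup earlier in the script
total_batch_size = 524288   # 2**19 tokens per optimizer step
B = 16                      # micro batch size, reduced from 64
T = 1024                    # sequence length
assert total_batch_size % (B * T * ddp_world_size) == 0
grad_accum_steps = total_batch_size // (B * T * ddp_world_size)
```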
@andytwigg Thank you for your answer. I tried again with 1 GPU (40 GB) and decreased the batch size to 2, and then I got another error:
```
[rank0]:     x, y = data_loader.next_batch()
[rank0]:   File "~/my_transformer/GPT.py", line 215, in next_batch
[rank0]:     x = (buf[:-1]).view(B,T)
[rank0]: RuntimeError: shape '[2, 1024]' is invalid for input of size 104
```
I think the problem now is in the `next_batch()` function (see https://github.com/karpathy/build-nanogpt/blob/master/train_gpt2.py#L243): the reshape fails when the end of the token buffer is reached. The code runs for a while but then raises the error above because only 104 tokens remain to be processed:
```python
buf = self.tokens[self.current_position : self.current_position + B*T + 1]
x = (buf[:-1]).view(B, T)
```
Any idea how to address this? I was thinking of checking with the modulo (%) operator whether the remaining tokens are divisible by (B*T+1). However, I think that would be a quick-and-dirty solution. Any other suggestions? I am also wondering why it works in the tutorial; I guess I must have missed something. Looking at the repository again, I suspect the loader there avoids this by resetting `current_position` whenever the next slice would run past the end of the token tensor, so the view never sees a short buffer. A minimal single-process sketch of that idea is below.
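(Sketch only; attribute and method names follow the tutorial's `DataLoaderLite`, and the wrap-around check at the end is the relevant part.)

```python
import torch

class DataLoaderLite:
    """Minimal sketch of the tutorial-style data loader with wrap-around."""
    def __init__(self, tokens, B, T):
        self.tokens = tokens          # 1-D tensor of token ids
        self.B, self.T = B, T
        self.current_position = 0

    def next_batch(self):
        B, T = self.B, self.T
        buf = self.tokens[self.current_position : self.current_position + B*T + 1]
        x = buf[:-1].view(B, T)   # inputs
        y = buf[1:].view(B, T)    # targets, shifted right by one token
        self.current_position += B * T
        # Reset before the next call if the upcoming slice would run past the
        # end of the data, so buf always holds exactly B*T+1 tokens and view() works.
        if self.current_position + (B * T + 1) > len(self.tokens):
            self.current_position = 0
        return x, y
```

In the multi-GPU version the position is advanced per rank (by B*T times the number of processes, if I read the repo correctly), but the same bounds check applies: resetting (or moving to the next shard) before the slice comes up short keeps every batch exactly B*T+1 tokens long, which seems cleaner than a modulo check on the remainder.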
Hi,
I have tried to implement GPT-2 from scratch following the video tutorial. However, if I try to execute the code on 2 GPUs with:
My program fails with the following error message:
If I execute with just 1 GPU, I get another error:
Any ideas what the reason could be? I followed the video tutorial exactly and also checked the code in the repository. I should have enough memory; according to nvidia-smi I get the following output:
Thanks in advance.