lucidrains / PaLM-pytorch

Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways
MIT License

cuDNN error: CUDNN_STATUS_INTERNAL_ERROR error #6

Open · unwritten opened this issue 2 years ago

unwritten commented 2 years ago

The code segment below raises the error in the title during multi-GPU training:

    # rotary embeddings
    positions = self.get_rotary_embedding(n, device)
    q, k = map(lambda t: apply_rotary_pos_emb(positions, t), (q, k))
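
For reference, the two helpers called in that snippet implement standard rotary position embeddings. A minimal sketch of how they are commonly written (the names come from the snippet above; the internals here are assumed, not copied from this repository):

    import torch
    from einops import rearrange

    class RotaryEmbedding(torch.nn.Module):
        def __init__(self, dim):
            super().__init__()
            # inverse frequencies for each pair of dimensions
            inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2).float() / dim))
            self.register_buffer('inv_freq', inv_freq)

        def forward(self, max_seq_len, *, device):
            # angles of shape (seq_len, dim): position index times inverse frequency
            seq = torch.arange(max_seq_len, device = device, dtype = self.inv_freq.dtype)
            freqs = torch.einsum('i, j -> i j', seq, self.inv_freq)
            return torch.cat((freqs, freqs), dim = -1)

    def rotate_half(x):
        # split the last dimension into two halves and swap them with a sign flip
        x = rearrange(x, '... (j d) -> ... j d', j = 2)
        x1, x2 = x.unbind(dim = -2)
        return torch.cat((-x2, x1), dim = -1)

    def apply_rotary_pos_emb(pos, t):
        # rotate queries / keys by the precomputed sin / cos angles
        return (t * pos.cos()) + (rotate_half(t) * pos.sin())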
lucidrains commented 2 years ago

hmm, are you sure you aren't OOM?
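
One quick way to rule that out is to log CUDA memory usage right before the failing call. A minimal sketch, assuming a single-node setup with one or more visible GPUs:

    import torch

    # print per-device memory statistics just before the forward pass
    for i in range(torch.cuda.device_count()):
        allocated = torch.cuda.memory_allocated(i) / 1e9
        reserved = torch.cuda.memory_reserved(i) / 1e9
        print(f'cuda:{i}: {allocated:.2f} GB allocated, {reserved:.2f} GB reserved')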

conceptofmind commented 2 years ago

> The code segment below raises the error in the title during multi-GPU training:
>
>     # rotary embeddings
>     positions = self.get_rotary_embedding(n, device)
>     q, k = map(lambda t: apply_rotary_pos_emb(positions, t), (q, k))

Are you using a specific library for parallel training? Horovod, PyTorch Lightning, Fairscale, Deepspeed, or plain PyTorch with model = nn.DataParallel(model)? I have tested multi-GPU training with both Deepspeed and model = nn.DataParallel(model) so far. cuDNN errors can be quite difficult to debug. Have you tried running the model on CPU, or calling .detach() on the offending tensors, to get a more informative error message?
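
For reference, a minimal sketch of the nn.DataParallel setup mentioned above, plus a CPU fallback that usually produces a more specific error than CUDNN_STATUS_INTERNAL_ERROR. The PaLM constructor arguments below are illustrative assumptions, not values from this thread:

    import torch
    from torch import nn
    from palm_pytorch import PaLM  # import as shown in the repository README

    def build_model():
        # hyperparameters are placeholders for illustration
        return PaLM(num_tokens = 20000, dim = 512, depth = 12, heads = 8, dim_head = 64)

    tokens = torch.randint(0, 20000, (4, 1024))

    # replicate the model across all visible GPUs
    model = nn.DataParallel(build_model()).cuda()
    logits = model(tokens.cuda())

    # if the line above fails with CUDNN_STATUS_INTERNAL_ERROR, rerun the same
    # forward pass on CPU (or with CUDA_LAUNCH_BLOCKING=1) to get a clearer trace:
    # logits = build_model()(tokens)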