microsoft / DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
https://www.deepspeed.ai/
Apache License 2.0

Different settings for the same (num_gpus * batch_size * grad_accum_steps) produce different loss and gradient norm #5583

Open SeunghyunSEO opened 4 months ago

SeunghyunSEO commented 4 months ago

Hi, I have observed significant performance degradation in multi-GPU settings with grad accum > 1. Sorry for not uploading the profiling code yet (it will be uploaded soon), but I tested a 7B-scale LLM using identically sized input data (fixed seed, with the same sequence expanded along the batch dimension).

I expected the loss and gradients to be the same across the different training settings because their (num_gpus * batch_size * grad_accum_steps) products are all equal, but each experiment produced different losses and gradients. The (num_gpus * batch_size * grad_accum_steps) settings are as follows:

I ran fwd+bwd 4 times using AdamW with lr=0.01 (I set the lr high for strict profiling) and CPU-offloaded ZeRO-3 (for the 1-GPU case too, because of memory issues).
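(Side note, not from the original post: a rough sketch of the kind of DeepSpeed config this setup corresponds to. The batch-size values below are placeholders; only the keys follow DeepSpeed's config schema.)

ds_config = {
    "train_micro_batch_size_per_gpu": 4,   # placeholder, varied per experiment
    "gradient_accumulation_steps": 2,      # placeholder, varied per experiment
    "optimizer": {"type": "AdamW", "params": {"lr": 0.01}},
    "zero_optimization": {
        "stage": 3,                        # ZeRO-3
        "offload_optimizer": {"device": "cpu"},
        "offload_param": {"device": "cpu"},
    },
    "bf16": {"enabled": True},
}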

Here is my question: is it correct that the outputs should all be the same when (num_gpus * batch_size * grad_accum_steps) is equal?
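(For reference: the quantity being held fixed is what DeepSpeed calls train_batch_size, which must equal train_micro_batch_size_per_gpu * gradient_accumulation_steps * world_size. A small illustration with made-up values:)

# two configs with the same effective train_batch_size of 32
ds_config_8gpu = {                       # run with 8 GPUs
    "train_batch_size": 32,
    "train_micro_batch_size_per_gpu": 2,
    "gradient_accumulation_steps": 2,    # 2 * 2 * 8 = 32
}
ds_config_1gpu = {                       # run with 1 GPU
    "train_batch_size": 32,
    "train_micro_batch_size_per_gpu": 8,
    "gradient_accumulation_steps": 4,    # 8 * 4 * 1 = 32
}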

mhy9989 commented 4 months ago

I kept train_batch_size the same across different training runs and found that increasing num_gpus caused the loss to increase. I don't know why.

SeunghyunSEO commented 3 months ago

@mhy9989 Oh, I forgot about this. Now that I think about it, it was a stupid question lol. If you take something like loss.mean(), you can see that gradient accumulation is a different operation once you derive the backprop: you average per-micro-batch means instead of taking one mean over the full batch. On top of that mathematical difference there are numerical errors, so it will never be exactly the same. Of course, I still don't think grad accum should cause any real performance degradation or divergence.
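A quick numerical illustration of that point (my own toy numbers, not part of the repro script below): when micro-batches contain different numbers of valid tokens, the average of per-micro-batch means is not the mean over all tokens.

import torch

# per-token losses split into two micro-batches of unequal valid length
mb0 = torch.tensor([1.0, 2.0, 3.0])   # 3 valid tokens
mb1 = torch.tensor([4.0])             # 1 valid token

mean_of_means = (mb0.mean() + mb1.mean()) / 2    # (2.0 + 4.0) / 2 = 3.0
global_mean = torch.cat([mb0, mb1]).mean()       # 10.0 / 4 = 2.5
print(mean_of_means.item(), global_mean.item())  # 3.0 2.5

The script below makes the same comparison with a toy model, running one full-batch step against an accumulated step over the same data: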

import copy
import random

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

# Set random seed for reproducibility
def set_seed(seed_val: int = 42):
    random.seed(seed_val)
    np.random.seed(seed_val)
    torch.manual_seed(seed_val)
    torch.cuda.manual_seed_all(seed_val)

# settings
seed = 42
vocab_size = 32768
d_embd = 1024
bsz = 32
seq_len = 512
dtype = torch.bfloat16

# create input and target; the last 20 targets of the last sample are masked with ignore_index=-100
set_seed(seed)
x = torch.randn((bsz, seq_len, d_embd)).cuda().to(dtype=dtype)
y = torch.randint(0, vocab_size, (bsz, seq_len)).cuda()
y[-1, -20:] = -100  # these masked tokens land in the second micro-batch when the batch is split

class Model(nn.Module):
    def __init__(self, vocab_size, d_embd):
        super(Model, self).__init__()
        self.vocab_size = vocab_size
        self.ffn = nn.Linear(d_embd, d_embd, bias=False)
        self.unemb = nn.Linear(d_embd, vocab_size, bias=False)

    def forward(self, x, y, reduction):
        x = self.unemb(F.relu(self.ffn(x))).float()
        x = x.contiguous().view(-1, self.vocab_size)
        y = y.contiguous().view(-1).to(x.device)
        assert x.size(0) == y.size(0), f"x.size()({x.size()}) != y.size(){y.size()}"
        loss = nn.CrossEntropyLoss(reduction=reduction)(x, y)
        num_valid_tokens = (y != -100).sum()
        if reduction == 'sum':
            # normalize the summed loss by the valid tokens seen in *this* forward call;
            # for a micro-batch this count differs from the full-batch count
            loss = loss / num_valid_tokens
        print(f'x.size(): {x.size()}, num_valid_tokens: {num_valid_tokens}')
        return loss

# two identically initialized copies: `model` takes full-batch steps, `model_` takes accumulated steps
set_seed(seed)
model = Model(vocab_size, d_embd).cuda().to(dtype=dtype)
optimizer = optim.Adam(model.parameters(), lr=0.001)
set_seed(seed)
model_ = Model(vocab_size, d_embd).cuda().to(dtype=dtype)
optimizer_ = optim.Adam(model_.parameters(), lr=0.001)

reduction='sum'
# reduction='mean'

num_accum = 2

for epoch in range(5):
    # (1) single full-batch forward/backward step
    loss = model(x, y, reduction)
    loss.backward()
    ffn_grad_cache = copy.deepcopy(model.ffn.weight.grad)
    unemb_grad_cache = copy.deepcopy(model.unemb.weight.grad)
    optimizer.step()
    optimizer.zero_grad()

    # (2) the same data split into num_accum micro-batches; each per-micro-batch loss is
    # backpropagated as-is (no division by num_accum), so gradients simply add up
    avg_loss = 0.0
    for accum in range(num_accum):
        x_ = x[accum * (bsz // num_accum):(accum + 1) * (bsz // num_accum), :, :]
        y_ = y[accum * (bsz // num_accum):(accum + 1) * (bsz // num_accum), :]
        loss_ = model_(x_, y_, reduction)
        avg_loss += loss_
        loss_.backward()

    avg_loss /= num_accum  # averaged only for reporting, not for the backward pass
    ffn_grad_cache_ = copy.deepcopy(model_.ffn.weight.grad)
    unemb_grad_cache_ = copy.deepcopy(model_.unemb.weight.grad)
    optimizer_.step()
    optimizer_.zero_grad()

    print(f'''
    reduction: {reduction}
    num_accum: {num_accum}
    loss (not accum): {loss}
    loss (accum): {avg_loss}
    loss diff? : {loss-avg_loss}
    ffn_grad allclose?: {torch.allclose(ffn_grad_cache, ffn_grad_cache_)}, abs diff max: {(ffn_grad_cache - ffn_grad_cache_).abs().max()}
    unemb_grad allclose?: {torch.allclose(unemb_grad_cache, unemb_grad_cache_)}, abs diff max: {(unemb_grad_cache - unemb_grad_cache_).abs().max()}
    ''')
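One way to make the two paths agree (a sketch of my own, reusing Model, x, y, num_accum, and F from the script above, not something the script itself does): normalize each micro-batch's summed loss by the number of valid tokens in the whole global batch rather than in the micro-batch. The accumulated gradient should then be mathematically equal to the full-batch gradient, leaving only floating-point error, and it also avoids the roughly num_accum-fold gradient inflation in the script above, where each per-micro-batch-normalized loss is backpropagated without dividing by num_accum.

def accumulate_with_global_norm(model, x, y, num_accum, ignore_index=-100):
    # count valid tokens over the *full* batch once, before splitting
    global_valid = (y != ignore_index).sum()
    micro_bsz = x.size(0) // num_accum
    for i in range(num_accum):
        x_mb = x[i * micro_bsz:(i + 1) * micro_bsz]
        y_mb = y[i * micro_bsz:(i + 1) * micro_bsz]
        # same computation as Model.forward, but with the normalization done here
        logits = model.unemb(F.relu(model.ffn(x_mb))).float()
        loss = F.cross_entropy(
            logits.view(-1, model.vocab_size),
            y_mb.contiguous().view(-1),
            reduction='sum',
            ignore_index=ignore_index,
        ) / global_valid   # divide by the global token count, not the micro-batch count
        loss.backward()    # accumulated grads now sum to the full-batch gradient

With mean reduction, the equivalent fix is to weight each micro-batch's mean loss by its own valid-token count divided by the global count before calling backward().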