OpenNMT / OpenNMT-py

Open Source Neural Machine Translation and (Large) Language Models in PyTorch
https://opennmt.net/
MIT License

OpenNMT v3.5.0 training fails using Multi headed attention #2574

Closed RakshaPRao closed 3 months ago

RakshaPRao commented 3 months ago

Distributed training with source features fails with the following error:

File "/usr/local/lib/python3.8/site-packages/onmt/utils/distributed.py", line 177, in spawned_train
    process_fn(opt, device_id=device_id)
  File "/usr/local/lib/python3.8/site-packages/onmt/train_single.py", line 238, in main
    trainer.train(
  File "/usr/local/lib/python3.8/site-packages/onmt/trainer.py", line 319, in train
    self._gradient_accumulation(
  File "/usr/local/lib/python3.8/site-packages/onmt/trainer.py", line 533, in _gradient_accumulation
    raise exc
  File "/usr/local/lib/python3.8/site-packages/onmt/trainer.py", line 497, in _gradient_accumulation
    model_out, attns = self.model(
  File "/usr/local/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/onmt/models/model.py", line 278, in forward
    dec_out, attns = self.decoder(
  File "/usr/local/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/onmt/decoders/transformer.py", line 699, in forward
    dec_out, attn, attn_align = layer(
  File "/usr/local/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/onmt/decoders/transformer.py", line 186, in forward
    layer_out, attns = self._forward(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/onmt/decoders/transformer.py", line 391, in _forward
    self_attn, _ = self._forward_self_attn(
  File "/usr/local/lib/python3.8/site-packages/onmt/decoders/transformer.py", line 238, in _forward_self_attn
    return self.self_attn(
  File "/usr/local/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/onmt/modules/multi_headed_attn.py", line 656, in forward
    attn_output = scaled_dot_product_attention(
RuntimeError: _scaled_dot_product_attention: Explicit attn_mask should not be set when is_causal=True
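
For context, the call the traceback ends in is PyTorch's scaled_dot_product_attention, which rejects an explicit attn_mask combined with is_causal=True. A minimal sketch outside OpenNMT (tensor shapes are illustrative) that reproduces the same RuntimeError under the reporter's PyTorch 2.0.1:

import torch
import torch.nn.functional as F

# (batch, heads, seq_len, head_dim); values are arbitrary.
q = torch.randn(1, 12, 5, 64)
k = torch.randn(1, 12, 5, 64)
v = torch.randn(1, 12, 5, 64)
causal_mask = torch.ones(5, 5, dtype=torch.bool).tril()  # True = may attend

try:
    # Setting both attn_mask and is_causal raises the error shown in the traceback.
    F.scaled_dot_product_attention(q, k, v, attn_mask=causal_mask, is_causal=True)
except RuntimeError as exc:
    print(exc)

# Either argument on its own is accepted:
out = F.scaled_dot_product_attention(q, k, v, attn_mask=causal_mask)  # explicit mask
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)         # built-in causal masking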
vince62s commented 3 months ago

Hi, you need to provide your PyTorch version, whether you installed flash-attn2, and your YAML file; otherwise it is difficult to help you out.
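
For reference, one quick way to gather those details (a rough sketch, assuming flash-attn2 installs under the module name flash_attn):

import importlib.util
import torch

print("torch:", torch.__version__)
print("flash-attn available:", importlib.util.find_spec("flash_attn") is not None)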

RakshaPRao commented 3 months ago

Hi, the PyTorch version is 2.0.1, flash-attn2 is not installed, and the YAML is:

accum_count: 8
accum_steps: 0
adam_beta1: 0.9
adam_beta2: 0.998
batch_size: 4096
batch_size_multiple: 1
batch_type: tokens
bucket_size: 32768
decay_method: noam
decoder_type: transformer
dropout: 0.2
early_stopping: 10
encoder_type: transformer
feat_merge: sum
heads: 12
hidden_size: 768
keep_checkpoint: 20
label_smoothing: 0.1
layers: 6
learning_rate: 2.0
max_generator_batches: 0
max_grad_norm: 0.0
n_src_feats: 1
normalization: tokens
optim: adam
param_init: 0.0
param_init_glorot: 'true'
pool_factor: 8192
position_encoding: 'true'
queue_size: 1024
report_every: 100
save_checkpoint_steps: 5000
seed: 1234
share_vocab: true
src_feats_defaults: N
src_seq_length: 600
src_vocab_size: 38000
tgt_seq_length: 600
train_steps: 1000000
transformer_ff: 3072
valid_batch_size: 16
valid_steps: 5000
warmup_steps: 8000
word_vec_size: 768

Thanks!
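
(The run is started through OpenNMT-py's onmt_train entry point; a minimal sketch, assuming the YAML above is saved as train.yaml:)

import subprocess

# Equivalent to running `onmt_train -config train.yaml` from the shell.
subprocess.run(["onmt_train", "-config", "train.yaml"], check=True)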

vince62s commented 3 months ago

You need PyTorch 2.1 or 2.2. SDPA is buggy with 2.0.1.
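
A quick way to confirm the installed build meets that requirement before relaunching training (a sketch, not OpenNMT code):

import torch

# Per the advice above, the SDPA path needs PyTorch 2.1 or 2.2 rather than 2.0.1.
major, minor = (int(x) for x in torch.__version__.split("+")[0].split(".")[:2])
assert (major, minor) >= (2, 1), f"PyTorch {torch.__version__} is too old; upgrade to 2.1.x or 2.2.x"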