microsoft / torchscale

Foundation Architecture for (M)LLMs
https://aka.ms/GeneralAI

RuntimeError: The size of tensor a (5) must match the size of tensor b (2) at non-singleton dimension 0 #72

Closed codinglover0111 closed 9 months ago

codinglover0111 commented 9 months ago
python train.py \
/home/sc0111/ai/torchscale/wikitext-103/wikitextdone \
--num-workers 4 \
--arch retnet_base \
--task language_modeling \
--optimizer adam --adam-betas "(0.9, 0.98)" \
--max-update 5000 \
--max-tokens 1024
python interactive.py \
/home/sc0111/ai/torchscale/wikitext-103/wikitextdone \
--num-workers 2 \
--path /home/sc0111/ai/torchscale/examples/fairseq/checkpoints/checkpoint_best.pt \
--task language_modeling \
--buffer-size 1024 \
--max-tokens 1024 \
--device-id 0
2023-10-12 19:52:52 | INFO | fairseq_cli.interactive | {'_name': None, 'common': {'_name': None, 'no_progress_bar': False, 'log_interval': 100, 'log_format': None, 'log_file': None, 'tensorboard_logdir': None, 'wandb_project': None, 'azureml_logging': False, 'seed': 1, 'cpu': False, 'tpu': False, 'bf16': False, 'memory_efficient_bf16': False, 'fp16': False, 'memory_efficient_fp16': False, 'fp16_no_flatten_grads': False, 'fp16_init_scale': 128, 'fp16_scale_window': None, 'fp16_scale_tolerance': 0.0, 'min_loss_scale': 0.0001, 'threshold_loss_scale': None, 'user_dir': None, 'empty_cache_freq': 0, 'all_gather_list_size': 16384, 'model_parallel_size': 1, 'quantization_config_path': None, 'profile': False, 'reset_logging': False, 'suppress_crashes': False, 'use_plasma_view': False, 'plasma_path': '/tmp/plasma', 'log_nvidia_smi': False}, 'common_eval': {'_name': None, 'path': '/home/sc0111/ai/torchscale/examples/fairseq/checkpoints/checkpoint_best.pt', 'post_process': None, 'quiet': False, 'model_overrides': '{}', 'results_path': None, 'is_moe': False}, 'distributed_training': {'_name': None, 'distributed_world_size': 1, 'distributed_rank': 0, 'distributed_backend': 'nccl', 'distributed_init_method': None, 'distributed_port': -1, 'device_id': 0, 'distributed_no_spawn': False, 'ddp_backend': 'pytorch_ddp', 'bucket_cap_mb': 25, 'fix_batches_to_gpus': False, 'find_unused_parameters': False, 'fast_stat_sync': False, 'heartbeat_timeout': -1, 'broadcast_buffers': False, 'slowmo_momentum': None, 'slowmo_algorithm': 'LocalSGD', 'localsgd_frequency': 3, 'nprocs_per_node': 1, 'pipeline_model_parallel': False, 'pipeline_balance': None, 'pipeline_devices': None, 'pipeline_chunks': 0, 'pipeline_encoder_balance': None, 'pipeline_encoder_devices': None, 'pipeline_decoder_balance': None, 'pipeline_decoder_devices': None, 'pipeline_checkpoint': 'never', 'zero_sharding': 'none', 'fp16': False, 'memory_efficient_fp16': False, 'tpu': False, 'no_reshard_after_forward': False, 'fp32_reduce_scatter': False, 'cpu_offload': False, 'use_sharded_state': False, 'distributed_num_procs': 1}, 'dataset': {'_name': None, 'num_workers': 1, 'num_workers_valid': 0, 'skip_invalid_size_inputs_valid_test': False, 'max_tokens': 1024, 'batch_size': None, 'required_batch_size_multiple': 8, 'required_seq_len_multiple': 1, 'dataset_impl': None, 'data_buffer_size': 10, 'train_subset': 'train', 'valid_subset': 'valid', 'combine_valid_subsets': None, 'ignore_unused_valid_subsets': False, 'validate_interval': 1, 'validate_interval_updates': 0, 'validate_after_updates': 0, 'fixed_validation_seed': None, 'disable_validation': False, 'max_tokens_valid': 1024, 'batch_size_valid': None, 'max_valid_steps': None, 'curriculum': 0, 'gen_subset': 'test', 'num_shards': 1, 'shard_id': 0}, 'optimization': {'_name': None, 'max_epoch': 0, 'max_update': 0, 'stop_time_hours': 0.0, 'clip_norm': 0.0, 'sentence_avg': False, 'update_freq': [1], 'lr': [0.25], 'stop_min_lr': -1.0, 'use_bmuf': False}, 'checkpoint': {'_name': None, 'save_dir': 'checkpoints', 'restore_file': 'checkpoint_last.pt', 'finetune_from_model': None, 'reset_dataloader': False, 'reset_lr_scheduler': False, 'reset_meters': False, 'reset_optimizer': False, 'optimizer_overrides': '{}', 'save_interval': 1, 'save_interval_updates': 0, 'keep_interval_updates': -1, 'keep_last_epochs': -1, 'keep_best_checkpoints': -1, 'no_save': False, 'no_epoch_checkpoints': False, 'no_last_checkpoints': False, 'no_best_checkpoints': False, 'no_save_optimizer_state': False, 
'no_save_optimizer_state_on_training_finished': False, 'symlink_best_and_last_checkpoints': False, 'best_checkpoint_metric': 'loss', 'maximize_best_checkpoint_metric': False, 'patience': -1, 'checkpoint_suffix': '', 'checkpoint_shard_count': 1, 'load_checkpoint_on_all_dp_ranks': False, 'write_checkpoints_asynchronously': False, 's3_upload_path': None, 'model_parallel_size': 1}, 'bmuf': {'_name': None, 'block_lr': 1.0, 'block_momentum': 0.875, 'global_sync_iter': 50, 'warmup_iterations': 500, 'use_nbm': False, 'average_sync': False, 'distributed_world_size': 1}, 'generation': {'_name': None, 'beam': 5, 'nbest': 1, 'max_len_a': 0.0, 'max_len_b': 200, 'min_len': 1, 'match_source_len': False, 'unnormalized': False, 'no_early_stop': False, 'no_beamable_mm': False, 'lenpen': 1.0, 'unkpen': 0.0, 'replace_unk': None, 'sacrebleu': False, 'score_reference': False, 'prefix_size': 0, 'no_repeat_ngram_size': 0, 'sampling': False, 'sampling_topk': -1, 'sampling_topp': -1.0, 'constraints': None, 'temperature': 1.0, 'diverse_beam_groups': -1, 'diverse_beam_strength': 0.5, 'diversity_rate': -1.0, 'print_alignment': None, 'print_step': False, 'lm_path': None, 'lm_weight': 0.0, 'iter_decode_eos_penalty': 0.0, 'iter_decode_max_iter': 10, 'iter_decode_force_max_iter': False, 'iter_decode_with_beam': 1, 'iter_decode_with_external_reranker': False, 'retain_iter_history': False, 'retain_dropout': False, 'retain_dropout_modules': None, 'decoding_format': None, 'no_seed_provided': False}, 'eval_lm': {'_name': None, 'output_word_probs': False, 'output_word_stats': False, 'context_window': 0, 'softmax_batch': 9223372036854775807, 'stats_path': None, 'max_valid_steps': None}, 'interactive': {'_name': None, 'buffer_size': 1, 'input': '-'}, 'model': None, 'task': {'_name': 'language_modeling', 'data': '/home/sc0111/ai/torchscale/wikitext-103/wikitextdone', 'sample_break_mode': 'none', 'tokens_per_sample': 1024, 'output_dictionary_size': -1, 'self_target': False, 'future_target': False, 'past_target': False, 'add_bos_token': False, 'max_source_positions': None, 'max_target_positions': None, 'shorten_method': 'none', 'shorten_data_split_list': '', 'pad_to_fixed_length': False, 'pad_to_fixed_bsz': False, 'seed': 1, 'batch_size': None, 'batch_size_valid': None, 'dataset_impl': None, 'data_buffer_size': 10, 'tpu': False, 'use_plasma_view': False, 'plasma_path': '/tmp/plasma'}, 'criterion': {'_name': 'cross_entropy', 'sentence_avg': True}, 'optimizer': None, 'lr_scheduler': {'_name': 'fixed', 'force_anneal': None, 'lr_shrink': 0.1, 'warmup_updates': 0, 'lr': [0.25]}, 'scoring': {'_name': 'bleu', 'pad': 1, 'eos': 2, 'unk': 3}, 'bpe': None, 'tokenizer': None}
2023-10-12 19:52:52 | INFO | fairseq.tasks.language_modeling | dictionary: 267744 types
2023-10-12 19:52:52 | INFO | fairseq_cli.interactive | loading model(s) from /home/sc0111/ai/torchscale/examples/fairseq/checkpoints/checkpoint_best.pt
2023-10-12 19:52:52 | INFO | fairseq.checkpoint_utils | load_model_ensemble_and_task is_moe=False
2023-10-12 19:53:02 | INFO | fairseq_cli.interactive | NOTE: hypothesis and token scores are output in base 2
2023-10-12 19:53:02 | INFO | fairseq_cli.interactive | Type the input sentence and press return:
hello?
Traceback (most recent call last):
  File "/home/sc0111/ai/torchscale/examples/fairseq/interactive.py", line 11, in <module>
    cli_main()
  File "/home/sc0111/.pyenv/versions/ai/lib/python3.10/site-packages/fairseq_cli/interactive.py", line 312, in cli_main
    distributed_utils.call_main(convert_namespace_to_omegaconf(args), main)
  File "/home/sc0111/.pyenv/versions/ai/lib/python3.10/site-packages/fairseq/distributed/utils.py", line 376, in call_main
    main(cfg, **kwargs)
  File "/home/sc0111/.pyenv/versions/ai/lib/python3.10/site-packages/fairseq_cli/interactive.py", line 227, in main
    translations = task.inference_step(
  File "/home/sc0111/.pyenv/versions/ai/lib/python3.10/site-packages/fairseq/tasks/language_modeling.py", line 335, in inference_step
    return generator.generate(
  File "/home/sc0111/.pyenv/versions/ai/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/sc0111/.pyenv/versions/ai/lib/python3.10/site-packages/fairseq/sequence_generator.py", line 182, in generate
    return self._generate(sample, **kwargs)
  File "/home/sc0111/.pyenv/versions/ai/lib/python3.10/site-packages/fairseq/sequence_generator.py", line 321, in _generate
    lprobs, avg_attn_scores = self.model.forward_decoder(
  File "/home/sc0111/.pyenv/versions/ai/lib/python3.10/site-packages/fairseq/sequence_generator.py", line 775, in forward_decoder
    decoder_out = model.decoder.forward(
  File "/home/sc0111/ai/torchscale/examples/fairseq/models/retnet.py", line 251, in forward
    return super().forward(src_tokens, **kwargs)
  File "/home/sc0111/ai/torchscale/torchscale/architecture/retnet.py", line 366, in forward
    x, l_aux_i = layer(
  File "/home/sc0111/.pyenv/versions/ai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sc0111/ai/torchscale/torchscale/architecture/retnet.py", line 165, in forward
    x = self.retention(
  File "/home/sc0111/.pyenv/versions/ai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sc0111/ai/torchscale/torchscale/component/multiscale_retention.py", line 190, in forward
    output = self.recurrent_forward(qr, kr, v, inner_mask, incremental_state)
  File "/home/sc0111/ai/torchscale/torchscale/component/multiscale_retention.py", line 102, in recurrent_forward
    scale = prev_scale * decay + 1
RuntimeError: The size of tensor a (5) must match the size of tensor b (2) at non-singleton dimension 0
sunyt32 commented 9 months ago

The problem is caused by the beam search algorithm in the fairseq package. The error comes from the reorder logic here: https://github.com/microsoft/torchscale/blob/main/examples/fairseq/models/retnet.py#L256
When reordering the intermediate beam search results, the model selects prev_scale in the wrong way.

In our experiments we don't use beam search; nucleus sampling is sufficient for LLMs.
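For example, with fairseq's standard sampling flags it would look roughly like this (a sketch reusing the data and checkpoint paths from the commands above):

python interactive.py \
/home/sc0111/ai/torchscale/wikitext-103/wikitextdone \
--path /home/sc0111/ai/torchscale/examples/fairseq/checkpoints/checkpoint_best.pt \
--task language_modeling \
--sampling --sampling-topp 0.9 --beam 1 \
--max-tokens 1024 \
--device-id 0

Note that fairseq_cli.interactive still runs the same SequenceGenerator code path internally, so the reorder fix described below may still be needed.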

codinglover0111 commented 9 months ago

So how can I fix this?

sunyt32 commented 9 months ago

If you want to use fairseq_cli.interactive, you can modify the reorder_incremental_state_scripting function mentioned above: when reordering the incremental_state, skip incremental_state["scale"].
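A minimal sketch of that change, assuming the decoder's reorder_incremental_state_scripting in examples/fairseq/models/retnet.py currently index_selects every cached tensor (the exact signature and key names may differ in your checkout):

from typing import Dict, Optional

from torch import Tensor


def reorder_incremental_state_scripting(
    self,
    incremental_state: Dict[str, Dict[str, Optional[Tensor]]],
    new_order: Tensor,
):
    for module in incremental_state:
        for key in incremental_state[module]:
            # Per the suggestion above, leave the "scale" entry untouched:
            # reordering it along the beam dimension is what triggers the
            # size-mismatch error in recurrent_forward
            # (scale = prev_scale * decay + 1).
            if key == "scale":
                continue
            value = incremental_state[module][key]
            if value is not None:
                incremental_state[module][key] = value.index_select(0, new_order)

With this change only the batched key/value cache is permuted across beams, while the scale statistics are left as they were.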