EleutherAI / gpt-neox

An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
https://www.eleuther.ai/
Apache License 2.0

Update text_generation_utils.py to work with pipe_parallel_size of 0 #1316

Open markNZed opened 3 weeks ago

markNZed commented 3 weeks ago

While training on a single GPU, we set `"pipe_parallel_size": 0` to get the evaluation step running, but this caused a problem in `text_generation_utils.py`, which did not handle the data structure the model returns when pipeline parallelism is disabled.
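
One way to make the generation utilities tolerant of both configurations is to normalize the forward output before using it. The sketch below is illustrative only: `_extract_logits` is a hypothetical helper, and it assumes the pipeline path returns a tuple whose first element is the logits while the non-pipeline path returns a bare tensor; the exact structures gpt-neox returns may differ.

```python
import torch


def _extract_logits(model_output):
    """Normalize a model forward output to a logits tensor.

    Assumption (hypothetical, for illustration): with pipeline
    parallelism enabled the forward pass returns a tuple such as
    (logits, ...), while with "pipe_parallel_size": 0 it returns
    the logits tensor directly.
    """
    if model_output is None:
        # Intermediate pipeline stages may yield no output.
        return None
    if isinstance(model_output, torch.Tensor):
        # Non-pipeline path: the tensor is already the logits.
        return model_output
    # Pipeline path: assume the logits are the first element.
    return model_output[0]
```

With a helper like this, the call sites in `text_generation_utils.py` could consume `_extract_logits(output)` instead of branching on `pipe_parallel_size` at each use.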