Lightning-AI / lightning-thunder

Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors at once, across one or thousands of GPUs.
Apache License 2.0

unhashable type: slice for Thunder and Nous-Hermes-13b #1092

Closed mpatel31415 closed 3 weeks ago

mpatel31415 commented 3 weeks ago

šŸ› Bug

When running the benchmark we get an error:

TypeError: unhashable type: 'slice'

Models affected: 'dolly-v2-3b', 'Nous-Hermes-13b'
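For context, this TypeError is raised whenever Python tries to hash a slice object, most commonly when a slice is used to index a mapping rather than a sequence. A minimal, standalone illustration of the error class (not Thunder code, and unrelated to the traceback in this report):

# slices are unhashable, so using one as a dict key raises this TypeError
lookup = {0: "a", 1: "b"}
try:
    lookup[0:2]  # indexing a dict with a slice -> hash(slice(0, 2)) fails
except TypeError as exc:
    print(exc)  # unhashable type: 'slice'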

To Reproduce

Please use:

1 node with 8 GPUs. Image "INTERNAL_IMAGE:pjnl-20240830"

Then execute:

torchrun --standalone --max-restarts=0 --no-python --nproc-per-node=8 python /opt/pytorch/lightning-thunder/thunder/benchmarks/benchmark_litgpt.py \
    --model_name Nous-Hermes-13b \
    --distributed_mode fsdp \
    --shard_mode zero2 \
    --compile thunder \
    --checkpoint_activations False \
    --low_precision_mode none  \
    --micro_batch_size 1
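If a full 8-GPU node is not available, the same failure may also be reproducible in a single-process run. The following is an untested sketch that assumes benchmark_litgpt.py accepts the same flags when launched directly, with the distributed options dropped:

python /opt/pytorch/lightning-thunder/thunder/benchmarks/benchmark_litgpt.py \
    --model_name Nous-Hermes-13b \
    --compile thunder \
    --checkpoint_activations False \
    --low_precision_mode none \
    --micro_batch_size 1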

Expected behavior

We should not get this error :)

Environment

system.device_product_name: DGXH100
system.gpu_driver_version: 535.129.03
libraries.cuda: 12.6.1.005
libraries.pip.lightning: 2.4.0.dev20240728
libraries.pip.lightning-thunder: 0.2.0.dev0
libraries.pip.lightning-utilities: 0.11.6
libraries.pip.litgpt: 0.4.11
libraries.pip.nvfuser: 0.2.10+git58dfdc1
libraries.pip.pytorch-lightning: 2.4.0
libraries.pip.torch: 2.5.0a0+git578b8d7
libraries.pip.torchmetrics: 1.4.1
libraries.pip.torchvision: 0.19.0a0+d23a6e1