Hi:
When I run convert_hf_checkpoint.py, I get the following error. It seems my checkpoint folder does not contain "pytorch_model.bin.index.json", which leaves config == []. Could you help me? Thanks, Yao
Traceback (most recent call last):
  File "scripts/convert_hf_checkpoint.py", line 105, in <module>
    convert_hf_checkpoint(
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "scripts/convert_hf_checkpoint.py", line 30, in convert_hf_checkpoint
    config = ModelArgs.from_name(model_name)
  File "/workspace/project/gpt-fast/model.py", line 48, in from_name
    assert len(config) == 1, name
AssertionError: TinyLlama-1.1B-intermediate-step-480k-1T
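For context, the assertion at model.py line 48 fires when the checkpoint's directory name does not match any entry in gpt-fast's table of known model configurations. The sketch below illustrates that matching pattern; the table contents and the exact matching rule here are assumptions for illustration, not gpt-fast's actual code.

```python
# Hypothetical sketch of ModelArgs.from_name-style lookup.
# The config names and fields below are made up for illustration.
transformer_configs = {
    "7B": dict(n_layer=32, n_head=32, dim=4096),
    "13B": dict(n_layer=40, n_head=40, dim=5120),
}

def from_name(name: str) -> dict:
    # Collect every known config whose key appears in the checkpoint name.
    config = [key for key in transformer_configs if key in name]
    # "TinyLlama-1.1B-intermediate-step-480k-1T" matches no key, so
    # config == [] and the assertion fails, printing the model name.
    assert len(config) == 1, name
    return transformer_configs[config[0]]
```

Under this sketch, `from_name("Llama-2-7B")` succeeds because "7B" appears in the name, while the TinyLlama checkpoint name raises the AssertionError seen above; adding a matching entry to the config table (or renaming the checkpoint folder) would be the usual fix.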