I am trying to adapt TinyLlama to the Mistral tokenizer, and it shows:
```
Traceback (most recent call last):
  File "/home/jue/zett/scripts/transfer.py", line 92, in <module>
    load_params(args.target_model, revision=args.revision)
  File "/home/jue/zett/zett/utils.py", line 736, in load_params
    files = [cached_file(model_name_or_path, "flax_model.msgpack", **kwargs)]
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jue/miniforge3/envs/zett/lib/python3.11/site-packages/transformers/utils/hub.py", line 453, in cached_file
    raise EnvironmentError(
OSError: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T does not appear to have a file named flax_model.msgpack. Checkout 'https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T/tree/main' for available files.
```
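From the traceback, `load_params` looks for a `flax_model.msgpack` in the target repo, but the TinyLlama repo on the Hub apparently ships only PyTorch weights. A minimal sketch of one possible workaround (not from zett; it assumes a `transformers` version with Flax Llama support) is to convert the checkpoint locally first and point the script at the resulting directory:

```python
# Sketch: convert the PyTorch-only TinyLlama checkpoint to Flax so that
# flax_model.msgpack exists locally. Assumes transformers is installed
# with Flax/JAX support and a version that includes FlaxLlamaForCausalLM.
from transformers import FlaxAutoModelForCausalLM

model = FlaxAutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
    from_pt=True,  # load the PyTorch weights and convert them to Flax
)
model.save_pretrained("./TinyLlama-1.1B-flax")  # writes flax_model.msgpack
```

Passing the local directory as the target (e.g. `--target_model ./TinyLlama-1.1B-flax`, assuming that is how `transfer.py` names the flag) should then let `cached_file` find the msgpack file.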
The command is: