cocktailpeanut / dalai

The simplest way to run LLaMA on your local machine
https://cocktailpeanut.github.io/dalai

npx dalai llama: FileNotFoundError: [Errno 2] No such file or directory: 'models/7B//consolidated.00.pth' #13

Open · miguelemosreverte opened this issue 1 year ago

miguelemosreverte commented 1 year ago

When running `npx dalai llama`, the following error occurs:

downloading consolidated.00.pth 100%[======================================================================>] done      
downloading tokenizer_checklist.chk 100%[==================================================================>] done      
downloading tokenizer.model 100%[==========================================================================>] done      
python3 convert-pth-to-ggml.py models/7B/ 1
exit

The default interactive shell is now zsh.
To update your account to use zsh, please run `chsh -s /bin/zsh`.
For more details, please visit https://support.apple.com/kb/HT208050.
bash-3.2$ python3 convert-pth-to-ggml.py models/7B/ 1
{'dim': 4096, 'multiple_of': 256, 'n_heads': 32, 'n_layers': 32, 'norm_eps': 1e-06, 'vocab_size': 32000}
n_parts =  1
Processing part  0
Traceback (most recent call last):
  File "/Users/miguel_lemos/dalai/convert-pth-to-ggml.py", line 89, in <module>
    model = torch.load(fname_model, map_location="cpu")
  File "/Users/miguel_lemos/Library/Python/3.9/lib/python/site-packages/torch/serialization.py", line 771, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/Users/miguel_lemos/Library/Python/3.9/lib/python/site-packages/torch/serialization.py", line 270, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/Users/miguel_lemos/Library/Python/3.9/lib/python/site-packages/torch/serialization.py", line 251, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'models/7B//consolidated.00.pth'

Hardware Overview:

Model Name: MacBook Pro
Model Identifier: Mac14,6
Model Number: Z176000J2KS/A
Chip: Apple M2 Max
Total Number of Cores: 12 (8 performance and 4 efficiency)
Memory: 64 GB
System Firmware Version: 8419.80.7
OS Loader Version: 8419.80.7
Serial Number (system): VC41C9DXRY
Hardware UUID: 5E24912F-6D1E-5098-81AE-841CE11FB23F
Provisioning UDID: 00006021-0014199E0187401E
Activation Lock Status: Enabled
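
For anyone debugging this: the conversion step fails because consolidated.00.pth never actually landed on disk, even though the download bar reports done. A quick sanity check before retrying anything (a sketch, assuming the default install location under ~/dalai):

# Verify the 7B weights exist and are plausibly sized before converting
ls -lh ~/dalai/models/7B/consolidated.00.pth
# The 7B checkpoint is roughly 13 GB; a missing or tiny file means the
# download silently failed and should be retried.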

ekp1k80 commented 1 year ago

Same here. I'm trying to download it again, but I don't know if that's going to work.

marcuswestin commented 1 year ago

Try deleting the data directory and retrying (`rm -rf ~/dalai`)
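
Spelled out, that reset looks something like this (destructive: it deletes any weights already downloaded):

# Remove the dalai data directory, then run the installer from scratch
rm -rf ~/dalai
npx dalai llama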

ekp1k80 commented 1 year ago

I checked out #16 and moved the models folder from ~/dalai into the llama.cpp folder of that checkout. That fixed it for me, and it lets you run everything locally.
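
Roughly, the steps were (paths here are assumptions; ~/dalai-pr16 stands in for wherever you checked out #16):

# Reuse the already-downloaded weights inside the #16 checkout
mv ~/dalai/models ~/dalai-pr16/llama.cpp/models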

alterx commented 1 year ago

Same issue for me here, @miguelemosreverte

I'm running it on a 2019 16-inch MBP: i9, 32 GB RAM, AMD Radeon Pro 5500M 8 GB

alterx commented 1 year ago

If you hit this issue, for the time being running it from a local clone seems to be the way to go:

git clone git@github.com:cocktailpeanut/dalai.git
cd dalai/
npm i
npm run dalai:llama install 7B
npm start
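
Once npm start is running, the web UI should be reachable at http://localhost:3000 (dalai's default port, if memory serves).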

It seems a new release is needed before these changes are available through npx.
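
Until that release lands, note that npx may keep serving a cached copy of the package. One way to force a fresh fetch (a sketch; ~/.npm/_npx is the default npx cache location on npm 7+):

# Drop npx's package cache so the next npx dalai llama pulls the latest version
rm -rf ~/.npm/_npx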