alexrozanski / LlamaChat

Chat with your favourite LLaMA models in a native macOS app
https://llamachat.app
MIT License
1.45k stars · 56 forks

Error using pth format model #36

Open · realalexsun opened this issue 1 year ago

realalexsun commented 1 year ago

Please forgive my ignorance... Problem:

`Exception: tensor stored in unsupported format`

Full Error that popped up:

```
Traceback (most recent call last):
  File "/var/folders/dl/8yzjfxfn26sf347jggtzb30c0000gn/T/0D6EEEEF-7FD3-417C-AFC4-71D9A9E81116/convert-pth-to-ggml.py", line 11, in <module>
    convert.main(['--outtype', 'f16' if args.ftype == 1 else 'f32', '--', args.dir_model])
  File "/private/var/folders/dl/8yzjfxfn26sf347jggtzb30c0000gn/T/0D6EEEEF-7FD3-417C-AFC4-71D9A9E81116/convert.py", line 1144, in main
    OutputFile.write_all(outfile, params, model, vocab)
  File "/private/var/folders/dl/8yzjfxfn26sf347jggtzb30c0000gn/T/0D6EEEEF-7FD3-417C-AFC4-71D9A9E81116/convert.py", line 953, in write_all
    for i, ((name, lazy_tensor), ndarray) in enumerate(zip(model.items(), ndarrays)):
  File "/private/var/folders/dl/8yzjfxfn26sf347jggtzb30c0000gn/T/0D6EEEEF-7FD3-417C-AFC4-71D9A9E81116/convert.py", line 875, in bounded_parallel_map
    result = futures.pop(0).result()
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 438, in result
    return self.__get_result()
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
    raise self._exception
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/private/var/folders/dl/8yzjfxfn26sf347jggtzb30c0000gn/T/0D6EEEEF-7FD3-417C-AFC4-71D9A9E81116/convert.py", line 950, in do_item
    return lazy_tensor.load().to_ggml().ndarray
  File "/private/var/folders/dl/8yzjfxfn26sf347jggtzb30c0000gn/T/0D6EEEEF-7FD3-417C-AFC4-71D9A9E81116/convert.py", line 489, in load
    ret = self._load()
  File "/private/var/folders/dl/8yzjfxfn26sf347jggtzb30c0000gn/T/0D6EEEEF-7FD3-417C-AFC4-71D9A9E81116/convert.py", line 497, in load
    return self.load().astype(data_type)
  File "/private/var/folders/dl/8yzjfxfn26sf347jggtzb30c0000gn/T/0D6EEEEF-7FD3-417C-AFC4-71D9A9E81116/convert.py", line 489, in load
    ret = self._load()
  File "/private/var/folders/dl/8yzjfxfn26sf347jggtzb30c0000gn/T/0D6EEEEF-7FD3-417C-AFC4-71D9A9E81116/convert.py", line 695, in load
    return UnquantizedTensor(storage.load(storage_offset, elm_count).reshape(size))
  File "/private/var/folders/dl/8yzjfxfn26sf347jggtzb30c0000gn/T/0D6EEEEF-7FD3-417C-AFC4-71D9A9E81116/convert.py", line 679, in load
    raise Exception("tensor stored in unsupported format")
Exception: tensor stored in unsupported format
```
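For context on the final frames: `convert.py` raises this when a tensor's on-disk storage isn't in a layout it recognizes, which typically means the `.pth` checkpoint was written by a PyTorch version the bundled converter doesn't understand. A quick way to check which container format a checkpoint uses is sketched below; this is an illustrative helper, not code from LlamaChat or llama.cpp, and `sniff_pth_format` is a hypothetical name:

```python
import zipfile

def sniff_pth_format(path):
    """Best-effort guess at how a .pth checkpoint is stored on disk.

    torch.save() has written a zip container since PyTorch 1.6; older
    versions wrote a bare pickle stream. A converter that understands
    only one layout fails on the other with errors like
    'tensor stored in unsupported format'.
    """
    if zipfile.is_zipfile(path):
        return "zip"  # modern torch.save (PyTorch >= 1.6)
    with open(path, "rb") as f:
        magic = f.read(1)
    if magic == b"\x80":  # pickle PROTO opcode
        return "legacy-pickle"  # pre-1.6 torch.save
    return "unknown"
```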

hyeonchang commented 1 year ago

Same issue.

hyeonchang commented 1 year ago

@realalexsun

Download these two files:

- https://github.com/ggerganov/llama.cpp/blob/master/convert-pth-to-ggml.py
- https://github.com/ggerganov/llama.cpp/blob/master/convert.py

Copy the downloaded files into the folder below:

`/Applications/LlamaChat.app/Contents/Resources/llama.swift_llama.bundle/Contents/Resources`

RESOLVED!!!
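The two steps above can be sketched as a short shell script. The `raw.githubusercontent.com` URLs are assumed from the blob links in this thread (and may no longer resolve, since these scripts were later removed from llama.cpp's master branch), so the commands are printed rather than executed:

```shell
# Dry-run sketch of the fix above. Nothing is downloaded or overwritten;
# verify the URLs first, then run the printed commands by hand.
DEST="/Applications/LlamaChat.app/Contents/Resources/llama.swift_llama.bundle/Contents/Resources"
BASE="https://raw.githubusercontent.com/ggerganov/llama.cpp/master"  # assumed raw URL

for f in convert-pth-to-ggml.py convert.py; do
  echo curl -fsSL -o "$DEST/$f" "$BASE/$f"
done
```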

realalexsun commented 1 year ago

> @realalexsun
>
> Download these two files:
>
> - https://github.com/ggerganov/llama.cpp/blob/master/convert-pth-to-ggml.py
> - https://github.com/ggerganov/llama.cpp/blob/master/convert.py
>
> Copy the downloaded files into the folder below:
>
> `/Applications/LlamaChat.app/Contents/Resources/llama.swift_llama.bundle/Contents/Resources`
>
> RESOLVED!!!

Thanks

LucaColonnello commented 1 year ago

These files are no longer available in the llama.cpp codebase. I went back in the history and grabbed them, but maybe this needs an update on the LlamaChat side?