alexrozanski / llama.swift

Fork of llama.cpp, supporting Facebook's LLaMA model in Swift
MIT License
154 stars · 10 forks

An error occurred while I was configuring #10

Open surgit opened 11 months ago

surgit commented 11 months ago


Complete error message:

> python3 -u /var/folders/nx/t5wcvmf92yg7n3l4_mnv98_r0000gp/T/FB2F69BC-BCF7-4DDD-965C-EDF15B81DD2F/convert-pth-to-ggml.py /Users/wu/Library/Application Support/com.alexrozanski.LlamaChat/models/4CCA4C99-541C-418C-B5C2-F9A11CEE8896/7B 1
Loading model file /Users/wu/Library/Application Support/com.alexrozanski.LlamaChat/models/4CCA4C99-541C-418C-B5C2-F9A11CEE8896/7B/consolidated.00.pth
Loading vocab file /Users/wu/Library/Application Support/com.alexrozanski.LlamaChat/models/4CCA4C99-541C-418C-B5C2-F9A11CEE8896/tokenizer.model
Writing vocab...

Traceback (most recent call last):
  File "/var/folders/nx/t5wcvmf92yg7n3l4_mnv98_r0000gp/T/FB2F69BC-BCF7-4DDD-965C-EDF15B81DD2F/convert-pth-to-ggml.py", line 11, in <module>
    convert.main(['--outtype', 'f16' if args.ftype == 1 else 'f32', '--', args.dir_model])
  File "/private/var/folders/nx/t5wcvmf92yg7n3l4_mnv98_r0000gp/T/FB2F69BC-BCF7-4DDD-965C-EDF15B81DD2F/convert.py", line 1144, in main
    OutputFile.write_all(outfile, params, model, vocab)
  File "/private/var/folders/nx/t5wcvmf92yg7n3l4_mnv98_r0000gp/T/FB2F69BC-BCF7-4DDD-965C-EDF15B81DD2F/convert.py", line 953, in write_all
    for i, ((name, lazy_tensor), ndarray) in enumerate(zip(model.items(), ndarrays)):
  File "/private/var/folders/nx/t5wcvmf92yg7n3l4_mnv98_r0000gp/T/FB2F69BC-BCF7-4DDD-965C-EDF15B81DD2F/convert.py", line 875, in bounded_parallel_map
    result = futures.pop(0).result()
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 438, in result
    return self.__get_result()
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
    raise self._exception
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/private/var/folders/nx/t5wcvmf92yg7n3l4_mnv98_r0000gp/T/FB2F69BC-BCF7-4DDD-965C-EDF15B81DD2F/convert.py", line 950, in do_item
    return lazy_tensor.load().to_ggml().ndarray
  File "/private/var/folders/nx/t5wcvmf92yg7n3l4_mnv98_r0000gp/T/FB2F69BC-BCF7-4DDD-965C-EDF15B81DD2F/convert.py", line 489, in load
    ret = self._load()
  File "/private/var/folders/nx/t5wcvmf92yg7n3l4_mnv98_r0000gp/T/FB2F69BC-BCF7-4DDD-965C-EDF15B81DD2F/convert.py", line 497, in load
    return self.load().astype(data_type)
  File "/private/var/folders/nx/t5wcvmf92yg7n3l4_mnv98_r0000gp/T/FB2F69BC-BCF7-4DDD-965C-EDF15B81DD2F/convert.py", line 489, in load
    ret = self._load()
  File "/private/var/folders/nx/t5wcvmf92yg7n3l4_mnv98_r0000gp/T/FB2F69BC-BCF7-4DDD-965C-EDF15B81DD2F/convert.py", line 695, in load
    return UnquantizedTensor(storage.load(storage_offset, elm_count).reshape(size))
  File "/private/var/folders/nx/t5wcvmf92yg7n3l4_mnv98_r0000gp/T/FB2F69BC-BCF7-4DDD-965C-EDF15B81DD2F/convert.py", line 679, in load
    raise Exception("tensor stored in unsupported format")
Exception: tensor stored in unsupported format
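For context on what this final exception means: the converter's lazy loader only knows how to read a fixed set of PyTorch storage types, and raises "tensor stored in unsupported format" for anything else. The sketch below is a hedged illustration of that check, not the actual convert.py code; the supported set and function name are assumptions for illustration only.

```python
# Hypothetical sketch of the check behind "tensor stored in unsupported format".
# SUPPORTED_STORAGE_TYPES and load_tensor_storage are illustrative names, not
# the real identifiers in convert.py; the real supported set may differ.
SUPPORTED_STORAGE_TYPES = {"FloatStorage", "HalfStorage"}

def load_tensor_storage(storage_type: str) -> str:
    """Reject checkpoint tensors whose storage type the converter cannot read."""
    if storage_type not in SUPPORTED_STORAGE_TYPES:
        raise Exception("tensor stored in unsupported format")
    return storage_type  # placeholder for the actual ndarray load

load_tensor_storage("HalfStorage")        # a recognized type loads normally
# load_tensor_storage("BFloat16Storage")  # an unrecognized type would raise
```

If the checkpoint really does contain an unsupported storage type, the fix is usually on the model-file side (re-download or re-export the weights) rather than in the converter invocation.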