shansongliu / MU-LLaMA

MU-LLaMA: Music Understanding Large Language Model
GNU General Public License v3.0

mert_path #7

Closed RevolGMPHL closed 1 year ago

RevolGMPHL commented 1 year ago

I got this error: `LLaMA_adapter.__init__() missing 1 required positional argument: 'mert_path'`. Which one is the mert_path?

shansongliu commented 1 year ago

Can you provide a more detailed error report?

RevolGMPHL commented 1 year ago

i see the error in this code:

model = LLaMA_adapter(llama_ckpt_dir, llama_tokenzier_path, knn=knn, phase=phase)

but the `LLaMA_adapter` class's `__init__` takes three paths: `__init__(self, llama_ckpt_dir, llama_tokenizer, mert_path, ...)`
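For context, Python raises this `TypeError` whenever a required positional parameter is left unfilled. A minimal standalone sketch (not the actual MU-LLaMA class; the real `LLaMA_adapter` has many more parameters) reproduces the same message:

```python
# Minimal sketch that mirrors only the relevant part of the signature.
class LLaMA_adapter:
    def __init__(self, llama_ckpt_dir, llama_tokenizer, mert_path,
                 knn=False, phase="finetune"):
        self.mert_path = mert_path

try:
    # Only two of the three required paths are passed, as in gradio_app.py.
    LLaMA_adapter("ckpt_dir", "tokenizer.model", knn=True, phase="inference")
except TypeError as e:
    print(e)  # ... missing 1 required positional argument: 'mert_path'
```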

shansongliu commented 1 year ago

Can you copy and paste the full traceback reported by the Python interpreter?

RevolGMPHL commented 1 year ago

Traceback (most recent call last):
  File "/disk_1/aaa/MU-LLaMA/MU-LLaMA/gradio_app.py", line 29, in <module>
    model = llama.load(args.model, args.llama_dir, knn=True, llama_type=args.llama_type)
  File "/disk_1/aaa/MU-LLaMA/MU-LLaMA/llama/llama_adapter.py", line 398, in load
    model = LLaMA_adapter(
TypeError: LLaMA_adapter.__init__() missing 1 required positional argument: 'mert_path'

shansongliu commented 1 year ago

I will ask the owner to fix this issue; it seems that gradio_app.py has not been updated.

RevolGMPHL commented 1 year ago

thx~~~

crypto-code commented 1 year ago

I have fixed the issue (9fd1dac727cc8e7e2ed00eea8cd57354a7388ba9) by adding the MERT argument to the load function in the llama_adapter.py file.

# In case of network issues, the model files for MERT can be downloaded from
# https://huggingface.co/m-a-p/MERT-v1-330M;
# then set the MERT argument to the directory containing the model files.
model = LLaMA_adapter(llama_ckpt_dir, llama_tokenzier_path, "m-a-p/MERT-v1-330M", knn=knn, phase=phase)

I hope this fixes the issue for you.

shansongliu commented 1 year ago

Great, do we still allow a customizable input path for pretrained models like MERT? Some users may prefer to use an offline, pre-downloaded model.

crypto-code commented 1 year ago

I have added a command-line argument for the path to the MERT model (ebc5c78032258710e288ff3d7b53acede8cdf2fd). It can be set to the path of the downloaded MERT model.
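A sketch of what such a flag might look like with `argparse`; the actual flag name and default in gradio_app.py may differ, and the local directory shown is purely illustrative:

```python
import argparse

# Illustrative only: the real flag name and default in gradio_app.py may differ.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--mert_path",
    type=str,
    default="m-a-p/MERT-v1-330M",  # Hugging Face hub ID, used when online
    help="Hub ID or local directory containing the downloaded MERT checkpoint",
)

# Pointing the flag at a local directory avoids downloading from the hub.
args = parser.parse_args(["--mert_path", "/disk_1/models/MERT-v1-330M"])
print(args.mert_path)  # /disk_1/models/MERT-v1-330M
```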

shansongliu commented 1 year ago

Good! See if this fixes the problem you encountered, @GMPHL. You can close the issue if it is resolved. We would appreciate you starring our repo 😊.

RevolGMPHL commented 1 year ago

thx~~~ it works