Closed — svaidyans closed this 4 months ago
Is there any news regarding this problem? I am seeing the same issue.
@svaidyans, @cheesgno: I am facing a slightly different problem related to 'model_name'. Could you please tell me whether you were able to configure the model name properly?
I have model_name: "openlm-research/open_llama_7b" and tokenizer_name: "openlm-research/open_llama_7b"
in the finetune.yaml file. Do I need to download the open_llama_7b binary file and place it under the /gpt4all-training
directory? And what about the tokenizer?
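For what it's worth, a minimal sketch of how those two finetune.yaml fields behave (illustrative only — the real script uses a proper YAML loader). Hub-style IDs of the form "org/name" are resolved and cached automatically by transformers' `from_pretrained`, so no binary needs to be placed manually in the training directory:

```python
# The finetune.yaml fields as they would be loaded (toy parser for
# illustration; the actual script uses a YAML library).
config_text = """\
model_name: "openlm-research/open_llama_7b"
tokenizer_name: "openlm-research/open_llama_7b"
"""

config = {}
for line in config_text.splitlines():
    key, _, value = line.partition(":")
    config[key.strip()] = value.strip().strip('"')

# Hub-style IDs ("org/name") are downloaded and cached automatically
# (under ~/.cache/huggingface) when passed to from_pretrained, so no
# manual download into /gpt4all-training should be needed.
assert config["model_name"] == "openlm-research/open_llama_7b"
assert config["tokenizer_name"] == "openlm-research/open_llama_7b"
```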
Closing this issue as stale. A lot has changed since Nomic last trained a text completion model.
Issue you'd like to raise.
Hi,
I am trying to fine-tune the Falcon model. Here are my parameters:
When running the suggested command for fine-tuning:
I am getting a KeyError on 'weight_decay'; the traceback is below:
Suggestion:
Could you please advise how to get this fixed? Thanks in advance.
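As a general note, a KeyError like this usually means the training script does a direct `config["weight_decay"]` lookup and the key is simply absent from the loaded YAML. A minimal sketch of the failure mode and a defensive fallback (the key name comes from the reported error; the default value of 0.0 is an assumption):

```python
# Hypothetical config as loaded from finetune.yaml, with 'weight_decay'
# missing — this reproduces the reported KeyError shape.
config = {"lr": 2e-5}

try:
    wd = config["weight_decay"]   # direct lookup raises KeyError
except KeyError:
    wd = 0.0                      # fall back to an assumed default

# Equivalent one-liner using dict.get with a default:
wd = config.get("weight_decay", 0.0)
assert wd == 0.0
```

So either adding `weight_decay: 0.0` (or your preferred value) to the YAML, or guarding the lookup as above, would avoid the crash.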