-
When running ./finetune_lora.sh, I got: TypeError: Object of type Tensor is not JSON serializable.
Detail:
Traceback (most recent call last):
File "/home/david/qw/Llama2-Chinese0/train/sft/fine…
-
-
### The Feature
Capture function calls from APIs that don't expose native function-calling (or tools) support.
### Motivation, pitch
When using an API that supports function calling (also called t…
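One fallback, sketched below rather than taken from this project, is to ask the model to emit a JSON object describing the call and then extract it from the raw completion text; the payload shape and the names used here are assumptions for illustration only:
```python
# Hedged sketch: parse a tool call out of free-text output when the backend
# has no native function-calling support. Names are made up for illustration.
import json
import re

def extract_function_call(text: str):
    """Return (name, arguments) if the model emitted a JSON tool call, else None."""
    match = re.search(r"\{.*\}", text, re.DOTALL)   # grab the first {...} block
    if not match:
        return None
    try:
        payload = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    if "name" in payload and "arguments" in payload:
        return payload["name"], payload["arguments"]
    return None

reply = 'Sure: {"name": "get_weather", "arguments": {"city": "Paris"}}'
print(extract_function_call(reply))  # ('get_weather', {'city': 'Paris'})
```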
-
Traceback (most recent call last):
File "/home/maguoheng/anaconda3/envs/llama2/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 670, in call_user_compiler
compiled_fn = compile…
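The traceback is truncated, so the failing compiler backend isn't visible. As a hedged workaround sketch (not a fix for the underlying bug), torch._dynamo can be told to fall back to eager execution when its compiler fails, or torch.compile can be pointed at the eager backend:
```python
# Hedged workaround sketch: fall back to eager execution when the dynamo backend fails.
import torch
import torch._dynamo

torch._dynamo.config.suppress_errors = True           # log the failure, then run eagerly

model = torch.nn.Linear(16, 16)
compiled = torch.compile(model, backend="eager")       # or avoid the failing backend entirely
print(compiled(torch.randn(2, 16)).shape)
```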
-
Consider this (real) API response:
```
data: {"id":"chatcmpl-7ZZCd6bd5DwGT9UdoK9Ph4","model":"llama2-70b","choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}],"usage":null}
d…
```
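Each chunk arrives as a server-sent-events `data:` line holding a JSON delta. A minimal sketch of how such a stream is typically consumed, assuming the layout of the example line above; the second chunk and the `[DONE]` sentinel are illustrative assumptions rather than part of the captured response:
```python
# Hedged sketch: accumulate the assistant text from an OpenAI-style SSE stream.
import json

stream = [
    'data: {"id":"chatcmpl-7ZZCd6bd5DwGT9UdoK9Ph4","model":"llama2-70b",'
    '"choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}],"usage":null}',
    'data: {"id":"chatcmpl-7ZZCd6bd5DwGT9UdoK9Ph4","model":"llama2-70b",'
    '"choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}],"usage":null}',
    "data: [DONE]",
]

text = ""
for line in stream:
    payload = line.removeprefix("data: ").strip()
    if payload == "[DONE]":                      # end-of-stream sentinel
        break
    chunk = json.loads(payload)
    text += chunk["choices"][0]["delta"].get("content", "")
print(text)  # "Hello"
```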
-
Using llama2_7b_qlora_alpaca_enzh_e3.py as a template for QLoRA fine-tuning on gsm8k, I changed PROMPT_TEMPLATE.llama2_chat to PROMPT_TEMPLATE.llama3_chat and accuracy dropped from 62 to 28. What could be causing this?
-
Hello! I attempted a fine-tuning run against [Llama-2-70B-GPTQ](https://huggingface.co/TheBloke/Llama-2-70B-GPTQ) (branch with file: `gptq_model-4bit--1g.safetensors`) this morning, and ran into…
-
If I want to use an open-source model such as Llama2 or codellama, how can I change the code?
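The question doesn't say which codebase this refers to; a minimal sketch, assuming the model is loaded through Hugging Face transformers, is to point the loader at a Llama-2 or CodeLlama checkpoint:
```python
# Hedged sketch, assuming a Hugging Face transformers loading path; the checkpoint
# names are examples, and gated repos require an access token.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"        # or e.g. "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```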
-
When I run localGPT.py, I get the error below:
```
2023-09-27 14:49:29,036 - INFO - run_localGPT.py:221 - Running on: cuda
2023-09-27 14:49:29,036 - INFO - run_localGPT.py:222 - Display Source D…
```
-
Hello mlcommons team,
I want to run the "Automated command to run the benchmark via MLCommons CM" (from the example: https://github.com/mlcommons/inference/tree/master/language/llama2-70b), but I d…