-
```
File "/opt/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/trainer.py", line 1628, in train
return inner_training_loop(
File "/opt/miniconda3/envs/textgen/lib/python3.10/si…
```
-
## ⚙️ Request New Models
- Link to an existing implementation (e.g. Hugging Face/GitHub): https://huggingface.co/TheBloke/guanaco-33B-GGML
- Is this model architecture supported by MLC-LLM? (the…
-
### Bug Description
When running this code:
```python
# Initialize the ApiClient globally
configuration = pos_client.Configuration(host=f"http://{IP}:{PORT}")
api_client = pos_clien…
```
hra42 updated 4 months ago
-
method 1:
```shell
python finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path 'yahma/alpaca-cleaned' \
    --output_dir './lora-alpaca' \
    --batch_size 128 \
    …
```
-
```
(lg) C:\Users\rsrikaan\localGpt>python run_localGPT.py
2023-12-20 15:17:05,282 - INFO - run_localGPT.py:241 - Running on: cpu
2023-12-20 15:17:05,282 - INFO - run_localGPT.py:242 - Display Source Do…
```
-
Current version:
commit 84156f179f91f519e48185414391d040112f2d34 (HEAD -> main, origin/main, origin/HEAD)
updated on Jun 3 2024
I tried to run the following script in example/scripts/stf.py:
…
-
Using the current codebase, on the local LLM it seems to be stuck in a loop. After the 4th iteration the console output looks like this:
```
Ensure the response can be parsed by Python json.loads.…
```
-
`python generate_4bit.py --model_path decapoda-research/llama-7b-hf --lora_path Facico/Chinese-Vicuna-lora-7b-3epoch-belle-and-guanaco --use_local 0`
The command above fails with an error...
```
/home/nano/.local/lib/python3.10/s…
```
-
Hello!
Is there a way to use the Alpaca template with Phi-3?
I'm struggling a bit with the documentation; I was wondering if anyone can help me understand how to use it here and define…
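For reference, the Alpaca prompt format itself is just a fixed string template (the one published in tatsu-lab/stanford_alpaca); how to register it for Phi-3 depends on the framework being used. A minimal sketch of building Alpaca-style prompts, where the helper name `format_alpaca` is my own:

```python
# Sketch: building an Alpaca-style prompt string.
# The templates follow the tatsu-lab/stanford_alpaca format; the helper
# function name is illustrative, not part of any library API.

TEMPLATE_NO_INPUT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

TEMPLATE_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)

def format_alpaca(instruction: str, input_text: str = "") -> str:
    """Return an Alpaca-formatted prompt, with or without an input field."""
    if input_text:
        return TEMPLATE_WITH_INPUT.format(instruction=instruction, input=input_text)
    return TEMPLATE_NO_INPUT.format(instruction=instruction)

prompt = format_alpaca("Summarize the text.", "LLMs are large language models.")
```

The model's completion is whatever it generates after the trailing `### Response:` marker.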
-
### Feature request
Right now [the script](https://github.com/huggingface/optimum-habana/blob/main/examples/language-modeling/run_lora_clm.py) is hardcoded for either `"tatsu-lab/alpaca"` or `"timd…
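One way the request could be addressed is to lift the hardcoded dataset id into a CLI argument and pass it to `datasets.load_dataset`. A minimal sketch, where the flag name `--dataset_name` and its default are assumptions rather than the script's actual interface:

```python
# Sketch: replacing a hardcoded dataset id with a CLI argument.
# The flag name and default are illustrative; run_lora_clm.py would then
# call datasets.load_dataset(args.dataset_name) instead of a fixed string.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="LoRA CLM fine-tuning (sketch)")
    parser.add_argument(
        "--dataset_name",
        default="tatsu-lab/alpaca",  # one of the currently hardcoded ids
        help="Any Hub dataset id, e.g. yahma/alpaca-cleaned",
    )
    return parser

args = build_parser().parse_args(["--dataset_name", "yahma/alpaca-cleaned"])
# later in the script: raw_datasets = datasets.load_dataset(args.dataset_name)
```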