-
I've installed:
`LlamaSharp : 0.4.0`
`LlamaSharp.Backend.Cpu : 0.4.2-preview`
With the following example:
```
string stModelPath = @"pathfoo\llama-2-7b-guanaco-qlora.ggmlv3.q4_1.bin";
v…
-
### Describe the bug
I am trying to push a model to my model repo and it is giving me an error saying I don't have write access. I have created a write token and I am passing it into Hugging Face …
-
Is training SalesForce [XGen](https://huggingface.co/Salesforce/xgen-7b-8k-base) supported w/qlora?
-
I used 2 T4 GPUs on a Kaggle Notebook with a new version of autotrain-advanced.
It raised the error:
```
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
=======================…
-
### Feature/Improvement Description
Petals is a project for collaboratively running LLMs with high parameter counts on consumer hardware; it works similarly to torrents and crypto mining pools, where e…
-
Newbie question about resuming training from a checkpoint: when restarting training, which directory should `resume_from_checkpoint` be set to?
My current finetune script is:
```bash
DATA_PATH="./sample/merge.json" #"../dataset/instruction/guanaco_non_chat_mini_52K-utf8.json" #"./sample/merge_sa…
-
### Description
I'm encountering an issue while fine-tuning `starcoder` using the provided script. The training seems to be stuck, and I'm getting an unexpected number of epochs. Here's the log inf…
-
https://kaiokendev.github.io/til#extending-context-to-8k
Someone had the clever idea of scaling the positional embeddings inversely proportional to the extended context length.
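The scaling idea can be sketched in a few lines. This is an illustrative reconstruction of linear position interpolation, not the exact patch from the linked post; the 2048/8192 context lengths are assumed values, and plain Python lists are used for brevity:

```python
def interpolated_position_ids(seq_len, train_ctx=2048, extended_ctx=8192):
    """Linear position interpolation: scale position indices by
    train_ctx / extended_ctx so a sequence in the extended context
    window still maps inside the positional range seen in training."""
    scale = train_ctx / extended_ctx  # e.g. 0.25 for 2048 -> 8192
    return [i * scale for i in range(seq_len)]
```

With these fractional positions, index 8191 maps to 2047.75, just inside the original 2048-position range the model was trained on.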
Adding
```
…
-
Hi, I am a bit new to Hugging Face and fine-tuning. How can I change the YAML config file to fine-tune on my custom txt dataset? For reference, this is what my txt dataset looks like:
Thank you…
-