-
I'm trying what looks like the "Hello World" of this repo: running the basic training on a Runpod community cloud `2 x RTX 4090, (128 vCPU 125 GB RAM)` configuration. Normally I'd play around with thi…
Pugio updated 6 months ago
-
Here is my code:
```shell
model=/data/vicuna-13b/vicuna-13b-v1.5/
docker run --gpus all --shm-size 1g -p 8080:80 -v /data/:/data \
ghcr.io/predibase/lorax:latest --model-id $model --sharded tru…
```
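Once the container is up, LoRAX serves a text-generation-inference-style REST API on the mapped port. A minimal sketch of a request, assuming the default `/generate` route and the `8080:80` port mapping above (parameter values are illustrative, not defaults from the docs):

```python
import json
from urllib.request import Request, urlopen

def build_payload(prompt: str, max_new_tokens: int = 64) -> dict:
    """JSON body for LoRAX's TGI-style /generate endpoint."""
    return {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}

def generate(prompt: str, base_url: str = "http://localhost:8080") -> str:
    """POST the prompt; requires the container above to be running."""
    req = Request(
        f"{base_url}/generate",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        # LoRAX returns the completion under "generated_text"
        return json.loads(resp.read())["generated_text"]
```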
-
Hello, I started a deployment on one node with 4 GPUs and set tensor_parallel to 2. The program waits forever for the server to start.
The code is:
The hostfile is:
127.0.0.1 slots=2
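For context, a DeepSpeed hostfile lists one node per line as `hostname slots=<num_gpus>`, and the launcher derives its world size from the slot counts. A small illustrative parser (not DeepSpeed's own code) showing how the file above yields 2 ranks:

```python
def total_slots(hostfile_text: str) -> int:
    """Sum the slots=N entries across all non-empty hostfile lines."""
    total = 0
    for line in hostfile_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        _host, _, slots = line.partition(" slots=")
        total += int(slots)
    return total

# The hostfile above (127.0.0.1 slots=2) launches 2 ranks on one node,
# so only 2 of the 4 GPUs are used unless slots is raised to 4.
```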
yunll updated 5 months ago
-
When I run
```shell
deepspeed fastchat/train/train_lora.py --model_name_or_path /root/autodl-tmp/cjk/Fast-Chat-main/Codellama-7B --lora_r 16 --lora_alpha 16 --lora_dropout 0.05 -…
```
-
Press "Run Filter" on the Finetune tab.
**Expected result:**
Filtering works.
**Actual result:**
![Image](https://github.com/smallcloudai/refact/assets/140423660/31c844ea-f16e-42f6-9b6b-e7f537138ec9…
-
Hi, we're using the litgpt framework to train models and would then like to export them to Hugging Face format for continued tuning and evaluation.
The steps we're using after completing training ar…
-
# Trending repositories for C#
1. [**dotnet / runtime**](https://github.com/dotnet/runtime)
__.NET is a cross-platform runtime for cloud, mobile, desktop, and IoT apps.__
…
-
As an app developer who wants to add the AI Assist feature via VZCode, I want to use CodeLlama, so that I'm not locked into OpenAI.
See https://replicate.com/meta/codellama-34b/api?tab=node
```j…
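The linked page documents the Node client; purely for illustration, here is an equivalent sketch with Replicate's Python client. The model id comes from the URL above, but the input fields beyond `prompt` are assumptions — check the API page's schema for the current parameters:

```python
def build_input(prompt: str, max_tokens: int = 256) -> dict:
    """Input payload for meta/codellama-34b; max_tokens is an assumed field."""
    return {"prompt": prompt, "max_tokens": max_tokens}

def generate(prompt: str) -> str:
    """Run the model; needs `pip install replicate` and REPLICATE_API_TOKEN set."""
    import replicate  # imported lazily so build_input works without the package
    output = replicate.run("meta/codellama-34b", input=build_input(prompt))
    # The client yields the generation as a sequence of text chunks.
    return "".join(output)
```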
-
Apologies if this is a rookie question, but responses seem much slower through this add-on than when I prompt the AI in my terminal. I can't think of any reason for this sinc…
-
As indirectly suggested by @ElpadoCan on Twitter:
https://x.com/frank_pado/status/1809179756249710794