-
Lots of troubleshooting to get the app running...
First I had to move the models out of the folder (better to make a symlink) into the parent folder ../hf, then I had to download the binaries from Hugging Face from…
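The move-then-symlink step described above can be sketched as follows. The directory names (`models/vicuna-7b`, `hf/`) are illustrative assumptions, not the app's actual paths:

```python
import os

# Illustrative layout (names are assumptions): pretend the weights
# currently live in ./models/vicuna-7b but the app expects ./hf/vicuna-7b.
os.makedirs("models/vicuna-7b", exist_ok=True)
os.makedirs("hf", exist_ok=True)

# Move the model folder up into hf/ ...
os.rename("models/vicuna-7b", "hf/vicuna-7b")

# ...and leave a relative symlink behind so the original path still resolves.
os.symlink("../hf/vicuna-7b", "models/vicuna-7b")

print(os.path.islink("models/vicuna-7b"))  # the old path is now a link
print(os.path.isdir("models/vicuna-7b"))   # ...and it resolves to the moved folder
```

Using a relative link target (`../hf/vicuna-7b`, resolved from inside `models/`) keeps the link valid if the whole tree is moved.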
-
I was able to run the fine-tuning script for the flan-t5-large model on a V100 and save the results without issues. Training was done on the example dummy-conversations file with this command:
``…
-
Hi, I'm fine-tuning a fastchat-3b model with LoRA. The processes are getting killed at the `trainer.train()` step with the following log / error:
```
Loading extension module cpu_adam...
Time to lo…
-
Please add examples using local open-source models, such as LLaMA or ChatGLM. Thanks!
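In case it helps anyone drafting such an example: for a local Vicuna-style model, most of the work is assembling the conversation prompt before generation. A minimal sketch, where the system message and role tags are my understanding of the published v1.1 format (not copied from FastChat's source):

```python
# Minimal sketch of building a Vicuna-style chat prompt for a local model.
# SYSTEM text and the USER/ASSISTANT tags are assumptions based on the
# published v1.1 conversation format.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns):
    """turns: list of (role, text) pairs, role is 'USER' or 'ASSISTANT'."""
    parts = [SYSTEM]
    for role, text in turns:
        parts.append(f"{role}: {text}")
    # A trailing 'ASSISTANT:' cues the model to generate the next reply.
    parts.append("ASSISTANT:")
    return " ".join(parts)

prompt = build_prompt([("USER", "what can you do?")])
print(prompt)
```

The resulting string would then be tokenized and passed to the locally loaded model's `generate` call.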
-
The command below returns an error when running on multiple GPUs.
`python3 -m fastchat.serve.cli --model-path lmsys/vicuna-7b-v1.3 --num-gpus 3 --max-gpu-memory 6GiB`
USER: what can you do?
ASSISTANT:…
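For anyone debugging the flags above: as far as I can tell, `--num-gpus` and `--max-gpu-memory` roughly correspond to an Accelerate-style `max_memory` map handed to `from_pretrained` with `device_map="auto"`. A sketch of building that map (the helper name is mine, not FastChat's):

```python
# Hypothetical helper: turn the CLI-style flags into the per-device memory
# map that transformers/accelerate accept for device_map="auto" loading.
def make_max_memory(num_gpus, max_gpu_memory):
    # Keys are GPU indices, values are per-device caps like "6GiB".
    return {i: max_gpu_memory for i in range(num_gpus)}

max_memory = make_max_memory(3, "6GiB")
print(max_memory)  # {0: '6GiB', 1: '6GiB', 2: '6GiB'}
```

Note that 3 × 6 GiB may still be too little for a 7B model in fp16 once activations are counted, which is one plausible cause of errors with these settings.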
-
The main FastChat README references:
Fine-tuning Vicuna-7B with Local GPUs
Writing this up as an "issue" but it's really more of a documentation request.
I'd like an example that fine-tunes a L…
-
How can I fix this error?
```
INFO:root:Loading the model vicuna-13b ...
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line…
-
# Trending repositories for C#
1. [**git-ecosystem / git-credential-manager**](https://github.com/git-ecosystem/git-credential-manager)
__Secure, cross-platform Git credential sto…
-
Hi, I'm training a FlanT5 network. Training completes successfully, but when I run a simple inference I get a tensor of zeros, so the prediction is null.
Example:
```
tokenizer = Aut…