-
### Describe the bug
I checked IlyaGusev/saiga_llama3_8b_gguf: in LM Studio I get around 45-49 tokens/s, while in webui I get only 21 tokens/s.
![image](https://github.com/oobabooga/text-generation-we…
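For an apples-to-apples comparison, throughput is just generated tokens divided by wall-clock generation time. A minimal sketch of that measurement (the helper name and the token/second numbers below are illustrative, not from either backend's logs):

```python
def tokens_per_second(n_generated_tokens: int, elapsed_s: float) -> float:
    """Throughput = generated tokens / wall-clock generation time."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return n_generated_tokens / elapsed_s

# Illustrative numbers: at ~21 tok/s, 256 tokens take about 12.2 s,
# while at ~47 tok/s the same 256 tokens finish in about 5.4 s.
print(round(tokens_per_second(256, 12.19), 1))
```

Comparing both backends on the same prompt, same context length, and same number of generated tokens rules out measurement differences before blaming the inference code.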
-
Here's the command I ran:
```
python train.py \
--model_name meta-llama/Llama-2-70b-hf \
--batch_size 1 \
--context_length 1024 \
--precision bf16 \
--train_type hqq_lora \
--use_gradient_ch…
-
**Describe the bug**
When I try to run single GPU T5 Pretraining with the script `examples/pretrain_t5.sh`, it outputs the following error:
> ModuleNotFoundError: No module named 'scaled_softmax_c…
-
Hi there.
Very nice project!
Would it be possible to use an OpenAI-compatible API endpoint with a local LLM through [LM Studio](https://lmstudio.ai/) or [text-generation-webui](https://github.co…
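Both of those servers expose the OpenAI chat-completions wire format over HTTP, so the integration would mostly be a matter of pointing the request at a local base URL. A minimal sketch (the base URL below is LM Studio's commonly used local default, `http://localhost:1234/v1`; adjust for your setup, and note the helper name is hypothetical):

```python
import json

# Assumed local base URL; LM Studio's server defaults to port 1234,
# text-generation-webui's OpenAI extension uses a different port.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt: str, model: str = "local-model"):
    """Return (endpoint URL, JSON body) for an OpenAI-style chat completion."""
    payload = {
        # Many local servers simply use whatever model is loaded,
        # regardless of this field.
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return f"{BASE_URL}/chat/completions", json.dumps(payload).encode()

url, body = build_chat_request("Hello!")
print(url)
# To actually send it: urllib.request.urlopen(
#     urllib.request.Request(url, data=body,
#                            headers={"Content-Type": "application/json"}))
```

Because the wire format matches OpenAI's, the official `openai` client also works by setting its `base_url` to the local server.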
-
https://github.com/alperyilmaz/dav-assignments/blob/aff59347e23297d4f7ad1bc76e9db44bbe2a8fdc/week05/assignment-next-week-ggplot2#L9
I couldn't properly understand the usage of these three function t…
-
Where does LM Studio appimage Linux version store downloaded LLMs?
-
The plugin's two current options are Ollama and ChatGPT, but I was wondering if you could add support for LM Studio?
-
After downloading and installing, I just ran the demo and test files and encountered the following error:
```
Traceback (most recent call last):
File "train_lm.py", line 686, in
train()
…
-
When will you publish the code so that we can reproduce the work?
-
In the `generation_until` mode, the `until` argument should default to the EOS token of the language model itself. However, it is currently set to `fewshot_delimiter`, which further defaults to `\n\n`, if it …
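The proposed fallback order can be sketched as a small helper (the function name and signature are hypothetical, written only to illustrate the behavior argued for above):

```python
def resolve_until(until, eos_token: str, fewshot_delimiter: str = "\n\n"):
    """Pick stop sequences for a generation_until-style request.

    If the task explicitly configures stop strings, use them; otherwise
    fall back to the model's own EOS token rather than the fewshot
    delimiter, so multi-paragraph generations are not cut at "\n\n".
    """
    if until:                 # task explicitly configured stop strings
        return list(until)
    return [eos_token]        # proposed default: the model's EOS token

# The current behavior effectively returns [fewshot_delimiter] here,
# which truncates any generation at its first blank line.
print(resolve_until(None, "</s>"))
```

With this default, a task that genuinely wants `\n\n` as a stop sequence can still request it explicitly via `until`.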