-
I'm seeing a few messages like this in the debug output - what do they mean?
```
DEBUG:faster_whisper:Processing segment at 07:27.560
DEBUG:faster_whisper:Compression ratio threshold is not met w…
```
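If I'm reading the faster-whisper fallback logic correctly, that line means the segment's text compressed too well (its gzip compression ratio exceeded `compression_ratio_threshold`, default 2.4), which is treated as a sign of repetitive output, so the segment is re-decoded at the next temperature in the fallback list. A minimal sketch of where those knobs live, assuming the standard `WhisperModel.transcribe` interface (file name and model size are placeholders):

```python
from faster_whisper import WhisperModel

model = WhisperModel("base")  # placeholder model size

# compression_ratio_threshold flags segments whose decoded text compresses
# too well (i.e. looks repetitive); those segments are retried at the next
# temperature, which is what the DEBUG lines are reporting.
segments, info = model.transcribe(
    "audio.mp3",                              # placeholder input file
    compression_ratio_threshold=2.4,          # library default
    temperature=[0.0, 0.2, 0.4, 0.6, 0.8, 1.0],
)
for segment in segments:
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```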
-
Hi,
I've got things freshly installed and was able to create an account. However, when I try a prompt I receive an `Error occurred while generating` response.
I tried hitting the API directly in…
-
Hello! When I run the following script:
```shell
python -m lamorel_launcher.launch \
--config-path /home/ewanlee/Codes/lamorel/examples/PPO_LoRA_finetuning/ \
--config-name local_gpu_config \
r…
-
The 6B model runs successfully, but Yi-34B-Chat-4bits fails.
![QQ截图20231220230139](https://github.com/01-ai/Yi/assets/31362171/e901a9fe-c644-4bec-89fd-31cdc25d051d)
### Code executed:
```python
from transformers import AutoModelForCau…
```
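For reference, the chat-model pattern from the Yi README is roughly the following; this is a sketch that assumes the 4-bit checkpoint is the AWQ-quantized `01-ai/Yi-34B-Chat-4bits` and that `autoawq` is installed alongside `transformers` (the prompt is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "01-ai/Yi-34B-Chat-4bits"  # AWQ 4-bit chat checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",   # shard across the available GPUs
    torch_dtype="auto",
)

# placeholder single-turn conversation
messages = [{"role": "user", "content": "hi"}]
input_ids = tokenizer.apply_chat_template(
    conversation=messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```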
-
I started using Ollama in the last 12 hours and I'm loving it... Why? Because I come from the Cloud Native space; I've been working in Docker/Kubernetes engineering for a while... I love the concept fro…
-
I have successfully compiled the Llama 2 7B model and also completed all the required steps for building the iOS app from source.
I have imported the MLCSwift package and also added all the paths tha…
-
When running the snippet below -
```shell
python main.py \
  --model hf-causal \
  --model_args pretrained=EleutherAI/gpt-j-6B \
  --tasks hellaswag \
  --device cuda:0
```
Getting this error -
…
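Since the error text is cut off here, one way to see the full in-process traceback is to drive the harness through its Python API rather than main.py. This is a hedged sketch that assumes a pre-1.0 lm-evaluation-harness (the main.py era) where `lm_eval.evaluator.simple_evaluate` mirrors the CLI flags:

```python
from lm_eval import evaluator

# same evaluation as the CLI call, expressed through the library API
results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=EleutherAI/gpt-j-6B",
    tasks=["hellaswag"],
    device="cuda:0",
)
print(evaluator.make_table(results))
```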
-
test
-
1. What is the actual meaning of [query_len](https://github.com/OpenGVLab/LLaMA-Adapter/blob/0703df9e0ab0851de00b0b1f168c4498a5963230/imagebind_LLM/llama/llama_adapter.py#L75C1-L76C1) here?
2. If …
-
I occasionally get this error when using the 'open-meteo-api' tool:
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 6536 tokens (6280 in yo…
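That 4097-token limit is the model's context window, so the request has to shrink rather than the limit grow. One hedged workaround, independent of the tool itself, is to count and clip the tool's response before it goes back into the prompt; a minimal sketch with `tiktoken`, where the function name and token budget are illustrative:

```python
import tiktoken

def truncate_to_budget(text: str, budget: int = 3000, model: str = "gpt-3.5-turbo") -> str:
    """Clip `text` so it fits within `budget` tokens for the given model."""
    enc = tiktoken.encoding_for_model(model)
    tokens = enc.encode(text)
    if len(tokens) <= budget:
        return text
    return enc.decode(tokens[:budget])

# e.g. clip a large open-meteo JSON payload before it is appended to the prompt
api_response = "{...}"  # placeholder for the raw tool output
prompt_chunk = truncate_to_budget(api_response)
```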