-
**As the model was trained on "scientific-looking" data and wiki text, we need to sound "more scientific" when prompting.**
Model: 30B, prompt:
```
Write the Python code with detailed comments to gener…
-
I am using Ollama to run local models. The output through the chat UI is just horrible, but the output through Ollama directly is great.
Example below.
Both have the same prompt: "Why is the sky blue?"
**ChatbotUI…
-
Hi there,
Thanks so much for the connector.
Please could you describe how to get message streaming working?
I have set up the project accordingly, and when attempting to stream I receive an err…
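For reference, Ollama's `/api/chat` endpoint streams its reply as newline-delimited JSON objects when streaming is enabled; a minimal sketch of assembling the streamed fragments (the sample chunks below are made up for illustration, and no live server is assumed):

```python
import json

# Example NDJSON chunks in the shape Ollama's /api/chat emits while streaming;
# the actual content here is invented for illustration.
sample_stream = [
    '{"model":"mistral","message":{"role":"assistant","content":"The sky"},"done":false}',
    '{"model":"mistral","message":{"role":"assistant","content":" is blue because..."},"done":false}',
    '{"model":"mistral","message":{"role":"assistant","content":""},"done":true}',
]

def collect_stream(lines):
    """Concatenate the content fragments of a streamed chat response."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        parts.append(chunk["message"]["content"])
        if chunk.get("done"):  # final chunk signals the end of the stream
            break
    return "".join(parts)

print(collect_stream(sample_stream))  # -> The sky is blue because...
```

In a real client the lines would come from iterating over the HTTP response body line by line instead of a hard-coded list.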
-
`server.log` shows the following:
```
{"function":"validate_model_chat_template","level":"ERR","line":437,"msg":"The chat template comes with this model is not yet supported, falling back to chatml.…
-
Currently, the ocean surface albedo is assumed to be a constant (0.06). In reality, the ocean surface albedo depends on the solar zenith angle and surface slope (which depends on the wind speed). See …
szy21 updated
4 months ago
-
Running codebooga & nexusraven segfaults and makes the host unresponsive.
They load without problems and crash "on the first token".
(zephyr works fine.)
I tried that with stock ollama 0.1.7, (linux insta…
-
Using:
`curl http://localhost:11434/api/chat -d '{
"model": "mistral",
"messages": [
{ "role": "user", "content": "why is the sky blue?" }
]
}'`
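The same request can be issued from Python; a minimal sketch that builds the request body for `/api/chat` (posting it is left as a comment so no running Ollama server is assumed):

```python
import json

def build_chat_request(model, user_content):
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": user_content},
        ],
    }

body = json.dumps(build_chat_request("mistral", "why is the sky blue?"))
print(body)

# To actually send it (assumes Ollama is listening on its default port 11434):
#   import urllib.request
#   urllib.request.urlopen(urllib.request.Request(
#       "http://localhost:11434/api/chat", data=body.encode(), method="POST"))
```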
Hardware:
Gpu: Nvidia RTX A5000
C…
-
When I try to access the local Mistral model using your code, the model always returns a 400 Bad Request. I deployed the model through Ollama on Windows 11. What could be the reason for this?
-
### System Info
Running TGI docker with command
`docker run --rm --gpus all --ipc=host -p 8080:80 -v /root/.cache/huggingface/hub:/data -e HF_API_TOKEN=hf_XXXX ghcr.io/huggingface/text-generatio…
-
I am using
* Platform/OS: Ubuntu 22.04
* CARLA version: 0.9.15 dev, branch of d6049290b335e71f8b3130ffa0b90bf0cc1f6686, but also the latest dev version (May24) https://github.com/carla-simulator/c…