-
**Describe the bug**
While running a model from the OVMS server Docker image, it does not run properly. The logs suggested there was a problem with model conversion, so here I am providing all the logs for debug…
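For anyone reproducing this, one quick way to check what state the server thinks the model is in is the `ovmsclient` package; the address, port, and model name below are placeholder assumptions, not values from this report:

```python
from ovmsclient import make_grpc_client

# Placeholder address/port and model name; adjust to your deployment.
client = make_grpc_client("localhost:9000")

# Reports per-version state (AVAILABLE, LOADING, FAILED) plus any
# error message, which should surface model-conversion problems.
status = client.get_model_status(model_name="my_model")
print(status)
```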
-
## **General information**:
Essentially there is a problem where the bots have conflicting headwear; this happens sometimes during a raid (you can see from the screenshot it's also a Scav, idk about PMC h…
-
### First, confirm
- [X] I have read the [instructions](https://github.com/Gourieff/comfyui-reactor-node/blob/main/README.md) carefully
- [X] I have searched the existing issues
- [X] I have updated t…
-
Here's the problem that arose: I tried to generate an API key from OpenAI (https://platform.openai.com/account/api-keys), which was successful. However, when testing my Flask application, I encountere…
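A minimal sketch of the kind of Flask endpoint involved, assuming the current `openai` Python client; the route and model name are illustrative, not taken from this report:

```python
import os
from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
# Read the key from the environment instead of hard-coding it.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

@app.route("/chat", methods=["POST"])
def chat():
    prompt = request.get_json().get("prompt", "")
    # Model name is a placeholder; any chat-capable model works here.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return jsonify({"reply": resp.choices[0].message.content})

if __name__ == "__main__":
    app.run(debug=True)
```

If the key is being rejected, a 401 response from this call is usually the first thing to check.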
-
Hi,
I'm new to LangChain and LLMs.
I've recently deployed an LLM using the Hugging Face text-generation-inference library on my local machine.
I've successfully accessed the model using …
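For context, a minimal way to point LangChain at a local text-generation-inference server is sketched below; the URL and generation parameters are assumptions for illustration:

```python
from langchain_community.llms import HuggingFaceTextGenInference

# Assumes TGI is listening on localhost:8080; adjust to your setup.
llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8080/",
    max_new_tokens=256,
    temperature=0.7,
)
print(llm.invoke("What is the capital of France?"))
```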
-
### OpenVINO Version
2024.0.0 - Current
### Operating System
Windows 10 Professional 2004 [Version 10.0.19041.1415]
### Device used for inference
CPU (Intel Xeon E-2288G CPU [Coffee Lak…
-
Hi y'all, I thought about writing a notebook that showcases how to create a natural language interface for databases using Hugging Face models, Outlines for structured generation, and Lark as a parser …
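As a starting point, here is a rough sketch of what the grammar-constrained piece could look like with the 0.x Outlines API; the grammar, model name, and prompt are illustrative assumptions:

```python
import outlines

# A deliberately tiny SQL-like grammar in Lark syntax; a real notebook
# would use a fuller grammar that Lark can also parse back.
grammar = r"""
start: "SELECT " column " FROM " table
column: /[a-z_]+/
table: /[a-z_]+/
"""

# Model name is a placeholder for any HF causal LM.
model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")
generator = outlines.generate.cfg(model, grammar)

query = generator("List every user name from the users table as SQL: ")
print(query)
```

Because generation is constrained by the same grammar Lark can parse, every output is syntactically valid before it ever reaches the database.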
-
In generation situations, I guess we should set the function name to `ranking`, and accordingly we should set `function_params`; that's not so obvious.
The following does not work for me.
Can we have a c…
-
@hiyouga I've encountered a consistent issue where the logits score returns -inf during offline inference with models from Hugging Face/vLLM, even when using the default inference example. How to solve th…
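For reference, a minimal vLLM offline-inference sketch that surfaces the returned log-probabilities looks like this; the model name and prompt are placeholders:

```python
from vllm import LLM, SamplingParams

# Placeholder model; any HF model supported by vLLM works.
llm = LLM(model="facebook/opt-125m")
# Ask vLLM to return the top-5 logprobs for each generated token.
params = SamplingParams(max_tokens=16, logprobs=5)

outputs = llm.generate(["Hello, my name is"], params)
for step in outputs[0].outputs[0].logprobs:
    # Each step maps token ids to Logprob objects; -inf typically
    # means a token was masked out by sampling filters (top-k/top-p).
    print(step)
```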
-
### System Info
2.2.0
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
On 4*H100:
```
docker sto…