-
### Self Checks
- [X] I have searched for [existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to su…
-
```
20141115 11:50:22 web.server:DEBUG server:563: AI model 1 summoned for Room 2 Seat 1
20141115 11:50:22 geventwebsocket.handler:DEBUG handler:69: Initializing WebSocket
20141115 11:50:22 geventwe…
```
-
### What is the issue?
I'm calling the generate API as follows:
```
url = 'http://localhost:11434/api/generate'
data = {
    "model": model_name,
    "stream": False,
    "options": {
        "tem…
```
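A minimal, self-contained sketch of that call, assuming the standard Ollama `/api/generate` endpoint; the `"llama3"` model name and the prompt are placeholder values, not taken from the issue:

```python
import json
import urllib.request

def build_generate_request(model_name, prompt, temperature=0.2):
    """Build the POST body for Ollama's /api/generate endpoint."""
    payload = {
        "model": model_name,
        "prompt": prompt,
        "stream": False,  # return a single JSON object instead of a stream
        "options": {"temperature": temperature},
    }
    return json.dumps(payload).encode("utf-8")

body = build_generate_request("llama3", "Why is the sky blue?")
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=body,
    headers={"Content-Type": "application/json"},
)
# Uncomment when an Ollama server is actually running locally:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["response"])
```

With `"stream": False`, the server replies with one JSON object whose `response` field holds the full completion.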
-
Optimized the Mixtral model with ipex_llm.optimize_model() to convert it to low-bit precision, then saved it and loaded it back.
Set "max_length": 1024, yet I am still getting a warning that `max_length` (=20) .
…
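A hedged sketch, assuming a Hugging Face-style `generate()` call: the "`max_length` (=20)" warning usually means the length setting never reached `generate()` and the default of 20 was used, so pass it (or, better, `max_new_tokens`) explicitly at generation time. The helper name and kwargs below are illustrative assumptions:

```python
def build_gen_kwargs(max_new_tokens=1024):
    """Generation kwargs passed directly to model.generate().

    max_new_tokens is preferred over max_length because it excludes
    the prompt length from the budget.
    """
    return {"max_new_tokens": max_new_tokens, "do_sample": False}

kwargs = build_gen_kwargs()
# output_ids = model.generate(input_ids, **kwargs)  # model/input_ids assumed
```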
-
cc: @ishaan-jaff
```
@reliable_query(user_email='ishaan@berri.ai')
```
is perfect for my use-case. But I need to track different endpoints (one for query, one for ingest). Is there any way to t…
-
**Describe the bug**
After ending my turn, the enemy AI got stuck, maxing out a CPU core (possibly in a loop?); it would not end its turn and kept filling up the Client log until I ended up termi…
-
## 🚀 Feature
Create a workflow that collects and merges pytest results from CI runs across different operating systems, accelerators, and software versions.
With this feature implemented, we will be…
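One way the merging step could be sketched, assuming each CI run uploads a `pytest --junitxml` report (the file layout and attribute names follow the common JUnit XML shape, not anything specified in this feature request):

```python
import xml.etree.ElementTree as ET

def merge_junit_counts(xml_strings):
    """Sum tests/failures/errors/skipped across several JUnit XML reports."""
    totals = {"tests": 0, "failures": 0, "errors": 0, "skipped": 0}
    for xml in xml_strings:
        root = ET.fromstring(xml)
        # Each report may contain one or more <testsuite> elements.
        for suite in root.iter("testsuite"):
            for key in totals:
                totals[key] += int(suite.get(key, 0))
    return totals

reports = [
    '<testsuites><testsuite tests="10" failures="1" errors="0" skipped="2"/></testsuites>',
    '<testsuites><testsuite tests="8" failures="0" errors="1" skipped="0"/></testsuites>',
]
print(merge_junit_counts(reports))
# {'tests': 18, 'failures': 1, 'errors': 1, 'skipped': 2}
```

A real workflow would fetch these reports as CI artifacts per OS/accelerator matrix entry before merging.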
-
We could easily add more models to the list of chat models used for summarization:
https://github.com/enjalot/latent-scope/blob/main/latentscope/models/chat_models.json
There are plenty of small o…
-
The new version, app-0.2.31, doesn't seem to offload to the Nvidia RTX 3060 anymore.
The old one (app-0.2.29) works flawlessly.
My spec: i7 10700,
RTX 3060 Phoenix 12 GB.
The GPU Offload checkbox …
-
### Describe the bug
In Mugen, during Survival mode the round number will go up. In Ikemen, the match number will go up. In addition, RoundsExisted will increment in Mugen but always return 0 in Ikem…