-
Currently, from what I have tested, the plugin answers general questions about the web rather than exclusively about the content-types defined in strapi.
For example, taking into consideration a conten…
-
### What is the issue?
Hi All,
I installed ollama both ways (on the machine and in Docker), with the same behaviour of not detecting the GPU in either case. I have LM Studio on the same machine, which picks up the GPU without any issu…
-
### Self Checks
- [X] This is only for bug report, if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- [X] I have s…
-
_Note: this RFC is for the no-code frontend associated with the [proposed AI workflow framework RFC](https://github.com/opensearch-project/OpenSearch/issues/9213)._
## Proposal
In [the proposed …
-
I experimented with the settings provided in the example at https://huggingface.co/selfrag/selfrag_llama2_7b, but the prediction results I got were just a series of 'Model prediction: blank result'. H…
-
### Your current environment
The output of `python collect_env.py`
```text
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N…
-
This is a ticket to track a wishlist of items you wish LiteLLM had.
# **COMMENT BELOW 👇**
### With your request 🔥 - if we have any questions, we'll follow up in comments / via DMs
Respond …
-
*Please make sure you are familiar with the SIP process documented*
[here](https://github.com/apache/superset/issues/5602). The SIP will be numbered by a committer upon acceptance.
## [SIP-144] Pr…
-
### *Project idea 3: Research about deploying LLM with Jina*
| info | details |
| ---------------- | ------------------------------…
-
## Overview
We need to add support for using [llama.cpp](https://github.com/ggerganov/llama.cpp) as an inference server in our project. llama.cpp is known for its speed, cross-platform compatibility,…
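As a rough illustration of the integration surface, here is a minimal sketch, assuming llama.cpp's bundled HTTP server is running locally on its default port 8080 and exposing the OpenAI-compatible `/v1/chat/completions` route; the URL, model path, and helper name below are placeholders, not decisions made in this proposal.
```python
# Minimal sketch: query a locally running llama.cpp server through its
# OpenAI-compatible chat endpoint. Assumes the server was started with
# something like `llama-server -m model.gguf --port 8080` (model path and
# port are placeholders).
import requests

LLAMA_CPP_URL = "http://localhost:8080/v1/chat/completions"  # assumed default port


def chat(prompt: str) -> str:
    """Send a single-turn chat request and return the generated text."""
    payload = {
        # llama.cpp typically serves one loaded model, so this name is nominal
        "model": "local-model",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
        "max_tokens": 256,
    }
    resp = requests.post(LLAMA_CPP_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(chat("Explain what a GGUF file is in one sentence."))
```
Because the endpoint follows the OpenAI wire format, swapping llama.cpp in behind an existing OpenAI-style client would likely only require changing the base URL.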