-
https://www.phoronix.com/news/AMD-Ryzen-AI-Open-Source-Demo
AMD Has Open-Source Ryzen AI Demo Code - But Only For Windows
-
My agent keeps running into this error whenever I use the models locally (I tried llama2, openhermes, starling, and Mistral).
The only model that didn't run into this problem is Mistral.
Ve…
-
**Describe the bug**
The model loads correctly and has previously answered large questions with context. Now, after prompting, a blank response is shown without any clear error.
**Steps to reproduce…
-
Hi,
When I leave the "Agg to empty" option unchecked, API requests occur, but each time there are three requests, with the first and last being empty.
However, when I select the "Agg to…
-
### Version
Visual Studio Code extension
### Suggestion
Please add support for open LLMs that are served through an endpoint API, e.g. LLM Studio / ollama / etc.
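
For illustration, here is a minimal sketch of what talking to such a local endpoint could look like, assuming a locally running server that exposes an OpenAI-compatible `/v1/chat/completions` route (ollama serves one on port 11434 by default); the base URL and model name below are placeholders, not the extension's actual configuration:

```
// Minimal sketch (not the extension's real code): query a locally hosted
// open LLM through an OpenAI-compatible endpoint. BASE_URL and MODEL are
// assumptions -- point them at whatever your local server exposes.
const BASE_URL = "http://localhost:11434/v1"; // ollama default port
const MODEL = "mistral";

async function complete(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: MODEL,
      messages: [{ role: "user", content: prompt }],
      stream: false,
    }),
  });
  if (!res.ok) throw new Error(`Endpoint returned ${res.status}: ${await res.text()}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

complete("Say hello in one sentence.").then(console.log).catch(console.error);
```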
-
The purpose of this task is to create an ADR describing our AI Connector strategy and how new AI Connectors can be contributed.
Below is a series of user stories that must be addressed by the ADR.
…
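
Purely as a discussion aid, and not the contract the ADR will ultimately define, a hypothetical connector interface sketched in TypeScript might look like the following; every name here is an illustrative assumption:

```
// Hypothetical sketch only -- illustrative names, not the project's real API.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface AIConnector {
  // Stable identifier a contributor registers the connector under, e.g. "ollama".
  readonly id: string;
  // One-shot chat completion against the backing service.
  chat(messages: ChatMessage[]): Promise<string>;
  // Optional streaming variant yielding partial tokens as they arrive.
  chatStream?(messages: ChatMessage[]): AsyncIterable<string>;
}
```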
-
### Description
I could only find these instructions for Docker.
https://github.com/Mintplex-Labs/anything-llm/blob/master/server/storage/models/README.md#text-generation-llm-selection
How to add your mod…
-
It seems that when using the same approach as in PR #4753, consuming the Ollama API breaks, apparently with the error below:
```
Microsoft.SemanticKernel.HttpOperationException: json: cannot …
```
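
Errors of the form `json: cannot unmarshal ...` typically come from Ollama's Go server rejecting a request body whose JSON field types don't match its schema. As a hedged illustration only (host, model name, and option values are assumptions), a well-formed call to the native `/api/chat` endpoint looks roughly like this:

```
// Sketch of a well-formed request to Ollama's native /api/chat endpoint.
// The usual trigger for "json: cannot unmarshal" is a field sent as the
// wrong JSON type (e.g. a boolean or number serialized as a string).
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

async function ollamaChat(messages: ChatMessage[]): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "mistral",
      messages,
      stream: false,                 // boolean, not the string "false"
      options: { temperature: 0.2 }, // numbers, not strings
    }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}: ${await res.text()}`);
  const data = await res.json();
  return data.message.content;
}
```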
-
The following entry in the Docker log output provides the clue:
2024-01-05 12:22:08 TypeError: tokenChunks.slice is not a function
2024-01-05 12:22:08 at cannonball (/app/server/utils/helpers/…
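
`.slice` only exists on arrays and strings, so this TypeError means `tokenChunks` arrived as something else (undefined, a number, or a plain object). A hypothetical defensive guard, using made-up names rather than the project's actual `cannonball` helper, could look like:

```
// Hypothetical illustration (names are assumptions, not anything-llm's code):
// coerce the value defensively before calling array methods on it.
function truncateChunks(tokenChunks: unknown, limit: number): unknown[] {
  const chunks = Array.isArray(tokenChunks)
    ? tokenChunks
    : tokenChunks == null
      ? []
      : [tokenChunks]; // wrap a single non-array value instead of crashing
  return chunks.slice(0, limit); // calling .slice on a non-array is what threw above
}
```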
-
I am getting different errors while uploading PDF files; some files work fine.
The error appears at the top right and disappears before I can capture it.
OrtRun(). error code = 6 and out of Bo…