-
# ComfyUI Error Report
## Error Details
- **Node Type:** UpscalerTensorrt
- **Exception Type:** polygraphy.exception.exception.PolygraphyException
- **Exception Message:** Could not deserialize …
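The full message is truncated, but this exception class typically surfaces when a cached TensorRT engine was serialized with a different TensorRT version or on a different GPU than the one deserializing it. A minimal sketch of rebuilding the engine from its source ONNX model with Polygraphy's functional API — the file names here are placeholders, not paths from the report:

```
# Sketch: rebuild the TensorRT engine from its source ONNX model using
# Polygraphy's functional API. "upscaler.onnx"/"upscaler.engine" are
# placeholder paths, not files named in the report above.
from polygraphy.backend.trt import (
    engine_from_network,
    network_from_onnx_path,
    save_engine,
)

# network_from_onnx_path returns a (builder, network, parser) tuple that
# engine_from_network consumes to build a fresh engine for this GPU and
# TensorRT version, sidestepping the stale serialized engine.
engine = engine_from_network(network_from_onnx_path("upscaler.onnx"))
save_engine(engine, path="upscaler.engine")
```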
-
### Llama2’s responses aren’t what Bosquet’s parsing _expects_
I’m running Bosquet against a small (3.8 GB) local Llama2 model wrapped by an OpenAI API service.
Some of the examples that a…
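For context, here is a minimal sketch (not Bosquet code; the endpoint URL and model id are placeholders) of the response shape an OpenAI-compatible wrapper is expected to produce — parsers like the one described break when the wrapper deviates from it:

```
# Sketch (not Bosquet itself): the response shape an OpenAI-compatible
# wrapper must produce. The endpoint URL and model id are placeholders.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # hypothetical local wrapper
    json={
        "model": "llama-2-7b-chat",  # placeholder model id
        "messages": [{"role": "user", "content": "Say hello."}],
    },
)
data = resp.json()
# OpenAI-style parsers expect exactly this nesting; a wrapper that strays
# from it produces the kind of parse failures described above.
print(data["choices"][0]["message"]["content"])
```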
-
I would like the ability to use local models as "Prompt Drivers". A simple example is LM Studio's Local Inference Server option, where I can have a model deployed to an endpoint and call it using (c…
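Something like the following sketch is presumably what's wanted — the official openai Python client (v1.x) pointed at LM Studio's Local Inference Server. Port 1234 is LM Studio's default, and the model id is a placeholder:

```
# Sketch: the official openai Python client (v1.x) pointed at LM Studio's
# Local Inference Server. The key is ignored by LM Studio but the client
# requires one to be set.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
completion = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves whatever model is loaded
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```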
-
### Description
When loading "minicpm-2b-dpo-fp32.Q6_K.gguf" in LM Studio, it fails with "create_tensor: tensor 'output.weight' not found", and I don't know which Preset to choose.
### Case Explanation
_No response_
-
### Self Checks
- [X] I have searched for [existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to fi…
-
https://llama.meta.com/llama3/
-
We need a way to set the OpenAI API endpoint URL in both the examples and the helper libraries.
This is usually referred to as the "BASE_URL" in other languages and defaults to `https://api.openai.com/v…
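A common pattern for this (a sketch, not the project's actual helper code) is to read the base URL from an environment variable and fall back to the OpenAI default; note that recent versions of the openai Python client already honor an `OPENAI_BASE_URL` environment variable on their own:

```
# Sketch: resolve the base URL from the environment, falling back to the
# OpenAI default — the behavior the issue asks for in examples and helpers.
import os
from openai import OpenAI

base_url = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
client = OpenAI(base_url=base_url)  # api_key is read from OPENAI_API_KEY
```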
-
No matter the prompt, privateGPT returns only hashes (`#` characters) as the response. This doesn't happen when CUBLAS is not used.
Set up info:
NVIDIA GeForce RTX 4080
Windows 11
accelerate==0…
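One way to isolate this (a sketch; the model path is a placeholder for whatever privateGPT loads) is to run the same prompt through llama-cpp-python with and without GPU offload — if the CPU-only output is sane but the offloaded output is garbage, the CUBLAS build or driver is the suspect:

```
# Sketch: run the same prompt with and without GPU offload to isolate the
# CUBLAS path. "model.gguf" is a placeholder model path.
from llama_cpp import Llama

for n_gpu_layers in (0, -1):  # 0 = CPU only, -1 = offload every layer
    llm = Llama(model_path="model.gguf", n_gpu_layers=n_gpu_layers)
    out = llm("Q: What is 2+2? A:", max_tokens=16)
    print(n_gpu_layers, repr(out["choices"][0]["text"]))
```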
-
**Describe the bug**
I set everything up for local use, but when trying to run `memgpt load directory --name BearbedBirb --input-dir "E:\Birbs" --recursive` I got
```
Could not load OpenAI model. If you intend…
```
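Since the full message is cut off above, this is only a guess: tools that construct an OpenAI client on startup typically fail this way when the key is missing. A trivial pre-flight check, sketched with the conventional variable name rather than anything memgpt is confirmed to read:

```
# Sketch: fail fast if the conventional OpenAI key variable is unset.
# memgpt's actual configuration may differ; this only checks the usual suspect.
import os

if not os.environ.get("OPENAI_API_KEY"):
    raise SystemExit("OPENAI_API_KEY is not set; export it or configure a local backend.")
```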
-
### How are you running AnythingLLM?
Docker (local)
### What happened?
I start the ollama server on my local Manjaro machine and then select it in AnythingLLM, but when I try to embed the document, it sim…
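One way to rule out the ollama side (a sketch; port 11434 is ollama's default and the model name is a placeholder) is to hit its embeddings endpoint directly and confirm it returns a vector:

```
# Sketch: call ollama's embeddings endpoint directly to confirm the server
# can embed at all before debugging AnythingLLM's side of the connection.
import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "hello world"},
)
resp.raise_for_status()
print(len(resp.json()["embedding"]), "dimensions")
```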