-
### What is the issue?
![screenshot-ollama](https://github.com/user-attachments/assets/bc208fb6-34b7-4ac3-a19f-b7adfacdf269)
Disclaimer: I have no discrete GPU (integrated graphics only).
### OS
Linux
### GPU
…
-
It would be nice to have this work with Intel Arc GPUs (A750, A770) using IPEX or Vulkan instead of CUDA.
Related links:
https://github.com/intel/intel-extension-for-deepspeed
https…
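As a rough illustration of what an IPEX-based path could look like, here is a minimal sketch, assuming `intel-extension-for-pytorch` is installed alongside PyTorch; the model and tensor shapes are placeholders and nothing here comes from the projects linked above.

```python
import torch

try:
    # When installed, IPEX registers the "xpu" device type with PyTorch.
    import intel_extension_for_pytorch as ipex
    has_xpu = torch.xpu.is_available()
except ImportError:
    has_xpu = False

device = torch.device("xpu" if has_xpu else "cpu")

# Placeholder model; any torch.nn.Module works for the sketch.
model = torch.nn.Linear(4096, 4096).eval().to(device)
if has_xpu:
    # ipex.optimize applies Intel-specific kernel optimizations in place of CUDA paths.
    model = ipex.optimize(model)

with torch.no_grad():
    x = torch.randn(1, 4096, device=device)
    print(device, model(x).shape)
```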
-
## Why BuckyOS?
Cloud (server-side) services are deeply intertwined with our lives today, and people can hardly get through a day without them. However, there is no operating sys…
-
### Describe the bug
## Versions
I'm using Open WebUI + the Pipelines Langfuse filter + Ollama with Llama3:latest.
Langfuse is at v2.55.1.
## Issue
I have created a new custom model price wit…
-
Hi!
This is only a draft and a summary of the papers and implementations of Mamba.
I will post my feedback here, running on an Orin AGX 64 GB.
Original paper:
(arXiv 2024.01) Vision Mamba: Efficient Visual…
-
I gave Bumblebee a try today. The idea was to use image-captioning predictions to classify an image, so that a user gets pre-filled tags they can use to easily filter their images.
It turns out that …
-
### Self Checks
- [X] I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to fi…
-
There are new APIs available for the text-generation-webui. I'd like to implement the non-blocking / streaming API so that text streams into the text field directly as the LLM generates it, but the current …
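For reference, here is a minimal sketch of consuming a streaming completion endpoint, assuming text-generation-webui's OpenAI-compatible API on its default local port; the URL, port, prompt, and the print-based UI callback are all assumptions for illustration.

```python
import json
import requests

# Assumed endpoint of text-generation-webui's OpenAI-compatible API (local default).
URL = "http://127.0.0.1:5000/v1/completions"

payload = {"prompt": "Hello,", "max_tokens": 64, "stream": True}

with requests.post(URL, json=payload, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue  # skip blank keep-alive lines between SSE events
        data = line[len(b"data: "):]
        if data == b"[DONE]":
            break
        chunk = json.loads(data)
        token = chunk["choices"][0].get("text", "")
        # Placeholder for appending the token to the text field as it arrives.
        print(token, end="", flush=True)
```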
-
**Problem Description**
After running `streamlit run webui.py`, the generated external URL uses the default local IP and port 8501, and it cannot be reached from an external browser.
**Steps to Reproduce**
I followed the three steps from the documentation:
1. Start the local FastChat service: `python server/llm_api.py`
![I…
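Purely as an assumption about one possible cause: if Streamlit only binds to the local interface, the external URL will be unreachable even when the port is right. The sketch below just wraps Streamlit's documented `--server.address` / `--server.port` options in Python for illustration; the bind address and port are assumptions, not a confirmed fix.

```python
import subprocess

# Assumption: binding Streamlit to all interfaces (0.0.0.0) instead of the
# default address is what makes the external URL reachable from outside.
subprocess.run([
    "streamlit", "run", "webui.py",
    "--server.address", "0.0.0.0",  # standard Streamlit server option
    "--server.port", "8501",
])
```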
-
### What is the issue?
When using Open WebUI, I've noticed that long-context messages sent to Ollama consistently result in poor responses. After investigating the issue, it appears that the `/api…
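For reference, here is a minimal sketch of calling Ollama's `/api/chat` endpoint with an explicit context window; assuming the poor responses come from the default context length, `num_ctx` is the per-request option that raises it. The host, model name, and value chosen are assumptions.

```python
import requests

# Assumed local Ollama instance and model; adjust to your setup.
URL = "http://localhost:11434/api/chat"

payload = {
    "model": "llama3:latest",
    "messages": [{"role": "user", "content": "Summarize this long document ..."}],
    # num_ctx sets the context window used for this request;
    # 8192 is an assumed example value, not a recommendation.
    "options": {"num_ctx": 8192},
    "stream": False,
}

resp = requests.post(URL, json=payload, timeout=600)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```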