-
There is `llm.output_messages` for LLM calls. There is `embedding.vector` for embedding model calls. There is `retrieval.documents` for retrieval results. But there is no `tool_call.output` for tool c…
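As a rough sketch of what adopting such an attribute could look like with the OpenTelemetry Python API (`tool_call.output` here is the convention this issue asks for, not an existing one, and `run_tool` is a stand-in):

```python
import json

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def run_tool(city: str) -> dict:
    """Stand-in for a real tool invocation."""
    return {"city": city, "temperature_c": 21}

with tracer.start_as_current_span("tool_call") as span:
    # `tool_call.output` is the attribute this issue proposes;
    # it does not exist in the conventions today.
    result = run_tool("Berlin")
    span.set_attribute("tool_call.output", json.dumps(result))
```

Since span attribute values must be primitive types, structured tool output would need to be serialized first, e.g. via `json.dumps`.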
-
Reinstalling did not resolve it; the error is as follows:
Error: ENOENT: no such file or directory, open '/Users/fredchan/.config/enconvo/extension/llm/chat_enconvo_ai.js'
-
I noticed the recent integration of LLMs, local or remote, including RAG. This is a great feature.
The most requested feature nowadays is a real-time recommendation system based on user events, li…
-
### Issue
Hi, does prompt caching still work when passing an Anthropic model through OpenRouter, e.g. `--model openrouter/anthropic/claude-3.5-sonnet --cache-prompts`? When I've tested that previousl…
-
### What is the issue?
After ollama's upgrade to 0.2.7 from 0.2.0, it runs gemma 2 9b at very low speed. I don't think the OS is out of VRAM, since gemma 2 only takes 6.8 GB of VRAM (q4_0) while my lapto…
-
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
##…
-
Can you add OpenRouter support?
-
## Problems
- most (if not all) existing comparisons are purely quantitative (perplexity scores)
## Objectives
- qualitative comparison (prompt inputs & outputs); see the sketch after the list below
## See Also
- https://arxiv.org/…
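A minimal sketch of what such a qualitative comparison could look like, assuming llama-cpp-python and two GGUF quantizations of the same base model (the file paths and prompts are placeholders):

```python
from llama_cpp import Llama

# Hypothetical paths: the same base model at two quantization levels.
MODELS = {
    "q4_0": "models/llama-3-8b.Q4_0.gguf",
    "q8_0": "models/llama-3-8b.Q8_0.gguf",
}

PROMPTS = [
    "Explain the difference between a process and a thread in two sentences.",
    "Write a haiku about garbage collection.",
]

# Greedy decoding (temperature=0) with a fixed seed, so differences in
# the outputs come from the quantization, not from sampling noise.
for quant, path in MODELS.items():
    llm = Llama(model_path=path, n_ctx=2048, seed=0, verbose=False)
    for prompt in PROMPTS:
        out = llm(prompt, max_tokens=128, temperature=0.0)
        print(f"[{quant}] {prompt}\n{out['choices'][0]['text'].strip()}\n")
```

Printing the raw completions side by side is exactly the prompt-input/output comparison the objective describes; perplexity never enters the picture.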
-
### Validations
- [X] I believe this is a way to improve. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](https://githu…