-
### Describe the bug
It seems the code is not compatible with the llama response format.
### Steps to reproduce
Error logs
```zsh
2024-11-18 12:26:45.377 | DEBUG | ai_hawk.llm.llm_manager:parse_llmresult:38…
-
I would like to use the same `templates` path on both macOS and Linux. Is this possible? It appears that the paths differ:
macOS - `$HOME/Library/Application Support/io.datasette.llm/temp…
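One way to get a single `templates` path on both platforms (a sketch, assuming the macOS directory shown above and a conventional `~/.config` location on Linux; the `llm` tool's docs also describe an `LLM_USER_PATH` environment-variable override worth checking first) is to symlink the macOS config directory to a shared location:

```python
# Sketch: back the macOS llm config directory with a shared, Linux-style
# location so one `templates` path works everywhere.
# Directory names are assumptions based on the paths quoted above.
import platform
from pathlib import Path

shared = Path.home() / ".config" / "io.datasette.llm"  # Linux-style location
shared.mkdir(parents=True, exist_ok=True)

if platform.system() == "Darwin":
    mac_dir = Path.home() / "Library" / "Application Support" / "io.datasette.llm"
    if not mac_dir.exists():
        mac_dir.parent.mkdir(parents=True, exist_ok=True)
        # Point the macOS path at the shared directory.
        mac_dir.symlink_to(shared, target_is_directory=True)
```

After this, `$HOME/.config/io.datasette.llm/templates` resolves to the same files on both systems.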
-
### Description
I am a .NET MAUI developer and I am interested in embedding an LLM inside my application with LLamaSharp. After building llama.cpp with the NDK, how can I embed the libraries in my application?
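One hedged approach for the Android target of a MAUI app is to declare the NDK-built `libllama.so` files as per-ABI native libraries in the project file, so they are packaged into the APK. The folder layout below is a hypothetical placeholder; adjust it to wherever your llama.cpp build outputs land:

```xml
<!-- Sketch: bundle NDK-built llama.cpp shared libraries per ABI.
     Paths are hypothetical; adjust to your build output locations. -->
<ItemGroup Condition="$(TargetFramework.Contains('-android'))">
  <AndroidNativeLibrary Include="NativeLibs\arm64-v8a\libllama.so" Abi="arm64-v8a" />
  <AndroidNativeLibrary Include="NativeLibs\x86_64\libllama.so" Abi="x86_64" />
</ItemGroup>
```

With the libraries packaged this way, the Android loader can find them by name at runtime; whether LLamaSharp picks them up automatically or needs its native-library path configured is worth confirming in its documentation.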
-
WARNING:[auto-llm]:[][AutoLLM][getReq][llm_text_ur_prompt]A superstar Flirting on stage.
WARNING:[auto-llm]:[][AutoLLM][getReq][Header]{'Content-Type': 'application/json', 'Authorization': 'Bearer lm…
-
### Question Validation
- [X] I have searched both the documentation and Discord for an answer.
### Question
After doing some analysis using the LlamaDebugHandler, I noticed that all the vectors + …
-
**Describe the bug**
I have a remote LLM that is our internal proxy for Azure OpenAI and Google Gemini. I have configured it properly, as it does occasionally work. However, I often get an error pop u…
-
Hello,
Thank you for your interesting project.
Can I use OnnxStream with the Llama 2 7B fp16 model?
-
### Software
Desktop Application
### Operating System / Platform
Linux
### Your Pieces OS Version
3.1.6
### Early Access Program
- [ ] Yes, this is related to an Early Access Program feature.
…
-
I am trying out the LightRAG option of using Neo4j as storage, as stated in the README:
[2024.11.04]🎯📢You can [use Neo4J for Storage](https://github.com/HKUDS/LightRAG?tab=readme-ov-file#u…
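For reference, a minimal sketch of wiring this up, assuming (per the linked README section) that Neo4j connection details are read from environment variables and that the constructor accepts a `graph_storage` name; the URI and credentials below are hypothetical placeholders:

```python
import os

# Hypothetical connection settings -- replace with your own Neo4j instance.
os.environ["NEO4J_URI"] = "neo4j://localhost:7687"
os.environ["NEO4J_USERNAME"] = "neo4j"
os.environ["NEO4J_PASSWORD"] = "password"

# Assumption based on the README: selecting "Neo4JStorage" switches the
# knowledge-graph backend from the default file storage to Neo4j.
# from lightrag import LightRAG
# rag = LightRAG(working_dir="./rag_storage", graph_storage="Neo4JStorage")
```

The storage-class name and environment-variable names should be double-checked against the README section linked above before use.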
-
- [ ] [Introduction to AI Agents - Cerebras Inference](https://inference-docs.cerebras.ai/agentbootcamp-section-1)
# Introduction to AI Agents - Cerebras Inference
## Overview
Cerebras Inference ho…