-
As an extra paid feature, I'd love to see automatic LLM-generated summaries, [like apricot](https://blog.theapricot.io/posts/extending-summarization/)
This would be a big time saver, allowing me to…
-
**Please describe the feature you want**
Use an LLM for basic code review before pushing to the remote git server.
**Additional context**
Use a different model, or the same one as for code completion, using conf…
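A minimal sketch of how such a pre-push review could work, assuming a local Ollama server at its default endpoint; the model name, prompt wording, and helper names are illustrative, not part of this request:

```python
import json
import urllib.request

# Ollama's default local endpoint (assumption: a local Ollama server is running)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_review_prompt(diff: str) -> str:
    """Wrap a git diff in a short code-review instruction."""
    return (
        "Review the following git diff before it is pushed. "
        "List potential bugs, style issues, and risky changes.\n\n"
        + diff
    )

def review_diff(diff: str, model: str = "llama3") -> str:
    """Send the diff to the local Ollama server and return the review text."""
    payload = json.dumps({
        "model": model,
        "prompt": build_review_prompt(diff),
        "stream": False,  # ask for a single JSON response instead of a stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

A git `pre-push` hook could then pipe `git diff origin/main...HEAD` into `review_diff` and abort the push when the review flags serious problems.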
-
Please add examples using local open-source models, such as Llama or ChatGLM. Thanks
-
- [ ] [HuggingChat](https://huggingface.co/chat/privacy)
# HuggingChat
**Description:** HuggingChat …
-
@SpicyHabanero and I would like to request the development of a "Generic Generative Search" module that users can customize according to their specific needs. This would allow the developers to tailor…
-
Hi, I am trying to use `TestsetGenerator` to produce a synthetic dataset, paired with `LlamaIndex` and `Ollama`. It successfully completes the embedding process, but before starting the generation proce…
-
Thread summary: pass the text of a thread and summarise its current status
- Press summarise button
- Sends content to Stakwork
- Summarises text
- Returns it back to the app
- Displays to user (sho…
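The flow above could be sketched as follows; Stakwork's actual API is not specified here, so the transport is injected as a callable and the function names are hypothetical:

```python
from typing import Callable

def summarize_thread(thread_text: str, send: Callable[[str], str]) -> str:
    """Sketch of the flow above: send the thread text to a summarisation
    backend (e.g. Stakwork) and return the summary for display.

    `send` is the injected transport; a real implementation would POST to
    the backend's API (endpoint and auth are not specified in this request).
    """
    if not thread_text.strip():
        return "Nothing to summarise."
    summary = send(thread_text)  # backend summarises the text
    return summary.strip()       # returned back to the app for display
```

Injecting `send` keeps the flow testable without the real backend, e.g. `summarize_thread(text, lambda t: stub_summary)`.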
-
- Is there any recommended way to use Firefox Labs, such as with an open-source, privacy-respecting model that you are aware of?
- Or would using it disrupt my current configuration, regardless of ho…
-
## Motivation
WasmEdge is a lightweight inference runtime for AI and LLM applications. The [LlamaEdge project](https://github.com/LlamaEdge) has developed an [OpenAI-compatible API server](https://gi…
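Since the server is OpenAI-compatible, a client could talk to it over the standard chat-completions route; a stdlib-only sketch, where the host, port, and model name are assumptions for illustration:

```python
import json
import urllib.request

def chat_request(prompt: str, model: str,
                 base_url: str = "http://localhost:8080") -> dict:
    """Build an OpenAI-style chat-completions request for a local
    OpenAI-compatible server (base URL and model are illustrative)."""
    return {
        "url": f"{base_url}/v1/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

def send_chat(prompt: str, model: str) -> str:
    """POST the request and return the assistant's reply text."""
    info = chat_request(prompt, model)
    req = urllib.request.Request(
        info["url"],
        data=json.dumps(info["body"]).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # OpenAI-compatible servers return choices[].message.content
        return json.load(resp)["choices"][0]["message"]["content"]
```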
-
Feature Request: Enhance Project with Support for Additional Large Language Models (LLMs), Including Local AI Assistants
I've been utilizing your project, and it's truly impressive! I wanted to p…