-
### 📦 Environment
- [ ] Official
- [ ] Official Preview
- [X] Vercel / Zeabur / Sealos
- [X] Docker
- [ ] Other
### 📌 Version
v0.162.0
### 💻 Operating System
- [X] Windows
- [ ] macOS
- [X] Ubunt…
-
### URL
https://python.langchain.com/v0.2/docs/how_to/extraction_examples/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am…
-
Current candidate
Mistral
https://mistral.ai/
RAG - LlamaIndex
https://github.com/mistralai/cookbook/blob/main/llamaindex_agentic_rag.ipynb
Can be tested out locally and seems to be a good ch…
-
Hello!
Mistral has recently released Codestral, which supports [fill-in-the-middle](https://docs.mistral.ai/capabilities/code_generation/#fill-in-the-middle-endpoint). In `gptel`, this could be imp…
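The request shape for that endpoint can be sketched as follows. This is a minimal illustration of the JSON body (not how `gptel` would implement it); the field names follow the linked fill-in-the-middle docs, and the model name and example strings are assumptions.

```python
import json

# Sketch of a request body for Mistral's fill-in-the-middle endpoint
# (POST https://api.mistral.ai/v1/fim/completions): the model completes
# the gap between `prompt` (the prefix) and `suffix`.
def build_fim_payload(prefix, suffix, model="codestral-latest", max_tokens=64):
    return {
        "model": model,       # Codestral is the FIM-capable model
        "prompt": prefix,     # code before the cursor
        "suffix": suffix,     # code after the cursor
        "max_tokens": max_tokens,
    }

payload = build_fim_payload("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
print(json.dumps(payload, indent=2))
```

In an editor integration, `prefix`/`suffix` would come from the buffer text before and after point.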
-
Add Mistral models. Instead of only being able to use OpenAI models, allow people to choose other models?
-
Got error "Error Building Component
Error building vertex Hugging Face API: Failed to resolve model_id:Could not find model id for inference server: https://api-inference.huggingface.co/models/mi…
-
With manual completion (C-x), it just spams "Completion started" until I have to close nvim.
my config:
```lua
{
'maxwell-bland/cmp-ai',
config = function()
local cmp_ai = require('cmp_a…
-
It would be nice to be able to connect to the Mistral AI API to use their servers.
The settings would look like:
Url: https://api.mistral.ai/v1
Private key: xxxxxxxxxxx
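Since Mistral's API is OpenAI-compatible, those two settings map directly onto the usual chat-completions call. A hypothetical sketch (function and model names are illustrative, not taken from any particular client):

```python
# Build the HTTP request that the Url + Private key settings above
# would translate into. No network call is made here.
def build_chat_request(base_url, api_key, model, messages):
    return {
        "url": f"{base_url.rstrip('/')}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",  # the private key
            "Content-Type": "application/json",
        },
        "body": {"model": model, "messages": messages},
    }

req = build_chat_request(
    "https://api.mistral.ai/v1",
    "xxxxxxxxxxx",
    "mistral-small-latest",
    [{"role": "user", "content": "Hello"}],
)
print(req["url"])
```

A client that already speaks the OpenAI wire format should only need the base URL and key swapped.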
-
I want to use 4-bit quantized mistral model in huggingface with semantic kernel so that I can run it on google colab free tier. But I am not able to find a way to pass this configuration while creatin…
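One common workaround (an assumption, not verified against Semantic Kernel's connectors) is to quantize at load time with `transformers` + `bitsandbytes`, then hand the already-loaded model to the framework. A configuration sketch; the model id is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed model id

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype for matmuls
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,  # 4-bit config is passed here
    device_map="auto",                 # fit layers onto the Colab GPU
)
```

The quantization happens entirely at the `from_pretrained` step, so whether Semantic Kernel accepts a pre-loaded model object is the open question here.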
-
Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization
https://arxiv.org/abs/2405.15071
https://public.flourish.studio/visualisation/18055935/
https://mist…