-
Only VCDM11 and VCDM20 are provided out of the box.
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain.js documentation with the integrated search.
- [X] I used the GitHub search to fin…
-
**Describe the bug**
context_chat_backend/models/__init__.py still contains "instructor" in its embedding models. This causes a 500 error when the backend attempts to load the models.
Line 7: _embeddi…
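For illustration, a minimal sketch of the change the report points at; the list name is inferred from the truncated "Line 7" reference, and the entries other than "instructor" are placeholders rather than the file's real contents:

```python
# context_chat_backend/models/__init__.py (sketch; entries other than "instructor"
# are placeholders). Dropping "instructor" stops the loader from trying to set up
# an embedder it cannot load, which is what the report says surfaces as the 500.
_embedding_models = [
    'llama',
    'hugging_face',
    # 'instructor',  # removed: the backend can no longer load this embedder
]
```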
-
### What is the issue?
Hi, I have created a custom model using llava and also created a custom modelfile; however, after several requests or computer restarts the model loses the modelfile configu…
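As a hedged diagnostic sketch (the model name "my-llava" is a placeholder, and it assumes the `ollama` Python client is installed), you can dump what Ollama currently has stored for the model to see whether the Modelfile settings were actually dropped:

```python
# Sketch: print the configuration Ollama currently holds for the custom model.
# "my-llava" is a placeholder name; compare this output before and after a
# restart to confirm whether the Modelfile settings really disappear.
import ollama

info = ollama.show('my-llava')
print(info)
```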
-
The tooltips are managed by the trait `TToElementWithTooltip`.
When you add a tooltip, you use one of the following methods:
- tooltipString:
- tooltipText:
- tooltipContent:
All the metho…
-
### Issue
I got this warning message when running aider. I'm not sure why the model can't be matched in the latest version, since it used to work.
Warning for ollama/qwen2.5-coder: Unknown context window size and costs, usin…
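One likely fix, sketched under assumptions: aider accepts per-model metadata from a `.aider.model.metadata.json` file in litellm's format, so the unknown model can be described explicitly. The token limits and zero costs below are guesses for a local qwen2.5-coder, not values taken from the issue:

```python
# Sketch: write model metadata so aider stops warning about an unknown context
# window for ollama/qwen2.5-coder. All numeric values are assumptions.
import json
from pathlib import Path

metadata = {
    "ollama/qwen2.5-coder": {
        "max_input_tokens": 32768,    # assumed context window
        "max_output_tokens": 8192,    # assumed generation limit
        "input_cost_per_token": 0,    # local model, no per-token cost
        "output_cost_per_token": 0,
        "litellm_provider": "ollama",
        "mode": "chat",
    }
}

Path(".aider.model.metadata.json").write_text(json.dumps(metadata, indent=2))
```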
-
As of now, the GRANITE model is basic-block oriented (resp. trace-oriented), i.e. it doesn't use any information about code that was executed before or after the basic block. We believe that adding su…
-
**Is your feature request related to a problem? Please describe.**
Embedding models typically have smaller context windows than LLMs, which can limit the quality of embeddings generated for large con…
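A minimal sketch of the workaround this implies: split long content into chunks that fit the embedding window, embed each chunk, and mean-pool the vectors. The `embed_fn` callable, the 512-token budget, and the pooling choice are assumptions, not part of the request:

```python
# Sketch: fit long text into a small embedding context window by chunking.
from typing import Callable, List

def embed_long_text(
    text: str,
    embed_fn: Callable[[str], List[float]],  # assumed: maps one chunk to one vector
    max_tokens: int = 512,                    # assumed embedding context window
    overlap: int = 64,                        # assumed overlap between chunks
) -> List[float]:
    """Split `text` into overlapping word-based chunks, embed each, mean-pool."""
    # Whitespace splitting keeps the sketch dependency-free; a real version would
    # count tokens with the embedding model's own tokenizer.
    words = text.split()
    step = max_tokens - overlap
    chunks = [' '.join(words[i:i + max_tokens]) for i in range(0, len(words), step)] or ['']
    vectors = [embed_fn(chunk) for chunk in chunks]
    dim = len(vectors[0])
    return [sum(vec[d] for vec in vectors) / len(vectors) for d in range(dim)]
```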
-
An error occurred: Error code: 400 - {'object': 'error', 'message': "This model's maximum context length is 32768 tokens. However, you requested 32796 tokens in the messages, Please reduce the length …
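A hedged sketch of the usual client-side fix: count the rendered prompt tokens with the model's own tokenizer and drop the oldest turns until the request fits under the server's 32768-token limit. The tokenizer name and the 512-token completion reserve are assumptions; only the 32768 limit comes from the error:

```python
# Sketch: trim chat history so the prompt plus the expected completion stays
# under the server's maximum context length.
from transformers import AutoTokenizer

MAX_MODEL_LEN = 32768          # from the error message
COMPLETION_RESERVE = 512       # assumed room left for the reply

# The model id is an assumption; use whatever the server is actually running.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

def count_prompt_tokens(messages):
    # apply_chat_template approximates the prompt the server will build.
    return len(tokenizer.apply_chat_template(messages, tokenize=True))

def trim_messages(messages):
    messages = list(messages)
    while count_prompt_tokens(messages) > MAX_MODEL_LEN - COMPLETION_RESERVE and len(messages) > 1:
        # Keep the system prompt (if any) and drop the oldest remaining turn.
        messages.pop(1 if messages[0]["role"] == "system" else 0)
    return messages
```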
-
[model_input.txt](https://github.com/user-attachments/files/17492230/model_input.txt)
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment inform…