-
### What happened?
# Environment
* autogen 0.4
* litellm 1.53.1
* ollama version is 0.3.14
* ollama model is qwen2.5:14b-instruct-q4_K_M.
# Information
I use autogen+litellm+ollama for my lo…
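For reference, a minimal sketch of the wiring this setup implies, assuming a litellm proxy on its default port fronting the ollama model; the model name, port, api_key, and model_info values are assumptions, not the reporter's actual configuration:
```python
# Minimal sketch (assumption: litellm proxy on localhost:4000 routing to ollama).
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(
    model="ollama/qwen2.5:14b-instruct-q4_K_M",  # name as registered with litellm (assumed)
    base_url="http://localhost:4000",            # litellm proxy, assumed default port
    api_key="not-needed",                        # a local proxy typically ignores the key
    model_info={                                 # required for non-OpenAI model names
        "vision": False,
        "function_calling": True,
        "json_output": True,
        "family": "unknown",
    },
)

async def main() -> None:
    result = await client.create([UserMessage(content="Hello", source="user")])
    print(result.content)

asyncio.run(main())
```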
-
**Is your feature request related to a problem? Please describe.**
We are exploring the use of LaVague for web automation, but the limitation is its reliance on public-facing models. Can we supp…
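If LaVague can take any OpenAI-compatible client, one generic route to a local model is ollama's OpenAI-compatible endpoint. A sketch under that assumption only; the base URL, model name, and prompt are placeholders, and LaVague's own configuration hooks may differ:
```python
# Generic OpenAI-compatible call against a local ollama server (assumed setup).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # placeholder; a local server ignores it
)

response = client.chat.completions.create(
    model="mistral:7b",  # hypothetical locally pulled model
    messages=[{"role": "user", "content": "Click the login button"}],
)
print(response.choices[0].message.content)
```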
-
Command: python -m main interactive /mistral-7B-v0.1/
Error:
Prompt: Hello
Traceback (most recent call last):
File "/usr/local/anaconda3/envs/mistral/lib/python3.10/runpy.py", line 196, in _ru…
-
### System Info
TEI Image v1.4.0
AWS Sagemaker Deployment
1 x ml.g5.xlarge instance, asynchronous deployment
Link to prior discussion: https://discuss.huggingface.co/t/async-tei-deployment-c…
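For context, an asynchronous SageMaker endpoint is invoked through boto3's invoke_endpoint_async with the payload staged in S3; a minimal sketch, where the endpoint name, bucket, and JSON body are placeholders:
```python
# Minimal sketch of calling an asynchronous SageMaker endpoint (placeholder names).
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint_async(
    EndpointName="tei-async-endpoint",                # hypothetical endpoint name
    InputLocation="s3://my-bucket/tei/request.json",  # JSON body uploaded beforehand
    ContentType="application/json",
)
# The result is written to S3 asynchronously; poll OutputLocation for it.
print(response["OutputLocation"])
```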
-
When I run `python llava_llama_v2_visual_attack.py --n_iters 5000 --constrained --save_dir results_llava_llama_v2_constrained_16 --eps 16 --alpha 1`, I run into the following problems.
model = /mnt/local/LL…
-
### 🚀 The feature, motivation and pitch
Is the deepseek-v2 AWQ version supported now? When I run it, I get the following error:
```
[rank0]: File "/usr/local/lib/python3.9/dist-packages/vllm/mo…
```
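For reproduction, a minimal sketch of loading an AWQ checkpoint through vLLM's offline API; the model path is a placeholder, and deepseek-v2 additionally needs trust_remote_code:
```python
# Minimal repro sketch (placeholder model path), not a confirmation of support.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-V2-Lite",  # an AWQ checkpoint path would go here
    quantization="awq",
    trust_remote_code=True,                # deepseek-v2 uses remote code
)
outputs = llm.generate(["Hello"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```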
-
I don't understand how to set the chat_llm to ollama when there is no provision for setting utility_llm and/or embedding_llm to their local (ollama) counterparts. Yes, I assume that prompting will be a challenge…
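To make the question concrete, a hedged sketch of what local (ollama) counterparts for all three roles could look like, using langchain-community's ollama wrappers as stand-ins; the variable names only mirror the settings mentioned above and are not a real configuration API:
```python
# Stand-in sketch: three roles, all served locally by ollama (assumed model names).
from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings

chat_llm = ChatOllama(model="qwen2.5:14b-instruct")         # main conversational model
utility_llm = ChatOllama(model="llama3:8b")                 # hypothetical smaller helper model
embedding_llm = OllamaEmbeddings(model="nomic-embed-text")  # local embedding model

print(chat_llm.invoke("Hello").content)
print(len(embedding_llm.embed_query("Hello")))
```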
-
Thanks for sharing! I have the following error and would appreciate your help:
Traceback (most recent call last):
File "/root/miniconda3/envs/test/lib/python3.10/site-packages/transformers/configuration_utils.py", line 675, in _get_config_dict
resolved_…
-
transformers==4.36.2
(awq1) clrobur@clrobur-WS-C621E-SAGE-Series:~/vila/llm-awq/tinychat$ python vlm_demo_new.py --model-path VILA1.5-13b-AWQ --quant-path VILA1.5-13b-AWQ/llm/vila-1.5-13b-w4-g128-a…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…