-
**Is your feature request related to a problem? Please describe.**
We are exploring the use of LaVague for web automation, but the limitation is its reliance on public-facing models. Can we supp…
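Since the request is truncated, here is a minimal sketch of the kind of local-model wiring presumably being asked for, assuming LaVague can consume a llama-index-compatible LLM; the `ActionEngine(llm=...)` hand-off is an assumption, not confirmed LaVague API:
```
# Hypothetical sketch: pointing LaVague-style automation at a local model
# served by Ollama instead of a public API. The Ollama class is real
# llama-index API; the LaVague hand-off below is an assumption.
from llama_index.llms.ollama import Ollama

local_llm = Ollama(model="mistral", request_timeout=120.0)  # runs fully locally

# Assumption: LaVague accepts a custom llama-index LLM when building its
# action engine, e.g. ActionEngine(llm=local_llm); check the LaVague docs.
```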
-
GPU: 2 x Intel Arc cards
Running the following example:
[inference-ipex-llm](https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/Pipeline-Parallel-Inference)
for mistral and codell…
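For reference, a condensed sketch of what the linked example does, splitting one model into two stages across the two Arc cards; argument names follow my reading of that example and may differ between ipex-llm versions:
```
# Condensed from the linked Pipeline-Parallel-Inference example (as I read it);
# verify names against your ipex-llm version. Launched with something like:
#   torchrun --standalone --nnodes=1 --nproc-per-node 2 generate.py
from ipex_llm.transformers import AutoModelForCausalLM, init_pipeline_parallel

init_pipeline_parallel()  # set up the 2-process distributed group

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",  # illustrative model id
    load_in_4bit=True,
    optimize_model=True,
    use_cache=True,
    pipeline_parallel_stages=2,            # one stage per Arc card
)
model = model.half().to("xpu")             # each rank holds only its own stage
```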
-
I'm trying to make the model generate emojis using this command:
```
./run.sh $(./autotag local_llm) python3 -m local_llm.chat --api=mlc --model=NousResearch/Llama-2-7b-chat-hf --prompt="Repeat th…
```
-
For #4 (Milestone: 1)
Contribute DevOps Roadmap data in the format of [frontend.json](https://github.com/Open-Source-Chandigarh/sadakAI/blob/main/finetune_data/frontend_data.json); the file should be…
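(Hypothetical illustration only: the field names below are invented, and the real schema is whatever frontend_data.json defines.)
```
# Hypothetical illustration of the usual shape of such fine-tune files;
# defer to frontend_data.json for the actual schema and file name.
import json

records = [
    {
        "question": "What is a CI/CD pipeline?",
        "answer": "A CI/CD pipeline automates building, testing, and deploying code.",
    },
]
with open("finetune_data/devops_data.json", "w") as f:  # hypothetical path
    json.dump(records, f, indent=2)
```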
-
I only modified t6 instead of t4; t4 and t5 both work well for this model, but setting thread=6 always triggers the problem on my Xiaomi 14 Pro (SM8650, Snapdragon 8 Gen 3).
Please check and resolve.
Thanks!
…
-
### System Info
TEI image v1.4.0
AWS SageMaker deployment
1 x ml.g5.xlarge instance, asynchronous deployment
Link to prior discussion: https://discuss.huggingface.co/t/async-tei-deployment-c…
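Since the prior discussion is truncated, a minimal sketch of how such an async TEI endpoint is typically stood up with the SageMaker Python SDK; the image URI, role, bucket, and model id below are placeholders, not taken from this issue:
```
# Placeholder-heavy sketch of an asynchronous SageMaker deployment of TEI.
from sagemaker.model import Model
from sagemaker.async_inference import AsyncInferenceConfig

model = Model(
    image_uri="<tei-v1.4.0-image-uri>",       # placeholder
    env={"MODEL_ID": "BAAI/bge-base-en-v1.5"},  # placeholder; check the TEI container's env vars
    role="<execution-role-arn>",              # placeholder
)
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",
    async_inference_config=AsyncInferenceConfig(
        output_path="s3://<bucket>/tei-async-output/",  # placeholder
    ),
)
```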
-
Thanks for sharing! I'm running into the following error; please help:
Traceback (most recent call last):
  File "/root/miniconda3/envs/test/lib/python3.10/site-packages/transformers/configuration_utils.py", line 675, in _get_config_dict
    resolved_…
-
Computing no_gt retrieval metrics requires a large amount of LLM processing, so use a local LLM model to compute them.
+ ragas context precision also needs a lot of LLM calls, so try tonic validate instead.
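For scale, a minimal sketch of the ragas call in question; context precision issues roughly one judge-LLM call per retrieved context per question, which is where the cost comes from (the dataset below is illustrative):
```
# Illustrative only: shows why ragas context precision is LLM-hungry.
# Every context in every row triggers a judge-LLM call (the judge is
# configured separately, e.g. an OpenAI key or a local model wrapper).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import context_precision

ds = Dataset.from_dict({
    "question": ["What does no_gt mean?"],
    "contexts": [["chunk about no-ground-truth metrics", "unrelated chunk"]],
    "ground_truth": ["Metrics computed without gold answers."],
})
print(evaluate(ds, metrics=[context_precision]))  # 2 judge calls for this one row
```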
-
I don't understand how to set chat_llm to Ollama when there is no provision for setting utility_llm and/or embedding_llm to their local (Ollama) counterparts. Yes, I assume that prompting will be a challenge…
-
### Describe the issue
Ask what version of pyautogen will support 'register_for_llm' later, because I'm using the local model chatGLM, which needs an OpenAI-compatible API. The code in question includes:
if base_currency == quote_currency:
…
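The `base_currency` fragment looks like the currency-calculator example from the pyautogen tool-use tutorial; a sketch of that registration pattern, reconstructed from those docs rather than from this issue:
```
# Sketch of pyautogen tool registration, modeled on the currency-calculator
# tutorial; agent setup is elided, so the decorators are shown commented out.
from typing import Annotated, Literal

CurrencySymbol = Literal["USD", "EUR"]

def exchange_rate(base_currency: CurrencySymbol, quote_currency: CurrencySymbol) -> float:
    if base_currency == quote_currency:
        return 1.0
    if base_currency == "USD" and quote_currency == "EUR":
        return 1 / 1.1
    if base_currency == "EUR" and quote_currency == "USD":
        return 1.1
    raise ValueError(f"Unknown currencies {base_currency}, {quote_currency}")

# register_for_llm advertises the JSON schema to the model (this is the part
# that needs an OpenAI-style tool-calling API, hence the chatGLM question);
# register_for_execution lets the user proxy actually run the function.
# @user_proxy.register_for_execution()
# @chatbot.register_for_llm(description="Currency exchange calculator.")
def currency_calculator(
    base_amount: Annotated[float, "Amount of currency in base_currency"],
    base_currency: Annotated[CurrencySymbol, "Base currency"] = "USD",
    quote_currency: Annotated[CurrencySymbol, "Quote currency"] = "EUR",
) -> str:
    return f"{exchange_rate(base_currency, quote_currency) * base_amount} {quote_currency}"
```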