-
### 🚀 The feature, motivation and pitch
```
warnings.warn(
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, …
```
-
**Title:** Automatically label medical data from diagnosis reports
**Project Lead:** Frank Langbein, frank@langbein.org
**Description:** We wish to automatically label medical diagnosis data (MRI,…
-
> > Specify the local folder you have the model in instead of a HF model ID. If you have all the necessary files and the model is using a supported architecture, then it will work.
> > …
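To illustrate the pattern (not from this thread): with the Hugging Face `transformers` API, pointing `from_pretrained` at a local directory instead of a hub ID loads everything from disk. A minimal sketch, where the path is hypothetical:
```python
# Hypothetical local folder: it must contain config.json, the tokenizer
# files, and the weight shards for a supported architecture.
from transformers import AutoModelForCausalLM, AutoTokenizer

local_dir = "./models/Llama-2-7b-chat-hf"  # local folder instead of an HF model ID
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModelForCausalLM.from_pretrained(local_dir)
```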
-
I'm trying to make the model generate emojis using this command:
```
./run.sh $(./autotag local_llm) python3 -m local_llm.chat --api=mlc --model=NousResearch/Llama-2-7b-chat-hf --prompt="Repeat th…
```
-
GPU: 2 Intel Arc cards
Running the following example:
[inference-ipex-llm](https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/Pipeline-Parallel-Inference)
**for Mistral and codell…
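For orientation (not from this report), the single-GPU shape of that API is sketched below; the model id is a placeholder, and the two-card pipeline-parallel run additionally needs the distributed launcher and stage settings from the linked example.
```python
# Minimal ipex-llm sketch; assumes one Intel Arc GPU and a placeholder model.
# The linked Pipeline-Parallel-Inference example adds a launcher to split
# stages across both cards.
import torch
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder model id
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,   # low-bit weights, as in the ipex-llm GPU examples
    optimize_model=True,
)
model = model.half().to("xpu")  # Intel GPU device
tokenizer = AutoTokenizer.from_pretrained(model_path)

with torch.inference_mode():
    inputs = tokenizer("What is pipeline parallelism?", return_tensors="pt").to("xpu")
    output = model.generate(inputs.input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```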
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
How do I connect to a Neptune database through llama_index from my local machine?
**Bel…
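Not from the question itself: a minimal connection sketch using the llama-index Neptune graph-store integration. The class and argument names follow the `llama-index-graph-stores-neptune` package as I understand it, and the endpoint is a placeholder; note that Neptune is only reachable from inside its VPC, so a local machine typically needs an SSH tunnel or VPN to the cluster endpoint.
```python
# Sketch; assumes llama-index-graph-stores-neptune is installed and the
# Neptune endpoint is reachable from this machine (e.g. via an SSH tunnel).
from llama_index.core import StorageContext
from llama_index.graph_stores.neptune import NeptuneDatabaseGraphStore

graph_store = NeptuneDatabaseGraphStore(
    host="my-cluster.cluster-xxxxxxxx.us-east-1.neptune.amazonaws.com",  # placeholder
    port=8182,  # Neptune default
)
storage_context = StorageContext.from_defaults(graph_store=graph_store)
```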
-
### Do you need to file an issue?
- [ ] I have searched the existing issues and this bug is not already filed.
- [ ] My model is hosted on OpenAI or Azure. If not, please look at the "model providers…
-
Tool calls are not generated from the response of the Llama 3.1 model served by LM Studio when using the LangChain framework through ChatOpenAI. The same tool call works fine with Ollama for the same …
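A minimal repro sketch (not from the report), assuming LM Studio's default OpenAI-compatible server on port 1234; the model id and tool are hypothetical:
```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local server
    api_key="lm-studio",                  # LM Studio ignores the key, but the client requires one
    model="llama-3.1-8b-instruct",        # whatever id LM Studio reports for the loaded model
)
llm_with_tools = llm.bind_tools([get_weather])
resp = llm_with_tools.invoke("What's the weather in Paris?")
print(resp.tool_calls)  # an empty list here reproduces the symptom described above
```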
-
For #4 (Milestone: 1)
Contribute DevOps Roadmap data in the format of [frontend.json](https://github.com/Open-Source-Chandigarh/sadakAI/blob/main/finetune_data/frontend_data.json); the file should be…
-
### What happened?
# Environment
* autogen 0.4
* litellm 1.53.1
* ollama version is 0.3.14
* ollama model is qwen2.5:14b-instruct-q4_K_M.
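Not from the issue: a minimal sketch of calling the model above through litellm's Python SDK, assuming Ollama's default port. In the full stack, autogen talks to an OpenAI-compatible endpoint that litellm exposes for the same model.
```python
# Sketch; assumes Ollama is serving on its default port with the model pulled.
from litellm import completion

response = completion(
    model="ollama/qwen2.5:14b-instruct-q4_K_M",  # the ollama/ prefix routes to Ollama
    api_base="http://localhost:11434",           # Ollama's default server address
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```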
# Information
I use autogen+litellm+ollama for my lo…