-
GPU: 2 Arc cards
Running the following example:
[inference-ipex-llm](https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/Pipeline-Parallel-Inference)
**for mistral and codell…
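For context, the linked example shards one model across the two cards. Below is a rough sketch of its core setup, assuming ipex-llm's transformers-style API; `init_pipeline_parallel` and the `pipeline_parallel_stages` keyword are recalled from that example and may differ by version, and the example is launched with one process per GPU (e.g. via torchrun):

```
import torch
from ipex_llm.transformers import AutoModelForCausalLM, init_pipeline_parallel
from transformers import AutoTokenizer

init_pipeline_parallel()  # set up the per-GPU process group

model_path = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder model
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,            # low-bit weights to fit on Arc
    optimize_model=True,
    trust_remote_code=True,
    use_cache=True,
    torch_dtype=torch.float16,
    pipeline_parallel_stages=2,   # one stage per Arc card
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

input_ids = tokenizer("What is AI?", return_tensors="pt").input_ids.to("xpu:0")
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```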
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I don't use Hugging Face because of a proxy issue. So, I tried to make a local embeddin…
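One way to avoid the proxy entirely is to point the embedding model at a locally downloaded copy of the weights. A minimal sketch, assuming LlamaIndex with the llama-index-embeddings-huggingface package and a model directory already copied onto the machine (the path is a placeholder):

```
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Point at a local directory containing the model files (config.json,
# tokenizer, weights) so nothing is fetched through the proxy.
embed_model = HuggingFaceEmbedding(model_name="/models/bge-small-en-v1.5")

# Use it as the default embedding model for all indexes.
Settings.embed_model = embed_model

print(len(embed_model.get_text_embedding("hello world")))
```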
-
### System Info
python=3.11.7
### 🐛 Describe the bug
```
import pandas as pd
from pandasai import SmartDataframe
from langchain_community.llms import Ollama

# Sample DataFrame
data = {
    'Mont…
```
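For reference, a complete minimal version of this setup might look as follows. This is a sketch, assuming pandasai's `LangchainLLM` wrapper (its import path and signature may vary by pandasai version); the DataFrame contents and model name are placeholders, since the original snippet is truncated:

```
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm.langchain import LangchainLLM
from langchain_community.llms import Ollama

# Placeholder data standing in for the truncated sample.
data = {
    "Month": ["Jan", "Feb", "Mar"],
    "Sales": [100, 150, 130],
}
df = pd.DataFrame(data)

# Wrap the LangChain Ollama LLM so pandasai can drive it.
llm = LangchainLLM(langchain_llm=Ollama(model="llama3"))
sdf = SmartDataframe(df, config={"llm": llm})

print(sdf.chat("Which month had the highest sales?"))
```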
-
The DefaultOpenAiClient in langchain4j omits the `stream` parameter in chat completion requests, even when explicitly set to `false`. This causes compatibility issues with LLM providers that interpret…
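To illustrate the compatibility point: strict OpenAI-compatible servers decide between SSE streaming and a single JSON response based on the `stream` field, so a client wanting a non-streamed reply should send it explicitly. A minimal sketch of the expected request shape in Python (the endpoint and model name are placeholders):

```
import requests

# Explicitly include "stream": False so an OpenAI-compatible server
# returns one JSON body instead of an SSE stream or a guessed default.
payload = {
    "model": "my-model",  # placeholder
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
}
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # placeholder endpoint
    json=payload,
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```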
-
I'm trying to make the model generate emojis using this command:
```
./run.sh $(./autotag local_llm) python3 -m local_llm.chat --api=mlc --model=NousResearch/Llama-2-7b-chat-hf --prompt="Repeat th…
```
-
### Describe the bug
The function `__post_carryover_processing(chat_info: Dict[str, Any])` in chat.py in the agentchat folder throws the above exception when running Google Gemini.
The cause of the problem w…
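The failure mode is consistent with joining a carryover list that contains non-string items (Gemini responses can carry structured content). A hypothetical defensive pattern, not autogen's actual fix:

```
from typing import Any, Dict

def render_carryover(chat_info: Dict[str, Any]) -> str:
    """Join carryover entries, tolerating strings, lists, and other types."""
    carryover = chat_info.get("carryover", [])
    if isinstance(carryover, str):
        carryover = [carryover]
    # Cast each item to str so dicts or message objects don't break join().
    return "\n".join(str(item) for item in carryover)

print(render_carryover({"carryover": ["summary", {"text": "structured"}]}))
```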
-
Hey,
thanks for providing the torchtune framework.
I have an issue with a timeout when saving a checkpoint for Llama 3.1 70B LoRA on multiple GPUs.
I am tuning on an AWS EC2 instance with 8x V100 GPUs…
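If the hang is the distributed process group timing out while rank 0 serializes the 70B checkpoint, one general PyTorch-level knob is the process-group timeout. This is a sketch of the generic torch.distributed setting, not a torchtune-specific flag; torchtune initializes the process group itself, so where to apply it depends on your version:

```
from datetime import timedelta
import torch.distributed as dist

# Raise the collective timeout so the other ranks keep waiting while
# rank 0 writes the large checkpoint, instead of aborting with a NCCL
# timeout partway through the save.
dist.init_process_group(backend="nccl", timeout=timedelta(hours=2))
```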
-
I saw this error:
```
value is not a valid list (type=type_error.list))
Evaluating: 33%|█████████████████████████████▎ | 1/3 [01:37 0
v…
```
-
no_gt retrieval metrics need a large amount of LLM processing.
So, use a local LLM model to compute them.
+ ragas context precision needs a lot of LLM calls. So, try to use tonic validate instead.
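A minimal sketch of wiring a local model into a ragas evaluation, assuming ragas accepts LangChain LLM and embedding objects directly (per ragas ~0.1; the kwargs may have moved since), with Ollama-served models and a tiny placeholder dataset:

```
from datasets import Dataset
from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings
from ragas import evaluate
from ragas.metrics import context_precision

# Tiny placeholder dataset in ragas' expected column layout.
ds = Dataset.from_dict({
    "question": ["What is the capital of France?"],
    "contexts": [["Paris is the capital of France."]],
    "answer": ["Paris"],
    "ground_truth": ["Paris"],
})

# Local models instead of OpenAI, so the many context-precision
# calls stay on this machine.
result = evaluate(
    ds,
    metrics=[context_precision],
    llm=ChatOllama(model="llama3"),
    embeddings=OllamaEmbeddings(model="nomic-embed-text"),
)
print(result)
```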
-
**Title:** Automatically label medical data from diagnosis reports
**Project Lead:** Frank Langbein, frank@langbein.org
**Description:** We wish to automatically label medical diagnosis data (MRI,…