-
Hi,
I am unable to import `LlamaCpp` in IPEX-LLM.
Code: `from ipex_llm.langchain.llms import LlamaCpp`
Error:
Cell In[5], line 1
----> 1 …
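Since the traceback above is cut off, one way to surface the real cause is to catch the import and print the full exception chain; a minimal sketch, assuming only that `ipex_llm` is installed:

```python
import traceback

try:
    from ipex_llm.langchain.llms import LlamaCpp  # the failing import from the report
except Exception:
    # The root cause (e.g. a missing native dependency) shows up at the
    # bottom of the printed chain.
    traceback.print_exc()
```

Running this in a plain script rather than a notebook cell also avoids the truncated notebook-style traceback.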
-
### Bug Description
I did the following:
!pip install llama-index
!pip install llama-index-llms-sambanova
### Version
latest
### Steps to Reproduce
Just followed the LLM example for SambaNova.
### …
-
## 🐛 Bug
When running Phi-3.5-mini-instruct, Mistral-Nemo-Base-2407, and Qwen2.5-7B-Instruct with NeMo + ThunderFX and constant folding enabled, we get this error:
> File ".1546", line 7, in forwar…
-
Environment:
python 3.9.20
datasets 3.0.1
langchain 0.3.3
langchain-community 0.3.2
langchain-core 0.3.10
langchain-openai 0.2.2
la…
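For reproducibility, the versions above can be pinned in a requirements file (package names and versions copied verbatim from the listing; the truncated final entry is omitted):

```
# requirements.txt -- versions taken from the environment listing above
# (Python itself is 3.9.20; pin it via pyenv/conda, not pip)
datasets==3.0.1
langchain==0.3.3
langchain-community==0.3.2
langchain-core==0.3.10
langchain-openai==0.2.2
```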
-
### Summary
Enable CANN support for WASI-NN ggml plugin.
### Details
Adding CANN support to the WASI-NN ggml plugin is relatively straightforward. The main changes involve adding the following code…
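Since the change described is a build-system addition, a hedged sketch of what enabling it might look like, assuming WasmEdge's existing WASI-NN ggml build path and llama.cpp's CANN CMake option; the forwarding of `GGML_CANN` here is an assumption, not the actual patch:

```shell
# Hypothetical build invocation; the CANN flag forwarding is an assumption
# based on WasmEdge's WASI-NN ggml backend and llama.cpp's CANN support.
cmake -Bbuild -GNinja \
  -DWASMEDGE_PLUGIN_WASI_NN_BACKEND="GGML" \
  -DGGML_CANN=ON   # would be passed through to the bundled llama.cpp
cmake --build build
```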
-
**Title:** Automatically label medical data from diagnosis reports
**Project Lead:** Frank Langbein, frank@langbein.org
**Description:** We wish to automatically label medical diagnosis data (MRI,…
-
I'm trying to make the model generate emojis using this command:
```
./run.sh $(./autotag local_llm) python3 -m local_llm.chat --api=mlc --model=NousResearch/Llama-2-7b-chat-hf --prompt="Repeat th…
```
-
### 🚀 The feature, motivation and pitch
```
warnings.warn(
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, …
```
-
### Version
Command-line (Python) version
### Operating System
Linux (other)
### What happened?
When I try to run a project again to add a new feature, I get a gpt-pilot crash.
```
[Tech Lea…
```
-
GPU: 2 Arc cards
Running the following example:
[inference-ipex-llm](https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/Pipeline-Parallel-Inference)
**for mistral and codell…