-
## ❓ Questions and Help
Can we customize the base_url for OpenAI-compatible LLM models instead of using OpenAI models?
I didn't find this setting in the .env example file. I'd appreciate it if this could be supporte…
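For context, the usual pattern with the official `openai` Python client (v1+) is to override `base_url` when constructing the client; a minimal sketch, assuming a hypothetical local OpenAI-compatible server (the URL, API key, and model name are placeholders, not settings from this project):

```python
from openai import OpenAI

# Hypothetical local OpenAI-compatible endpoint (vLLM, Ollama, LM Studio, ...);
# replace the URL and model name with your server's actual values.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder endpoint
    api_key="not-needed-locally",         # many local servers ignore the key
)

resp = client.chat.completions.create(
    model="my-local-model",  # placeholder model name served by your endpoint
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```

A `.env`-driven variant would read these from `OPENAI_BASE_URL`/`OPENAI_API_KEY`, which the client also picks up from the environment; that is roughly what the requested setting would map to.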
-
**Title:** Automatically label medical data from diagnosis reports
**Project Lead:** Frank Langbein, frank@langbein.org
**Description:** We wish to automatically label medical diagnosis data (MRI,…
-
environment:
python 3.9.20
datasets 3.0.1
langchain 0.3.3
langchain-community 0.3.2
langchain-core 0.3.10
langchain-openai 0.2.2
la…
-
### Summary
Enable CANN support for the WASI-NN ggml plugin.
### Details
Adding CANN support to the WASI-NN ggml plugin is relatively straightforward. The main changes involve adding the following code…
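As a rough illustration only (the actual diff is elided above): llama.cpp's ggml already exposes a `GGML_CANN` CMake switch, so one plausible path is forwarding it when configuring the plugin build. A sketch, assuming the WasmEdge build forwards ggml's CMake options unchanged and the Ascend CANN toolkit is installed:

```bash
# WASMEDGE_PLUGIN_WASI_NN_BACKEND is WasmEdge's real plugin selector;
# GGML_CANN is llama.cpp's real CANN switch. That the plugin build accepts
# it like this is an assumption, not the actual patch from this issue.
cmake -GNinja -Bbuild -DCMAKE_BUILD_TYPE=Release \
      -DWASMEDGE_PLUGIN_WASI_NN_BACKEND="GGML" \
      -DGGML_CANN=ON .
cmake --build build
```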
-
### 🚀 The feature, motivation and pitch
```
warnings.warn(
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, …
-
GPU: 2 Arc cards
Running the following example:
[inference-ipex-llm](https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/Pipeline-Parallel-Inference)
**for mistral and codell…
-
> > Specify the local folder you have the model in instead of an HF model ID. If you have all the necessary files and the model is using a supported architecture, then it will work.
> > …
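In Hugging Face Transformers terms (an assumption about the loader being discussed), that advice corresponds to passing a directory path to `from_pretrained`; the path below is a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the folder holds config.json, tokenizer files, and weights
# for a supported architecture; replace with your local directory.
local_dir = "/path/to/local/model"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModelForCausalLM.from_pretrained(local_dir)
```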
-
The no_gt retrieval metrics need a large amount of LLM processing, so use a local LLM model to compute them.
+ ragas context precision needs a lot of LLM calls, so try using Tonic Validate instead.
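For the first point, one common approach (a sketch, not this project's code) is to hand ragas a local OpenAI-compatible model via its LangChain wrapper; the URL, model name, and toy dataset below are placeholders:

```python
from datasets import Dataset
from langchain_openai import ChatOpenAI
from ragas import evaluate
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import context_precision

# Placeholder evaluation data with the columns ragas metrics expect.
dataset = Dataset.from_dict({
    "question": ["What is retrieval evaluation?"],
    "contexts": [["Retrieval evaluation scores how relevant retrieved passages are."]],
    "answer": ["It scores the relevance of retrieved passages."],
    "ground_truth": ["It measures how relevant the retrieved passages are."],
})

# Assumption: a local OpenAI-compatible server (Ollama, vLLM, ...) is running.
local_llm = ChatOpenAI(base_url="http://localhost:11434/v1",
                       api_key="unused", model="llama3")

result = evaluate(dataset,
                  metrics=[context_precision],
                  llm=LangchainLLMWrapper(local_llm))
print(result)
```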
-
Can the Ollama URL be configured to point to a remote box?
Or try an SSH tunnel to make the remote Ollama appear to be local.
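Both options work; a minimal sketch with the `ollama` Python client (hostname and model name are placeholders), with the SSH-tunnel variant shown as a comment:

```python
from ollama import Client

# Option 1: talk to the remote box directly. The Ollama CLI and clients
# also honor the OLLAMA_HOST environment variable for the same purpose.
client = Client(host="http://remote-box:11434")  # placeholder hostname
reply = client.chat(model="llama3",  # placeholder model name
                    messages=[{"role": "user", "content": "hello"}])
print(reply["message"]["content"])

# Option 2: tunnel so the remote instance appears local, then use defaults:
#   ssh -N -L 11434:localhost:11434 user@remote-box
```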
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
How to connect to the Neptune database through llama_index from my local machine?
**Bel…
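For reference, the connection itself typically looks like the sketch below (the endpoint is a placeholder; note that Neptune only accepts connections from inside its VPC, so a local machine usually needs an SSH tunnel, VPN, or bastion host first):

```python
from llama_index.graph_stores.neptune import NeptuneDatabaseGraphStore

# Assumption: llama-index-graph-stores-neptune is installed and the cluster
# endpoint is reachable (e.g. through an SSH tunnel into the VPC).
graph_store = NeptuneDatabaseGraphStore(
    host="your-cluster.cluster-xxxxxxxx.us-east-1.neptune.amazonaws.com",
    port=8182,
)
```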