-
Can you please export this Jupyter notebook, Llama-3-PyTorch.ipynb, to pure Python as Llama-3-PyTorch_model.py and Llama-3-PyTorch_tokenizer.py?
Because I want to try to adapt this to work w…
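The usual tool for this is `jupyter nbconvert --to script Llama-3-PyTorch.ipynb`, which concatenates the code cells into one `.py` file. As a rough illustration of what that conversion does, here is a minimal stdlib-only sketch (it assumes the nbformat-v4 JSON layout and skips markdown cells; `export_code_cells` is a hypothetical helper, not part of any library):

```python
import json
from pathlib import Path


def export_code_cells(ipynb_path: str, py_path: str) -> None:
    """Write a notebook's code cells out as a plain Python script.

    Sketch only: assumes the nbformat-v4 JSON layout, where the notebook
    is a dict with a "cells" list and each code cell stores its text as
    a list of lines under "source". `jupyter nbconvert --to script`
    does the same job more robustly.
    """
    nb = json.loads(Path(ipynb_path).read_text(encoding="utf-8"))
    cells = [
        "".join(cell["source"]).rstrip("\n")
        for cell in nb.get("cells", [])
        if cell.get("cell_type") == "code"
    ]
    # Separate cells with a blank line, as nbconvert does.
    Path(py_path).write_text("\n\n".join(cells) + "\n", encoding="utf-8")
```

Splitting the result into separate `_model.py` and `_tokenizer.py` files would still be a manual step, since the notebook itself carries no such module boundary.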
-
### System Info
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03 Driver Version: 560.35.03 CUDA Version: 12.6 |…
-
This is so good. It would be perfect if this also worked with a local LLM such as Phind, and not only with the OpenAI API.
IVIJL updated 2 months ago
-
Hi,
The requirements should be updated so that openai is also installed; phi uses it internally.
I am getting this error:
> Traceback (most recent call last):
> File "/ai/awesome…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Local LLMs don't work for subdomains and vulns.
### Expected Behavior
Local LLMs should work for subdomains and vulns.
### Steps T…
-
### System Info
CPU x86_64
GPU NVIDIA L20
TensorRT branch: v0.13.0
CUDA: NVIDIA-SMI 535.161.07 Driver Version: 535.161.07 CUDA Version: 12.5
### Who can help?
@kaiyux @byshiue
### Information…
-
### Describe your problem
My local API is similar in format to OpenAI's, e.g. http://xxx.xx.xx.xxx:5000/v1, and has an API key. How can I use this API in RAGFlow?
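For reference, most OpenAI-compatible local servers expect the same wire format as the hosted API: a POST to `{base_url}/chat/completions` with a bearer token and a JSON body. A minimal stdlib sketch of that request shape (the base URL, key, and model name below are placeholders, and `build_chat_request` is a hypothetical helper, not a RAGFlow API):

```python
import json
import urllib.request


def build_chat_request(
    base_url: str, api_key: str, model: str, prompt: str
) -> urllib.request.Request:
    """Build (but do not send) a chat-completion request in the OpenAI
    wire format that OpenAI-compatible local servers generally accept.

    Sketch only: shows the endpoint path, auth header, and JSON body;
    sending it with urllib.request.urlopen is left to the caller.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Any client (RAGFlow included) that lets you override the OpenAI base URL and API key should be able to target an endpoint of this shape.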
-
1. How many LLMs are needed for `setting`? Your paper [PaperQA: Retrieval-Augmented Generative Agent for Scientific Research](https://arxiv.org/pdf/2312.07559.pdf) seems to have employi…
-
Hello,
I changed the batch size from 1 (default) to 8 and 32 and saw no change in PaperQA's behaviour (answer quality and speed), as follows:
```
settings=Settings(
    llm="openai/mixtral:8x7b",…
-
### Summary
i.e. define a variable in the env file to point the Reaper to an Ollama or LM Studio host exposing chat-completion endpoints in the OpenAI API format.
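The suggestion above can be sketched as a small resolver: read a base-URL variable from the environment and fall back to the hosted OpenAI API when it is unset. The variable name `OPENAI_API_BASE` and the function `resolve_llm_endpoint` are assumptions for illustration, not an existing feature:

```python
import os


def resolve_llm_endpoint() -> str:
    """Return the chat-completion base URL to use.

    Sketch only: OPENAI_API_BASE is a hypothetical env-file variable.
    Pointing it at e.g. http://localhost:11434/v1 (Ollama) or
    http://localhost:1234/v1 (LM Studio) would redirect requests to a
    local OpenAI-compatible server; unset, it falls back to OpenAI.
    """
    return os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1")
```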
### Contact Email
thefoul@inwind.it