-
# Question
Can we get DeepseekV2 supported?
# Code to reproduce
```python
from tensorrt_llm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    …
```
-
### Checked other resources
- [X] I searched the Codefuse documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that …
-
1. How many LLMs are needed for `setting`? Your paper [PaperQA: Retrieval-Augmented Generative Agent for Scientific Research](https://arxiv.org/pdf/2312.07559.pdf) seems to have employi…
-
### The problem
I am using an Ubuntu machine and exposing it online with ngrok.
In the Host Address field I tried all of the available addresses:
http://localhost:8080
http://127.0.0.1:8080
http://12…
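For reference, a minimal sketch of the usual ngrok setup (assuming the app listens on port 8080; the exact forwarding URL is assigned by ngrok at runtime):

```shell
# Expose local port 8080; ngrok prints a public forwarding URL on start
ngrok http 8080

# Use the printed public URL (e.g. https://<random-id>.ngrok-free.app)
# as the Host Address. Loopback addresses like http://localhost:8080 or
# http://127.0.0.1:8080 are only reachable from the machine itself, so
# they will not work from outside the host.
```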
-
Hello,
I changed the batch size from 1 (default) to 8 and 32 and saw no change in PaperQA behavior (answer quality and speed), as follows:
```python
settings = Settings(
    llm=f"openai/mixtral:8x7b",…
```
-
llama3.1
qwen2.5
phi3.5
Mistral-Large-Instruct-2407
DeepSeek-V2-Chat-0628
ollama gguf
-
Is it possible to use a local LLM via Ollama? If so, what's the setup, and what are the requirements for which LLMs I can use (guessing it has to use the OpenAI API syntax)?
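For context: Ollama serves an OpenAI-compatible endpoint at `http://localhost:11434/v1`, so any client that speaks the OpenAI chat-completions format should work against it. A minimal sketch of building such a request with only the standard library (the model name `llama3.1` is just an example; any model pulled into Ollama would do):

```python
import json
import urllib.request

# Ollama's OpenAI-compatible base URL (default local install)
OLLAMA_BASE = "http://localhost:11434/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat-completions request for Ollama."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{OLLAMA_BASE}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Usage (requires a running Ollama server):
# resp = urllib.request.urlopen(build_chat_request("llama3.1", "Hello"))
# print(json.loads(resp.read())["choices"][0]["message"]["content"])
```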
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The local LLM doesn't work for subdomains and vulns
### Expected Behavior
local LLM
### Steps T…
-
The Tile prompter currently links to Hugging Face.
It would be better to give users the customizability, and the capability, of local VLM & LLM models.
-
https://brandolosaria.medium.com/setting-up-metaais-code-llama-34b-instruct-model-fc009aa937f6
https://github.com/go-skynet/LocalAI