-
### System Info
GPU (`nvidia-smi`):
```
Mon Apr 22 17:00:40 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.08 …
```
-
I started out by experimenting a bit with CTransformers. The device I have been using is:
```
ASUS Laptop
16 GB RAM
6 GB NVIDIA RTX 3060
```
And I tried to install the `mpt-7B chat` GGML file…
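For reference, loading a GGML model with CTransformers typically goes through its `AutoModelForCausalLM.from_pretrained` API. The sketch below is an illustration under stated assumptions: the model file path is hypothetical, `guess_model_type` is a helper of our own (not part of the library), and `gpu_layers` offloads part of the model to the 6 GB RTX 3060.

```python
def guess_model_type(filename: str) -> str:
    """Guess the ctransformers `model_type` from a GGML file name.

    This is our own convenience helper, not a ctransformers API.
    """
    lowered = filename.lower()
    for key, model_type in (("mpt", "mpt"), ("llama", "llama"), ("falcon", "falcon")):
        if key in lowered:
            return model_type
    raise ValueError(f"cannot guess model_type for {filename}")


if __name__ == "__main__":
    # Heavy import and model load kept out of module import time.
    from ctransformers import AutoModelForCausalLM  # pip install ctransformers

    model_file = "mpt-7b-chat.ggmlv3.q4_0.bin"  # hypothetical local file
    llm = AutoModelForCausalLM.from_pretrained(
        model_file,
        model_type=guess_model_type(model_file),  # "mpt"
        gpu_layers=20,  # offload some layers to the 6 GB GPU; tune to fit VRAM
    )
    print(llm("Hello, how are you?", max_new_tokens=64))
```

With only 6 GB of VRAM, lowering `gpu_layers` (or setting it to 0 for CPU-only) is the usual knob when the model does not fit.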
-
Hello,
I started a neuralchat server with the following config, using the command `neuralchat_server start --config_file ./server/config/neuralchat.yaml`.
## Config
```
host: 0.0.0.0
port: # conf…
```
-
### Describe the bug
When attempting to run `interpreter --local` and choosing jan.ai as the LLM provider, the model-choice step crashes the interpreter.
LM_Studio runs as expected. (I'm assumi…
-
# 1. Ollama
## 1. Use the Ollama CLI:
```
ollama serve
ollama run <model>   # e.g. llama2:7b, llama3, llama3:70b, mistral, dolphin-phi, phi, neural-chat, codellama, llama2:13b, llama2:70b
ollama list
ollama show <model>
…
```
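Besides the CLI above, a running `ollama serve` instance also exposes a local REST API (default port 11434). The sketch below targets the `/api/generate` endpoint; the payload builder is separated out so it can be checked without a server running.

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_generate_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming /api/generate request body."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """POST the prompt to a local Ollama server and return the full response text."""
    body = json.dumps(build_generate_payload(model, prompt)).encode()
    req = request.Request(OLLAMA_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Requires `ollama serve` running and the model pulled, e.g. `ollama pull llama2:7b`.
    print(generate("llama2:7b", "Why is the sky blue?"))
```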
-
The neuralchat RESTful API has already been synced with the latest OpenAI protocol via 2e1c79d9b99db8bc004d67235fc6df51ca1d238e, but the neuralchat frontend doesn't have a field for assigning a system prompt.
**backend log**
``…
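Since the backend follows the OpenAI chat protocol, a system prompt can still be supplied directly over the REST API even without a frontend field. A minimal sketch, assuming the server is reachable at `http://localhost:8000/v1/chat/completions`; host, port, and the model name are placeholders, not values from this issue.

```python
import json
from urllib import request


def build_chat_payload(system_prompt: str, user_message: str, model: str) -> dict:
    """OpenAI-style chat body: the system prompt rides along as the first message."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }


if __name__ == "__main__":
    # Placeholder endpoint; adjust to the host/port in neuralchat.yaml.
    url = "http://localhost:8000/v1/chat/completions"
    body = json.dumps(build_chat_payload(
        "You are a concise assistant.",
        "Summarize what NeuralChat is.",
        model="Intel/neural-chat-7b-v3-1",  # placeholder model name
    )).encode()
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```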
-
Thanks to the project team for providing the model; it is excellent, and I would therefore like to fine-tune it further on top of your model for later use.
I ran into two problems while using it.
1> Model invocation: the page at [https://huggingface.co/FlagAlpha/Atom-7B-Chat](url) mentions Atom-7B-32k-Chat at the start. Does the model itself already support 32K? When using it, can it simply be loaded as-is at a 32k context length, without extra changes to files or parameters?
2>…
-
I currently have a working setup with llama.cpp + Mistral 7B Instruct with the following `.env.local`:
```
MODELS=`[
{
"name": "Mistral",
"chatPromptTemplate": "{{#each messages}}{{#ifUse…
-
I am trying to explore the backend server. After resolving dependency issues, I tried to start the server, but the system doesn't show any running backend server, nor do the logs help identify the i…