-
Hi,
I am new to this. Can you help me understand whether I can use some sort of proxy, like LiteLLM or LocalLLM, to run this without a GPT-3.5 API key? open-interpreter connects with LM Studio directly, I gues…
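For context, proxies like LiteLLM and local servers like LM Studio work by exposing an OpenAI-compatible HTTP endpoint, so the client just needs to point at a different base URL. A minimal sketch of the request shape such servers expect; the base URL and model name here are placeholders, not from the original post:

```python
import json

# Sketch: the JSON body an OpenAI-compatible local server (e.g. a LiteLLM
# proxy or LM Studio, typically at http://localhost:<port>/v1) expects for
# POST <base_url>/chat/completions.  Model name is a placeholder assumption.
def build_chat_request(model: str, user_message: str) -> str:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return json.dumps(payload)

body = build_chat_request("local-model", "Hello")
print(json.loads(body)["messages"][0]["role"])  # → user
```

With such a server running, the stock OpenAI client can usually be reused by overriding its base URL and supplying any dummy API key.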
-
Hello Professor, I ran into a small problem while reproducing this locally and have some questions for you. I deployed gemma-2b locally, but when running the runEOH.py code for local_problem, why do I still need to enter api_endpoint and api_key? I set llm_use_local to True and set the local URL, but it still will not run. Please advise. Looking forward to your reply.
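The behavior the post describes suggests the config loader requires the remote credentials unconditionally. A hedged sketch of the selection logic one would expect instead; the parameter names mirror the post, but the fallback logic is an assumption, not the repository's actual code:

```python
# Assumed endpoint-selection helper: when llm_use_local is True, only the
# local URL should be required, and api_endpoint/api_key may stay empty.
def resolve_endpoint(llm_use_local: bool, local_url: str,
                     api_endpoint: str = "", api_key: str = "") -> tuple:
    if llm_use_local:
        if not local_url:
            raise ValueError("llm_use_local=True requires local_url")
        return local_url, ""  # local servers usually ignore the key
    if not (api_endpoint and api_key):
        raise ValueError("remote mode requires api_endpoint and api_key")
    return api_endpoint, api_key

print(resolve_endpoint(True, "http://localhost:8000/v1"))
```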
-
### Description
I see documentation for where to store gguf models here: https://github.com/Mintplex-Labs/anything-llm/blob/master/server/storage/models/README.md
saying that "/server/storage/mode…
-
Using the current codebase with a local LLM, it seems to get stuck in a loop. After the fourth iteration the console output looks like this:
```
Ensure the response can be parsed by Python json.loads.…
-
```
Traceback (most recent call last):
  File "/home/g30097220/.local/lib/python3.10/site-packages/langchain/embeddings/huggingface.py", line 46, in __init__
    import sentence_transformers
  File…
```
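A traceback at this point in LangChain's `HuggingFaceEmbeddings.__init__` typically means the optional `sentence-transformers` package is missing or broken, since it is imported lazily there. A small defensive check (note the import name uses an underscore while the pip package name uses a hyphen):

```python
import importlib.util

# Check whether a top-level module is importable without importing it.
def has_module(name: str) -> bool:
    return importlib.util.find_spec(name) is not None

if not has_module("sentence_transformers"):
    print("missing: pip install sentence-transformers")
```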
-
In the examples.ipynb, I ran this cell (with a new DNS name):
```
import openai
openai.api_base = "https://localllm.dev01.datascience/v1"
openai.api_key = ""
prompt = "tell me a joke in more than …
```
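The cell above uses the pre-1.0 `openai` SDK pattern (module-level `api_base`/`api_key`); with `openai>=1.0` the equivalent is `OpenAI(base_url=..., api_key=...)`. Either way the request ultimately targets a path under that base, as this small sketch shows (the URL is the placeholder from the post, not a live endpoint):

```python
# With openai>=1.0 the cell would instead read, roughly:
#   from openai import OpenAI
#   client = OpenAI(base_url="https://localllm.dev01.datascience/v1",
#                   api_key="sk-placeholder")  # many local servers accept any key
#
# The chat request goes to <base>/chat/completions:
def completions_url(base: str) -> str:
    return base.rstrip("/") + "/chat/completions"

print(completions_url("https://localllm.dev01.datascience/v1"))
```

Note that an empty `api_key` is rejected by some servers even when the key is otherwise ignored, so a dummy non-empty value is often safer.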
-
### Describe the bug
For autogen with localllm
```
import autogen
config_list = [
{
"model": "cognitivecomputations/dolphin-2.6-mixtral-8x7b",
"base_url": "http://10.34.…
-
Hey @GreyDGL, interesting approach to abstracting LLM providers by creating separate classes `chatgpt_api.py` and `gpt4all_api.py`.
Why do it this way vs. making the completion call inside the i…
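The two designs being contrasted can be sketched as follows: separate provider classes behind a shared interface, versus branching inside a single completion call. Class names here are illustrative, not the repository's:

```python
from abc import ABC, abstractmethod

# A common abstraction for multiple LLM backends: each provider implements
# the same interface, so callers never branch on the provider type.
class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ChatGPTProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"[chatgpt] {prompt}"   # stand-in for the remote API call

class GPT4AllProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"[gpt4all] {prompt}"   # stand-in for the local model call

def get_provider(name: str) -> LLMProvider:
    return {"chatgpt": ChatGPTProvider, "gpt4all": GPT4AllProvider}[name]()

print(get_provider("gpt4all").complete("hi"))  # → [gpt4all] hi
```

The class-per-provider design keeps provider-specific setup (keys, endpoints, retries) out of the call site, at the cost of a little boilerplate per backend.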
-
Trying to load a 7B model on an iPhone 15 Pro. Since the model supports 32k context, if I set llama_context_params.n_ctx to 32k it crashes, and here is the error:
```
-[MTLDebugDevice newBufferWithByte…
```
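A crash like this when allocating a Metal buffer is consistent with the KV cache at `n_ctx = 32768` simply not fitting in device memory. Rough arithmetic, assuming a typical 7B Llama-style shape (32 layers, 32 KV heads, head dim 128, fp16 KV cache); the actual model may differ, e.g. fewer KV heads with grouped-query attention:

```python
# KV cache size = 2 (K and V) x layers x context x KV heads x head dim x bytes/elem.
def kv_cache_bytes(n_layers: int, n_ctx: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: int = 2) -> int:
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elem

gib = kv_cache_bytes(32, 32768, 32, 128) / 2**30
print(f"{gib:.0f} GiB")  # → 16 GiB, far beyond the iPhone 15 Pro's 8 GB RAM
```

Lowering `n_ctx`, quantizing the KV cache, or using a model with grouped-query attention shrinks this buffer proportionally.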
-
**Describe the bug**
When loading old agents that used local LLMs, `DotDict` is present in the pickle file (even though we deprecated `DotDict` in favor of `Box`):
```
File "/.../lib/python3.11/site-pa…