-
The VS Marketplace has 6 featured extensions which are shown prominently on the [MP site](https://marketplace.visualstudio.com/). We try to update this list every 1-2 months and we aim to showcase som…
-
Hi,
I want to work with Ollama and am trying to use the "local LLM" node.
My Ollama server has started and llama3 is running (I can chat with it).
My Ollama server runs on the default setting adr…
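For what it's worth, a quick way to confirm the node can reach the server is to query Ollama's native `/api/tags` endpoint. This is a sketch assuming the default address `http://localhost:11434` (adjust if yours differs):

```python
import json
import urllib.request

OLLAMA_BASE = "http://localhost:11434"  # Ollama's default host/port

def list_models(base_url: str = OLLAMA_BASE) -> list:
    """Return the names of models the Ollama server has pulled.

    /api/tags is Ollama's native model-listing endpoint; if this call
    fails, the "local LLM" node will not be able to reach the server
    either.
    """
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        payload = json.load(resp)
    return [m["name"] for m in payload.get("models", [])]

# Requires a running server, e.g.:
# print(list_models())
```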
-
Firstly, thanks to the author team for providing such a useful video LLM evaluation benchmark. I have some doubts about the w/ sub settings.
According to my practice, it seems that not all videos …
-
I also get this error message after following your API instructions exactly:
Error occurred when executing Griptape Agent Config: Anthropic:
No API key pro…
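The message is truncated, but "No API key" usually means the key never reached the process. I don't know exactly how the Griptape config resolves its key, so this is only a sketch, assuming it falls back to the standard `ANTHROPIC_API_KEY` environment variable (the one the official Anthropic SDK reads by default):

```python
import os

def resolve_api_key(explicit=None):
    """Hypothetical helper: prefer an explicitly passed key, then fall
    back to the ANTHROPIC_API_KEY environment variable. Raises a clear
    error instead of an opaque 'No API key provided'."""
    key = explicit or os.environ.get("ANTHROPIC_API_KEY")
    if not key:
        raise RuntimeError(
            "No Anthropic API key found; export ANTHROPIC_API_KEY in the "
            "environment that launches the app"
        )
    return key
```

One gotcha: if the app is launched from a desktop shortcut rather than a shell, the variable has to be set system-wide, not just in your terminal profile.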
-
Running locally using ollama with the following settings:
```
llm_provider = "ollama"
ollama_base_url = "http://localhost:11434/v1/"
ollama_model_name = "llama3:8b"
openai_api_key = "123456"
```
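For reference, here is a minimal sketch (standard library only, assuming a running Ollama server) that exercises the same OpenAI-compatible `/v1/chat/completions` endpoint the config above points at. Ollama ignores the API key value, which is why a placeholder like `"123456"` works:

```python
import json
import urllib.request

BASE = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint

def build_chat_body(model: str, prompt: str) -> dict:
    # Request shape matches the OpenAI chat-completions API, which is
    # what an openai_api_key-style config expects.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        f"{BASE}/chat/completions",
        data=json.dumps(build_chat_body(model, prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer 123456",  # Ollama ignores the key value
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Requires a running server, e.g.:
# print(chat("llama3:8b", "Say hi"))
```

If this call works but the application still fails, the problem is likely in how the app joins `ollama_base_url` with the endpoint path (e.g. a doubled or missing `/v1`).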
…
-
Hi. I found that the token-generation speed of gemma2 with the llama.cpp shipped in `ipex-llm[cpp]` is slower than with upstream llama.cpp. Can it be optimized?
`ipex-llm[cpp]`:
```
| model | …
-
This issue aims at keeping track of the models that would be interesting to get added to candle. Feel free to make a comment to mention a new model, or vote for a model already in the list.
- [musicg…
-
I have read your paper on AriGraph and love the work you are doing on adaptively learning from the environment.
I am creating an LLM-based Agentic Framework as well, TaskGen (https://github.com/simbia…
-
$ python test/twoagent.py
Traceback (most recent call last):
  File "D:\workspace\luan\autogen\test\twoagent.py", line 1, in <module>
    from autogen import AssistantAgent, UserProxyAgent, config_list_fr…
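The traceback is cut off at the import line, but since it fails on line 1, the usual culprit is that `autogen` is not installed in the interpreter actually running the script (note the PyPI package is `pyautogen`, not `autogen`). A quick diagnostic that assumes nothing about your environment:

```python
import importlib.util
import sys

# Check whether `autogen` is importable by *this* interpreter.
# If spec is None, install into this exact interpreter with:
#   python -m pip install pyautogen
spec = importlib.util.find_spec("autogen")
print(sys.executable)
print("autogen importable:", spec is not None)
```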
-
I have the following code for QA with llama.cpp, and this is what I get: it keeps outputting `llama_print_timings`. What should I make of that?
My code is
```
from paperqa import Docs
from langchain.llm…