-
### Issue you'd like to raise.
LangChain pandas agents (create_pandas_dataframe_agent) are hard to get working with Llama models. (The same scripts work well with GPT-3.5.)
I am trying to use local model Vi…
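A common workaround when the agent misbehaves with local models is to skip `create_pandas_dataframe_agent` and build the prompt by hand, inlining a preview of the dataframe. This is a minimal sketch; the commented-out `ask_llm` call is a hypothetical stand-in for however you invoke your local Llama model.

```python
import pandas as pd

# Toy dataframe standing in for the user's real data.
df = pd.DataFrame({"city": ["Hanoi", "Hue"], "pop_m": [8.1, 0.65]})

def build_prompt(df: pd.DataFrame, question: str) -> str:
    # Inline a small preview so the model sees the actual columns and values.
    return (
        "You are working with this pandas dataframe (df.head()):\n"
        f"{df.head().to_string()}\n\n"
        f"Question: {question}\n"
        "Answer with a single pandas expression."
    )

prompt = build_prompt(df, "Which city has the larger population?")
print(prompt)
# ask_llm(prompt)  # hypothetical call into your local Llama model
```

This sidesteps the agent's tool-calling loop entirely, which is often where smaller local models go off the rails.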
-
Hi, I love this tool, but not being a dev by trade I couldn't get it to work locally, even after trying for a few hours. I installed Deno via Homebrew, read the docs online, and followed the installa…
-
I use the llama 7B model.
I start it with
```
./main -m ./models/7B/ggml-model-q4_0.bin -n 128 -i
```
I never get a chance to enter input.
![Screenshot 2023-04-13 at 4 40 07 PM](https://user…
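If the model starts generating before you can type, llama.cpp's interactive flags usually help. A sketch of the invocation, assuming the `--interactive-first` and `-r` (reverse prompt) flags present in llama.cpp's `main` at the time; check `./main --help` on your build:

```shell
# --interactive-first : wait for user input before generating anything
# -r "User:"          : hand control back whenever this reverse prompt appears
# --color             : visually distinguish your input from model output
./main -m ./models/7B/ggml-model-q4_0.bin -n 128 \
  --interactive-first --color -r "User:"
```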
-
### Version
VisualStudio Code extension
### Suggestion
Please add support for open LLMs compatible with the endpoint APIs of LM Studio / Ollama / etc.
-
Hi, I am wondering: is there any documentation on how to run Llama 2 on a CSV file locally? Thanks!
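I'm not aware of a single canonical doc for this, but the usual pattern is to read the CSV, pack its contents into the prompt, and send that to a locally served model. A stdlib-only sketch; the Ollama endpoint in the comment is an assumption about your local setup:

```python
import csv
import io

# Stand-in for reading a real CSV file from disk.
csv_text = "name,score\nalice,90\nbob,85\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))

# Pack the raw CSV into a prompt for a local Llama 2 model.
prompt = (
    "Here is a CSV file:\n"
    + csv_text
    + "\nQuestion: who has the highest score?"
)

# With a model served by Ollama, you could then POST the prompt, e.g.:
# requests.post("http://localhost:11434/api/generate",
#               json={"model": "llama2", "prompt": prompt})
print(rows[0]["name"])
```

For large files, send only a sample of rows plus the header, since local models have limited context windows.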
-
Hi,
according to this blog post https://huggingface.co/blog/inferentia-llama2,
the expected latency is about 60 ms/token when running inference for Llama 2 on inf2.xlarge.
I do get these results wh…
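For comparison, 60 ms/token translates to roughly 16.7 tokens/second, which is a quick sanity check against whatever throughput numbers your own benchmark reports:

```python
# Convert per-token latency (ms) to throughput (tokens/s).
ms_per_token = 60  # figure quoted in the blog post
tokens_per_second = 1000 / ms_per_token
print(round(tokens_per_second, 1))  # ~16.7 tokens/s
```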
-
I created a container from tabbyml/tabby:latest using docker run;
inside the container I ran `/opt/tabby/bin/tabby serve --model TabbyML/CodeLlama-7B --device cuda`
and got the following error:
```
2023-11-07T08:26:55.602708Z INFO tabby::serve: crates/tabby/sr…
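For CUDA inside the container, the GPU has to be passed through at `docker run` time. A sketch of the invocation along the lines of Tabby's documented quick start; the port and volume mount are assumptions about a typical setup:

```shell
# --gpus all exposes the host GPUs to the container (requires the
# NVIDIA Container Toolkit on the host); without it, --device cuda fails.
docker run -it --gpus all \
  -p 8080:8080 -v "$HOME/.tabby:/data" \
  tabbyml/tabby:latest \
  serve --model TabbyML/CodeLlama-7B --device cuda
```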
-
**What problem or use case are you trying to solve?**
I changed the config.toml to the below (Ollama), following the readme section "Picking a Model".
LLM_API_KEY="11111111111111111111"
WORKSPA…
-
## Policy and info
- Maintainers will close issues that have been stale for 14 days if they contain relevant answers.
- Adding the label "sweep" will automatically turn the issue into a coded pull…
-
If I want to change the base model to something else for fine-tuning, what should I be aware of and what should I modify? I see the codebase has flash-attention modeling code for Llama 2. I'm just curious if it wo…