-
The command `python3 torchchat.py where llama3` fails quietly, presumably because I don't have the HF token configured.
I assumed the code was broken, though, because I got a backtrace of the pr…
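One quick way to rule out the missing-token theory before blaming the code is to check the environment up front. A minimal sketch, assuming the token is exposed via the `HF_TOKEN` environment variable that `huggingface_hub` reads (a token can also live in the local token cache, so an unset variable is a hint rather than proof):

```python
import os

def check_hf_token() -> str:
    # huggingface_hub reads HF_TOKEN from the environment (an assumption
    # about this setup; tokens can also be stored by `huggingface-cli login`).
    token = os.environ.get("HF_TOKEN")
    if not token:
        return "HF_TOKEN is not set; gated models like llama3 may fail quietly"
    return "HF_TOKEN is set"

print(check_hf_token())
```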
-
### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://docs.all-hands.dev/modules/usage/troubleshooting
- [X] I have checked the existing issues…
-
There's a model I'm interested in using with ollama that specifies a parameter no longer supported by ollama (or maybe llama.cpp). I'd like to be able to create a replacement with a Modelfile that ove…
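One workaround sketch along those lines, with the base model and parameter as placeholders (not the actual model or parameter from this report): a derived Modelfile that overrides the offending setting with a value the current runtime accepts.

```
# Hypothetical Modelfile; base model and parameter are placeholders.
FROM some-base-model
PARAMETER num_ctx 4096
```

Built with `ollama create patched-model -f Modelfile`, after which `ollama run patched-model` would use the overridden value.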
-
Instead of using ChatGPT, I would like to try to use a local LLM. I am sure this would take some modifications, but I think we could potentially make this work, and it would be an awesome addition to t…
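One reason the modifications may be modest: several local servers (e.g. Ollama, llama.cpp's server) expose an OpenAI-compatible chat endpoint, so much of the ChatGPT-facing request code could stay intact. A minimal sketch, with the endpoint URL and model name as assumptions about the setup:

```python
import json

# Ollama's default OpenAI-compatible endpoint (an assumption about the setup).
ENDPOINT = "http://localhost:11434/v1/chat/completions"

def build_chat_payload(model: str, user_message: str) -> str:
    # Same JSON shape the ChatGPT chat API expects, aimed at a local server.
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    })

payload = build_chat_payload("llama3", "Hello")
print(payload)
```

In principle only the endpoint URL and model name change; the message format stays the same.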
-
before: 0.2978
after: 0.2383
-
**Describe the bug**
I've been trying to get the chat to work with llama3, llama3.1, mistral, codellama:7b-instruct, and codegemma:7b-instruct, but it always fails ("Sorry, I don't understand").
**To R…
-
Thank you to the project team for providing the model; it is excellent, and I would therefore like to fine-tune it further for my own use.
I ran into two questions while using it.
1> Model invocation: the top of [https://huggingface.co/FlagAlpha/Atom-7B-Chat](url) mentions Atom-7B-32k-Chat. Does the model itself already support 32K? Can I simply load it directly, without modifying any files or parameters, and use a 32k context length?
2>…
hbj52 updated 7 months ago
-
### Problem description
Hi, I have been doing some basic testing in a notebook after finding some strange behavior in my code.
Basically two things happen when running a model with `temperature=0` f…
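For context on why `temperature=0` is a special case: temperature divides the logits before the softmax, so zero is mathematically undefined, and most APIs special-case it as greedy (argmax) decoding. A small sketch of that convention, not any particular library's implementation:

```python
import math
import random

def sample_token(logits, temperature, rng=random.Random(0)):
    if temperature == 0:
        # Common API convention: T=0 means greedy decoding (pure argmax),
        # since dividing the logits by zero is undefined.
        return max(range(len(logits)), key=lambda i: logits[i])
    z = [l / temperature for l in logits]
    m = max(z)
    exp = [math.exp(v - m) for v in z]  # numerically stable softmax
    total = sum(exp)
    weights = [v / total for v in exp]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

print(sample_token([2.0, 1.0, 0.1], 0))  # 0: the highest-logit token, always
```

So with `temperature=0` the output should be deterministic; any remaining run-to-run variation would come from elsewhere (e.g. non-deterministic kernels or batching), not the sampler.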
-
Currently, there are two things that confuse me.
### Sending requests
Firstly, it's about sending a lot of requests to the server and waiting for the correct responses. The tests for these cases look like:
```py…
-
### Is your feature request related to a problem?
There are more models in [LMSYS Chatbot Arena](https://huggingface.co/spaces/lmsys/chatbot-arena) / [HuggingChat](https://huggingf…