-
### 📜 Description
Using a HuggingFace model from the Mistral family.
### 👟 Reproduction steps
With the parameters from the documentation:
`LLM_NAME=huggingface`
`EMBEDDINGS_NAME=sentence-transfor…
-
It looks like when setting up a default guidellm run, there is no error telling you that the tokenizer needs to be set as well. After running my sweep, I noticed in the output that a Llama tokenizer was used when…
mgoin updated
1 month ago
-
Please remove the duplicated libraries from the project.
dash==2.17.1
dash_bootstrap_components==1.6.0
dash_daq==0.5.0
ipython==8.18.1
ipython==8.12.3
loguru==0.7.2
mistralai==1.0.0
nltk==3.8.1
numpy…
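The duplicated pins above (two `ipython` entries) can be caught automatically. Below is a minimal sketch, assuming a standard `requirements.txt` pin format, that flags any package name listed more than once; the helper name is hypothetical, not part of any tool in the project:

```python
from collections import Counter

def find_duplicate_pins(lines):
    """Return package names that appear more than once in requirements-style lines."""
    names = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Take the part before any common version specifier (==, >=, <=).
        name = line.split("==")[0].split(">=")[0].split("<=")[0].strip().lower()
        names.append(name)
    return [name for name, count in Counter(names).items() if count > 1]

reqs = [
    "ipython==8.18.1",
    "ipython==8.12.3",
    "loguru==0.7.2",
]
print(find_duplicate_pins(reqs))  # → ['ipython']
```

Running this over the full requirements file would surface exactly the duplicates the issue asks to remove.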
-
Mistral has released another major open-source model; July is shaping up to be a great month for open source. Mistral Large v2 supports Chinese, and is notable for strong optimization of coding, agent, and reasoning capabilities; the 110B model can hold its own against Llama 3.1 405B!
HF model page: https://huggingface.co/mistralai/Mistral-Large-Instruct-2407
Demo: https://chat.mi…
-
Documentation URL:
https://huggingface.co/docs/trl/en/sft_trainer#add-special-tokens-for-chat-format
In the section **Add Special Tokens for Chat Format**, the page encourages using ``setup_chat_for…
-
Hi, I am experiencing an issue where HuggingFaceInference is not removing stop sequences/tokens, and there is no clear way to specify what they should be.
**Packages used:**
"@langchain/community":…
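Until the integration handles this itself, stop sequences can be stripped from the raw model output manually. The sketch below is a hypothetical workaround helper, not a LangChain API: it truncates the generated text at the earliest occurrence of any stop string.

```python
def truncate_at_stop(text, stop_sequences):
    """Cut generated text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

out = "The answer is 42.\nHuman: next question"
print(truncate_at_stop(out, ["\nHuman:", "</s>"]))  # → 'The answer is 42.'
```

Applying this as a post-processing step on the completion gives the behavior one would expect from a built-in `stop` parameter.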
-
I was able to start the application with 0.4.0, but when I try to start it with 0.5.0 I get the following output. Please help.
(gpt) C:\Users\genco\Desktop\docs\private-gpt-main>make run
poetry ru…
-
Hi! Thanks for your amazing project!
When I try to evaluate the gsm8k benchmark and log the output samples with `--log_sample`, I find that the output JSON file includes the response twice.
For example, …
-
### Prerequisites
- [X] I am running the latest code. Mention the version if possible as well.
- [X] I carefully followed the [README.md](https://github.com/ggerganov/llama.cpp/blob/master/README.md)…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…