-
```python
from langchain_huggingface import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-7B-Instruct-v0.3",
    task="text-generation",
    max_new_tokens=128,
    # Note: temperature is ignored while do_sample=False (greedy decoding);
    # set do_sample=True if sampled output at temperature=0.7 is intended.
    temperature=0.7,
    do_sample=False,
)
```
Can I use a Mistral LLM from Hugging Face…
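For reference, a minimal invocation sketch (assuming `langchain_huggingface` is installed and a Hugging Face API token is configured; the prompt text is illustrative):
```python
# Invoke the endpoint defined above; the prompt is illustrative.
print(llm.invoke("Explain attention in one sentence."))
```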
-
We should update the first few sections of the sample output to match the current behavior, in my opinion. Feel free to close this issue if it's a no-op.
Current output for running mock-client.py is
…
-
**Describe the bug**
Tool call information is missing from model diagnostics when using Mistral streaming.
**Expected behavior**
Tool call information should be presented in model diagnostics when using Mistr…
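For context, a minimal streaming-with-tools sketch (assuming the `mistralai` v1 Python client; the model name and tool definition are illustrative) showing where the tool-call deltas that diagnostics should capture appear:
```python
from mistralai import Mistral

client = Mistral(api_key="...")  # illustrative; read from env in practice
stream = client.chat.stream(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
)
for event in stream:
    delta = event.data.choices[0].delta
    # Tool calls arrive incrementally on streamed deltas; these are the
    # fields that should surface in model diagnostics.
    if delta.tool_calls:
        print(delta.tool_calls)
```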
-
### Required prerequisites
- [X] I have searched the [Issue Tracker](https://github.com/camel-ai/camel/issues) and [Discussions](https://github.com/camel-ai/camel/discussions) that this hasn't alread…
-
**Is your feature request related to a problem? Please describe.**
Make sure tests using `common_tests.dart` fail as expected.
**Describe the solution you'd like**
All common test functions s…
-
```shell
(llm_venv_llamacpp) xlab@xlab:/mnt/Model/MistralAI/llm_llamacpp$ python convert_hf_to_gguf.py /mnt/Model/MistralAI/Mistral-Large-Instruct-2407 --outfile ../llm_quantized/mistral_large2_instruct_f…
-
Hello,
I tried out the attention sinks feature with the example command in the FAQ, and it works fine:
python generate.py --base_model=mistralai/Mistral-7B-Instruct-v0.2 --score_model=None --attention_sinks=Tr…
-
There is a tradeoff between fetching a remote file and keeping a desynchronized copy locally.
Solutions:
- caching (if remote; see the sketch below)
- submodule
- ...
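A minimal caching sketch for the first option (assuming Python with `requests`; the URL and cache paths are illustrative), re-downloading only when the remote file's ETag changes:
```python
import os
import requests

URL = "https://example.com/data.json"   # illustrative remote file
CACHE_PATH = "cache/data.json"          # illustrative local copy
ETAG_PATH = CACHE_PATH + ".etag"        # stores the last-seen ETag

def fetch_cached(url=URL, cache_path=CACHE_PATH):
    """Return the file contents, re-downloading only when the remote changed."""
    headers = {}
    if os.path.exists(ETAG_PATH):
        with open(ETAG_PATH) as f:
            headers["If-None-Match"] = f.read().strip()
    resp = requests.get(url, headers=headers, timeout=10)
    if resp.status_code == 304 and os.path.exists(cache_path):
        # Remote unchanged: serve the local copy.
        with open(cache_path, "rb") as f:
            return f.read()
    resp.raise_for_status()
    os.makedirs(os.path.dirname(cache_path), exist_ok=True)
    with open(cache_path, "wb") as f:
        f.write(resp.content)
    if "ETag" in resp.headers:
        with open(ETAG_PATH, "w") as f:
            f.write(resp.headers["ETag"])
    return resp.content
```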
-
### Python -VV
```shell
Python 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]
```
### Pip Freeze
```shell
mistral_common==1.3.3
```
### Reproduction Steps
Example code:
```
from mistral_c…
```
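Since the reproduction code is truncated, here is a minimal, illustrative `mistral_common` usage sketch (not the reporter's code; the message content is made up):
```python
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.protocol.instruct.messages import UserMessage

# Build the v3 tokenizer and encode a simple chat request.
tokenizer = MistralTokenizer.v3()
tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(messages=[UserMessage(content="Hello, world!")])
)
print(len(tokenized.tokens))
```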
-
In order to configure a Crew with `memory=True` without an OpenAI API Key, one needs to configure an embedder provider. Currently, AWS (Amazon Bedrock and Amazon SageMaker) are not supported providers.…
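For reference, a minimal configuration sketch (assuming the `crewai` Python package; the agents, tasks, and embedder model name are illustrative) showing where the embedder provider is set:
```python
from crewai import Crew

# Sketch only: my_agents / my_tasks are assumed to be defined elsewhere.
crew = Crew(
    agents=my_agents,
    tasks=my_tasks,
    memory=True,
    # With no OpenAI API key, an alternative embedder must be configured.
    # "bedrock" / "sagemaker" are what this issue asks to have supported.
    embedder={
        "provider": "ollama",  # a currently supported provider, for illustration
        "config": {"model": "nomic-embed-text"},
    },
)
```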