run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License
36.28k stars 5.18k forks

[Bug]: Missing Mistral large 2 model ID in Bedrock and Bedrock-Converse #16697

Closed yhou-uk closed 39 minutes ago

yhou-uk commented 2 hours ago

Bug Description

The Mistral Large 2 model is available in the AWS US West region via Bedrock and Bedrock Converse, but its model ID is not yet included in the following files:

  1. llama-index-integrations/llms/llama-index-llms-bedrock/llama_index/llms/bedrock/utils.py
  2. llama-index-integrations/llms/llama-index-llms-bedrock-converse/llama_index/llms/bedrock_converse/utils.py

Version

0.11.20

Steps to Reproduce

Currently the Mistral Large 2 model cannot be used with the LlamaIndex package. It raises an error: 'Unknown model: mistral.mistral-large-2407-v1:0'. ... A workaround is to add the model ID to your local files, such as: ~/anaconda3/envs/python3/lib/python3.10/site-packages/llama_index/llms/bedrock_converse/utils.py

Relevant Logs/Tracebacks

No response

dosubot[bot] commented 2 hours ago

Hello @yhou-uk! I'm here to help you with any bugs, questions, or contributions. Let's tackle this together!

To address the issue of the missing Mistral Large 2 model ID in the utils.py files for both Bedrock and Bedrock-Converse, you can manually add the model ID to the BEDROCK_MODELS dictionary in the respective utils.py files. This will allow the LlamaIndex package to recognize and use the Mistral Large 2 model.

Here's a general approach to resolve this:

  1. Locate the utils.py Files:

    • llama-index-integrations/llms/llama-index-llms-bedrock/llama_index/llms/bedrock/utils.py
    • llama-index-integrations/llms/llama-index-llms-bedrock-converse/llama_index/llms/bedrock_converse/utils.py
  2. Add the Model ID:

    • In each utils.py file, find the BEDROCK_MODELS dictionary.
    • Add an entry for the Mistral Large 2 model. For example:
      BEDROCK_MODELS = {
          # ... existing models ...
          "mistral.mistral-large-2407-v1:0": <context_size>,  # Replace <context_size> with the model's context window
      }
  3. Test the Changes:

    • After making these changes, test the integration to ensure that the model can be used without errors.

This workaround should enable the use of the Mistral Large 2 model until an official update is made to the LlamaIndex package.
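The patch above can also be sketched as a runtime override, which avoids editing installed files. This is a hedged illustration: the BEDROCK_MODELS name mirrors the dicts in the utils.py files above, but get_context_size is a stand-in for the library's internal lookup, the existing entry is illustrative, and the 128000 context size for Mistral Large 2 is an assumption to verify against the AWS Bedrock documentation.

```python
# Simulated registry; in practice you would import the real dict, e.g.
# from llama_index.llms.bedrock_converse.utils import BEDROCK_MODELS
BEDROCK_MODELS = {
    "mistral.mistral-large-v1:0": 32000,  # illustrative existing entry
}

def get_context_size(model: str) -> int:
    # Stand-in for the lookup that raises the error reported in this issue.
    if model not in BEDROCK_MODELS:
        raise ValueError(f"Unknown model: {model}")
    return BEDROCK_MODELS[model]

# Workaround: register the missing model ID before constructing the LLM.
# 128000 is assumed here; confirm the actual value before relying on it.
BEDROCK_MODELS["mistral.mistral-large-2407-v1:0"] = 128000
print(get_context_size("mistral.mistral-large-2407-v1:0"))
```

Patching the dict at import time keeps the workaround in your own code, so it survives package reinstalls until the model ID lands upstream.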

To continue talking to Dosu, mention @dosu.

logan-markewich commented 1 hour ago

Yea as dosu mentioned, please do feel free to make a PR!