In this updated version, I integrated GPT Turbo by modifying the models/gpt.py file. I also added two ways of using Mistral, so that its capabilities can be explored fully: a local model and the Mistral API. The "mistralai" Python package has been added to the install.sh script.
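The install.sh change is presumably a one-line addition along these lines (the exact placement within the script may differ):

```shell
# Hypothetical addition to install.sh: pull in the Mistral API client package.
pip install mistralai
```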
When using the Mistral API, users can choose among three models: tiny, small, and medium. In the local setup, the NLL for Mistral 7B is obtained via llama; NLL support has not yet been implemented for the Mistral API.
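A minimal sketch of how the two pieces above might fit together: mapping the user-facing size names to API model identifiers (following the public "mistral-tiny" / "mistral-small" / "mistral-medium" naming), and computing a sequence NLL from per-token log-probabilities once the API path supports it. The names `resolve_mistral_model` and `sequence_nll` are illustrative, not the repository's actual identifiers.

```python
import math

# Assumed mapping from the user-facing size option to the API model id.
MISTRAL_API_MODELS = {
    "tiny": "mistral-tiny",
    "small": "mistral-small",
    "medium": "mistral-medium",
}


def resolve_mistral_model(size: str) -> str:
    """Map a user-facing size ("tiny"/"small"/"medium") to an API model id."""
    try:
        return MISTRAL_API_MODELS[size]
    except KeyError:
        raise ValueError(f"unknown Mistral model size: {size!r}")


def sequence_nll(token_logprobs: list) -> float:
    """Negative log-likelihood of a sequence, given per-token log-probs.

    This mirrors what the local (llama-based) path provides; the API path
    would need per-token log-probabilities to implement the same quantity.
    """
    return -sum(token_logprobs)
```

For example, a two-token sequence where each token has probability 0.5 yields an NLL of `2 * ln(2)`.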
In a future iteration, the Mistral embedding API could be used for tokenization. The current version keeps the existing tokenization method; an update that uses the Mistral embedding API may follow soon.