alpcansoydas closed this discussion 2 months ago.
Hi! Inference Endpoints is billed per compute resource per minute; please see the full pricing information at https://huggingface.co/docs/inference-endpoints/pricing
Regarding your question about commercial use, please review the license information for the model(s) you wish to use. https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3 is under the Apache 2.0 license -- details at https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md. Please let us know if you have any other questions!
```python
from langchain_huggingface import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-7B-Instruct-v0.3",
    task="text-generation",
    max_new_tokens=128,
    # Note: with do_sample=False decoding is greedy, so temperature has no effect.
    temperature=0.7,
    do_sample=False,
)
```
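As an aside on the snippet above: when `do_sample=False`, decoding is greedy and `temperature` is inert. One way to avoid passing a parameter that does nothing is to build the generation kwargs conditionally. A minimal plain-Python sketch (the helper name is hypothetical, not part of any library):

```python
def build_generation_kwargs(max_new_tokens=128, temperature=0.7, do_sample=False):
    """Hypothetical helper: assemble generation parameters for an endpoint call.

    When do_sample is False, decoding is greedy, so temperature is
    omitted rather than silently ignored.
    """
    kwargs = {"max_new_tokens": max_new_tokens, "do_sample": do_sample}
    if do_sample:
        kwargs["temperature"] = temperature
    return kwargs

# Greedy decoding: temperature is left out entirely.
greedy = build_generation_kwargs(do_sample=False)
# Sampling: temperature is included.
sampled = build_generation_kwargs(do_sample=True, temperature=0.7)
```

The resulting dict could then be unpacked into the endpoint constructor, e.g. `HuggingFaceEndpoint(repo_id=..., task="text-generation", **greedy)`.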
Can I use the Mistral LLM via HuggingFaceEndpoint for free? Is it okay for commercial use without using Hugging Face Spaces? I want to deploy an app on my own infrastructure, but it uses HuggingFaceEndpoint for the Mistral LLM. Is it totally free? Are there any request or billing limits?