brenkao closed 3 months ago
To update your `groq_api_calculate_cost` function to support all the Llama 3 models listed on the Groq website, you'll need to incorporate the pricing details for each model. Based on the information provided, here's how you can structure the function:
```python
def groq_api_calculate_cost(model_name, token_count):
    """
    Calculate the cost based on the model name and token count.

    Parameters:
    - model_name: The name of the Llama 3 model being used.
    - token_count: The number of tokens processed.

    Returns:
    - The calculated cost.
    """
    # Define the base prices per 1 million tokens for each model
    model_prices = {
        "Llama 3 70B": {"input": 0.59, "output": 0.79},
        "Llama 3 8B": {"input": 0.05, "output": 0.10},
    }

    # Check if the model name exists in the dictionary
    if model_name in model_prices:
        # Calculate the cost for input and output separately
        input_cost = (token_count * model_prices[model_name]["input"]) / 1_000_000
        output_cost = (token_count * model_prices[model_name]["output"]) / 1_000_000
        # Return the total cost
        return input_cost + output_cost
    else:
        raise ValueError(f"Unsupported model: {model_name}")

# Example usage
print(groq_api_calculate_cost("Llama 3 70B", 50000))  # Replace with actual token count
```
This function takes the model name and the number of tokens processed as inputs. It then calculates the cost based on the base price per 1 million tokens for both input and output, as provided for each model. The function raises a `ValueError` if the model name is not recognized, ensuring that only supported models can be used.
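If the caller can't guarantee a valid model name, that `ValueError` can be caught so an unknown model doesn't crash the calling code. A minimal sketch (`safe_cost` is a hypothetical wrapper name; the pricing logic is restated inline so the snippet runs on its own):

```python
def groq_api_calculate_cost(model_name, token_count):
    # Same pricing table and logic as the function above
    model_prices = {
        "Llama 3 70B": {"input": 0.59, "output": 0.79},
        "Llama 3 8B": {"input": 0.05, "output": 0.10},
    }
    if model_name not in model_prices:
        raise ValueError(f"Unsupported model: {model_name}")
    prices = model_prices[model_name]
    input_cost = (token_count * prices["input"]) / 1_000_000
    output_cost = (token_count * prices["output"]) / 1_000_000
    return input_cost + output_cost

def safe_cost(model_name, token_count):
    """Return the cost, or None for an unrecognized model (hypothetical helper)."""
    try:
        return groq_api_calculate_cost(model_name, token_count)
    except ValueError:
        # Unknown model: skip cost tracking rather than raising
        return None

print(safe_cost("Llama 3 70B", 50000))   # approximately 0.069
print(safe_cost("Some Other Model", 50000))  # None
```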
Remember to replace `"Llama 3 70B"` and `50000` in the example usage with the actual model name and token count you're working with. This approach allows for easy addition of new models by simply updating the `model_prices` dictionary with the corresponding pricing details.
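One caveat worth noting: the rates differ for input and output tokens, but the function above applies both rates to a single `token_count`, which effectively charges every token at both rates. If your usage data distinguishes prompt tokens from completion tokens, a variant that accepts them separately may track cost more accurately. A sketch under that assumption (the `_v2` name and signature are illustrative, not from any release):

```python
def groq_api_calculate_cost_v2(model_name, input_tokens, output_tokens):
    """Cost in USD, with input and output tokens priced at their own rates."""
    model_prices = {
        "Llama 3 70B": {"input": 0.59, "output": 0.79},
        "Llama 3 8B": {"input": 0.05, "output": 0.10},
    }
    if model_name not in model_prices:
        raise ValueError(f"Unsupported model: {model_name}")
    prices = model_prices[model_name]
    # Each side of the exchange is billed per 1 million tokens at its own rate
    return (input_tokens * prices["input"]
            + output_tokens * prices["output"]) / 1_000_000

# e.g. a 40k-token prompt with a 10k-token completion on Llama 3 8B
print(groq_api_calculate_cost_v2("Llama 3 8B", 40_000, 10_000))  # approximately 0.003
```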
This is fixed and released in v0.14.1 🎉
Description
Currently our `groq_api_calculate_cost` function is out of date and does not support any of Groq's Llama 3 models, as listed at https://wow.groq.com/. All supported models should be added so that cost tracking works correctly when those models are used.