Open jzqin opened 2 hours ago
Hey @jzqin! I'm here to help you with any bugs, questions, or contributions. Let's work together to solve this issue.
The `KeyError: 'model_list'` indicates that the `model_list` key is missing from the configuration data when using the rate-limit functionality. The `maybe_set_config_attribute` function expects the `model_list` key to be present in the configuration data [1].

To resolve this issue, ensure that your configuration includes a `model_list` key. Here's how you can modify your CLI command and Python code to include it.

For the CLI command, add a `model_list` to the `summary_llm_config`:
```shell
pqa --summary_llm_config '{"model_list": [{"model_name": "gpt-4o-2024-08-06", "litellm_params": {"model": "gpt-4o-2024-08-06"}}], "rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"}}' ask 'Are there nm scale features in thermoelectric materials?'
```
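For readability, the JSON passed to `--summary_llm_config` above is, pretty-printed:

```json
{
  "model_list": [
    {
      "model_name": "gpt-4o-2024-08-06",
      "litellm_params": {"model": "gpt-4o-2024-08-06"}
    }
  ],
  "rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"}
}
```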
For the Python environment, ensure that both `llm_config` and `summary_llm_config` include a `model_list`:
```python
# Import path may vary slightly by paperqa version
from paperqa import QueryRequest, Settings, agent_query

answer = await agent_query(
    QueryRequest(
        query="What manufacturing challenges are unique to bispecific antibodies?",
        settings=Settings(
            llm_config={
                "model_list": [
                    {
                        "model_name": "gpt-4o-2024-08-06",
                        "litellm_params": {"model": "gpt-4o-2024-08-06"},
                    }
                ],
                "rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"},
            },
            summary_llm_config={
                "model_list": [
                    {
                        "model_name": "gpt-4o-2024-08-06",
                        "litellm_params": {"model": "gpt-4o-2024-08-06"},
                    }
                ],
                "rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"},
            },
        ),
    )
)
```
This should prevent the `KeyError` by ensuring that the `model_list` key is present in your configuration.
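To illustrate why the error occurs, here is a minimal sketch. `check_config` below is a hypothetical stand-in for the library's internal lookup, which reads `config["model_list"]` directly, so any config dict lacking that key raises `KeyError: 'model_list'`:

```python
def check_config(config: dict) -> list:
    # Hypothetical stand-in: mirrors a direct dict lookup, which raises
    # KeyError: 'model_list' when the key is absent
    return config["model_list"]

# A rate_limit alone is not enough ...
bad = {"rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"}}

# ... the config must also carry a model_list entry
good = {
    "model_list": [
        {
            "model_name": "gpt-4o-2024-08-06",
            "litellm_params": {"model": "gpt-4o-2024-08-06"},
        }
    ],
    "rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"},
}

try:
    check_config(bad)
except KeyError as exc:
    print(f"KeyError: {exc}")  # KeyError: 'model_list'

print(len(check_config(good)))  # 1
```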
To continue talking to Dosu, mention @dosu.
I use the following CLI command from the README to incorporate a rate limit and get the following error:
I also get the same error when trying to set a rate limit when using the package in a Python environment: