microsoft / graphrag

A modular graph-based Retrieval-Augmented Generation (RAG) system
https://microsoft.github.io/graphrag/
MIT License

tokens_per_minute does not seem to be reflected in the engine #367

Closed eyast closed 2 months ago

eyast commented 3 months ago

The tokens_per_minute setting in settings.yaml does not seem to be picked up by the indexing engine. I've tried setting it to both 50000 and 50_000 (as per the commented example), but index-engine.log always reports the value as 0, and I repeatedly hit 429s no matter what I do.

The content of settings.yaml:

llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: azure_openai_chat # or openai_chat
  model: gpt-4o
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  api_base: https://redacted.openai.azure.com/
  api_version: 2024-02-15-preview
  # organization: <organization_id>
  deployment_name: gpt4o
  tokens_per_minute: 50000 # set a leaky bucket throttle
  # requests_per_minute: 20 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made
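
As a quick sanity check on the YAML itself (a minimal sketch, not from the thread; it assumes PyYAML-compatible parsing of settings.yaml), both 50000 and 50_000 load as the same integer, so the spelling of the value is not the culprit:

    import yaml  # PyYAML implements YAML 1.1

    doc = "llm:\n  tokens_per_minute: 50_000\n"
    cfg = yaml.safe_load(doc)
    # YAML 1.1 permits underscores in integers, so 50_000 and 50000
    # both load as the int 50000.
    assert cfg["llm"]["tokens_per_minute"] == 50000
    print(type(cfg["llm"]["tokens_per_minute"]))  # <class 'int'>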

contents of {run_id}\reports\index-engine.log:

    "llm": {
        "api_key": "REDACTED, length 32",
        "type": "azure_openai_chat",
        "model": "gpt-4o",
        "max_tokens": 4000,
        "request_timeout": 180.0,
        "api_base": "https://redacted.openai.azure.com/",
        "api_version": "2024-02-15-preview",
        "proxy": null,
        "cognitive_services_endpoint": null,
        "deployment_name": "gpt4o",
        "model_supports_json": true,
        "tokens_per_minute": 0,
        "requests_per_minute": 0,
        "max_retries": 10,
        "max_retry_wait": 10.0,
        "sleep_on_rate_limit_recommendation": true,
        "concurrent_requests": 25
    },
    "parallelization": {
        "stagger": 0.3,
        "num_threads": 50
    },
eyast commented 3 months ago

It seems this problem only occurs for llm.tokens_per_minute, not for embeddings.llm.tokens_per_minute; the latter setting is properly reflected in the index log.

eyast commented 3 months ago

It seems that no matter what I enter, the value used by the index engine is the hardcoded one in configs\defaults.py, LLM_TOKENS_PER_MINUTE. To prove this, I commented out that line, and Pydantic complained that tokens_per_minute is not an int because it is None. https://github.com/microsoft/graphrag/blob/daca75ff7925b93cd8b282ff067c6bdc76484e94/graphrag/config/create_graphrag_config.py#L122
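
A minimal sketch of the suspected failure mode (the names here are illustrative, not graphrag's actual internals): if the config builder looks up a short key such as "tpm" while settings.yaml uses the long key "tokens_per_minute", the lookup misses and the hardcoded default silently wins. This would also explain the workaround reported below:

    # Hypothetical reproduction of the suspected bug; LLM_TOKENS_PER_MINUTE
    # stands in for the hardcoded value in configs\defaults.py.
    LLM_TOKENS_PER_MINUTE = 0

    def read_tokens_per_minute(llm_section: dict) -> int:
        # Suspected key mismatch: the reader looks for "tpm" and never
        # sees the "tokens_per_minute" value from settings.yaml.
        value = llm_section.get("tpm")
        return value if value is not None else LLM_TOKENS_PER_MINUTE

    print(read_tokens_per_minute({"tokens_per_minute": 50000}))  # -> 0 (bug)
    print(read_tokens_per_minute({"tpm": 50000}))                # -> 50000 (workaround)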

KylinMountain commented 3 months ago

I found that changing the keys to tpm and rpm in settings.yaml works.

eyast commented 3 months ago

Thanks for the tip! It still seems strange, though, because the same key name is used for embeddings.llm.tokens_per_minute, and that one works fine.

AlonsoGuevara commented 3 months ago

Hi folks!

Thanks for following up on this and for providing workarounds. This has been fixed in #373 and will be included in the next version release. In the meantime, if you are using the source directly, please pull the latest main to pick up the fix.

Will leave the issue open until we release the next version.

AlonsoGuevara commented 2 months ago

0.2.0 is now live