stanford-futuredata / ARES

Automated Evaluation of RAG Systems
https://ares-ai.vercel.app/
Apache License 2.0

The local model requires a token to be passed in with each request. Is there such a parameter? #62

Closed · starplatinum3 closed this issue 2 months ago

starplatinum3 commented 2 months ago

The local model requires a token to be passed in with each request. May I ask if there is such a parameter?

from ares import ARES

ppi_config = { 
    "evaluation_datasets": ['nq_unabeled_output.tsv'], 
    "few_shot_examples_filepath": "nq_few_shot_prompt_for_judge_scoring.tsv",
    "llm_judge": "meta-llama/Llama-2-13b-hf", # Specify vLLM model
    "labels": ["Context_Relevance_Label"], 
    "gold_label_path": "nq_labeled_output.tsv",
    "vllm": True, # Toggle vLLM to True 
    "host_url": "http://0.0.0.0:8000/v1", # Replace with server hosting model followed by "/v1"
    "token":"token"  # May I ask if there is such a parameter 请问是否有这种参数呢
}

ares = ARES(ppi=ppi_config)
results = ares.evaluate_RAG()
print(results)
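
For reference, a minimal sketch of how the same endpoint can be called directly with a token, using the openai client (so the need is just for ARES to pass the token through; "your-token" is a placeholder for the real value, and the URL and model name match the config above):

from openai import OpenAI

# Standalone sanity check against the same OpenAI-compatible endpoint
# and model configured above.
client = OpenAI(api_key="your-token", base_url="http://0.0.0.0:8000/v1")
response = client.chat.completions.create(
    model="meta-llama/Llama-2-13b-hf",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)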
starplatinum3 commented 2 months ago

I modified the code as follows, and it works:

context_score = few_shot_context_relevance_scoring_vllm(
    context_relevance_system_prompt, query, document, model_choice,
    query_id, debug_mode, host_url, request_delay,
    failed_extraction_count, in_domain_prompts_dataset,
    openai_api_key=openai_api_key,
)

and added an openai_api_key parameter to the function:

from typing import Dict

from openai import OpenAI

if VLLM_AVAILABLE:
    def few_shot_context_relevance_scoring_vllm(
        system_prompt: str, query: str, document: str, model_choice: str,
        query_id: str, debug_mode: bool, host_url: str, request_delay: int,
        failed_extraction_count: Dict[str, int] = {'failed': 0},
        few_shot_examples=None, openai_api_key: str = "EMPTY"  # new parameter
    ) -> int:
        # Forward the caller-supplied token to the OpenAI-compatible server.
        client = OpenAI(
            api_key=openai_api_key,
            base_url=host_url
        )
        # ... the rest of the original scoring logic is unchanged
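
With that parameter in place, the value from ppi_config can be threaded through to the call site shown above. A sketch of the glue code (the "token" config key and the "EMPTY" fallback are my proposal, not part of upstream ARES; "EMPTY" is the conventional placeholder for a vLLM server running without authentication):

# Read the proposed "token" key from the config, falling back to the
# no-auth placeholder, then forward it to each *_scoring_vllm call.
openai_api_key = ppi_config.get("token", "EMPTY")

context_score = few_shot_context_relevance_scoring_vllm(
    context_relevance_system_prompt, query, document, model_choice,
    query_id, debug_mode, host_url, request_delay,
    failed_extraction_count, in_domain_prompts_dataset,
    openai_api_key=openai_api_key,
)

Note that the token also has to match whatever the server was started with; for vLLM's OpenAI-compatible server that is, as far as I know, the --api-key launch option.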