modelscope / data-juicer

A one-stop data processing system to make data higher-quality, juicier, and more digestible for (multimodal) LLMs! 🍎 🍋 🌽 ➡️ ➡️ 🍸 🍹 🍷
Apache License 2.0

How to add a metric to parse from HELM output in wandb_writer.py? #330

Closed Mr-lonely0 closed 2 months ago

Mr-lonely0 commented 3 months ago

Before Asking

Search before asking

Question

I want to add a metric named Denoised inference time (s) to tools/evaluator/recorder/wandb_writer.py to track the efficiency of my model, so I changed the config file mymodel_example.yaml as follows:

project: ee_tune_eval
base_url: https://wandb.ai/liukui/EE-TUNE-EVAL
evals:
  - eval_type: helm
    model_name: llama2_ee
    source: helm
    helm_output_dir: /data3/lk/data-juicer/helm_output
    helm_suite_name: ee_test-llama2_ee
    token_per_iteration: <tokens per iteration in billions>
    benchmarks:
      - name: summarization_cnndm
        metrics:
          - ROUGE-2
          - Denoised inference time (s)
      - name: summarization_xsum
        metrics:
          - ROUGE-2
      - name: narrative_qa
        metrics:
          - F1
      - name: mmlu
        metrics:
          - EM

However, I got an error:

parsing summarization_cnndm.json
  parsing dataset_name: cnn-dm, sampling_min_length: 50, sampling_max_length: 150, doc_max_length: 512
Fail to parse summarization_cnndm: 'value'

I then checked summarization_cnndm.json in the HELM output directory, and there is indeed no value recorded for Denoised inference time (s). What modifications should I make so that Denoised inference time (s) is parsed from the HELM output?
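
For reference, the kind of guard I imagine is needed looks roughly like the sketch below. This is not data-juicer's actual wandb_writer.py code; the parse_helm_metrics name and the rows/value layout of the JSON are assumptions I made from the error message above.

import json

# Minimal sketch, not the actual data-juicer code: it assumes each scenario
# JSON stores rows of cells, where a cell carries a metric's display name and,
# optionally, a 'value' field (a missing 'value' would explain the KeyError).
def parse_helm_metrics(json_path, wanted_metrics):
    with open(json_path) as f:
        data = json.load(f)
    parsed = {}
    for row in data.get('rows', []):      # 'rows' layout is an assumption
        for cell in row:
            name = cell.get('name')
            if name not in wanted_metrics:
                continue
            if 'value' in cell:           # guard instead of cell['value']
                parsed[name] = cell['value']
            else:
                print(f"Skip '{name}': no value recorded in HELM output")
    return parsed

# Example call: only metrics that actually appear in the JSON end up in the dict.
metrics = parse_helm_metrics('helm_output/summarization_cnndm.json',
                             {'ROUGE-2', 'Denoised inference time (s)'})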

I would greatly appreciate your help and look forward to your prompt response.

Additional

No response

pan-x-c commented 3 months ago

Please refer to link

github-actions[bot] commented 2 months ago

This issue is marked as stale because there has been no activity for 21 days. Remove the stale label or add new comments, or this issue will be closed in 3 days.