AI-secure / DecodingTrust

A Comprehensive Assessment of Trustworthiness in GPT Models
https://decodingtrust.github.io/

Hydra override error when running evaluations after non-editable installation #12

Open ziyic7 opened 1 year ago

ziyic7 commented 1 year ago

Describe the bug
If I install DecodingTrust using the second method under the '(Conda +) Pip' section and then run an evaluation with the provided script, I get a Hydra override error.

To Reproduce
Steps to reproduce the behavior:

  1. Install DecodingTrust. Note: either of the following two installation methods reproduces this error.
    a. Using the suggested method, without editable mode:
      git clone https://github.com/AI-secure/DecodingTrust.git && cd DecodingTrust
      pip install .
    b. Using the second method, from the '(Conda +) Pip' section:
      conda create --name dt-test python=3.9 pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
      conda activate dt-test
      pip install "decoding-trust @ git+https://github.com/AI-secure/DecodingTrust.git"
  2. Run the evaluation:
    dt-run +ood=knowledge_2020_5shot \
    ++model=openai/gpt-3.5-turbo-0301 \
    ++key=[MyOpenAIKey] \
    ++ood.out_file=data/ood/results/gpt-3.5-turbo-0301/knowledge_2020_5shot.json
  3. You should see the following error:
    omegaconf.errors.ValidationError: Invalid type assigned: str is not a subclass of OODConfig. value: knowledge_2020_5shot
    full_key: ood
    object_type=BaseConfig

Expected behavior
There should be no error when composing the output config. Specifically, in my case the key 'ood' in the output config should hold a dict loaded from the config file whose name is passed as the command-line argument for 'ood'.
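For reference, here is a minimal sketch (not the DecodingTrust source; OODConfig and BaseConfig below are simplified stand-ins) of how a Hydra structured config with an ood config group is expected to compose, and why a missing group can surface as exactly this ValidationError:

    # Minimal sketch of a Hydra structured config with an "ood" config group.
    # NOT the DecodingTrust source; OODConfig/BaseConfig are simplified
    # stand-ins used only to illustrate the failure mode.
    from dataclasses import dataclass
    from typing import Optional

    import hydra
    from hydra.core.config_store import ConfigStore
    from omegaconf import OmegaConf

    @dataclass
    class OODConfig:                     # hypothetical, simplified schema
        out_file: str = "out.json"

    @dataclass
    class BaseConfig:
        model: str = "openai/gpt-3.5-turbo-0301"
        ood: Optional[OODConfig] = None

    cs = ConfigStore.instance()
    cs.store(name="config", node=BaseConfig)

    @hydra.main(config_path="configs", config_name="config", version_base=None)
    def main(cfg: BaseConfig) -> None:
        # With a configs/ood/knowledge_2020_5shot.yaml discoverable on the
        # config search path, `python app.py +ood=knowledge_2020_5shot`
        # composes that file into cfg.ood. If the "ood" group cannot be
        # found, Hydra treats the override as a plain string value, and
        # merging a str into the OODConfig-typed field fails with
        # "Invalid type assigned: str is not a subclass of OODConfig".
        print(OmegaConf.to_yaml(cfg))

    if __name__ == "__main__":
        main()

This is consistent with the maintainer's reply below that Hydra did not recognize the ood config group.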


danielz02 commented 1 year ago

Where did you run dt-run? From the error message, it seems that Hydra did not recognize the ood config group.

ziyic7 commented 1 year ago

> Where did you run dt-run? From the error message, it seems that Hydra did not recognize the ood config group.

I ran dt-run in the repo's root directory. Sorry if I didn't make that clear. Taking install 1.b as an example: after installing DecodingTrust, I cloned the repo, changed to its root directory, and then ran dt-run.

danielz02 commented 1 year ago

Low priority - things work when executing dt-run from the repository root.

Arnold-Qixuan-Zhang commented 1 year ago

Hi. I am having the same issue when running the toxicity assessment. I tried to run dt-run +toxicity=realtoxicityprompts-toxic ++model=openai/gpt-3.5-turbo-0301 ++toxicity.n=25 ++toxicity.template=1 under .\DecodingTrust.

jinz2014 commented 10 months ago

In the DecodingTrust directory, running

    dt-run +toxicity=realtoxicityprompts-toxic ++dry_run=True ++model=openai/gpt-3.5-turbo-0301 ++toxicity.n=25 ++toxicity.template=0

gives:

    Error merging override +toxicity=realtoxicityprompts-toxic
    Invalid type assigned: str is not a subclass of ToxicityConfig. value: realtoxicityprompts-toxic
    full_key: toxicity
    object_type=BaseConfig

jinz2014 commented 10 months ago

@danielz02 You mentioned that you could run without errors. Do you think this is caused by our environment settings?

peter-peng-w commented 9 months ago

Same issue here. I ran dt-run at the root directory but encountered the same error message. Any suggestions?

danielz02 commented 9 months ago

> Same issue here. I ran dt-run at the root directory but encountered the same error message. Any suggestions?

Hi, we are working on a new version that integrates everything more smoothly. Could you try editable install?

We also have a newer version in the release branch, and we plan to merge it to main this week.

peter-peng-w commented 9 months ago

> > Same issue here. I ran dt-run at the root directory but encountered the same error message. Any suggestions?
>
> Hi, we are working on a new version that integrates everything more smoothly. Could you try editable install?
>
> We also have a newer version in the release branch, and we plan to merge it to main this week.

Awesome! Editable install solved this issue. Also looking forward to the new version!

peter-peng-w commented 9 months ago

> > Same issue here. I ran dt-run at the root directory but encountered the same error message. Any suggestions?
>
> Hi, we are working on a new version that integrates everything more smoothly. Could you try editable install?
>
> We also have a newer version in the release branch, and we plan to merge it to main this week.

Hi, it seems that the release branch has already been merged into the main branch. However, when I re-install and run commands such as dt-run ++model=openai/gpt-3.5-turbo-0301 ++dry_run=True ++key='' +fairness=zero_shot_br_0.0.yaml it throws the following error:

omegaconf.errors.MissingMandatoryValue: Structured config of type `BaseConfig` has missing mandatory value: model_config
    full_key: model_config
    object_type=BaseConfig

When I print the config in main, the attribute model_config appears to be undefined:

{'model_config': '???', 'disable_sys_prompt': False, 'key': '', 'dry_run': True, 'advglue': None, 'adv_demonstration': None, 'fairness': {'data_dir': './data/fairness/fairness_data/', 'prompt_file': 'adult_0_200_test_base_rate_0.0.jsonl', 'gt_file': 'gt_labels_adult_0_200_test_base_rate_0.0.npy', 'sensitive_attr_file': 'sensitive_attr_adult_0_200_test_base_rate_0.0.npy', 'dataset': 'adult', 'out_file': './results/fairness/results/${model_config.model}/zero_shot_br_0.0.json', 'score_calculation_only': False, 'max_tokens': 20}, 'machine_ethics': None, 'ood': None, 'privacy': None, 'stereotype': None, 'toxicity': None, 'model': 'openai/gpt-3.5-turbo-0301'}

May I ask how to solve this issue?
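For reference, a minimal sketch (independent of the DecodingTrust code; ModelConfig and BaseConfig below are simplified stand-ins) of how OmegaConf reports a '???' field, which is what the MissingMandatoryValue error above is about:

    # Minimal sketch, independent of the DecodingTrust code, showing how
    # OmegaConf handles "???" (MISSING) fields. ModelConfig/BaseConfig are
    # simplified stand-ins, not the real DecodingTrust schemas.
    from dataclasses import dataclass

    from omegaconf import MISSING, OmegaConf
    from omegaconf.errors import MissingMandatoryValue

    @dataclass
    class ModelConfig:                       # hypothetical, simplified schema
        model: str = MISSING

    @dataclass
    class BaseConfig:
        model_config: ModelConfig = MISSING  # rendered as '???' in the dump above
        model: str = "openai/gpt-3.5-turbo-0301"

    cfg = OmegaConf.structured(BaseConfig)
    print(OmegaConf.to_yaml(cfg))            # prints "model_config: ???"

    try:
        _ = cfg.model_config                 # accessing a MISSING field...
    except MissingMandatoryValue as err:     # ...raises MissingMandatoryValue
        print(type(err).__name__, err)

In other words, the dump shows that nothing was composed into model_config, so any access to it fails with this error.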

aU53r commented 9 months ago

> Hi, it seems that the release branch has already been merged into the main branch. However, when I re-install and run commands such as dt-run ++model=openai/gpt-3.5-turbo-0301 ++dry_run=True ++key='' +fairness=zero_shot_br_0.0.yaml it throws the following error: [...]
>
> May I ask how to solve this issue?

Hello, I'm not sure if it's correct, but I resolved this issue by modifying line 133 of configs.py to model_config: ModelConfig = ModelConfig()

danielz02 commented 9 months ago

> > Hi, it seems that the release branch has already been merged into the main branch. However, when I re-install and run commands such as dt-run ++model=openai/gpt-3.5-turbo-0301 ++dry_run=True ++key='' +fairness=zero_shot_br_0.0.yaml it throws the following error: [...]
>
> Hello, I'm not sure if it's correct, but I resolved this issue by modifying line 133 of configs.py to model_config: ModelConfig = ModelConfig()

Hi! model_config should be the model names. We will update the documentation shortly.

notrichardren commented 8 months ago

Hi! I'm still facing this same issue.

dt-run +key=<my openai api key> toxicity=realtoxicityprompts-toxic

returns:

omegaconf.errors.MissingMandatoryValue: Structured config of type `BaseConfig` has missing mandatory value: model_config
    full_key: model_config
    object_type=BaseConfig

danielz02 commented 8 months ago

Hi! Please use editable installation instead and see if the error persists.