facebookresearch / ParlAI

A framework for training and evaluating AI models on a variety of openly available dialogue datasets.
https://parl.ai
MIT License

[GenderBias] Running detect_offensive on genderation FT Blender model gives "Key Error" #3685

Closed: gabrielle-lau closed this issue 3 years ago

gabrielle-lau commented 3 years ago

Bug description: I finetuned the Blender 90M model on the genderation training datasets (i.e. blended_skill_talk, wizard_of_wikipedia, convai2, and empathetic_dialogues, annotated with gender bins). When I ran the detect_offensive command on the finetuned model's responses to convai2's valid set, it failed with "KeyError: 'text'" on the dict returned by self.model.act() at line 63 of safety.py.

Reproduction steps: Produce the genderation datasets; for example, for convai2:

parlai display_data --task genderation_bias:controllable_task:convai2

Create a custom task for each genderation dataset by following this tutorial, e.g. a new task called "blended_skill_talk_genderation".

Finetune the Blender 90M model on the genderation datasets with the command:

parlai train_model -t  \
blended_skill_talk_genderation,wizard_of_wikipedia_genderation,convai2_genderation,empathetic_dialogues_genderation \
--datapath $BDIR/genderation_data --model transformer/generator --multitask-weights 1,3,3,3 \
--init-model zoo:blender/blender_90M/model \
--dict-file zoo:blender/blender_90M/model.dict  --embedding-size 512 --n-layers 8 \
--ffn-size 2048 --dropout 0.1 --n-heads 16 --learn-positional-embeddings True --n-positions \
 512 --variant xlm --activation gelu --fp16 True --text-truncate 512 --label-truncate 128 \
 --dict-tokenizer bpe --dict-lower True -lr 1e-06 --optimizer adam --lr-scheduler \
 reduceonplateau --gradient-clip 0.1 -veps 0.25 --betas 0.9,0.999 --update-freq 1 \
 --attention-dropout 0.0 --relu-dropout 0.0 --skip-generation True -vp 15 -stim 60 -vme \
 20000 -bs 16 -vmt ppl -vmm min --save-after-valid True --num-epochs 5 --model-file \
 $CHECKPOINT/FT_90M_genderation &>> $LOG

Run the detect_offensive command on my finetuned model:

parlai detect_offensive --task convai2 -dt valid --display-examples True \
-mf $CHECKPOINT/FT_90M_genderation --dynamic-batching full

Expected behavior: Print the offense metrics for the classifier and the string matcher respectively, i.e. classifier offenses % and string offenses %.

Logs: Traceback from running detect_offensive:

Traceback (most recent call last):
  File "/home/user/.pyenv/versions/3.8.2/bin/parlai", line 11, in <module>
    load_entry_point('parlai', 'console_scripts', 'parlai')()
  File "/home/user/.pyenv/versions/3.8.2/lib/python3.8/site-packages/parlai/__main__.py", line 14, in main
    superscript_main()
  File "/home/user/.pyenv/versions/3.8.2/lib/python3.8/site-packages/parlai/core/script.py", line 306, in superscript_main
    return SCRIPT_REGISTRY[cmd].klass._run_from_parser_and_opt(opt, parser)
  File "/home/user/.pyenv/versions/3.8.2/lib/python3.8/site-packages/parlai/core/script.py", line 89, in _run_from_parser_and_opt
    return script.run()
  File "/home/user/.pyenv/versions/3.8.2/lib/python3.8/site-packages/parlai/scripts/detect_offensive_language.py", line 129, in run
    return detect(self.opt)
  File "/home/user/.pyenv/versions/3.8.2/lib/python3.8/site-packages/parlai/scripts/detect_offensive_language.py", line 104, in detect
    classify(text, stats)
  File "/home/user/.pyenv/versions/3.8.2/lib/python3.8/site-packages/parlai/scripts/detect_offensive_language.py", line 93, in classify
    if text in offensive_classifier:
  File "/home/user/.pyenv/versions/3.8.2/lib/python3.8/site-packages/parlai/utils/safety.py", line 76, in __contains__
    pred_not_ok, prob = self.contains_offensive_language(key)
  File "/home/user/.pyenv/versions/3.8.2/lib/python3.8/site-packages/parlai/utils/safety.py", line 63, in contains_offensive_language
    response = self.model.act()['text']
KeyError: 'text'
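The failure mode can be reproduced in isolation: the agent's act() returns a message dict that lacks a 'text' key, so the unguarded lookup in safety.py raises. A minimal sketch (the reply dict contents and the response_text helper are hypothetical stand-ins, not ParlAI's actual message or code):

```python
# Hypothetical stand-in for an agent reply produced under --skip-generation True:
# the message carries metadata but no generated 'text' field.
reply = {"id": "TransformerGenerator", "episode_done": False}

def response_text(message):
    # Mirrors the unguarded lookup at safety.py line 63:
    #     response = self.model.act()['text']
    return message["text"]

try:
    response_text(reply)
except KeyError as err:
    print(f"KeyError: {err}")  # prints: KeyError: 'text'
```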

Additional context: My goal is to replicate the gender bias control method's results in Table 15 (p. 20) of the Recipes for Safety paper, so ultimately I want to run detect_offensive controlled on a fixed gender bin, e.g. f0m0:

parlai detect_offensive --task genderation_bias:controllable_task:convai2 \
--fixed_control 'f0m0' -dt valid --display-examples True \
-mf $CHECKPOINT/FT_90M_genderation --dynamic-batching full

Versions: parlai 1.2.0, Python 3.8.2, PyTorch 1.8.1+cu102

Thank you very much.

gabrielle-lau commented 3 years ago

Setting --skip-generation False in the train_model command solved this problem. It appears that when the model is trained and saved with --skip-generation True, its act() produces no generated 'text' field, which is exactly what the offensive-language check in safety.py tries to read.
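For reference, a guarded version of that lookup would surface a hint instead of a bare KeyError; a minimal sketch (get_response_text is a hypothetical helper, not ParlAI code):

```python
def get_response_text(message):
    # Hypothetical defensive version of safety.py's `response = act()['text']`:
    # raise an actionable error instead of a bare KeyError when no text exists.
    text = message.get("text")
    if text is None:
        raise RuntimeError(
            "Agent reply has no 'text' field; if the model was trained with "
            "--skip-generation True, retrain or re-save it with "
            "--skip-generation False."
        )
    return text

print(get_response_text({"text": "hello there"}))  # prints: hello there
```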