AI-secure / DecodingTrust

A Comprehensive Assessment of Trustworthiness in GPT Models
https://decodingtrust.github.io/
Creative Commons Attribution Share Alike 4.0 International

How to evaluate toxicity task on local hf-llama2-7B? #19

Open AboveParadise opened 1 year ago

AboveParadise commented 1 year ago

Here is my code:

#!/bin/bash
dt-run +toxicity=realtoxicityprompts-toxic  \
    ++model=hf/../llama/llama-2-7b-hf \
    ++toxicity.n=25 \
    ++toxicity.template=1

and the error is:

Traceback (most recent call last):
  File "/mnt/disk1/yg/DecodingTrust/src/dt/main.py", line 42, in main
    perspective_module.main(perspective_args(perspective_config))
  File "/mnt/disk1/yg/DecodingTrust/src/dt/perspectives/toxicity/text_generation_hydra.py", line 29, in main
    generator = Chat.from_helm(OPTS, conv_template=args.conv_template, cache=dirname, api_key=args.key)
  File "/mnt/disk1/yg/DecodingTrust/src/dt/chat.py", line 41, in from_helm
    return HFChat(model_name.replace("hf/", "").rstrip("/"), kwargs)
  File "/mnt/disk1/yg/DecodingTrust/src/dt/chat.py", line 364, in __init__
    self.conv_template = get_conv_template(conv_template)
  File "/mnt/disk1/yg/DecodingTrust/src/dt/conversation.py", line 284, in get_conv_template
    return conv_templates[name].copy()
KeyError: None

How can I fix it?

danielz02 commented 1 year ago

Thanks for your interest. To specify a local HF model, please use hf//path/to/local/hf/model.

AboveParadise commented 1 year ago

Thanks for your reply, but I am already pointing at my local hf-llama2-7b model's location, ../llama/llama-2-7b-hf, which is a folder downloaded from Hugging Face. Is there a mistake in the format?

danielz02 commented 1 year ago

Please try this:

#!/bin/bash
dt-run +toxicity=realtoxicityprompts-toxic  \
    ++model=hf//../llama/llama-2-7b-hf \
    ++toxicity.n=25 \
    ++toxicity.template=1
AboveParadise commented 1 year ago

Thanks, but I've already tried this and got this error:

Could not parse model name: '/../llama/llama-2-7b-hf'; Expected format: [namespace/]model_name[@revision]
Error executing job with overrides: ['+toxicity=realtoxicityprompts-toxic', '++model=hf//../llama/llama-2-7b-hf', '++toxicity.n=25', '++toxicity.template=1']
Traceback (most recent call last):
  File "/mnt/disk1/yg/DecodingTrust/src/dt/main.py", line 42, in main
    perspective_module.main(perspective_args(perspective_config))
  File "/mnt/disk1/yg/DecodingTrust/src/dt/perspectives/toxicity/text_generation_hydra.py", line 29, in main
    generator = Chat.from_helm(OPTS, conv_template=args.conv_template, cache=dirname, api_key=args.key)
  File "/mnt/disk1/yg/DecodingTrust/src/dt/chat.py", line 41, in from_helm
    return HFChat(model_name.replace("hf/", "").rstrip("/"), kwargs)
  File "/mnt/disk1/yg/DecodingTrust/src/dt/chat.py", line 361, in __init__
    raise ValueError("Unable to retrieve model config")
ValueError: Unable to retrieve model config
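(For reference, the parse failure can be traced to the string handling quoted in the traceback: from_helm strips the hf/ prefix with str.replace before handing the name on. A minimal sketch of just that step; the replace/rstrip calls are copied from the traceback, everything else here is illustrative:)

```python
# Sketch of the model-name handling shown in the traceback above
# (chat.py line 41). With the hf// double-slash form and a relative
# path, stripping "hf/" leaves a leading "/" and the ".." intact:
model_override = "hf//../llama/llama-2-7b-hf"
stripped = model_override.replace("hf/", "").rstrip("/")
print(stripped)  # -> /../llama/llama-2-7b-hf
```

That leftover string is neither a valid [namespace/]model_name[@revision] identifier nor a resolvable local path, which may be why the config lookup fails.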

This seems worse than the previous error. How can I fix it? FYI, the contents of my ../llama/llama-2-7b-hf/ are as below:

(llmbench) [root@gpu24 DecodingTrust]# ls ../llama/llama-2-7b-hf/
config.json             gitattributes.txt                 pytorch_model-00002-of-00002.bin  special_tokens_map.json  tokenizer.json
generation_config.json  pytorch_model-00001-of-00002.bin  pytorch_model.bin.index.json      tokenizer_config.json    tokenizer.model
AboveParadise commented 1 year ago

I think the main problem is that there is no conv_template in the args.
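(That reading matches the first traceback: get_conv_template indexes a registry of templates, so a missing conv_template arrives as None and the lookup raises. A toy reproduction; the dict contents are invented, only the lookup pattern mirrors conversation.py:)

```python
# Toy reproduction of the "KeyError: None" from the first traceback.
# conv_templates stands in for the template registry in conversation.py;
# its contents here are made up for illustration.
conv_templates = {"llama-2": object()}

def get_conv_template(name):
    # Mirrors `return conv_templates[name].copy()`: if no template
    # name was supplied, name is None and the dict lookup raises
    # KeyError(None) before any template is returned.
    return conv_templates[name]

try:
    get_conv_template(None)
except KeyError as err:
    print("KeyError:", err.args[0])
```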

danielz02 commented 1 year ago

Could you try using an absolute path?