amazon-science / wqa_tanda

This repo provides code and data used in our TANDA paper.

Transferred model on WIKI dataset #6

Closed · liudonglei closed this 3 years ago

liudonglei commented 3 years ago

I downloaded the "Models Transferred on ASNQ (BERT-Base ASNQ)" model and used the following command to test this transferred model on the WikiQA dataset, but got this error:

python run_glue.py \
  --model_type bert \
  --model_name_or_path ../../models/tanda_bert_base_asnq \
  --task_name ASNQ --do_train --do_eval --sequential --do_lower_case \
  --data_dir ../../data/WikiQACorpus --per_gpu_train_batch_size 150 \
  --learning_rate 1e-6 --num_train_epochs 2.0 \
  --output_dir ../../out2

Traceback (most recent call last):
  File "/home/ldl/406.tf/tanda-qa/transformers/transformers/configuration_utils.py", line 134, in from_pretrained
    resolved_config_file = cached_path(config_file, cache_dir=cache_dir, force_download=force_download, proxies=proxies)
  File "/home/ldl/406.tf/tanda-qa/transformers/transformers/file_utils.py", line 182, in cached_path
    raise EnvironmentError("file {} not found".format(url_or_filename))
OSError: file ../../out2/config.json not found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "run_glue.py", line 561, in <module>
    main()
  File "run_glue.py", line 494, in main
    model = model_class.from_pretrained(args.output_dir)
  File "/home/ldl/406.tf/tanda-qa/transformers/transformers/modeling_utils.py", line 333, in from_pretrained
    **kwargs
  File "/home/ldl/406.tf/tanda-qa/transformers/transformers/configuration_utils.py", line 146, in from_pretrained
    raise EnvironmentError(msg)

Why should the output_dir not be empty?

ryanpram commented 3 years ago

I changed "model = model_class.from_pretrained(args.output_dir)" to "model = model_class.from_pretrained(args.model_name_or_path)" as a quick fix, but I don't know whether it is the right fix or not.
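For reference, here is the change being described, sketched against the run_glue.py lines visible in the traceback above (line 494 in main); this only mirrors what the comment says and is not a verified patch:

```python
# run_glue.py, in main(): evaluation reloads the model after training.
# Original line -- fails when --output_dir is still empty:
model = model_class.from_pretrained(args.output_dir)

# Quick fix described above -- reload from the directory the weights came from:
model = model_class.from_pretrained(args.model_name_or_path)
```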

ryanpram commented 3 years ago

I think removing "--sequential" is fine, since the sequential argument is for fine-tuning from a checkpoint.

liudonglei commented 3 years ago

> I changed "model = model_class.from_pretrained(args.output_dir)" to "model = model_class.from_pretrained(args.model_name_or_path)" as a quick fix, but I don't know whether it is the right fix or not.

This fix does not work; there are still errors:

$ python run_glue.py --model_type bert \
    --model_name_or_path ../../data/models/tanda_bert_base_asnq \
    --task_name ASNQ --do_train --do_eval --sequential --do_lower_case \
    --data_dir ../../data/wiki-txt --per_gpu_train_batch_size 56 \
    --learning_rate 1e-6 --num_train_epochs 2.0 \
    --output_dir ../../data/wiki-out-1

Traceback (most recent call last):
  File "/home/ldl/406.tf/tanda-qa/transformers/transformers/configuration_utils.py", line 134, in from_pretrained
    resolved_config_file = cached_path(config_file, cache_dir=cache_dir, force_download=force_download, proxies=proxies)
  File "/home/ldl/406.tf/tanda-qa/transformers/transformers/file_utils.py", line 182, in cached_path
    raise EnvironmentError("file {} not found".format(url_or_filename))
OSError: file ../../data/models/tanda_bert_base_asnq/config.json not found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "run_glue.py", line 570, in <module>
    main()
  File "run_glue.py", line 490, in main
    cache_dir=args.cache_dir if args.cache_dir else None)
  File "/home/ldl/406.tf/tanda-qa/transformers/transformers/configuration_utils.py", line 146, in from_pretrained
    raise EnvironmentError(msg)
OSError: Model name '../../data/models/tanda_bert_base_asnq' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed '../../data/models/tanda_bert_base_asnq/config.json' was a path or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.

It seems that the model the authors offer as "Models Transferred on ASNQ" (https://wqa-public.s3.amazonaws.com/tanda-aaai-2020/models/tanda_bert_base_asnq.tar) is not in the format expected for fine-tuning on WikiQA. The authors' code still seems incomplete.

ryanpram commented 3 years ago

I fixed that error by changing "--model_name_or_path ../../models/tanda_bert_base_asnq" to "--model_name_or_path ../../models/tanda_bert_base_asnq/ckpt".

liudonglei commented 3 years ago

> I fixed that error by changing "--model_name_or_path ../../models/tanda_bert_base_asnq" to "--model_name_or_path ../../models/tanda_bert_base_asnq/ckpt".

No, there is no ckpt directory in ../../models/tanda_bert_base_asnq:

$ ll ../../data/models/tanda_bert_base_asnq
total 1.3G
-rw-rw-r-- 1 ldl ldl 1.3G Sep  7  2019 model.ckpt.data-00000-of-00001
-rw-rw-r-- 1 ldl ldl  313 Sep  7  2019 bert_config.json
-rw-rw-r-- 1 ldl ldl  23K Sep  7  2019 model.ckpt.index
-rw-rw-r-- 1 ldl ldl 3.9M Sep  7  2019 model.ckpt.meta
-rw-rw-r-- 1 ldl ldl 227K Sep  7  2019 vocab.txt

These are TensorFlow checkpoint files (model.ckpt.*), not a PyTorch/transformers-format directory with a config.json, which is why from_pretrained cannot load them.
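If you do want to use the BERT-Base checkpoint, one option is to first convert the TensorFlow checkpoint into the PyTorch layout that run_glue.py expects. A minimal sketch, assuming TensorFlow is installed and that the transformers version bundled with this repo supports from_tf loading; the paths and the num_labels value are illustrative assumptions:

```python
from transformers import BertConfig, BertForSequenceClassification

ckpt_dir = "../../data/models/tanda_bert_base_asnq"  # illustrative path

# Build the config from the checkpoint's bert_config.json
config = BertConfig.from_json_file(ckpt_dir + "/bert_config.json")
config.num_labels = 2  # assumption: binary answer-sentence-selection head

# from_tf=True loads weights from a TF checkpoint; point it at the .ckpt.index file
model = BertForSequenceClassification.from_pretrained(
    ckpt_dir + "/model.ckpt.index", from_tf=True, config=config)

# Writes config.json + pytorch_model.bin; copy vocab.txt alongside for the tokenizer
model.save_pretrained(ckpt_dir + "/pytorch")
```

Pointing --model_name_or_path at the converted directory should at least get past the config.json error; whether the task-specific head maps over cleanly is not verified here.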

ryanpram commented 3 years ago

Oh sorry, my bad.. I'm using the tanda_roberta_large_asnq model.

liudonglei commented 3 years ago

> Oh sorry, my bad.. I'm using the tanda_roberta_large_asnq model.

Thanks, you are right. The folder of the tanda_roberta_large_asnq model is in the right format.

Hi, it seems that you have successfully run the program. Can you tell me how to get the MRR and MAP metrics for this ASNQ model on WikiQA? The code seems to report only the accuracy metric.

ryanpram commented 3 years ago

> Thanks, you are right. The folder of the tanda_roberta_large_asnq model is in the right format.
>
> Hi, it seems that you have successfully run the program. Can you tell me how to get the MRR and MAP metrics for this ASNQ model on WikiQA? The code seems to report only the accuracy metric.

Unfortunately I only get the accuracy result, as the code provides. By the way, I'm the same person who created the issue asking the author how to get the MAP and MRR results.

liudonglei commented 3 years ago

> Unfortunately I only get the accuracy result, as the code provides. By the way, I'm the same person who created the issue asking the author how to get the MAP and MRR results.

OK, thanks very much. I think the code may be incomplete.

ryanpram commented 3 years ago

@liudonglei Yes, I hope the author will clarify this soon.

sid7954 commented 3 years ago

For computing MAP and MRR, you can use the following two functions, which take as input the list of questions, the list of labels, and the list of predictions (note that the questions list has repeated entries, one per answer candidate for that question):

'''
questions   : list of questions in the dataset (one entry per answer candidate)
labels      : list of 0/1 labels indicating whether the answer is correct for its question
predictions : list of probability scores from a QA model for the question-answer pairs
'''

def mean_average_precision(questions, labels, predictions):
    # Aggregate (prediction, label) tuples per question from all answer candidates
    question_results = {}
    for question, prediction, label in zip(questions, predictions, labels):
        question_results.setdefault(question, []).append((prediction, label))

    sum_AP = 0.0
    num_evaluated = 0
    for q in question_results:
        # Rank this question's candidates by model score, highest first
        _scores, _labels = zip(*sorted(question_results[q], reverse=True))

        if sum(_labels) == 0: continue             # all candidates incorrect
        if len(_labels) == sum(_labels): continue  # all candidates correct

        # Average precision: mean of precision@k over the ranks k of correct answers
        sum_question_AP_at_k = num_correct_at_k = 0.0
        for position, label in enumerate(_labels, 1):
            num_correct_at_k += (label == 1)
            sum_question_AP_at_k += (label == 1) * num_correct_at_k / position

        sum_AP += sum_question_AP_at_k / num_correct_at_k
        num_evaluated += 1

    # Average over the evaluated questions only; dividing by the total number of
    # questions would silently deflate the score for every skipped question
    MAP = sum_AP / num_evaluated
    return MAP

def mean_reciprocal_rank(questions, labels, predictions):
    # Aggregate (prediction, label) tuples per question from all answer candidates
    question_results = {}
    for question, prediction, label in zip(questions, predictions, labels):
        question_results.setdefault(question, []).append((prediction, label))

    sum_RR = 0.0
    num_evaluated = 0
    for q in question_results:
        # Rank this question's candidates by model score, highest first
        _scores, _labels = zip(*sorted(question_results[q], reverse=True))

        if sum(_labels) == 0: continue             # all candidates incorrect
        if len(_labels) == sum(_labels): continue  # all candidates correct

        # Reciprocal rank: 1 / rank of the highest-ranked correct answer
        for rank, label in enumerate(_labels, 1):
            if label == 1:
                sum_RR += 1.0 / rank
                break
        num_evaluated += 1

    # Average over the evaluated questions only, as in mean_average_precision
    MRR = sum_RR / num_evaluated
    return MRR
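A hypothetical usage example with toy values (not WikiQA data): each question appears once per candidate, labels mark the correct candidates, and predictions are the model's positive-class scores.

```python
questions   = ["q1", "q1", "q1", "q2", "q2"]
labels      = [0, 1, 0, 1, 0]
predictions = [0.90, 0.80, 0.30, 0.55, 0.40]

# q1: correct answer ranked 2nd -> AP = RR = 0.5; q2: ranked 1st -> AP = RR = 1.0
print("MAP:", mean_average_precision(questions, labels, predictions))  # 0.75
print("MRR:", mean_reciprocal_rank(questions, labels, predictions))    # 0.75
```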