thunlp / KnowledgeablePromptTuning

kpt code

Questions about the experiments #9

Closed Hou-jing closed 2 years ago

Hou-jing commented 2 years ago

(1) For Frequency Refinement, is it reflected in the code as `myverbalizer = KnowledgeableVerbalizer(tokenizer, classes=class_labels, candidate_frac=cutoff, pred_temp=args.pred_temp, max_token_split=args.max_token_split).from_file(f"{args.openprompt_path}/scripts/{scriptsbase}/knowledgeable_verbalizer.{scriptformat}")`, where `cutoff` is the threshold mentioned in the experiments? I'm not sure whether my understanding is correct.

(2) In the few-shot experiments, why are the labels of the support set not removed? The paper says "Our proposed Contextualized Calibration utilizes a limited amount of unlabeled support data to yield significantly better results", yet in the experiments this step appears to be commented out, which I find confusing.
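For reference, here is a minimal, self-contained sketch of the construction I mean, assuming OpenPrompt's `load_plm` and `KnowledgeableVerbalizer` APIs; the label set, hyperparameter values, and verbalizer file path below are placeholders, not the repo's actual configuration:

```python
from openprompt.plms import load_plm
from openprompt.prompts import KnowledgeableVerbalizer

# Load a masked language model and its tokenizer (model choice is illustrative).
plm, tokenizer, model_config, WrapperClass = load_plm("roberta", "roberta-large")

class_labels = ["negative", "positive"]  # placeholder label set
cutoff = 0.5                             # candidate_frac: the frequency-refinement threshold

# Build the knowledgeable verbalizer: candidate_frac controls which
# candidate label words survive frequency refinement, and from_file
# loads the expanded label-word list.
myverbalizer = KnowledgeableVerbalizer(
    tokenizer,
    classes=class_labels,
    candidate_frac=cutoff,
    pred_temp=1.0,          # placeholder for args.pred_temp
    max_token_split=-1,     # placeholder for args.max_token_split
).from_file("scripts/TextClassification/agnews/knowledgeable_verbalizer.txt")
```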

ShengdingHu commented 2 years ago
  1. Yes, `cutoff` is the threshold.
  2. That step should not be commented out in my code: https://github.com/thunlp/KnowledgeablePromptTuning/blob/main/zeroshot.py#L111. You may have commented it out by accident.
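For reference, a minimal sketch of what the linked step does, as I understand it, assuming OpenPrompt's `FewShotSampler` and `InputExample`; the toy training set and sample sizes are placeholders:

```python
from openprompt.data_utils import InputExample
from openprompt.data_utils.data_sampler import FewShotSampler

# Toy training set standing in for the real dataset.
dataset = {
    "train": [
        InputExample(guid=i, text_a=f"example text {i}", label=i % 2)
        for i in range(1000)
    ]
}

# Draw a small support set for Contextualized Calibration.
support_sampler = FewShotSampler(num_examples_total=200, also_sample_dev=False)
dataset["support"] = support_sampler(dataset["train"], seed=144)

# The step in question: erase the gold labels so calibration
# only ever uses unlabeled support data.
for example in dataset["support"]:
    example.label = -1
```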
Hou-jing commented 2 years ago

Thanks, I see it now. Sorry for the trouble.