cmnfriend / O-LoRA

MIT License

llama2 results are lower than the llama1 results reported in the paper #22

Open pixas opened 1 month ago

pixas commented 1 month ago

Hello, and thanks for the excellent work. According to the paper, you use llama-7b, and the standard CL result for order-1 is 76.8. However, when I run scripts_llama/order_1.sh with llama2, I get the following results:

{
    "epoch": 0.99,
    "predict_exact_match": 43.2072,
    "predict_exact_match_for_SC": 14.9605,
    "predict_exact_match_for_TC": 52.6228,
    "predict_exact_match_for_agnews": 57.8553,
    "predict_exact_match_for_amazon": 14.9605,
    "predict_exact_match_for_dbpedia": 59.3553,
    "predict_exact_match_for_yahoo": 40.6579,
    "predict_gen_len": 420.8973,
    "predict_global_step": 62,
    "predict_loss": 0.0,
    "predict_rouge1": 53.6793,
    "predict_rouge1_for_SC": 42.9978,
    "predict_rouge1_for_TC": 57.2398,
    "predict_rouge1_for_agnews": 57.8553,
    "predict_rouge1_for_amazon": 42.9978,
    "predict_rouge1_for_dbpedia": 72.6939,
    "predict_rouge1_for_yahoo": 41.1703,
    "predict_rougeL": 53.6792,
    "predict_rougeL_for_SC": 42.9978,
    "predict_rougeL_for_TC": 57.2397,
    "predict_rougeL_for_agnews": 57.8553,
    "predict_rougeL_for_amazon": 42.9978,
    "predict_rougeL_for_dbpedia": 72.6939,
    "predict_rougeL_for_yahoo": 41.1698,
    "predict_runtime": 1397.0294,
    "predict_samples": 30400,
    "predict_samples_per_second": 21.76,
    "predict_steps_per_second": 0.68,
    "train_loss": 20.316666218542284,
    "train_runtime": 218.6764,
    "train_samples": 4000,
    "train_samples_per_second": 18.292,
    "train_steps_per_second": 0.284
}
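Incidentally, the aggregate predict_exact_match in the log appears to be the unweighted mean of the four per-task exact-match scores, which is easy to check (a quick sketch; the dict below simply copies the per-task numbers from the log, and the "unweighted mean" reading is an assumption, not something confirmed by the repo):

```python
# Per-task exact-match scores copied from the log above.
per_task = {
    "agnews": 57.8553,
    "amazon": 14.9605,
    "dbpedia": 59.3553,
    "yahoo": 40.6579,
}

# Unweighted mean across tasks (assumption about how the
# overall predict_exact_match is aggregated).
overall = sum(per_task.values()) / len(per_task)
print(overall)  # ≈ 43.2072, matching predict_exact_match
```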

None of these numbers seem to match the paper. So I have two questions:

  1. Which metric should I look at? Which one is the metric used in the paper?
  2. llama2 should in theory be stronger than llama1, but my results do not show that. What could be the cause?
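Regarding the metric question: for classification tasks like agnews/dbpedia/yahoo/amazon, accuracy is usually reported as exact match (string equality between the generated answer and the gold label after normalization), not ROUGE. A minimal sketch of how exact match is commonly computed, as an illustration under that assumption rather than O-LoRA's actual evaluation code (the labels below are hypothetical):

```python
def exact_match(preds, refs):
    """Percentage of predictions equal to the reference after
    whitespace/case normalization."""
    assert len(preds) == len(refs) and refs
    hits = sum(p.strip().lower() == r.strip().lower()
               for p, r in zip(preds, refs))
    return 100.0 * hits / len(refs)

# Hypothetical agnews-style labels for illustration.
preds = ["World", "Sports", "Business", "sci/tech"]
refs = ["World", "Sports", "Sci/Tech", "Sci/Tech"]
print(exact_match(preds, refs))  # 75.0
```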