chaoyi-wu / PMC-LLaMA

The official codes for "PMC-LLaMA: Towards Building Open-source Language Models for Medicine"

generation issue on PMC_LLAMA_7B #16

Open XZhang97666 opened 12 months ago

XZhang97666 commented 12 months ago

I tried to use PMC_LLAMA_7B for text generation and MedQA. However, it runs into some issues, e.g. it copies the previous input without generating anything new. One thing I noticed is that the special tokens map is {}. Could you recheck the model on Hugging Face?
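An empty special tokens map could explain why generation never stops: without an EOS token id the model has no termination signal. A minimal sketch of a workaround is to fill in the missing special tokens before calling `generate`. Note that the token strings below are the standard LLaMA defaults, not values confirmed from the PMC-LLaMA checkpoint:

```python
# Standard LLaMA special tokens (an assumption; verify against the actual
# PMC-LLaMA tokenizer config before relying on them).
LLAMA_DEFAULTS = {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>"}

def ensure_special_tokens(tokenizer, defaults=LLAMA_DEFAULTS):
    """Fill in any special tokens the tokenizer is missing; return what was added."""
    added = {}
    for name, token in defaults.items():
        if not getattr(tokenizer, name, None):
            setattr(tokenizer, name, token)
            added[name] = token
    return added
```

With a Hugging Face tokenizer you would then pass `eos_token_id=tokenizer.eos_token_id` to `model.generate(...)` so decoding can actually terminate.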

XZhang97666 commented 12 months ago

In addition, I also want to check the QA benchmark setting. I used greedy decoding on ChatDoctor without any fine-tuning on the MedQA training dataset, and the gap from your reported results is large. I wonder whether you used any other strategies, e.g. CoT, for generation. Thanks.

WeixiongLin commented 11 months ago

Thanks for your interest. May I have your input prompt, please? Open-source LLMs are often sensitive to prompts, so the prompt might influence the performance. Besides, you could try different decoding strategies (e.g. top-k).
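For reference, top-k decoding keeps only the k highest-scoring logits at each step and samples among them instead of always taking the argmax as greedy decoding does. This is the idea behind `model.generate(do_sample=True, top_k=k)` in Hugging Face transformers; here is the core step sketched on a plain list of logits:

```python
import math
import random

def top_k_sample(logits, k, rng=random):
    """Sample a token index from the k highest-scoring logits."""
    # Indices of the k largest logits.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Softmax over just those k logits (shift by the max for stability).
    m = max(logits[i] for i in top)
    weights = [math.exp(logits[i] - m) for i in top]
    # Draw one index in proportion to its renormalised probability.
    return rng.choices(top, weights=weights, k=1)[0]
```

With `k=1` this reduces to greedy decoding; larger `k` trades determinism for diversity, which sometimes helps a model escape degenerate loops like echoing its input.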

XZhang97666 commented 11 months ago

> Thanks for your interest. May I have your input prompt please. Open source LLMs are often sensitive to prompts, so it might have influence on the performance. Besides, you could try out diffrent decoding strategies (e.g. topk).

I used the prompt you provided. For the specific instruction, I used "If you are a doctor, please answer the medical questions based on the patient's description." for text generation and "Answer this multiple choice question and direct output final answer." for multiple choice. However, the 7B model generates weird answers and is hard to stop.

PROMPT_DICT = {
    "prompt_input": (
        "Below is an instruction that describes a task, paired with an input that provides further context. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
    ),
    "prompt_no_input": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Response:"
    ),
}
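Given the template above, the model's answer is whatever follows "### Response:". When the model is hard to stop and rambles into a new "### Instruction:" block, one common workaround is to truncate the output at the first new "###" header. This truncation is a post-processing sketch, not something from the PMC-LLaMA repo:

```python
# Same "prompt_input" template as in PROMPT_DICT above.
PROMPT_INPUT = (
    "Below is an instruction that describes a task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
)

def build_prompt(instruction, input_text):
    """Fill the template with an instruction and its input."""
    return PROMPT_INPUT.format(instruction=instruction, input=input_text)

def extract_answer(generated, prompt):
    """Strip the echoed prompt, then cut at the first new '###' section."""
    answer = generated[len(prompt):] if generated.startswith(prompt) else generated
    return answer.split("###", 1)[0].strip()
```

For the multiple-choice setting, `extract_answer` would leave only the option the model emits right after "### Response:", discarding any runaway continuation.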
WeixiongLin commented 11 months ago

We are still working on instruction tuning of the 7B model; it's almost done. You could try the 13B model for now.

shamanez commented 8 months ago

@WeixiongLin, can we have the instruction-tuned checkpoint for the 7B PMC-LLaMA model?