AI4Finance-Foundation / FinGPT

FinGPT: Open-Source Financial Large Language Models! We release the trained models on Hugging Face.
https://ai4finance.org
MIT License

About applying the headline classification task to finLLaMA2 #121

Open JJYyyyy090 opened 9 months ago

JJYyyyy090 commented 9 months ago

I would like to perform headline classification using finllama2. Can anyone tell me what input format I should use for this model? Where can I find documentation or examples of the input format for headline classification?

Noir97 commented 9 months ago

I have no idea what finllama2 refers to. If you want to perform headline classification using the llama2 base model, I don't think there's a specific format required.

You can either do zero-shot/few-shot in-context learning in whatever way you prefer, or refer to FinGPT_Benchmark's task-specific training phase (https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/FinGPT_Benchmark#task-specific-instruction-tuning) and just use our format for training (in that case you're effectively using it as a BERT-like model).
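
A minimal sketch of what that Instruction/Input/Answer-style prompt could look like for the headline task (the instruction wording, example headline, and model checkpoint below are illustrative assumptions, not copied from the repo):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumption: any Llama-2 base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# Instruction/Input/Answer layout in the style of FinGPT_Benchmark's
# task-specific tuning data; the wording here is a made-up example.
prompt = (
    "Instruction: Does the news headline talk about price going up? "
    "Please choose an answer from {Yes/No}.\n"
    "Input: Gold futures edge higher as the dollar weakens.\n"
    "Answer: "
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=4, do_sample=False)
# Decode only the newly generated tokens after the prompt.
answer = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)  # ideally "Yes" or "No" after task-specific tuning
```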

But if you want to use the llama2-chat model, you can refer to https://gpus.llm-utils.org/llama-2-prompt-template/.
Modifying the template_dict in https://github.com/AI4Finance-Foundation/FinGPT/blob/master/fingpt/FinGPT_Benchmark/utils.py might also help you.
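
For reference, the Llama-2-chat layout from that link can be assembled like this (the system and user messages are illustrative assumptions):

```python
# Llama-2-chat prompt layout ([INST] / <<SYS>> tags) as described at
# https://gpus.llm-utils.org/llama-2-prompt-template/
def build_llama2_chat_prompt(system_msg: str, user_msg: str) -> str:
    return f"[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n{user_msg} [/INST]"

prompt = build_llama2_chat_prompt(
    "You are a classifier for financial news headlines.",  # assumed system message
    "Does this headline talk about price going up? Answer Yes or No.\n"
    "Headline: Gold futures edge higher as the dollar weakens.",
)
print(prompt)
```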

JJYyyyy090 commented 9 months ago

Thank you for your tips!

Following your advice, I have attempted various approaches. However, perhaps due to my own limitations, I have not yet found a satisfactory solution, so I am reaching out again with a cautious request.

I am interested in presenting headlines to an LLM such as Llama 2 or Qwen using zero-shot or few-shot learning methods. Could I get some examples of the inference-time input format that should be provided for this purpose?

I look forward to hearing from you! Thank you:)

Sincerely, Jiyoung Jeon
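
As a concrete illustration of the zero-shot/few-shot route suggested above, a few-shot prompt for headline classification could be assembled like this (all headlines and labels below are invented examples):

```python
# Few-shot prompt for headline classification with a base LLM such as
# Llama 2 or Qwen; feed it to model.generate() as in the earlier sketch.
# All headlines and labels here are invented for illustration.
examples = [
    ("Gold climbs to a six-month high on safe-haven demand", "Yes"),
    ("Gold slips as the dollar strengthens", "No"),
]
query = "Gold futures edge higher as the dollar weakens"

few_shot_prompt = (
    "Decide whether each news headline says the price is going up. "
    "Answer Yes or No.\n\n"
    + "".join(f"Headline: {h}\nAnswer: {a}\n\n" for h, a in examples)
    + f"Headline: {query}\nAnswer: "
)
print(few_shot_prompt)
```

For zero-shot, drop the examples list and keep only the task description and the query headline.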
