Open peppersaltyy opened 4 days ago
Hi! Thank you for your interest in my project. Yes, you need to fine-tune LLaVA first. I will release a dataset of 1,500 samples in the next few days that you can use for testing, and I will release the fine-tuned LLaVA next month. The memory JSON is the data generated during your previous interactions with ChatGPT, including human feedback. The picture is a scene image for ChatGPT to plan over; you can upload it to a cloud drive to get the URL link. For more details, please refer to the appendix.
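For anyone else setting this up, here is a minimal sketch of what the memory JSON and the launch command might look like. The field names (`instruction`, `gpt_plan`, `human_feedback`) and the example values are my own assumptions for illustration, not the exact schema used in this repo:

```python
import json

# Hypothetical record of past ChatGPT interactions plus human feedback.
# The keys below are illustrative assumptions, not the repo's exact schema.
memory = [
    {
        "instruction": "Put the apple on the plate.",
        "gpt_plan": ["locate apple", "grasp apple", "move to plate", "release"],
        "human_feedback": "The apple was placed too close to the edge of the plate.",
    }
]

# Save it, then pass this file's path where main.py asks for
# 'path to your memory json'.
with open("memory.json", "w") as f:
    json.dump(memory, f, indent=2, ensure_ascii=False)

# The camera image is uploaded to a cloud drive and referenced by URL, e.g.:
#   python main.py --mode llava --img use_url
# supplying the image URL (e.g. https://example.com/scene.jpg) where main.py
# expects 'url link for your camera image'.
```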
peppersaltyy @.***> wrote on Friday, November 22, 2024, 00:59:
Hi. I really like your work and am trying to reimplement it using your code. However, I am a little confused about the general pipeline. After building the environment, I should fine-tune LLaVA before running 'main.py --mode llava --img use_url', right? Would you publish the dataset you used to fine-tune LLaVA, or your fine-tuned LLaVA? Also, in your main.py, what should I provide as 'path to your memory json' and 'url link for your camera image'?