Closed by @VictorOdede 1 year ago
yeah, sounds good
Hey @VictorOdede, I'd like to work on this.
Kindly mention this in the engineering channel on Discord then @mmirman or @cartazio will give you the green light.
@bilal-aamer We certainly won't stop you from working on it :-) but it'd be good to get on a call with @abhigya-sodani to see what is useful to know before starting it!
Sure will do! Did some due diligence already, need some pointers from @abhigya-sodani.
sure bilal lets get on a call soon
If someone is not actively working on this, I can take up this task
Have at it!
https://github.com/anarchy-ai/LLM-VM/issues/7 is also applicable/related to this ticket
Is there any team working on this issue?
if you're confident you can bang something together, have at it! this is one of the more complex open tickets, so be honest with yourself! that said, worst case you learn something!
The current onsite LLM class uses full-parameter fine-tuning, which is costly. LoRA fine-tuning would require less memory and help prevent overfitting by freezing the pretrained weights and training only small low-rank adapter matrices.
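For anyone picking this up, here's a toy sketch of the core LoRA idea (not the repo's implementation; in practice you'd likely reach for a library like Hugging Face PEFT). The frozen pretrained weight `W` stays untouched, and only two small matrices `A` (r x d) and `B` (d x r) are trained, so the effective weight is `W + B @ A` and the trainable parameter count drops from `d*d` to `2*d*r`:

```python
import random

def matmul(X, Y):
    # naive matrix multiply, just for illustration
    n, k, m = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(k)) for j in range(m)] for i in range(n)]

def add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

d, r = 64, 4  # hidden size d, LoRA rank r << d (toy numbers)
random.seed(0)

# frozen pretrained weight: never updated during fine-tuning
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]

# trainable low-rank adapters: A is r x d, B is d x r
A = [[random.gauss(0, 0.01) for _ in range(d)] for _ in range(r)]
B = [[0.0] * r for _ in range(d)]  # B starts at zero, so the update starts at zero

delta = matmul(B, A)          # low-rank update B @ A, shape d x d
W_adapted = add(W, delta)     # effective weight W + B @ A used in the forward pass

full_params = d * d           # what full fine-tuning would train
lora_params = 2 * d * r       # what LoRA trains instead
print(full_params, lora_params)  # prints: 4096 512
```

With `B` initialized to zero the adapted weight starts out identical to the pretrained one, so training begins from the base model's behavior; the memory saving scales with `d / (2r)`.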