Open harikrishnaapc opened 10 months ago
Hi Team,
I read through the paper as well; great work.
- I have a question: if there is enough space in VRAM to load the whole model, will these optimizations still help?
- What percentage of the training data is suggested for finding the DejaVu 'predictors'?
- How do we obtain predictors for custom-trained models? Should we run inference again using DejaVu, or is there an alternate method?
Thanks
I have a fine-tuned Vicuna 7B model. I tried to convert it to PowerInfer using the 'LLaMA(ReLU)-2-7B' predictor, but the inference output is not correct. Is this because the predictor comes from a different model rather than from my fine-tuned one? How can I obtain these predictor weights?
In the Todo section I see 'Release core code of PowerInfer, supporting Llama-2, Falcon-40B.' is marked as done.
Can we use PowerInfer for fine-tuned Vicuna/LLaMA models?
Thanks
Prerequisites
Before submitting your question, please ensure the following:
- [x] I am running the latest version of PowerInfer. Development is rapid, and as of now, there are no tagged versions.
- [x] I have carefully read and followed the instructions in the README.md.
- [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
Thank you for your interest. First, for now we only support ReLU-based models, and every model has its own predictor. We currently do not support fine-tuned Vicuna/LLaMA models because they are not ReLU-based. By the way, we will release a Mistral-based model in the future, and we will fine-tune it with SFT and DPO.
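For readers wondering why ReLU specifically matters here: with a ReLU FFN, many hidden units are exactly zero for any given input, so the corresponding weight rows/columns can be skipped without changing the output — the sparsity that PowerInfer and Deja Vu exploit. Below is a minimal numpy sketch with toy dimensions (not PowerInfer's actual implementation or real model sizes); note that with random Gaussian weights only about half the neurons are inactive, whereas trained ReLU LLMs are typically far sparser.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy FFN layer: y = W2 @ relu(W1 @ x). Illustrative sizes only.
d_model, d_ff = 64, 256
W1 = rng.standard_normal((d_ff, d_model)) / np.sqrt(d_model)
W2 = rng.standard_normal((d_model, d_ff)) / np.sqrt(d_ff)
x = rng.standard_normal(d_model)

h = np.maximum(W1 @ x, 0.0)           # ReLU hidden activations
active = h > 0                        # neurons that actually fire

# Dense and sparse computation give the same output, but the sparse
# version only touches the active neurons' weights.
y_dense = W2 @ h
y_sparse = W2[:, active] @ h[active]

print(f"active neurons: {active.sum()}/{d_ff}")
print("outputs match:", np.allclose(y_dense, y_sparse))
```

This is why a non-ReLU activation (e.g. SiLU in stock LLaMA/Vicuna) breaks the scheme: its hidden values are rarely exactly zero, so there is no exact neuron-skipping to exploit.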
Hello, thank you for your interest.
- Yes, when there is enough space in VRAM we fall back to Deja Vu. However, our code has not yet been optimized for complete offloading; we will support this feature.
- Actually, I used 1M data points for predictor training.
- For training predictors, we will open-source a tool. For now, you can refer to the predictor-training implementation in Deja Vu.
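Until that tool is released, the general recipe behind Deja Vu-style predictors can be sketched as: collect (layer input, which-neurons-fired) pairs during inference, then fit a small model that predicts the active set from the input. The toy below is an illustrative assumption, not the actual Deja Vu pipeline — it uses synthetic data, tiny dimensions, and a per-neuron logistic probe trained by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for collected traces: inputs X and the ground-truth
# active sets Y of a ReLU layer with weights W1 (toy sizes).
d_model, d_ff, n_samples = 32, 128, 2000
W1 = rng.standard_normal((d_ff, d_model)) / np.sqrt(d_model)
X = rng.standard_normal((n_samples, d_model))
Y = (X @ W1.T > 0).astype(float)       # 1 where the neuron fires

# Predictor: one linear logistic probe per FFN neuron.
P = np.zeros((d_ff, d_model))
b = np.zeros(d_ff)
lr = 0.5
for _ in range(200):
    logits = X @ P.T + b
    probs = 1.0 / (1.0 + np.exp(-np.clip(logits, -30, 30)))
    grad = (probs - Y) / n_samples     # mean logistic-loss gradient
    P -= lr * grad.T @ X
    b -= lr * grad.sum(axis=0)

# Evaluate on fresh inputs: per-neuron activity prediction accuracy.
X_test = rng.standard_normal((500, d_model))
Y_test = X_test @ W1.T > 0
pred = (X_test @ P.T + b) > 0
accuracy = (pred == Y_test).mean()
print(f"predictor accuracy: {accuracy:.3f}")
```

In the real setting the traces come from actual model activations rather than synthetic data, and Deja Vu uses small MLP predictors per layer; the point of the sketch is only the train-a-classifier-on-activation-traces structure.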
Dear Team,
I hope you're doing well. I'm following up on the discussion about the optimization for complete offloading and the fallback to Deja Vu.
Could you kindly provide any updates on the progress of this feature?
Thank you for your time.