andyzoujm / representation-engineering

Representation Engineering: A Top-Down Approach to AI Transparency
https://www.ai-transparency.org/
MIT License

Wizard-Vicuna-30B-Uncensored #11

Closed caesar-jojo closed 1 year ago

caesar-jojo commented 1 year ago

Since the 'ehartford/Wizard-Vicuna-30B-Uncensored' model files are over 120GB, does that mean my computer's RAM also needs to be greater than 120GB to load the model? If so, are there any smaller models you would recommend?

justinphan3110 commented 1 year ago

Hi @caesar-jojo , we have an honesty_mistral.ipynb notebook that uses Mistral-Instruct-7B. You can lower memory usage further with 8-bit quantization, though we have not fully tested the control performance in 4-bit/8-bit mode. Let us know if you run into any bugs with 8-bit mode.
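For anyone hitting the same memory wall, a minimal sketch of the sizing math and an 8-bit load. The byte-per-parameter estimates and the commented `from_pretrained` call are illustrative, not from this repo, and assume `transformers` with `accelerate` and `bitsandbytes` installed plus a CUDA GPU for the actual load:

```python
# Rough weight-only memory estimates: a 30B model in fp32 needs ~112 GiB
# (which matches the >120GB observation), while a 7B model in int8 fits
# in ~7 GiB. Activations and KV cache add overhead on top of this.

def weight_gib(n_params: float, bytes_per_param: int) -> float:
    """Approximate memory for the weights alone, in GiB."""
    return n_params * bytes_per_param / 2**30

print(f"30B fp32: {weight_gib(30e9, 4):.0f} GiB")  # ~112 GiB
print(f"30B int8: {weight_gib(30e9, 1):.0f} GiB")  # ~28 GiB
print(f"7B  int8: {weight_gib(7e9, 1):.0f} GiB")   # ~7 GiB

# Hypothetical 8-bit load of the Mistral model (needs a CUDA GPU,
# so it is left commented out here):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "mistralai/Mistral-7B-Instruct-v0.1",
#     load_in_8bit=True,   # bitsandbytes int8 quantization
#     device_map="auto",   # place layers across available devices
# )
```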

caesar-jojo commented 1 year ago

Thank you~