lucidrains / PaLM-rlhf-pytorch

Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
MIT License

Help with computational power #17

Closed · byteunix closed this 1 year ago

byteunix commented 1 year ago

Hi, I work at a company that wants to help. We have computational power and we'd like to talk more about it. Is that possible?

kalmzzz commented 1 year ago

@byteunix are you looking to pretrain it with someone else?

johndpope commented 1 year ago

I draw your attention to this "Run 100B+ language models at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading" https://petals.ml/

https://github.com/bigscience-workshop/petals may be worth considering if we can swap out the models under the hood.
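
For reference, this is roughly what running a model through Petals looked like at the time, adapted from the Petals README; the `bigscience/bloom-petals` checkpoint, the `DistributedBloomForCausalLM` class, and a reachable public swarm are all assumptions carried over from that README, not anything specific to this repo, and swapping in PaLM would require Petals to support the architecture first:

```python
from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM

# Checkpoint name as used in the Petals README of that era (assumption)
MODEL_NAME = "bigscience/bloom-petals"

tokenizer = BloomTokenizerFast.from_pretrained(MODEL_NAME)
# Embeddings and the LM head run locally; the transformer blocks are
# served by volunteer peers over the Internet, BitTorrent-style
model = DistributedBloomForCausalLM.from_pretrained(MODEL_NAME)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```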

byteunix commented 1 year ago

@kalmzzz no, I'm interested in using the computational power we have at the company to train it, but I'd like to understand whether it's possible to train on CPUs instead of GPUs. I have some idle teraflops, but they're CPU teraflops.

lucidrains commented 1 year ago

@byteunix it isn't possible, sadly
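
To make the gap concrete, here is a minimal, hypothetical benchmark sketch (plain PyTorch, my own illustration, not from this thread): transformer pretraining is dominated by large matrix multiplies, and on those a single modern GPU is typically one to two orders of magnitude faster than a CPU socket, before counting GPU-only optimizations like bf16 tensor cores and fused attention kernels. Idle CPU teraflops therefore don't translate into usable pretraining throughput:

```python
import time
import torch

def bench_matmul(device: str, n: int = 4096, iters: int = 10) -> float:
    """Average seconds per n x n matmul on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    for _ in range(3):  # warmup so lazy init doesn't skew the timing
        _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # GPU kernels launch asynchronously
    start = time.perf_counter()
    for _ in range(iters):
        _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

print(f"cpu : {bench_matmul('cpu'):.4f} s per 4096x4096 matmul")
if torch.cuda.is_available():
    print(f"cuda: {bench_matmul('cuda'):.4f} s per 4096x4096 matmul")
```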