Closed by byteunix 1 year ago
I'd like to draw your attention to this: "Run 100B+ language models at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading" https://petals.ml/
https://github.com/bigscience-workshop/petals might be worth considering if we can swap out the models under the hood, as in the sketch below.
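For reference, a minimal sketch of what such a swap might look like, assuming the Petals client mirrors the usage pattern in its README; the `AutoDistributedModelForCausalLM` class and the checkpoint name are assumptions and may differ across Petals versions:

```python
# A minimal sketch, assuming the Petals client API follows the pattern in its
# README; the class and checkpoint names below may differ between versions.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # example checkpoint from the README

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Only the embeddings and client-side logic run locally; the transformer
# blocks are served by remote GPU peers in the public swarm, BitTorrent-style.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0]))
```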
@kalmzzz no, I'm interested in using the computational power we have at the company to train it, but I would like to understand whether it is possible to work with CPUs instead of GPUs. I have some idle teraflops, but they are CPU-only.
@byteunix sadly, it isn't possible. Training models of this size on CPUs is not practical; it would be orders of magnitude slower than on GPUs.
Hi, I work at a company that wants to help. We have computational power available and would like to talk more about it. Is that possible?