luciferlinx101 opened 10 months ago
@abcdabcd987 Your paper states that the Punica implementation consists of two parts: a Python library on top of PyTorch that runs large language models on a single GPU, and other system components that support model serving across a GPU cluster. I am looking for a solution that works with a GPU cluster, multiple GPUs, or multi-GPU nodes.
Hey @abcdabcd987, any update on this?
+1
Sorry, I still haven't had time to clean up the code, but here is some old code if you need it right now:
Neither is usable out of the box.
I wanted to know how to use multi-GPU and multi-node setups with the current Punica code. I also wanted to ask about the runner and scheduler code mentioned in the paper: if it is implemented, could you guide me to it?
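For anyone else reading along, here is a minimal sketch of what a scheduler/runner split like the one the paper describes could look like. To be clear, none of these class or method names come from the Punica codebase (I don't know its actual API for this part); this only illustrates the general pattern of a cluster-level scheduler dispatching requests to per-GPU runners:

```python
# Hypothetical sketch: a scheduler dispatching requests to per-GPU runners.
# These names (Runner, RoundRobinScheduler, dispatch) are illustrative only
# and are NOT Punica's real API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Runner:
    """Stand-in for a per-GPU worker process (one per GPU in the cluster)."""
    gpu_id: int
    queue: List[str] = field(default_factory=list)

    def submit(self, request: str) -> None:
        # A real runner would enqueue the request for batched execution
        # on its GPU; here we just record it.
        self.queue.append(request)


class RoundRobinScheduler:
    """Assigns incoming requests to runners in round-robin order."""

    def __init__(self, runners: List[Runner]) -> None:
        self.runners = runners
        self._next = 0

    def dispatch(self, request: str) -> Runner:
        runner = self.runners[self._next]
        runner.submit(request)
        self._next = (self._next + 1) % len(self.runners)
        return runner


runners = [Runner(gpu_id=i) for i in range(2)]
scheduler = RoundRobinScheduler(runners)
for req in ["prompt-a", "prompt-b", "prompt-c"]:
    scheduler.dispatch(req)
print([r.queue for r in runners])
```

A production scheduler would of course use smarter placement (e.g. routing requests for the same LoRA adapter to the same GPU to improve batching), but the basic runner/scheduler separation is the same.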