Closed mkielo3 closed 2 weeks ago
Small change to allow the user to specify a device for local LLMs. There doesn't seem to be a supported way to do this currently: `pytorch_gemma_local.ipynb` defaults to CPU and runs in 19 seconds vs. 0.5 seconds on GPU. Apologies if I missed something.
Thanks! You can ignore that failed internal check, this will go in no probs once internally approved.
merged in 8a0e489930c73f6c5d24e82bdf4ea81594b36bb1