Open kaczmarj opened 1 year ago
For tests that run model inference, parametrize over the available devices: CPU, GPU (CUDA), and MPS. On GitHub Actions only the CPU will be available, but locally run tests can exercise the other devices.
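A minimal sketch of one way to do this with pytest parametrization, assuming PyTorch is the inference backend (the helper name `available_devices` and the test body are hypothetical; the device probe degrades to CPU-only when torch is not installed or no accelerator is present):

```python
import pytest


def available_devices():
    """Return device strings usable on this machine; CPU is always present."""
    devices = ["cpu"]
    try:
        import torch
    except ImportError:
        # Without torch we can only claim the CPU.
        return devices
    if torch.cuda.is_available():
        devices.append("cuda")
    # MPS backend exists on Apple Silicon builds of torch >= 1.12.
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        devices.append("mps")
    return devices


@pytest.mark.parametrize("device", available_devices())
def test_model_inference(device):
    # Hypothetical placeholder: move the model and inputs to `device`
    # and run a forward pass, asserting on the output shape/values.
    assert device in ("cpu", "cuda", "mps")
```

Because `available_devices()` is evaluated at collection time, CI runners on GitHub Actions collect only the `cpu` case, while a local machine with a GPU or Apple Silicon picks up the extra cases automatically with no test-code changes.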