Closed tiavila closed 2 years ago
Is there an environment setting to choose which GPU to use, and a way to use more than one GPU, for CuPy and Torch? I am on a machine with 4 GPUs, but only GPU #1 is used, and it is busy with other processes. Alternatively, a way to provide a list of input values and get a list of Wx back would also work.

No built-in support for multi-GPU, but torch/cupy can likely be told to use a particular device.

Batched computation is supported, with separate inputs stacked along dim0:

x_batched = np.vstack([x0, x1])  # x0.ndim == x1.ndim == 1
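To illustrate the stacking convention with plain NumPy (the transform call itself is omitted here, since the library's API is not shown in this thread; the claim that row i of Wx corresponds to input i is an assumption based on the dim0 convention above):

```python
import numpy as np

# Two separate 1-D inputs of equal length.
x0 = np.array([0.0, 1.0, 2.0, 3.0])
x1 = np.array([4.0, 5.0, 6.0, 7.0])

# Stack along dim0 to form one batched input of shape (2, 4);
# each row is one of the original inputs.
x_batched = np.vstack([x0, x1])
print(x_batched.shape)  # (2, 4)
```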
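As for picking a device: a minimal sketch, assuming torch/cupy as the backends. `CUDA_VISIBLE_DEVICES` is the standard CUDA environment variable both libraries respect; it restricts which physical GPUs the process can see and must be set before the first CUDA call (ideally before importing the library):

```python
import os

# Make only physical GPUs 2 and 3 visible to CUDA libraries in this process.
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"

# Visible GPUs are re-indexed from 0, so afterwards:
#   torch.tensor(..., device="cuda:0")  -> physical GPU 2
#   with cupy.cuda.Device(1): ...       -> physical GPU 3
print(os.environ["CUDA_VISIBLE_DEVICES"])  # 2,3
```

This selects a single device per process; true multi-GPU use would still need to be driven manually, e.g. by running one process per GPU.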