A lightweight wrapper for PyTorch that provides a simple declarative API for context switching between devices, distributed modes, mixed-precision, and PyTorch extensions.
Expose device ID selection to the user through the main Stoke interface.
Motivation
Currently Stoke assumes that all available GPU device IDs can be used when constructing the Stoke object. This is likely valid for the majority of scenarios where Stoke is used; however, architectures such as GANs might need separate device placement. Exposing device ID selection would allow finer-grained control over device placement when the user needs it.
Proposal
Change the gpu flag within the Stoke interface to accept either a boolean (where True uses all available devices) or a list of device IDs to use.
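As a rough sketch of the proposed behavior, a helper like the following (the function name `resolve_device_ids` and its signature are hypothetical, not part of Stoke's current API) could normalize the flag into an explicit list of device IDs; in practice the `available` count would come from `torch.cuda.device_count()`:

```python
from typing import List, Union


def resolve_device_ids(gpu: Union[bool, List[int]], available: int) -> List[int]:
    """Normalize a hypothetical `gpu` flag into a list of device IDs.

    True  -> all available CUDA devices
    False -> no GPU devices (CPU only)
    list  -> explicit device IDs, validated against the visible devices
    """
    if gpu is True:
        return list(range(available))
    if gpu is False:
        return []
    invalid = [d for d in gpu if d < 0 or d >= available]
    if invalid:
        raise ValueError(
            f"Device IDs {invalid} are out of range (only {available} devices visible)"
        )
    return list(gpu)
```

Keeping the boolean path means existing `gpu=True` configurations would continue to work unchanged, while something like a GAN setup could pass e.g. `gpu=[0, 1]` for the generator and `gpu=[2, 3]` for the discriminator.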