Closed: unzvfu closed this issue 5 years ago.
Analysis of candidate libraries: PyTorch's `at::Tensor` type.

Ideally we'd write stuff like:
```python
A = array([a, b, c], dtype=modnum(modulus))
B = array([d, e, f], dtype=modnum(modulus))
C = A + B
```
and the arrays would be moved to/located on the device, with the addition performed modulo the modulus specified in the type. We'll probably have to settle for something slightly less elegant, though. For example, to do the above in NumPy, I believe we'd have to subclass `ndarray` and override the relevant ufuncs (e.g. via `__array_ufunc__`); there doesn't appear to be a way to simply define a new dtype that redirects the ufunc calls.
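As a rough illustration of the `ndarray`-subclass approach, here is a minimal sketch (names like `ModArray` are hypothetical, and the arithmetic runs on the CPU with plain `int64`, not via cuda-fixnum) that intercepts ufunc calls with `__array_ufunc__` and reduces the result modulo a fixed modulus:

```python
import numpy as np

class ModArray(np.ndarray):
    """Hypothetical sketch: an ndarray subclass whose elementwise
    arithmetic is reduced modulo a fixed modulus."""

    def __new__(cls, data, modulus):
        obj = np.asarray(data, dtype=np.int64).view(cls)
        obj.modulus = modulus
        return obj

    def __array_finalize__(self, obj):
        # Propagate the modulus through views and new instances.
        if obj is not None:
            self.modulus = getattr(obj, "modulus", None)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # Strip the subclass so the underlying ufunc runs normally,
        # then reduce the result modulo self.modulus.
        raw = [np.asarray(x) for x in inputs]
        result = getattr(ufunc, method)(*raw, **kwargs)
        if isinstance(result, np.ndarray):
            return ModArray(result % self.modulus, self.modulus)
        return result

A = ModArray([1, 2, 3], modulus=5)
B = ModArray([4, 5, 6], modulus=5)
C = A + B
print(C)  # elementwise sums reduced mod 5: [0 2 4]
```

A real binding would replace the `result % self.modulus` step with a call into cuda-fixnum kernels on device-resident buffers, but the interception point would be the same.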
Consider the various possible ways to make cuda-fixnum accessible from Python, and describe the requirements of such an interface (e.g. easy integration with NumPy).