Right now, all Tensor types are implemented separately, so `ClTensor` and `Tensor` have entirely separate implementations. There are plenty of other backends I would like to support:

- Arrow
- CUDA
- Vulkan

Ideally, these should not require separate implementations, but should instead share a common interface so that all backends can be developed in a single class.
This would be a major overhaul of the existing implementation, but I think `Tensor(T)` should become `Tensor(T, V)`, where `T` is a base Crystal data type and `V` is a backend implementation.
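A minimal sketch of what the `Tensor(T, V)` shape could look like, assuming a hypothetical `Backend(T)` abstract class and a `CpuStorage` implementation (all names here are illustrative, not existing Num.cr API):

```crystal
# Illustrative sketch only: one Tensor class, parameterized by both the
# element type T and a backend storage type V.

abstract class Backend(T)
  abstract def [](i : Int32) : T
  abstract def []=(i : Int32, value : T)
end

# A plain in-memory backend; an OpenCL or CUDA backend would implement
# the same interface over device buffers.
class CpuStorage(T) < Backend(T)
  def initialize(size : Int32)
    @data = Array(T).new(size, T.zero)
  end

  def [](i : Int32) : T
    @data[i]
  end

  def []=(i : Int32, value : T)
    @data[i] = value
  end
end

class Tensor(T, V)
  getter storage : V

  def initialize(@storage : V)
  end

  def [](i : Int32) : T
    @storage[i]
  end
end

t = Tensor(Float64, CpuStorage(Float64)).new(CpuStorage(Float64).new(3))
t.storage[0] = 1.5
puts t[0]
```

Operations written against `Tensor(T, V)` would then dispatch through the backend interface instead of being duplicated per tensor class.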
This also must not break `Num::Grad` and `Num::NN`, which means that all gates would need to work across all backends, or at least raise unimplemented errors at compile time rather than at runtime.
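One possible way to get compile-time rather than runtime failures for unimplemented gates is a macro-level check on the backend type parameter. A hedged sketch, with hypothetical backend names:

```crystal
# Illustrative sketch only: a gate that is unsupported on a given backend
# aborts compilation via a macro `raise` when the method is instantiated,
# instead of raising at runtime.

struct CpuStorage; end
struct ClStorage; end

class Tensor(T, V)
  # Implemented for every backend.
  def add(other : self) : self
    # ... elementwise addition ...
    self
  end

  # Only implemented for the CPU backend: instantiating this method with
  # V = ClStorage fails to compile with the message below.
  def conv2d_backward
    {% if V == ClStorage %}
      {% raise "conv2d_backward is not implemented for the OpenCL backend" %}
    {% end %}
    # ... CPU gradient kernel ...
  end
end
```

An alternative with the same effect is defining each gate as an overload restricted to the backends that support it, so an unsupported call fails with a "no overload matches" error at compile time.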