Open timotheecour opened 6 years ago
Yes, in the long run I am considering it, but for the moment the monolith approach is much easier to maintain.
It also seems to work well for PyTorch, TensorFlow and MXNet, even at scale.
Another option that I have started doing is to use Nim submodules so that people can cherry-pick, for example:

```nim
import arraymancer/[tensor, ml]
```

to avoid importing the NN part.
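For context, the cherry-picking above relies on Nim's umbrella-module pattern: a top-level module re-exports the submodules, while users who want a subset import the submodules directly. A minimal sketch (file names and submodule list are illustrative, not Arraymancer's exact layout):

```nim
# arraymancer.nim -- hypothetical umbrella module:
# `import arraymancer` pulls in everything.
import arraymancer/[tensor, ml, nn]
export tensor, ml, nn

# User code that only needs tensors and classic ML
# can skip the neural-network submodule entirely:
# import arraymancer/[tensor, ml]
```

The design trade-off: the umbrella keeps a single nimble package (one version, one CI), while still letting dependents avoid compiling the parts they do not use.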
/cc @mratsim

N-D tensor-specific functionality is much more limited in scope than, say, neural network features (which can grow in scope pretty much infinitely). A better design would be to factor out the N-D tensor specifics into a separate nimble package (say, `ndtensor`). Where to draw the line is debatable, but for a lot of things it should be clear.
Happy to discuss this in more detail if discussion is welcome.
In scope for `ndtensor`:
- `$`

Out of scope for `ndtensor` (i.e. move to other nimble packages that depend on `ndtensor`):
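As a sketch of what the proposed split could look like on the packaging side (package names, versions, and the dependency line are hypothetical, not an agreed design), the downstream package would simply declare the core as a requirement in its `.nimble` file:

```nim
# ndtensor.nimble -- hypothetical core package: N-D tensor ops only
version     = "0.1.0"
author      = "Arraymancer contributors"
description = "Core N-D tensor type and operations"
license     = "Apache-2.0"
requires "nim >= 0.19.0"

# arraymancer.nimble -- hypothetical downstream package with NN/ML parts
# would add, alongside the fields above:
# requires "ndtensor >= 0.1.0"
```

With this layout, users who only need tensors depend on `ndtensor` alone, and the NN/ML package can evolve (and grow in scope) on its own release cadence.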