yanndupis opened this issue 2 years ago
We (@yanndupis and I) made a separate sheet for the logical layer only, to help assess user-facing functionality completeness. Some observations from that exercise:
We also made a sheet called slim-logical. The idea behind this was to find the minimal set of kernels we feel need to be tackled to support parity across Fixed@Repl, Fixed@Host, and Float@Host just for the PyMoose predictors we have today. There are a few exceptions for kernels we felt were critical for any array-based language, mostly related to indexing and reshaping.
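As a rough illustration of the kind of indexing and reshaping kernels meant here, a sketch using NumPy semantics (the names below are NumPy's, not PyMoose's):

```python
import numpy as np

x = np.arange(12).reshape(3, 4)  # reshape: vector -> 3x4 matrix
row = x[1]                       # index along the first axis
col = x[:, 2]                    # slice along the second axis
flat = x.reshape(-1)             # flatten back to a vector
```

Any array-based language is expected to cover operations of this shape, which is why they stay on the minimal list even when today's predictors don't strictly require them.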
We also took a look at jax.lax, the XLA primitives library in JAX, to get a sense of how close we are to being able to support a full-featured ML library with this list. We chose that one because it's significantly smaller than both TF and PyTorch, but still big enough to be interesting.
In the last few months, we have added many operations. We would like to assess the completeness of each operation in terms of the types we support.
[TODO] give more detail to this issue.
The assessment can be found in this file: https://docs.google.com/spreadsheets/d/1mTYJdGzofBoWSzUAWF0zqHo8qSdy0qYagbTQaSh4RVU/edit#gid=0