apache / mxnet

Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
https://mxnet.apache.org
Apache License 2.0

[Discussion] MXNet 2.0 Roadmap (was: APIs that might be a good idea to break in 2.0) #9686

Closed: szha closed this issue 4 years ago

szha commented 6 years ago

Let's start a discussion here about the roadmap towards MXNet 2.0. We are looking for:

If you have any item that you'd like to propose to have in the roadmap, please do:

Given that this would be a major release, we'd have the opportunity to make backward incompatible changes. This would allow us to visit some topics that require large changes such as dropping support for python2, transitioning fully to cmake, making the tensor library numpy-compatible, or even new programming models.
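
For illustration, a minimal sketch of what numpy compatibility could look like at the user level, assuming a numpy-style mx.np namespace (the exact namespace and API are part of this discussion, so treat the names as assumptions):

```python
import mxnet as mx
import numpy as np

# Legacy ndarray API: names and semantics diverge from numpy in places.
legacy = mx.nd.zeros((2, 3))

# A numpy-compatible interface would mirror numpy's names, dtypes and
# broadcasting rules, so code like this ports over directly.
reference = np.zeros((2, 3)).sum(axis=0)
compat = mx.np.zeros((2, 3)).sum(axis=0)   # assumed numpy-style namespace
```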


Now that we decided to follow semantic versioning for releases, it would be a good idea to coordinate features and API changes to make the best use of the next major release. Thus, I propose that we use this issue to track the APIs we'd like to change in the next major version.

The candidates I've collected so far:

  1. remove legacy ops such as batch-norm v1
  2. reorganize the namespace for utility functions such as download in #9671
  3. revisit the transform argument in the constructor of the existing vision dataset API (see the sketch after this list)
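
To make item 3 concrete, here is a minimal sketch (assuming the Gluon vision datasets and transforms as they exist today) contrasting the constructor-level transform argument with attaching a transform after construction:

```python
from mxnet.gluon.data.vision import MNIST, transforms

to_tensor = transforms.ToTensor()

# Constructor-level transform: the callable receives (data, label).
legacy = MNIST(train=True,
               transform=lambda data, label: (to_tensor(data), label))

# Dataset-level transform: build the dataset first, then attach the transform.
current = MNIST(train=True).transform_first(to_tensor)
```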

Once there are more of such requests, I will try to organize these API-breaking requests better.

zhanghang1989 commented 5 years ago

The UpSampling bilinear mode is wrongly implemented. I suggest changing it to use the existing BilinearResize2D operator, which is compatible with PyTorch.

http://mxnet.incubator.apache.org/versions/master/api/python/ndarray/ndarray.html?highlight=upsamp#mxnet.ndarray.UpSampling
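
For reference, a small sketch of the suggested replacement, assuming the contrib BilinearResize2D operator available in recent 1.x releases:

```python
import mxnet as mx

x = mx.nd.random.uniform(shape=(1, 3, 8, 8))              # NCHW input
# Resize to an explicit output size instead of UpSampling's bilinear mode.
y = mx.nd.contrib.BilinearResize2D(x, height=16, width=16)
print(y.shape)   # (1, 3, 16, 16)
```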

eric-haibin-lin commented 5 years ago

deprecate or promote APIs under mxnet.contrib, such as control_flow ops

eric-haibin-lin commented 5 years ago

Shall we have a new logo for MXNet 2.0?

ptrendx commented 5 years ago

I would like to remove the reliance on topological ordering of inputs in communication between frontend and backend: #15362. The cleanest solution to this is to change the C API to pass dictionaries instead of lists to the backend (and get dictionaries back).
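
A purely illustrative sketch (not the actual C API) of why a dict-based contract is more robust than a positional one:

```python
# Positional contract: the frontend must pass inputs in the backend's
# topological order, which can silently change between graph builds.
def executor_forward_positional(arg_arrays):
    data, weight = arg_arrays            # breaks if the ordering changes
    return data * weight

# Named contract: the backend looks inputs up by name, so ordering
# no longer matters on either side.
def executor_forward_named(args):
    return args["data"] * args["weight"]
```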

szha commented 5 years ago

#9182

apeforest commented 5 years ago

Removing deprecated operators from the code base (or at least hiding them from users).

Some operators, such as SoftmaxActivation, have been deprecated for a few minor releases. Ideally, we should have a process for removing deprecated operators in the next major release, given that users have had sufficient time during the minor releases to migrate to the newer versions.
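
As an example of one such migration, assuming the documented replacement of SoftmaxActivation with softmax:

```python
import mxnet as mx

x = mx.nd.random.uniform(shape=(2, 10))
old = mx.nd.SoftmaxActivation(x)     # deprecated; emits a warning
new = mx.nd.softmax(x, axis=-1)      # recommended replacement
```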

larroy commented 5 years ago

I was thinking that I would like to drop Amalgamation and instead have a dynamic operator registry. Imagine for a moment that you could register your operators in a set of YAML files that would do the same as NNVM_REGISTER. Before build time you could configure which operators to compile and produce a very lean library, similar to what amalgamation does but cleaner and on a per-operator basis, with a codegen step that parses the operator registry. It could also skip compiling the training code if you just want inference. This would make "MXNet lite builds" possible. Would this be desirable?
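
A rough sketch of the idea in Python (all file names and registry entries here are hypothetical), where a pre-build codegen step filters a per-operator registry into the list of sources to compile:

```python
# Hypothetical per-operator registry; in practice this could live in YAML
# files mirroring what NNVM_REGISTER declares today.
REGISTRY = {
    "Convolution": ["src/operator/nn/convolution.cc"],
    "softmax":     ["src/operator/nn/softmax.cc"],
    "BatchNorm":   ["src/operator/nn/batch_norm.cc"],
}

def generate_source_list(enabled_ops, include_training=False):
    """Return only the sources needed for the selected operators."""
    sources = []
    for op in enabled_ops:
        sources.extend(REGISTRY[op])
    if include_training:
        sources.append("src/imperative/gradient_ops.cc")  # hypothetical path
    return sorted(set(sources))

# An inference-only "MXNet lite" build with just two operators:
print(generate_source_list(["Convolution", "softmax"]))
```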

braindotai commented 5 years ago

I have some suggestions:

The reason why I am so worried about the website is because "IT IS IMPORTANT": the more we show to users directly, the better the understanding they can have. (For instance, me! When I opened the website for the first time, it was very difficult to find good tutorials and examples; instead, I had to rely on GitHub and ask in the forum separately about that.)

So why are we telling users how to use it if it's so dangerous and not recommended?

That's a lot to take in, I know. Sorry if I've written something wrong above.

arcadiaphy commented 5 years ago

I think we should provide a user-friendly thread-safe inference API for deploying in C++, Java, etc. We can focus on the naive engine for inference, since it's very hard to refactor the threaded engine to be thread-safe. A good and easy-to-use executor should have the following properties:

  • One instance of the executor is enough for multi-threaded inference, which means it can be used simultaneously in different threads.
  • The immutable ndarrays in the executor should be shared across threads to reduce the memory footprint.

Now we have MXPredCreateMultiThread in the C API, but it's buggy and we still need to create a separate executor for each thread.
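
A sketch of the usage pattern being asked for; the Predictor class below is hypothetical and only illustrates the desired properties (one shared executor, safe concurrent calls, shared immutable weights):

```python
import threading

predictor = Predictor("model-symbol.json", "model-0000.params")  # hypothetical API
batches = [None] * 4                       # placeholder inputs
results = [None] * len(batches)

def worker(i, batch):
    # Desired property: safe from any thread, no per-thread executors,
    # no external locking; immutable weights are shared, not copied.
    results[i] = predictor.predict(batch)

threads = [threading.Thread(target=worker, args=(i, b))
           for i, b in enumerate(batches)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```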

cloudhan commented 5 years ago

> I think we should provide a user-friendly thread-safe inference API for deploying in C++, Java, etc. We can focus on the naive engine for inference, since it's very hard to refactor the threaded engine to be thread-safe. A good and easy-to-use executor should have the following properties:
>
>   • One instance of the executor is enough for multi-threaded inference, which means it can be used simultaneously in different threads.
>   • The immutable ndarrays in the executor should be shared across threads to reduce the memory footprint.
>
> Now we have MXPredCreateMultiThread in the C API, but it's buggy and we still need to create a separate executor for each thread.

Sounds like refactoring the execution engine with TBB and adding some buffering mechanism?

marcoabreu commented 5 years ago

Agreed, the entire API (at the C API level) should be designed to be entirely thread-safe for all requests, whether inference or training. This includes parallel calls from different threads, that is, no locking or sticky threads.

marcoabreu commented 5 years ago

Could we get rid of all the different pre-processor statements in the codebase that evolved due to the different accelerators (USE_CUDA, USE_TVM, USE_MKLDNN, etc.) and fully replace them with the accelerator API from @samskalicky? This would greatly improve maintainability.

In terms of operator definitions, we could use ONNX as the standard (or derive from it if it's not sufficient). At the moment, there's a tight coupling between the operator definitions and the accelerator choice.

szha commented 5 years ago

> different pre-processor statements ... replace them with the accelerator API

No, we cannot. They don't serve the same purpose.

> we could use ONNX as standard

I don't believe we should. ONNX is only good for model exchange and not much else. Also, the community has already reached consensus on moving towards numpy, so it's probably not a good idea to get married to ONNX.

samskalicky commented 5 years ago

@marcoabreu We can definitely remove some of these pre-processor statements with the accelerator API (MKLDNN), but not all of them, as @szha points out. USE_CUDA needs to stay, since GPU support is embedded pretty tightly. We might be able to push it out into an accelerator library, but not in time for 2.0.

I agree with @szha that ONNX is not the way to do this. We need to keep our operator registration in NNVM for now. What we could do is separate the operator definitions (NNVM registration) from the compute functions (infer shape/type, fcompute, etc.). But again, I think we should take this slow and enable actual accelerators first, then see if it makes sense for TensorRT/MKLDNN and then maybe GPU.

I would like to see the accelerator API (or a first pass at it) as part of 2.0, though. Is this feasible from your perspective, @szha?

szha commented 5 years ago

@samskalicky I'm not sure about the accelerator API. It seems that the existing subgraph API in combination with 1) better third-party operator support and 2) exposing graph partitioning as an API should be able to serve the same goal as the accelerator API. Those items are useful in other contexts too and deserve more attention, so I'd suggest those as an alternative to a specialized API just for accelerators.

samskalicky commented 5 years ago

@szha I agree the third-party operator support could be more useful to the broader community, and I have been continually working on it in my spare time. I would be interested in collaborating with others to move this along faster. Should we consider it as part of the 1.6 release?

But after discussing with @zheng-da today, the subgraph API + operator support does not serve the same goal as the accelerator API. Some additional external APIs (external subgraph properties, for example, or support for compiled binaries for subgraphs, or binding accelerators to subgraphs) would be needed to serve the same goal.

szha commented 5 years ago

I think we're talking about the same thing. (i.e. item 2 in my last response)

samskalicky commented 5 years ago

Maybe I misunderstood item 2; I assumed it meant better APIs for partitioning. Could you clarify what you mean by item 2? Do you mean third-party graph partitioning (similar to third-party operator support)?

Neutron3529 commented 5 years ago

  1. #10840: Add einsum, since it is useful and could simplify linear algebra ops (it is now supported by TensorFlow and PyTorch).
  2. Separate MXNet.dll into smaller files to reduce the import time. nvcc generates a HUGE file with -arch=(...a lot of sm_*...); it seems splitting up the huge KNOWN_CUDA_ARCHS build may improve import performance.
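
For reference, the numpy einsum semantics that such an operator would presumably follow (shown here with plain numpy):

```python
import numpy as np

a = np.random.rand(3, 4)
b = np.random.rand(4, 5)

matmul = np.einsum("ij,jk->ik", a, b)            # matrix multiplication
trace  = np.einsum("ii->", np.eye(3))            # trace of a matrix
batch  = np.einsum("bij,bjk->bik",               # batched matmul
                   np.random.rand(2, 3, 4),
                   np.random.rand(2, 4, 5))
```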

Zha0q1 commented 5 years ago

I would like to bring up one issue with the profiler: currently, there is a flag kSimbolic that supposedly controls whether to profile operators called in symbolic mode. However, there is rarely a use case where users would want to set it to False; also, in the backend, this flag is not used at all, meaning that even if it's set to False, the profiler will still profile symbolic operators.

I have an issue about this: https://github.com/apache/incubator-mxnet/issues/15658.

Maybe we should remove this flag in 2.0 to avoid confusion?
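
For context, the Python-level knob in question (assuming the 1.x mx.profiler.set_config interface); the report above is that setting it to False has no effect in the backend:

```python
import mxnet as mx

mx.profiler.set_config(profile_symbolic=False,    # the flag under discussion
                       profile_imperative=True,
                       filename="profile.json")
mx.profiler.set_state("run")
```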

anirudh2290 commented 5 years ago

The module.bind API has a param named data_shapes, which is misleading: the param is not limited to just shapes; the entries are data descriptors and accept DataDesc instances. I think this should be fixed in 2.0.
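
To illustrate with a minimal sketch against the 1.x Module API: data_shapes already accepts full DataDesc instances rather than just shapes, which is why the name is misleading.

```python
import mxnet as mx

desc = mx.io.DataDesc(name="data", shape=(32, 100),
                      dtype="float32", layout="NC")
net = mx.sym.FullyConnected(mx.sym.var("data"), num_hidden=10)
mod = mx.mod.Module(symbol=net, data_names=["data"], label_names=None)
mod.bind(data_shapes=[desc], for_training=False)   # DataDesc, not a bare shape
```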

iblislin commented 5 years ago

Julia-related issue

ShownX commented 5 years ago

Expect more image operations: adjust_colors (not random), rotate, and more

eric-haibin-lin commented 5 years ago

Remove the deprecated infer_range argument from nd.arange. The option is actually not supported.

access2rohit commented 5 years ago

We need to fix this issue as well: https://github.com/apache/incubator-mxnet/issues/16216. It's a breaking change.

eric-haibin-lin commented 4 years ago

marcoabreu commented 4 years ago

Are there any plans to move the training logic (dataset handling, distributed training, etc.) into the core to avoid having all that logic in the frontend languages?