The upsampling bilinear mode is wrongly implemented. I suggest changing it to use the existing BilinearResize operator, which is compatible with PyTorch.
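For reference, a minimal sketch of the suggested replacement, assuming the operator meant here is contrib.BilinearResize2D:

```python
import mxnet as mx

x = mx.nd.random.uniform(shape=(1, 3, 32, 32))  # NCHW image batch
# Bilinear resize of the spatial dimensions to 64x64
y = mx.nd.contrib.BilinearResize2D(x, height=64, width=64)
print(y.shape)  # (1, 3, 64, 64)
```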
deprecate or promote APIs under mxnet.contrib, such as control_flow ops
Shall we have a new logo for MXNet 2.0?
I would like to remove the reliance on topological ordering of inputs in communication between frontend and backend: #15362. The cleanest solution to this is to change the C API to pass dictionaries instead of lists to the backend (and get dictionaries back).
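A minimal Python illustration of the reliance being described (the symbol here is only an example): today the frontend has to arrange arrays to match the topological order returned by sym.list_arguments() before crossing the C API boundary, whereas a dict-based C API would accept name-to-array mappings directly.

```python
import mxnet as mx

sym = mx.sym.FullyConnected(mx.sym.var('data'), num_hidden=10, name='fc')
arrays = {'data': mx.nd.ones((2, 5)),
          'fc_weight': mx.nd.ones((10, 5)),
          'fc_bias': mx.nd.zeros((10,))}

# Current contract: the backend expects a list whose order matches the
# topological ordering of the inputs, so the frontend must sort by name here.
ordered_args = [arrays[name] for name in sym.list_arguments()]
```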
Remove deprecated operators from the code base (or at least hide them from users).
Some operators, such as SoftmaxActivation, have been deprecated for a few minor releases. Ideally, we should have a process for removing deprecated operators in the next major release, given that users have had sufficient time during the minor releases to update to the newer versions.
I was thinking that I would like to drop Amalgamation and instead have a dynamic operator registry. Imagine for a moment that you could register your operators in a set of YAML files that serve the same purpose as NNVM_REGISTER. Before build time you could configure which operators to compile and produce a very lean library, similar to what Amalgamation does but cleaner and on a per-operator basis, with a codegen step that parses the operator registry; you could also skip compiling the training code if you just want inference. This would make "MXNet lite builds" possible. Would this be desirable?
I have some suggestions down here:
MXNet only has four prebuilt datasets; I think we should have more. Here are some important missing datasets: KMNIST, EMNIST, LSUN, STL10, SVHN, PhotoTour, SBU, Flickr8k, Flickr30k, Cityscapes, SBD, and FakeData (a fake-image generator, useful for GANs).
As we now have the Estimator for Gluon (which will supposedly be released with MXNet 1.5), we could have many predefined estimators (very similar to scikit-learn, but with GPU support), for example:
from mxnet.gluon import estimator
model = estimator.linearregression
model = estimator.logisticregression
model = estimator.ridgeregression
model = estimator.lassoregression
model = estimator.knearestneighbors
model = estimator.kmeansclustering
model = estimator.svm
....etc
These classical ML algorithms work better than DL for some specific tasks, and many users want such ML algorithms with GPU support, so that'd be quite awesome.
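A purely hypothetical usage sketch of what such predefined estimators could look like (neither estimator.LogisticRegression nor the fit/predict methods exist in MXNet today; the names simply mirror scikit-learn):

```python
import mxnet as mx
from mxnet.gluon import estimator  # hypothetical module from the proposal

X = mx.nd.random.uniform(shape=(1000, 20), ctx=mx.gpu(0))
y = mx.nd.random.randint(0, 2, shape=(1000,), ctx=mx.gpu(0))

model = estimator.LogisticRegression()  # hypothetical class
model.fit(X, y)                         # hypothetical scikit-learn-style API
predictions = model.predict(X)
```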
We need a better and more up-to-date website design. For instance, at the bottom of the homepage even the "copyright (2017 - 2018)" statement is out of date, which makes it sound like MXNet is no longer sponsored by the Apache Incubator (BTW, is this true?).
The reason I am so worried about the website is because "IT IS IMPORTANT": the more we show users directly, the better their understanding can be. (For instance, me! When I opened the website for the first time, it was very difficult to find good tutorials and examples; instead, I had to rely on GitHub and ask in the forum separately.)
The build status badge shown in the README.md on the GitHub homepage is not working.
The KVStore API should not be public: first, I haven't really seen it used in any tutorial or implementation; second, gluon.Trainer already handles the functionality of KVStore internally; third, at the top of the KVStore tutorial it's clearly written that
.. note:: Direct interactions with KVStore are dangerous and not recommended.
So why are we telling users how to use it if it's so dangerous and not recommended?
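For context, gluon.Trainer already lets users choose the KVStore behaviour without touching KVStore objects directly; a minimal sketch:

```python
import mxnet as mx
from mxnet import gluon

net = gluon.nn.Dense(10)
net.initialize()

# The Trainer creates and drives the KVStore internally; users only pass a
# kvstore type string such as 'device', 'local', or 'dist_sync'.
trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'learning_rate': 0.1}, kvstore='device')
```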
That's a lot to take in, I know. Sorry if I've written something wrong above.
I think we should provide a user-friendly, thread-safe inference API for deployment in C++, Java, etc. We can focus on the naive engine for inference, since it's very hard to refactor the threaded engine to be thread-safe. A good and easy-to-use executor should have the following properties:
- One instance of the executor is enough for multi-threaded inference, which means it can be used simultaneously in different threads.
- The immutable ndarrays in the executor should be shared in multi-threaded inference to reduce the memory footprint.
Now we have MXPredCreateMultiThread in the C API, but it's buggy and we still need to create a separate executor for each thread.
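A rough sketch of the usage pattern being asked for, with a stubbed SharedPredictor standing in for the hypothetical thread-safe executor (no such class exists in MXNet today):

```python
import threading
import mxnet as mx

class SharedPredictor:
    """Stub for the hypothetical thread-safe executor: one instance, weights
    shared across threads, safe to call concurrently."""
    def __init__(self, symbol_file, params_file):
        self.symbol_file, self.params_file = symbol_file, params_file
    def predict(self, batch):
        return batch  # placeholder; a real implementation would run inference

predictor = SharedPredictor('model-symbol.json', 'model-0000.params')

def worker():
    # The same executor instance is used from every thread.
    predictor.predict(mx.nd.ones((1, 3, 224, 224)))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```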
Sounds like refactoring the execution engine with TBB and adding some buffering mechanism?
Agreed, the entire API (at the C API level) should be designed to be fully thread-safe for all requests, whether it's inference or training. This includes parallel calls from different threads, i.e. no locking or sticky threads.
Could we get rid of all the different pre-processor statements in the codebase that evolved due to the different accelerators (USE_CUDA, USE_TVM, USE_MKLDNN, etc.) and fully replace them with the accelerator API from @samskalicky? This would greatly improve maintainability.
In terms of operator definitions, we could use ONNX as the standard (or derive from it if it's not sufficient). At the moment, there's a tight coupling between the operator definitions and the accelerator choice.
different pre-processor statements ... replace them with the accelerator API
No, we cannot. They don't serve the same purpose.
we could use ONNX as standard
I don't believe we should. ONNX is only good for model exchange and not much else. Also, the community has already reached consensus on moving towards numpy, so it's probably not a good idea to get married to ONNX.
@marcoabreu We can definitely remove some of these pre-processor statements with the accelerator API (MKLDNN), but not all of them, as @szha points out. USE_CUDA needs to stay since GPU support is embedded pretty tightly. We might be able to push it out into an accelerator library, but not in time for 2.0.
I agree with @szha that ONNX is not the way to do this. We need to keep our operator registration in NNVM for now. What we could separate out, though, are the operator definitions (NNVM registration) from the compute functions (infer shape/type, fcompute, etc.). But again, I think we should take this slowly: enable actual accelerators first, then see if it makes sense for TensorRT/MKLDNN, and then maybe GPU.
I would like to see the accelerator API (or a first pass at it) as part of 2.0, though. Is this feasible from your perspective, @szha?
@samskalicky I'm not sure about the accelerator API. It seems that the existing subgraph API in combination with 1) better third-party operator support and 2) exposing graph partitioning as an API should be able to serve the same goal as the accelerator API. Those items are useful in other contexts too and deserve more attention, so I'd suggest those as an alternative to a specialized API just for accelerators.
@szha I agree the third-party operator support could be more useful to the broader community, and I have been continually working on it in my spare time. I would be interested in collaborating with others to move this along faster. Should we consider it as part of the 1.6 release?
But after discussing with @zheng-da today, the subgraph API + operator support does not serve the same goal as the accelerator API. Some additional external APIs (like external subgraph properties, support for compiled binaries for subgraphs, or binding accelerators to subgraphs) would be needed to serve the same goal.
I think we're talking about the same thing. (i.e. item 2 in my last response)
Maybe I misunderstood item 2; I assumed it meant better APIs for partitioning. Could you clarify what you mean by item 2? Do you mean third-party graph partitioning (similar to third-party operator support)?
(1) #10840: Add einsum since it is useful and could simplify linear algebra ops (it is now supported by TensorFlow and PyTorch); see the sketch after this comment.
(2) Separate MXNet.dll into smaller files to reduce the import time.
nvcc will generate a HUGE file with -arch=(...a lot of sm_*...); it seems that splitting the HUGE KNOWN_CUDA_ARCHS instruction may improve import performance.
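For item (1), a few examples of what einsum covers; plain numpy is used here only to illustrate the semantics that an MXNet einsum would presumably mirror:

```python
import numpy as np

a = np.random.rand(2, 3)
b = np.random.rand(3, 4)

matmul = np.einsum('ij,jk->ik', a, b)         # matrix multiplication
trace = np.einsum('ii->', np.eye(3))          # trace of a square matrix
outer = np.einsum('i,j->ij', a[0], b[:, 0])   # outer product
batch = np.einsum('bij,bjk->bik',             # batched matrix multiplication
                  np.ones((5, 2, 3)), np.ones((5, 3, 4)))
```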
I would like to bring up one issue with the profiler: currently, there is a flag kSymbolic that supposedly controls whether to profile operators called in symbolic mode. However, there is rarely a use case where users would want to set it to False; also, in the backend, this flag is not used at all, meaning that even if it's set to False, the profiler will still profile symbolic operators.
I have an issue about this: https://github.com/apache/incubator-mxnet/issues/15658.
I think maybe we should have this flag removed in 2.0 to avoid confusion?
The module.bind API has a param named data_shapes, which is misleading because the param is not limited to just shapes: the entries are data descriptors and accept DataDesc instances. I think this should be fixed in 2.0.
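A short sketch of the current behaviour being described, assuming a typical Module setup:

```python
import mxnet as mx

sym = mx.sym.FullyConnected(mx.sym.var('data'), num_hidden=10)

# Despite the name, data_shapes takes data descriptors, not bare shapes.
desc = mx.io.DataDesc(name='data', shape=(32, 20))
mod = mx.mod.Module(symbol=sym, data_names=['data'], label_names=None)
mod.bind(data_shapes=[desc], for_training=False)
```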
Julia-related issue
Expect more image operations: adjust_colors (not random), rotate, and more
Remove the deprecated infer_range argument for nd.arange. The option is actually not supported.
We need to fix this issue as well: https://github.com/apache/incubator-mxnet/issues/16216. It's a breaking change.
Are there any plans to move the training logic (dataset handling, distributed training, etc) into the core to avoid having all that logic in the frontend languages?
Let's start a discussion here about the roadmap towards MXNet 2.0. We are looking for:
If you have any item that you'd like to propose to have in the roadmap, please do:
Given that this would be a major release, we'd have the opportunity to make backward incompatible changes. This would allow us to visit some topics that require large changes such as dropping support for python2, transitioning fully to cmake, making the tensor library numpy-compatible, or even new programming models.
Now that we decided to follow semantic versioning for releases, it would be a good idea to coordinate features and API changes to make the best use of the next major release. Thus, I propose that we use this issue to track the APIs we'd like to change in the next major version.
The candidates I've collected so far:
download in #9671
Once there are more such requests, I will try to organize these API-breaking requests better.