Closed: @domdivakaruni closed this issue 7 years ago.
NumpyOp, PythonOp, and NDArrayOp are deprecated. We only support CustomOp now.
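For anyone documenting the deprecated operator classes, it may help to show what their replacement must compute. The sketch below is a plain-numpy illustration of the forward/backward pair a custom operator (e.g. MXNet's `mx.operator.CustomOp`) implements, using a hypothetical sigmoid operator as the example; the function names here are illustrative, not MXNet API.

```python
import numpy as np

def sigmoid_forward(x):
    # Elementwise logistic function: 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_backward(out_grad, out):
    # Chain rule with the saved forward output y: dL/dx = dL/dy * y * (1 - y)
    return out_grad * out * (1.0 - out)

x = np.array([0.0, 2.0, -2.0])
y = sigmoid_forward(x)                      # y[0] == 0.5
dx = sigmoid_backward(np.ones_like(y), y)   # dx[0] == 0.25
```

A real CustomOp wraps exactly this pair: `forward` produces the output, `backward` combines the incoming gradient with saved values.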
I will document "LinearRegressionOutput" API
I will document all samplers and random-number generators, since I enhanced or authored them anyway, namely:
random_gamma random_exponential random_poisson random_negative_binomial random_generalized_negative_binomial
sample_uniform sample_normal sample_gamma sample_exponential sample_poisson sample_negative_binomial sample_generalized_negative_binomial
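The difference between the two families above is worth capturing in the docs: the `random_*` operators take scalar distribution parameters and a shape, while the `sample_*` operators take arrays of parameters and draw samples per parameter entry. A minimal numpy sketch of that contract (the function bodies here are illustrative stand-ins, not MXNet's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_gamma(alpha, beta, shape):
    # Scalar alpha/beta: one block of i.i.d. draws of the requested shape.
    return rng.gamma(alpha, beta, size=shape)

def sample_gamma(alpha, beta, shape=(1,)):
    # Array alpha/beta: `shape` draws per parameter pair, so the output
    # shape is alpha.shape + shape.
    alpha = np.asarray(alpha, dtype=float)
    beta = np.asarray(beta, dtype=float)
    a = alpha.reshape(alpha.shape + (1,) * len(shape))
    b = beta.reshape(beta.shape + (1,) * len(shape))
    return rng.gamma(a, b, size=alpha.shape + tuple(shape))

r = random_gamma(2.0, 1.0, shape=(3,))          # shape (3,)
s = sample_gamma([1.0, 2.0], [1.0, 3.0], (4,))  # shape (2, 4)
```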
I may need more than 2 business days, but it should definitely be done by the end of next week.
I will document functions 125 to 147 in the rnn package.
If I get more time, I will also work on 161–168.
@moontails that is tremendous, thank you! Do you plan on making PRs or creating issues with your functions in Markdown? When you are done with your first function, send it in so we can help with a review before you get rolling with the rest. Great stuff!
@domdivakaruni I was thinking of creating PRs.
That's a good idea! I will submit a PR when I'm done with the changes for the first function and gather all feedback before diving in on the rest :)
I can document some of ndarray operators:
102 gamma ndarray
103 gammaln ndarray
104 identity ndarray
105 load ndarray
106 moveaxis ndarray
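For a couple of the ndarray operators claimed above, a short numpy analogue may help when writing examples for the docstrings (these are numpy/stdlib stand-ins, not the MXNet functions themselves):

```python
import math
import numpy as np

# gammaln: elementwise log-gamma; lgamma(5) == log(4!) == log(24).
x = np.array([1.0, 2.0, 5.0])
logs = np.array([math.lgamma(v) for v in x])

# identity: returns its input unchanged (a no-op passthrough).
ident = x.copy()

# moveaxis: relocate one axis of an array without changing the data.
a = np.zeros((2, 3, 4))
b = np.moveaxis(a, 0, -1)   # shape becomes (3, 4, 2)
```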
Hi @moontails do you want to call ownership of all 22 rnn functions at once or do you want to stake your claim in batches as you make progress?
I will work on zeros_like, ones_like, softmax_cross_entropy, c_array, c_str
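Of those, softmax_cross_entropy is the one that most needs a worked example in its docs. A hedged numpy sketch of the usual definition (row-wise log-softmax, then summed negative log-likelihood of the labels); whether MXNet's operator sums or averages over the batch should be confirmed against the source:

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Subtract the row max first for numerical stability.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Sum the negative log-probability of the true class over the batch.
    return -log_probs[np.arange(len(label)), label].sum()

# Uniform logits over 3 classes: each row contributes log(3) to the loss.
loss = softmax_cross_entropy(np.zeros((2, 3)), np.array([0, 1]))
```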
I can work on resize_short, scale_down.
@yuruofeifei @jhwei @zhenwendai good stuff!
@domdivakaruni either works fine for me. I had hoped to send out a PR yesterday to get some feedback before working on the rest, but ran into some issues setting up the repo (installing MXNet and running `make lint`).
@moontails did you get it worked out? Let me know what you ran into. @nswamy fyi
@moontails what's the issue that you are running into? @domdivakaruni can we have folks pick a few APIs at once and add more to their bucket as they make progress?
I can document 6–11.
Question: should we use reST or Markdown for improvements through GitHub issues? This page says Markdown, but pull request pages require reST.
@Lyken17 please prefer to use reST and submit a PR. If you want to submit docs outside of the code (Python docstrings or the describe method of the C++ code), use either reST or Markdown and submit an issue.
@zhenwendai, the docs for these APIs are already updated; could you please pick a different set? 102 gamma ndarray, 103 gammaln ndarray, 104 identity ndarray.
I will work on 58 color_normalize. BTW, where are the definitions of 78 fft and 79 ifft?
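For color_normalize, the operation is simple enough that a one-line numpy example could go straight into the docstring. This is a sketch of the usual per-channel normalization, assuming HWC image layout; the exact broadcasting MXNet uses should be checked against the operator:

```python
import numpy as np

def color_normalize(src, mean, std):
    # Subtract the per-channel mean and divide by the per-channel std.
    return (src - mean) / std

img = np.full((2, 2, 3), 0.5)                 # a 2x2 RGB image
mean = np.array([0.485, 0.456, 0.406])        # common ImageNet means
std = np.array([0.229, 0.224, 0.225])
out = color_normalize(img, mean, std)
```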
I can document 2-5.
I can share 130-145 with @moontails. Let me know which ones would you prefer that I do.
@piiswrong do we need to document 146 BaseRNNCell._get_activation and 147 FusedRNNCell._slice_weights? These are not intended for use other than by the APIs.
@moontails are you still working on these APIs? What can you share with @szha? can you please let us know as soon as you can? Thanks!
@domdivakaruni @szha I am sorry, guys; I still haven't been able to get it to work on my MacBook.
I will let @szha start checking things off from 130–145 and get back on track if I am able to unblock myself.
@moontails I don't know which problem you ran into, but I also had to resolve some hiccups on my Mac. I resolved it by:
That got me going. I used a separate MXNet repo for standard building/installation and functional testing (I had to touch a bit more than just the "describe" section of the operators). I don't know whether this was necessary, but at least it worked that way.
I punted on lint on my Mac as I couldn't get it to work. Instead, I submitted the pull request and fixed the flagged lint issues based on the automatic tests. It's straightforward.
I'm going to take the four kvstore APIs, 178–181.
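Since the kvstore docs will need a usage example, here is a hypothetical single-process mock (`ToyKVStore` is not an MXNet class) illustrating the init/push/pull contract that those APIs follow:

```python
import numpy as np

class ToyKVStore:
    """Single-process stand-in for a distributed key-value store:
    init registers a key, push accumulates updates, pull reads the value."""

    def __init__(self):
        self._store = {}

    def init(self, key, value):
        self._store[key] = np.array(value, dtype=float)

    def push(self, key, value):
        # A real kvstore aggregates pushes from all workers before
        # applying an optional updater; here we just add them in.
        self._store[key] = self._store[key] + np.asarray(value, dtype=float)

    def pull(self, key):
        return self._store[key].copy()

kv = ToyKVStore()
kv.init("weight", [1.0, 2.0])
kv.push("weight", [0.5, 0.5])
w = kv.pull("weight")   # array([1.5, 2.5])
```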
@Lyken17 any luck updating the docs for the APIs that you picked?
Hi @nswamy
I just updated GridGenerator and submitted it in issue https://github.com/dmlc/mxnet/issues/6147.
I ran into trouble while documenting ndarray.UpSampling. Even I cannot find the proper way to use it; according to issues https://github.com/dmlc/mxnet/issues/2823, https://github.com/dmlc/mxnet/issues/4134, and https://github.com/dmlc/mxnet/issues/1412, it seems to be an incomplete implementation. I also looked up several FCN MXNet implementations; they all use Deconvolution instead of UpSampling.
@Lyken17, thanks for posting the issue; I will convert this to actual documentation in the code. You can skip UpSampling for now; we might need to add a note that it is incomplete. @mli @piiswrong, what do you suggest?
For the other APIs that you work on, can you please edit the docs in the code directly? If building and rendering takes time or you run into an issue, you can skip that step and just follow the tips to avoid any lint errors.
Hi @nswamy
I've just posted a PR for 118 and 119: https://github.com/dmlc/mxnet/pull/6314
One thing, though, is that I wasn't able to get doc rendering going on my Mac. `make lint` was fine, but building the docs always failed. I'll post a separate issue to get some advice on that.
Let's fix our docs!
The effort to spruce up MXNet's Python API docs is picking up steam but could use your help to add that extra oomph! Many of the APIs now have proper explanations and good examples, but we still have much more to go!
Here’s how you can join in and help:
- Pick a few functions (or at least 1 :smile: ) from the list below
- Look up the current documentation
- Write up:
- Submit your work
API List
- NDArrayOp.declare_backward (operator)
- NumpyOp (operator)
- PythonOp (operator)
- PythonOp.backward (operator)
- PythonOp.forward (operator)
- PythonOp.get_symbol (operator)
- PythonOp.infer_shape (operator)
- PythonOp.list_arguments (operator)
- PythonOp.list_outputs (operator)
- PythonOp.need_top_grad (operator)