torch / cutorch

A CUDA backend for Torch7

missing API against TH #70

Open soumith opened 10 years ago

soumith commented 10 years ago

The following math functions are missing in THC but present in TH:

Once these are implemented, cwrap entries can be added to make cutorch fully API-compatible with torch.

clementfarabet commented 10 years ago

Hey @soumith, so running cutorch.test() currently reports errors on cumsum and cumprod (missing). Is that on purpose?

soumith commented 10 years ago

Yes, they are yet to be implemented in cutorch. I'll get to that next week. They can be done with a Thrust scan.
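
Roughly something like this for the contiguous 1-D case (just a sketch, not the eventual cutorch code; the helper name `cumsum_1d` is made up):

```cpp
// Sketch: cumulative sum of a contiguous float array on the GPU via a
// Thrust inclusive scan. cumprod would be the same call with
// thrust::multiplies<float>() passed as the binary operator.
#include <thrust/device_ptr.h>
#include <thrust/scan.h>

void cumsum_1d(float* d_in, float* d_out, long n)  // hypothetical helper
{
  thrust::device_ptr<float> in(d_in);
  thrust::device_ptr<float> out(d_out);
  // out[i] = in[0] + in[1] + ... + in[i]
  thrust::inclusive_scan(in, in + n, out);
}
```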

clementfarabet commented 10 years ago

Ok cool.

dominikgrewe commented 9 years ago

Does anyone have an implementation of std and var yet? Otherwise I'll have a go next week. AFAICT we can't use THCudaTensor_reduceDim, so we'll need something custom (but with a similar structure to reduceDim).
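
To make the idea concrete, here is a deliberately naive sketch of var along the innermost, contiguous dimension (one thread per output element, no shared-memory reduction, hypothetical names) just to show the structure such a custom kernel would have:

```cpp
// Sketch: variance along the innermost (contiguous) dimension of a 2-D
// float tensor, one thread per row. Deliberately naive -- a real kernel
// would reduce within a block and reuse reduceDim-style indexing.
__global__ void var_innermost_naive(const float* in, float* out,
                                    long rows, long cols, bool unbiased)
{
  long row = blockIdx.x * (long)blockDim.x + threadIdx.x;
  if (row >= rows) return;

  const float* p = in + row * cols;
  float mean = 0.f;
  for (long i = 0; i < cols; ++i) mean += p[i];
  mean /= cols;

  float ssq = 0.f;  // sum of squared deviations from the mean
  for (long i = 0; i < cols; ++i) {
    float d = p[i] - mean;
    ssq += d * d;
  }
  out[row] = ssq / (unbiased ? (cols - 1) : cols);
  // std would just be sqrtf(out[row])
}
```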

soumith commented 9 years ago

No, we don't have it. This has been long overdue, but we should coordinate these; let me email you guys.

dominikgrewe commented 9 years ago

Any progress on cumsum and cumprod?

soumith commented 9 years ago

Have not started them yet!

dominikgrewe commented 9 years ago

Okay, I might have a go soon then, because we'd like to use it. Just waiting for the THCState PR to be merged.

soumith commented 9 years ago

I will merge the THCState PR on Friday. All the patches have been prepared except for fbcunn; I'm working on that as well.

dominikgrewe commented 9 years ago

@soumith For reductions along a single dimension, we still have the restriction that the tensor must not have more than 4 dimensions. Did you say you're working on a fix for that? If so, what's the progress on it? If you're not working on it, we could easily change the code to what we do for std and var, where there's no restriction on dimensionality at all.

soumith commented 9 years ago

@wickedfoo already has a PR for that internally. It's all implemented. I'm on vacation till the 26th, so I will sync those changes at the end of this month.

soumith commented 9 years ago

His PR is for apply (and apply2) along an arbitrary dimension, not reductions. It generalizes the copy kernels and changes all the tensor math to use these apply kernels where appropriate, instead of making tensors contiguous and calling Thrust.
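
The core idea, roughly (a much simplified sketch of such an apply kernel, not the actual PR code): map each linear element index back to a strided memory offset, so non-contiguous tensors can be touched in place without a newContiguous copy.

```cpp
// Simplified sketch of a strided "apply": a grid-stride loop converts each
// linear element index into a memory offset from the tensor's sizes and
// strides, so the op runs in place on non-contiguous tensors. The real
// kernels are templated over the operator and specialized per dim count.
__global__ void apply_fill(float* data,
                           const long* sizes, const long* strides,
                           int nDims, long nElements, float value)
{
  for (long linear = blockIdx.x * (long)blockDim.x + threadIdx.x;
       linear < nElements;
       linear += (long)gridDim.x * blockDim.x) {
    long offset = 0;
    long rest = linear;
    for (int d = nDims - 1; d >= 0; --d) {  // peel off one coordinate per dim
      long coord = rest % sizes[d];
      rest /= sizes[d];
      offset += coord * strides[d];
    }
    data[offset] = value;  // a real pointwise op would read/modify instead
  }
}
```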

soumith commented 9 years ago

That's the status on that. He did not work yet on arbitrary reductions. If you want to tackle that, go for it.

dominikgrewe commented 9 years ago

Thanks for the update. I'll have a go at the reductions kernel then. Looking forward to the apply kernels!

wickedfoo commented 9 years ago

I'm on vacation too until next week, but I don't think the generic apply stuff has been pushed yet; I think only the old version of the copy kernel has. I reimplemented all cutorch math (pointwise operators) in terms of it, so no newContiguous calls are needed. I don't yet support reductions, but it shouldn't be too hard to add that in.

dominikgrewe commented 9 years ago

If I remember correctly, you guys said you'd look into maskedFill etc, right? Any progress on that?

wickedfoo commented 9 years ago

For maskedFill etc., do you want the mask to be a float vector (because that's the only thing we have in cutorch at present), or do you want it to be 4 bytes packed into a float?

dominikgrewe commented 9 years ago

I guess float vectors make the most sense, because that's what the logical functions (gt, ge, etc.) return.
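
For the contiguous case that would be something as simple as the following (a hedged sketch with a hypothetical kernel name, interpreting nonzero mask values as "selected"):

```cpp
// Sketch: maskedFill over contiguous storage with the mask stored as
// floats (nonzero means "fill this element"), matching what gt/ge return.
__global__ void masked_fill_contig(float* data, const float* mask,
                                   long n, float value)
{
  long i = blockIdx.x * (long)blockDim.x + threadIdx.x;
  if (i < n && mask[i] != 0.f) {
    data[i] = value;
  }
}
```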

wickedfoo commented 9 years ago

I have maskedFill/Copy/Select done. For sort() I only handle power-of-2 sizes at present (though on input with an arbitrary number of dimensions), so I'm still working on that. maskedFill, maskedCopy and sort avoid newContiguous on the input; for maskedSelect I chickened out and just used two passes and temporary space with a Thrust prefix scan.

Re: "For reductions along a single dimension, we still have the restriction that the tensor must not have more than 4 dimensions. Did you say you're working on a fix for that? If so, what's the progress on it?" I have this fixed as well, took the copy kernel code and made a reduction kernel out of it, so no calls to newContiguous/copies etc. needed. Not a global reduction kernel (like a norm that reduces down to one point, but reduces along a dimension. sort() exploits similar code. I want to do the same shared memory optimization (so I can use coalesced reads) that you did if the reduction dimension is innermost/most contiguous though.

dominikgrewe commented 9 years ago

Cool, looking forward to that. Yes, using the shared memory approach for reductions along contiguous dimensions is vital.

dominikgrewe commented 9 years ago

When do you think you'll have a PR for maskedFill etc?

soumith commented 9 years ago

It's in review, and Jeff is still working on revamping our codebase for the state-argument-based change. Hopefully sometime next week.

And the cutorch TensorMath changes that remove most of the sync points on non-contiguous cases will also land at the same time.

jaderberg commented 9 years ago

Any progress on the masked* functions?

soumith commented 9 years ago

They're implemented. We are working on refactoring our code and syncing with master; we will try to merge them this week.

soumith commented 9 years ago

An update: Jeff has powered through our internal refactor, getting us back to parity with OSS cutorch. There are three PRs coming up.

It will either be EOD today or most likely Monday/Tuesday.

soumith commented 9 years ago

Looks like what's left is the small fish.

dominikgrewe commented 9 years ago

Thanks Soumith. Looking forward to that!

What about convolution functions like conv2, conv3 etc? There's code in THCTensorConv.cu, but it's not exposed in Lua. Any idea why? If we want full API parity between Torch and cutorch, we should add those, don't you think?

soumith commented 9 years ago

Ah yes, you are right; not sure why I did not add them to the list. Writing conv2/conv3 kernels from scratch is not going to be worth our time. Maybe we can use one of the cu* APIs for that? Either that, or on the GPU we use a buffer to unfold the input and do a matrix multiply (MM). What do you think?
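
To spell out the unfold + MM option: for a single-plane, valid-mode xcorr2, an unfold kernel copies each output pixel's receptive field into one column of a buffer, and the whole convolution then becomes a single GEMM call. A rough sketch (hypothetical names, no strides or padding; conv2 would additionally flip the kernel):

```cpp
// Sketch: unfold (im2col) a single-channel image so that valid-mode 2-D
// cross-correlation reduces to one matrix product. The columns buffer is
// laid out as (kh*kw) x (oh*ow); output = flattened kernel (1 x kh*kw)
// times this buffer, i.e. one cublasSgemm/cublasSgemv call.
__global__ void unfold2d(const float* image, float* columns,
                         long ih, long iw, long kh, long kw)
{
  long oh = ih - kh + 1, ow = iw - kw + 1;
  long col = blockIdx.x * (long)blockDim.x + threadIdx.x;  // one output pixel
  if (col >= oh * ow) return;

  long oy = col / ow, ox = col % ow;
  for (long ky = 0; ky < kh; ++ky)
    for (long kx = 0; kx < kw; ++kx)
      // row index = position inside the kernel window
      columns[(ky * kw + kx) * (oh * ow) + col] =
          image[(oy + ky) * iw + (ox + kx)];
}
```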

dominikgrewe commented 9 years ago

If we can use cuDNN for this, then that would be easiest I guess.

soumith commented 9 years ago

cuDNN is still not shipped with the CUDA toolkit, so not everyone has it. So it falls into the murky territory of whether we really want to introduce a hard dependency on cuDNN.

I am okay with a pcall to cudnn and raising an error when it is not found, but I am not sure how it will go down with the others.

dominikgrewe commented 9 years ago

There are a number of linear algebra functions missing: symeig, eig, inverse, etc. In Torch they seem to be implemented by wrapping LAPACK. Could we do something similar for cutorch? There's MAGMA and CULA; does anyone have experience with these libraries?

soumith commented 9 years ago

MAGMA looks best; we built MAGMA internally and it looks reasonably good.

soumith commented 9 years ago

Also, on the cuDNN note, we can configure a header (like THGeneral.h.in) if we find cuDNN. Caffe already has the CMake macros needed for finding cuDNN: https://github.com/BVLC/caffe/blob/master/cmake/Cuda.cmake

dpfau commented 9 years ago

Hi guys. Noticed this thread when dealing with a script that needs diag, svd and eig on CudaTensors. I implemented diag myself in Lua using storage() and set(), but svd and eig are beyond my ken. What's the plan for that?

soumith commented 9 years ago

One of my colleagues, @samgross, is working on it by interfacing with the MAGMA CUDA library. It'll happen over the next month or so, when he finishes it up and sends a PR.

wickedfoo commented 9 years ago

This is not on this list, but I'm in the process of implementing THCudaTensor_multinomial as well.

nicholas-leonard commented 9 years ago

@wickedfoo Awesome. Would love to see multinomial in cuda.

hughperkins commented 9 years ago

Just to confirm, scatter/gather aren't implemented in cutorch, right?

dominikgrewe commented 9 years ago

That's right. I meant to do it, but haven't had the time yet, sorry.

hughperkins commented 9 years ago

For gather (which I suddenly realize could be useful for implementing ClassNLLCriterion.forward without needing a custom kernel, I guess), I suppose a simple naive first cut could be:

Does that sound about right? Anything else I should bear in mind if I write a naive gather along these lines? (I'll be targeting cltorch, I confess, but it's quite easy to convert cutorch <-> cltorch kernels, I think.)

(Edit: what do you think is the most similar existing class/kernel to base this off of? And/or thoughts on where to put this, i.e. filename(s)?)
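
(For the record, since the snippet isn't reproduced above: a naive 2-D gather along the second dimension, which is the case ClassNLLCriterion.forward would need, could look something like the sketch below. Hypothetical kernel, float index tensor since that's all cutorch has, 1-based Torch-style indices, contiguous inputs.)

```cpp
// Sketch (not the original snippet): naive 2-D gather along dim 2,
// result[i][j] = src[i][ index[i][j] ], with the index stored as floats.
// One thread per output element, contiguous row-major storage.
__global__ void gather_dim2_naive(const float* src, const float* index,
                                  float* result,
                                  long rows, long srcCols, long outCols)
{
  long i = blockIdx.x * (long)blockDim.x + threadIdx.x;
  if (i >= rows * outCols) return;

  long r = i / outCols, c = i % outCols;
  long srcCol = (long)index[r * outCols + c] - 1;  // 1-based -> 0-based
  result[r * outCols + c] = src[r * srcCols + srcCol];
}
// For ClassNLLCriterion.forward, index would hold the target class per row
// (outCols == 1), and the gathered values are the per-sample log-probs.
```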

hughperkins commented 9 years ago

One of my colleagues @samgross is working on it by interfacing the magma cuda library. It'll happen over the next month or so when he finishes it up and sends a PR.

MAGMA looks cool. It has an OpenCL version too, it seems :-)

hughperkins commented 9 years ago

Here is a gather implementation. It's not very well tested yet...

hughperkins commented 9 years ago

Shoehorned the Lua wrapper into TensorMath.lua: https://github.com/hughperkins/cltorch/commit/0e469f430cfee8b07cc011057849357412ca679b

hughperkins commented 9 years ago

Did scatter too, since it seems like more of the same.

(Edit: and scatterFill, in the same files.)

abyravan commented 8 years ago

Is there any update on adding these functions? I'm in need of the cross product on CudaTensors.

I can code it up if someone can point me towards the things that need to be done. All the layers I've written so far are on the Lua side, and I'm not sure how to make the connection between CUDA and Lua. Thanks!

soumith commented 8 years ago

@abyravan I could get to cross next week.

abyravan commented 8 years ago

That would be great, thanks a lot! Is there any sort of tutorial or how-to on adding new functionality for tensors? It would be useful to have :)

soumith commented 8 years ago

You can look at existing PRs that are cross-linked in this thread. Like: https://github.com/torch/cutorch/pull/120 https://github.com/torch/cutorch/pull/96 https://github.com/torch/cutorch/pull/75

soumith commented 8 years ago

nonzero is being implemented by FB, should be out in a few days.

pvtokmakov commented 7 years ago

Hi guys,

any hope of implementing conv2 (or xcorr2, for that matter)?