google-deepmind / torch-randomkit

Provides and wraps the Randomkit library, copied from Numpy.
BSD 3-Clause "New" or "Revised" License

Discuss cloning policy #29

Closed · jucor closed this 11 years ago

jucor commented 11 years ago

Hi @d11, @akfidjeland

I was wondering: which policy do we want to follow:

  1. either avoid cloning at all cost, or
  2. allow using the input arguments as temporary workspace, leaving it up to the caller to clone before calling if they want to be safe?

While I've always used 1. in other const-less languages, 2. seems more in line with Torch's philosophy of speed and the typical use of nn. What do you think? We could still guarantee that we never resize the caller's tensor, by creating a new Tensor pointing to the same storage whenever we need to resize, i.e. we would never do

function f(x)
   x:resize(1, x:numel())
end

but instead

function f(x)
   x = torch.Tensor(x):resize(1, x:numel())
end

so that the caller is guaranteed that their views of the storage are never changed -- though the content of the storage might still change.
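For concreteness, here is a minimal sketch of what that guarantee looks like from the caller's side. The function f is just the placeholder from above, and the behaviour relies on standard Torch semantics, where torch.Tensor(x) returns a new Tensor referencing x's storage:

require 'torch'

-- Placeholder following the second pattern above: work on a fresh Tensor
-- that shares x's storage, and never resize x itself.
local function f(x)
   local y = torch.Tensor(x):resize(1, x:numel())
   y:fill(0)  -- writes through the shared storage
   return y
end

local x = torch.rand(2, 3)
f(x)
print(x:size())  -- still 2x3: the caller's view of the storage is unchanged
print(x:sum())   -- 0: the content of the storage was modified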

What's your opinion?

Julien

d11 commented 11 years ago

I was assuming we shouldn't ever modify inputs since it has the potential to lead to pretty confusing bugs, but if speed is the top priority then I can see there's an argument for doing it the other way. If we change this, note that we may need to update some of the tests so that they don't break.


akfidjeland commented 11 years ago

As discussed offline, I think we shouldn't completely disallow cloning. Each function should have a clean version with a tidy and unsurprising interface that does cloning internally if necessary. This could be extended with an additional, optional state table argument that the user can pass in order to run in constant space.
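Something along these lines, purely as an illustration -- the function name normalise and the state-table field work are hypothetical, not part of torch-randomkit:

require 'torch'

-- Clean interface: clones internally by default, never touches the input.
-- Optional state table: the caller can pass it to reuse workspace across
-- calls and run in constant space.
local function normalise(x, state)
   local work
   if state then
      state.work = state.work or torch.Tensor()  -- lazily create workspace
      state.work:resizeAs(x):copy(x)
      work = state.work
   else
      work = x:clone()                           -- default: defensive copy
   end
   work:div(work:sum())
   return work
end

-- Simple call: the input is left untouched.
local y = normalise(torch.rand(5))

-- Constant-space usage: repeated calls reuse the same buffer.
local state = {}
for i = 1, 100 do
   y = normalise(torch.rand(5), state)
end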

jucor commented 11 years ago

Moved to jucor/torch-distributions#13