google-deepmind / torch-distributions

Probability distributions, wrapped for Torch.
BSD 3-Clause "New" or "Revised" License

Discuss cloning policy #13

Closed by jucor 10 years ago

jucor commented 11 years ago

Issue by jucor from Thursday Nov 14, 2013 at 09:26 GMT. Originally opened as https://github.com/jucor/torch-randomkit/issues/29


Hi @d11, @akfidjeland

I was wondering which policy we want to follow:

  1. avoid cloning at all costs, or
  2. allow ourselves to use the input arguments as temporary workspace, leaving it up to the caller to clone before calling if they want to be safe?

While I've always used 1. in other, const-less languages, 2. seems more in line with Torch's philosophy of speed and with the typical use of NN. What do you think? We could still guarantee that we never resize the caller's tensor, by creating a new Tensor pointing to the same storage whenever a resize is needed, i.e. we would never do

```lua
function f(x)
   x:resize(1, x:numel())
end
```

but instead

```lua
function f(x)
   x = torch.Tensor(x):resize(1, x:numel())
end
```

so that the caller is guaranteed that their views of the storage never change shape, though the contents of the storage might change.
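For concreteness, here is a minimal sketch of what that would look like from the caller's side, assuming Torch7; the function name `fillFlattened` is made up for illustration:

```lua
-- Minimal sketch, assuming Torch7: `fillFlattened` is a hypothetical function
-- that needs a flat view of x as workspace.
require 'torch'

local function fillFlattened(x, value)
   -- torch.Tensor(x) shares x's storage; resizing this new tensor changes
   -- only its own view, not the caller's.
   local flat = torch.Tensor(x):resize(1, x:nElement())
   flat:fill(value)
end

local x = torch.zeros(2, 3)
fillFlattened(x, 7)
print(x:size())  -- still 2x3: the caller's view is untouched
print(x)         -- contents are now all 7s, since the storage is shared
```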

What's your opinion?

Julien

jucor commented 11 years ago

Comment by d11 from Thursday Nov 14, 2013 at 10:52 GMT


I was assuming we shouldn't ever modify inputs since it has the potential to lead to pretty confusing bugs, but if speed is the top priority then I can see there's an argument for doing it the other way. If we change this, note that we may need to update some of the tests so that they don't break.
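For what it's worth, a small made-up sketch of the kind of surprise this could cause under policy 2 (the function name `logSum` is hypothetical, not part of the library):

```lua
-- Hypothetical sketch of the aliasing surprise: under policy 2 the function
-- is allowed to use its argument as scratch space.
require 'torch'

local function logSum(x)
   x:log()        -- input silently overwritten with its element-wise log
   return x:sum()
end

local samples = torch.rand(3)
logSum(samples)
-- samples no longer holds the original draws; any later use of it is wrong
-- unless the caller remembered to pass samples:clone() instead.
```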


jucor commented 11 years ago

Comment by akfidjeland from Thursday Nov 14, 2013 at 13:58 GMT


As discussed offline, I think we shouldn't completely disallow cloning. Each function should have a clean version with a tidy, unsurprising interface that clones internally where necessary. This could be extended with an additional optional state table argument, which callers can use when they want to run in constant space.
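As a rough illustration of that interface (the function `standardize` and the shape of the state table are made up here, not an agreed design), the clean call allocates for itself, while a caller-kept state table lets repeated calls reuse the same workspace:

```lua
-- Minimal sketch, assuming Torch7; `standardize` and its state table are
-- hypothetical, shown only to illustrate the proposed two-level interface.
require 'torch'

local function standardize(x, state)
   state = state or {}
   -- clean path: never touch x; reuse state.result if the caller supplied one
   state.result = state.result or torch.Tensor()
   local result = state.result
   result:resizeAs(x):copy(x)
   result:add(-result:mean()):div(result:std())
   return result
end

-- Tidy, allocating call:
local y = standardize(torch.randn(5))

-- Constant-space calls: the caller keeps the state table, so the workspace
-- inside it is reused across calls instead of being reallocated.
local state = {}
for i = 1, 3 do
   standardize(torch.randn(5), state)
end
```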