torch / cunn


When should LookupTable allocate memory? #422

Closed juesato closed 7 years ago

juesato commented 7 years ago

I have a LookupTable layer that's allocating memory during seemingly random backward passes (it happens roughly once every 100 backward passes).

I've narrowed the allocation down to this line:

   self.gradWeight.THNN.LookupTable_accGradParameters(
      input:cdata(),
      gradOutput:cdata(),
      self.gradWeight:cdata(),
      self._count:cdata(),
      THNN.optionalTensor(self._sorted),
      THNN.optionalTensor(self._indices),
      self.shouldScaleGradByFreq or false,
      self.paddingValue or 0,
      scale or 1
   )

but I'm not clear on what's going on inside the cuda kernel.

All the inputs to the layer should be the same type (torch.LongTensor) and shape.

Could somebody give me tips on what to look for?
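(For anyone trying to reproduce this: one way to confirm which call allocates is to bracket the backward pass with `cutorch.getMemoryUsage`, which returns free and total bytes for the current device. A sketch, assuming a `model` containing the LookupTable and `input`/`gradOutput` tensors already on the GPU:)

   -- Sketch: compare free device memory before and after backward.
   -- synchronize() first so pending kernels don't skew the reading.
   cutorch.synchronize()
   local freeBefore = cutorch.getMemoryUsage()
   model:backward(input, gradOutput)
   cutorch.synchronize()
   local freeAfter = cutorch.getMemoryUsage()
   if freeAfter < freeBefore then
      print(('backward allocated %d bytes'):format(freeBefore - freeAfter))
   end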

soumith commented 7 years ago

It's doing that when a thrust call happens, I think. If you update to the latest cutorch/cunn, you'll get this commit, which should help: https://github.com/torch/cunn/commit/1f8292f4f8334ab9e2433a0960607c780b29f848

juesato commented 7 years ago

Sorry for the slow turnaround. That does indeed fix the issue! Thanks.