Try adding this line to the top of your scripts:
jit.off()
Still getting the same error.
So, the error pops up because of how the sub-tensors are extracted. Say I have two 100x1000 feature tensors and I want to compute the Gaussian kernel between the features. When I extract the first row vector with feature1[{1,{}}] and the second row vector with feature2[{2,{}}], torch-autograd throws an error, which means selecting sub-tensors with that indexing syntax doesn't work under torch-autograd. I changed the indexing to feature1[1] and feature2[2], and now it works.
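For anyone hitting the same thing, here is a minimal sketch of the workaround. The squared Euclidean distance between the two rows stands in for the actual Gaussian-kernel code, which isn't shown in this thread:

```lua
require 'torch'
local grad = require 'autograd'

-- Toy loss over one row of each feature tensor, using plain integer
-- indexing, which torch-autograd can differentiate through.
local function rowLoss(params)
   local x = params.feature1[1]   -- works; params.feature1[{1,{}}] errored
   local y = params.feature2[2]   -- works; params.feature2[{2,{}}] errored
   local diff = x - y
   return torch.sum(torch.cmul(diff, diff))   -- squared Euclidean distance
end

local dRowLoss = grad(rowLoss)
local params = { feature1 = torch.randn(100, 1000),
                 feature2 = torch.randn(100, 1000) }
local grads, loss = dRowLoss(params)
```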
@abdullahjamal Hi, is there an MMD (maximum mean discrepancy) implementation available for Torch?
I have used torch-autograd for MMD. Just write a forward function and it will calculate the gradient itself. I followed this link https://github.com/twitter/torch-autograd#creating-auto-differentiated-nn-modules
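For reference, here is a minimal sketch of what such an AutoCriterion can look like, assuming a simple linear-kernel MMD (the squared distance between batch means); the kernel actually used isn't shown in this thread:

```lua
require 'torch'
local grad = require 'autograd'

-- Linear-kernel MMD: squared distance between the mean feature vectors
-- of the two batches. input and target are batchSize x featureDim
-- tensors of features from the two distributions being compared.
local function mmd(input, target)
   local meanDiff = torch.sum(input, 1) / input:size(1)
                  - torch.sum(target, 1) / target:size(1)
   return torch.sum(torch.cmul(meanDiff, meanDiff))
end

local autoMMD = grad.nn.AutoCriterion('AutoMMD')(mmd)

-- Used like any nn criterion: forward returns the loss, backward the
-- gradient w.r.t. the input.
local x = torch.randn(16, 100)
local y = torch.randn(16, 100)
local loss = autoMMD:forward(x, y)
local gradInput = autoMMD:backward(x, y)
```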
@abdullahjamal Hi, can you share the MMD loss calculation code with me? Thanks :)
Hi, I'm having a problem running a loss function with torch-autograd. A snippet of the code is given below. The loss function is defined as:

```lua
local autoEntropy = grad.nn.AutoCriterion('AutoEntropy')(entropy)
local autoMMD = grad.nn.AutoCriterion('AutoMMD')(mmd)
criterion:add(class_loss):add(autoEntropy):add(loss1)
```

where autoMMD is added to loss1 via loss1:add(autoMMD) inside the loop.
criterion and loss1 are nn.ParallelCriterion() instances and the model is a simple nn model:

```lua
local output = model:forward({traindata, unlabeldata})
local err = criterion:forward(cr_inputs, targets)
local gradout = criterion:backward(cr_inputs, targets)
local gradInput = model:backward({traindata, unlabeldata}, gradout)
```
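For context, a self-contained sketch of the setup described above; entropy, mmd, and class_loss are stand-ins, since their real definitions aren't posted in this thread:

```lua
require 'nn'
local grad = require 'autograd'

-- Stand-in criteria; the originals aren't shown here.
local function entropy(input, target)          -- target is unused here
   local p = torch.exp(input)                  -- assumes input holds log-probabilities
   return -torch.sum(torch.cmul(p, input))
end

local function mmd(input, target)
   local d = torch.sum(input, 1) / input:size(1)
           - torch.sum(target, 1) / target:size(1)
   return torch.sum(torch.cmul(d, d))
end

local class_loss  = nn.ClassNLLCriterion()
local autoEntropy = grad.nn.AutoCriterion('AutoEntropy')(entropy)
local autoMMD     = grad.nn.AutoCriterion('AutoMMD')(mmd)

-- Nested ParallelCriterion, as in the snippet above: loss1 collects the
-- MMD terms, and the outer criterion combines all three losses. With
-- this structure, cr_inputs and targets must be tables whose layout
-- mirrors the criteria added here.
local loss1 = nn.ParallelCriterion()
loss1:add(autoMMD)

local criterion = nn.ParallelCriterion()
criterion:add(class_loss):add(autoEntropy):add(loss1)
```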