Closed AYG-DL closed 6 years ago
I have no idea about it. Could you show me your modified code?
```python
import time

import torch
import torchvision.transforms as transforms

from light_cnn import LightCNN_9Layers  # model definition from this repo


def main(image, batchSize):
    timer0 = time.time()
    model = LightCNN_9Layers(num_classes=79077)
    model.eval()
    model = torch.nn.DataParallel(model).cuda()
    checkpoint = torch.load("/scratch/user/ayu2224/CV/De-Occlude/dcgan_code_files/LightCNN/LightCNN_9Layers_checkpoint.pth.tar")
    model.load_state_dict(checkpoint['state_dict'])
    timer1 = time.time()
    transform = transforms.Compose([transforms.ToTensor()])
    count = 0
    input = torch.zeros(batchSize, 1, 128, 128)
    input = input.cuda()
    image.resize_as_(input)
    input = image
    input_var = torch.autograd.Variable(input, volatile=True)
    _, features = model(input_var)
    timer2 = time.time()
    print("Checkpoint: ", timer1 - timer0)
    print("Model: ", timer2 - timer1)
    return features.data.cuda()
```
Here is the snippet of the code that I am using. The batch size I used was 64, and I load the pretrained model provided by you.
I really would like advice on this as I have a deadline to meet.
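One thing worth checking in the snippet above (a sketch of a suspected cause, not a confirmed diagnosis): `resize_as_` never copies or pads data. If `image` holds fewer elements than `batchSize * 1 * 128 * 128` (e.g. a final partial batch), the extra elements are whatever happened to be in memory, which would look exactly like "erroneous features after a certain number of images". A minimal CPU-only illustration, with hypothetical sizes:

```python
import torch

batch_size = 64

# Suppose the last batch has only 50 real images
real = torch.rand(50, 1, 128, 128)

# resize_ / resize_as_ only changes the tensor's shape over its storage:
# growing it exposes uninitialized memory for the extra elements.
bad = real.clone()
bad.resize_(batch_size, 1, 128, 128)  # elements 50..63 are undefined garbage

# Safer: allocate a zero batch and copy the real images in explicitly
safe = torch.zeros(batch_size, 1, 128, 128)
safe[:real.size(0)].copy_(real)
```

With the explicit `copy_`, the tail of the batch is well-defined zeros instead of garbage, so any bad features would have to come from somewhere else.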
Hi,
I am sending a batch of 64 images and trying to extract the features all at once from the model. However, after a certain number of images I get erroneous image features. Any idea why that could be? I modified the extract_features.py file to do this.
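For reference, a minimal sketch of batched extraction that avoids manual batch padding entirely: split the full image tensor into fixed-size chunks with `torch.split` (the last chunk is simply smaller) and concatenate the per-chunk outputs. The model call is stubbed out with a hypothetical `fake_model`; in the real script it would be the LightCNN forward pass.

```python
import torch

def extract_features(images, model_fn, batch_size=64):
    """Run model_fn over fixed-size chunks; the final chunk may be smaller."""
    feats = [model_fn(chunk) for chunk in torch.split(images, batch_size)]
    return torch.cat(feats)

# Stand-in for the network: a deterministic per-image "feature"
fake_model = lambda x: x.view(x.size(0), -1).mean(dim=1, keepdim=True)

images = torch.rand(150, 1, 128, 128)   # 150 images -> chunks of 64, 64, 22
features = extract_features(images, fake_model)
```

Because the features are computed per image, chunking does not change the result, and no resizing or zero-padding of the last batch is needed.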