Fixed the issue by changing the following lines!
Change these lines to the following in the `get_omegas` function:
```python
# Index the CPU-resident constraint matrices with CPU indices, then move the result to the GPU
pos_weight = torch.mul(self.constraint_mat['beta_uD'][users.cpu()], self.constraint_mat['beta_iD'][pos_items.cpu()]).to(device)
neg_weight = torch.mul(torch.repeat_interleave(self.constraint_mat['beta_uD'][users.cpu()], neg_items.size(1)), self.constraint_mat['beta_iD'][neg_items.cpu().flatten()]).to(device)
```
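For context, the underlying error is the usual CUDA/CPU indexing mismatch: the `beta_uD`/`beta_iD` tensors live on the CPU while the batch index tensors live on the GPU. Here is a minimal sketch of the failure and the fix pattern, using made-up shapes rather than the repository's actual data:

```python
import torch

# Minimal sketch of the device mismatch, assuming the constraint matrices stay
# on the CPU while the batch indices are CUDA tensors (shapes are made up).
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

beta_uD = torch.rand(1000)                             # CPU tensor, as in constraint_mat
users = torch.randint(0, 1000, (256,), device=device)  # batch of user indices on the GPU

# beta_uD[users] raises a RuntimeError when `users` is a CUDA tensor,
# because a CPU tensor cannot be indexed with CUDA indices.
pos_weight = beta_uD[users.cpu()].to(device)  # index on the CPU, then move the result
print(pos_weight.shape, pos_weight.device)
```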
and change these lines to the following in the `cal_loss_I` function:
```python
neighbor_embeds = self.item_embeds(self.ii_neighbor_mat[pos_items.cpu()].to(device))  # len(pos_items) * num_neighbors * dim
sim_scores = self.ii_constraint_mat[pos_items.cpu()].to(device)  # len(pos_items) * num_neighbors
```
and lastly, change this line in the `test` function:
```python
# Index the CPU mask with CPU batch indices
rating += mask[batch_users.cpu()]
```
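An alternative that avoids the per-batch `.cpu()` round trips (not what the repository does, just a design option) is to move the lookup tensors onto the GPU once when the model is built, so the original indexing works unchanged. A hedged sketch with placeholder values:

```python
import torch

# Hypothetical alternative: keep the lookup tensors on the same device as the
# index tensors from the start, instead of calling .cpu() on every batch.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

constraint_mat = {
    'beta_uD': torch.rand(1000),   # placeholder values standing in for the
    'beta_iD': torch.rand(2000),   # matrices built during preprocessing
}
constraint_mat = {k: v.to(device) for k, v in constraint_mat.items()}

users = torch.randint(0, 1000, (256,), device=device)
pos_items = torch.randint(0, 2000, (256,), device=device)

# With everything on one device, the original indexing needs no .cpu()/.to(device)
pos_weight = torch.mul(constraint_mat['beta_uD'][users],
                       constraint_mat['beta_iD'][pos_items])
```

Whether this is workable depends on the constraint matrices fitting in GPU memory for the dataset in question.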
I am trying to reproduce the UltraGCN paper using this repository, training the model on a remote server with NVIDIA GPUs through a Linux SLURM scheduler. However, when I run the file, this runtime error occurs (for all datasets).
I am assuming the default GPU index should be 0, right?
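For reference on the GPU index: when SLURM grants a GPU it typically exposes it through `CUDA_VISIBLE_DEVICES`, so inside the job the first allocated GPU is addressed as index 0. A sketch of the usual selection logic (not the repository's actual config handling):

```python
import os
import torch

# Inside a SLURM job with a GPU allocation, CUDA_VISIBLE_DEVICES is usually set
# by the scheduler, and the first allocated GPU appears as cuda:0 to PyTorch.
print(os.environ.get('CUDA_VISIBLE_DEVICES'))

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)
```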