Thanks for your comment and sorry for the late reply. I believe this error only occurs when the model is run on a CPU. In this case, switching out
losses += torch.sum((pred_p - y_p)**(2 + gamma))/pred_p.shape[0]/pred_p.shape[1]
with
losses = losses + torch.sum((pred_p - y_p)**(2 + gamma))/pred_p.shape[0]/pred_p.shape[1]
seems to solve the problem. Let me know if that doesn't work for you.
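For background, PyTorch raises this RuntimeError whenever an in-place operator such as += is applied to a leaf tensor that has requires_grad=True; the out-of-place form builds a new tensor in the autograd graph instead of modifying the leaf. Below is a minimal standalone sketch of that behaviour (illustrative only, not the GEARS training loop; the tensor names are made up):

import torch

# A leaf tensor that tracks gradients, standing in for a running loss accumulator.
loss_leaf = torch.zeros(1, requires_grad=True)
term = torch.tensor([0.5])

# In-place accumulation on a leaf tensor that requires grad raises:
# RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
try:
    loss_leaf += term
except RuntimeError as err:
    print(err)

# Out-of-place accumulation creates a new (non-leaf) tensor in the graph instead.
total = loss_leaf + term
total.sum().backward()   # gradient flows back to the original leaf
print(loss_leaf.grad)    # tensor([1.])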
Hello there, I have managed to get the package installed, and I am now trying to reproduce the results in your paper using the following commands:
from gears import PertData, GEARS
# get data
pert_data = PertData('./data')
# load dataset in paper: norman, adamson, dixit.
pert_data.load(data_name = 'norman')
# specify data split
pert_data.prepare_split(split = 'simulation', seed = 1)
# get data loader with batch size
pert_data.get_dataloader(batch_size = 32, test_batch_size = 128)
# set up and train a model
gears_model = GEARS(pert_data, device = 'cpu')
gears_model.model_initialize(hidden_size = 64)
gears_model.train(epochs = 20)
When I reach the last command, gears_model.train(epochs = 20), the program throws the following error:
RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
Do you have any idea how to overcome this problem? Note: I am using cpu as the device, since I don't have CUDA. Thanks in advance.