Open · finalelement opened this issue 2 years ago
Some updates: I was able to successfully run generate_confs.py, but I had to make sure everything was placed on the CPU. I ended up making the following change in inference.py, utils.py, and model.py:

```python
# device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device = 'cpu'  # force CPU so every tensor lives on the same device
```

But I was not able to run it with the given scripts as-is. Looking forward to insights from y'all. :)
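To avoid hard-coding 'cpu' in three files, here is a small sketch of a configurable alternative (the --device flag is hypothetical, not part of the repo):

```python
# Hypothetical --device flag: pick the device once and pass it around
# instead of editing inference.py, utils.py, and model.py by hand.
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument('--device', default='cuda' if torch.cuda.is_available() else 'cpu')
args = parser.parse_args()

device = torch.device(args.device)
```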
I was able to run generate_confs.py on GPU by making a few code modifications:

```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
import cupy as np  # swapped in for `import numpy as np` so the np.* calls run on GPU
p_coords = torch.zeros([4, model.n_model_confs, 3], device=device)  # allocate directly on the target device
q_reorder = np.argsort([np.where(a.cpu() == q_idx.cpu())[0][0]
                        for a in torch.tensor(cycle_avg_indices)[q_coords_mask]])
```
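In case it helps, a torch-only sketch of the reorder line that avoids the cupy dependency; it assumes q_idx, cycle_avg_indices, and q_coords_mask are the tensors/lists used around that line in model.py:

```python
# Sketch: compute the same reorder with plain PyTorch instead of cupy.
import torch

cycle_idx = torch.tensor(cycle_avg_indices, device=q_idx.device)[q_coords_mask]
# For each masked index, find its first position inside q_idx ...
positions = (cycle_idx.unsqueeze(1) == q_idx.unsqueeze(0)).int().argmax(dim=1)
# ... then argsort those positions, mirroring the np.argsort(...) above
q_reorder = torch.argsort(positions)
```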
I also made the following changes in generate_confs.py:

```python
state_dict = torch.load(f'{trained_model_dir}/best_model.pt', map_location=device)  # remap checkpoint tensors to the chosen device
model.load_state_dict(state_dict, strict=True)
model.to(device)
data = Batch.from_data_list([tg_data]).to(device)  # move the input batch to the same device as the model
```
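For context, a minimal end-to-end sketch of the loading pattern above; GeoMol, model_parameters, trained_model_dir, and tg_data follow the names in the repo's generate_confs.py, and the exact import path and forward signature are assumptions:

```python
import torch
from torch_geometric.data import Batch
from model.model import GeoMol  # assumption: import path as in the GeoMol repo

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = GeoMol(**model_parameters).to(device)  # model_parameters loaded from the training run
state_dict = torch.load(f'{trained_model_dir}/best_model.pt', map_location=device)
model.load_state_dict(state_dict, strict=True)
model.eval()

data = Batch.from_data_list([tg_data]).to(device)
with torch.no_grad():
    model(data, inference=True, n_model_confs=10)  # assumption: inference-mode call signature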
Hello,

I am facing an issue when trying to run generate_confs.py with the given pretrained models: I keep running into the error shared below. Please share your insights, and let me know whether there is a preference between GPU and CPU when running inference. I also tried switching the model between CPU and GPU, but no luck so far.
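A generic PyTorch check (not specific to this repo) that usually pinpoints this class of error, i.e. the model and the batch living on different devices:

```python
# Print where the weights and the inputs live; a mismatch here is the usual
# cause of "expected ... cuda ... but got ... cpu" errors.
print(next(model.parameters()).device)  # device of the model weights
print(data.x.device)  # device of the node features (`x` is an assumption about the batch)
```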