peterbjorgensen / DeepDFT

Official implementation of DeepDFT model
MIT License

GPU #4

Closed: Jia-12138 closed this issue 8 months ago

Jia-12138 commented 10 months ago

This is excellent work. I have a question: if I want to train this model on an ethylene carbonate dataset, what GPU do I need for training?

peterbjorgensen commented 10 months ago

We used a single Nvidia RTX 3090 GPU. It has 24 GB of GPU memory.

If you get CUDA out-of-memory errors, you can change these lines: https://github.com/peterbjorgensen/DeepDFT/blob/a6bab4deb5cf05d9b46ae397b72253d04ea3c694/runner.py#L237-L245

The second argument, 2, is the number of molecules per iteration, and the numbers 1000 and 5000 are the number of probe points per iteration for the training and validation loaders, respectively. You could decrease the number of probe points per iteration to see if that helps.

These values should really be exposed as command-line arguments.

    # Training loader: the second positional argument (2) is the number of
    # molecules per iteration; 1000 is the number of probe points per iteration.
    train_loader = torch.utils.data.DataLoader(
        datasplits["train"],
        2,
        num_workers=4,
        sampler=torch.utils.data.RandomSampler(datasplits["train"]),
        collate_fn=dataset.CollateFuncRandomSample(args.cutoff, 1000, pin_memory=False, set_pbc_to=set_pbc),
    )
    # Validation loader: 2 molecules and 5000 probe points per iteration.
    val_loader = torch.utils.data.DataLoader(
        datasplits["validation"],
        2,
        collate_fn=dataset.CollateFuncRandomSample(args.cutoff, 5000, pin_memory=False, set_pbc_to=set_pbc),
        num_workers=0,
    )