FAIR-Chem / fairchem

FAIR Chemistry's library of machine learning methods for chemistry
https://opencatalystproject.org/

LMDBs fully deprecated after #753? #868

Open cmclausen opened 2 weeks ago

cmclausen commented 2 weeks ago

Hi there, I'm having trouble with my own LMDB datasets and BalancedBatchSampler after a recent update. It now requires the dataset to pass .metadata_hasattr("natoms"); otherwise an UnsupportedDatasetError is thrown. I have previously made graph datasets for on-the-fly inference and have amended those to accommodate the change:

from torch.utils.data import Dataset
from fairchem.core.datasets.base_dataset import DatasetMetadata


class GraphsListDataset(Dataset):
    """
    Make a list of graphs to feed into the OCP dataloader
    """

    def __init__(self, graphs_list):
        self.graphs_list = graphs_list
        # Expose per-graph atom counts so the sampler can query them
        self._metadata = DatasetMetadata([g.natoms for g in graphs_list])

    def __len__(self):
        return len(self.graphs_list)

    def __getitem__(self, idx):
        return self.graphs_list[idx]

    def metadata_hasattr(self, attr) -> bool:
        if self._metadata is None:
            return False
        return hasattr(self._metadata, attr)

    def get_metadata(self, attr, idx):
        if self._metadata is None:
            return None
        metadata_attr = getattr(self._metadata, attr)
        if isinstance(idx, list):
            return [metadata_attr[_idx] for _idx in idx]
        return metadata_attr[idx]
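For reference, the duck-typed metadata interface above can be exercised stand-alone. In this sketch, DatasetMetadata is stubbed as a NamedTuple with a natoms field (an assumption about its shape, not fairchem's actual class) and the graphs are stand-in objects:

```python
from types import SimpleNamespace
from typing import List, NamedTuple


# Stub standing in for fairchem's DatasetMetadata; the real class may differ.
class DatasetMetadata(NamedTuple):
    natoms: List[int]


class GraphsListDataset:
    """Minimal copy of the pattern above, without torch, for testing."""

    def __init__(self, graphs_list):
        self.graphs_list = graphs_list
        self._metadata = DatasetMetadata([g.natoms for g in graphs_list])

    def metadata_hasattr(self, attr) -> bool:
        return self._metadata is not None and hasattr(self._metadata, attr)

    def get_metadata(self, attr, idx):
        if self._metadata is None:
            return None
        metadata_attr = getattr(self._metadata, attr)
        if isinstance(idx, list):
            return [metadata_attr[_idx] for _idx in idx]
        return metadata_attr[idx]


# Stand-in graphs carrying only the natoms attribute the sampler cares about
graphs = [SimpleNamespace(natoms=n) for n in (12, 7, 30)]
ds = GraphsListDataset(graphs)
print(ds.metadata_hasattr("natoms"))      # True
print(ds.get_metadata("natoms", [0, 2]))  # [12, 30]
```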

However, the error also occurs when initializing training with LMDB datasets from main.py and a config file. Are LMDB datasets fully deprecated now? If not, what is the new protocol for creating them and passing them to the trainer?

Best regards, Christian


Traceback (most recent call last):
  File "/lustre/hpc/kemi/clausen/fairchem/src/fairchem/../../main.py", line 8, in <module>
    main()
  File "/lustre/hpc/kemi/clausen/fairchem/src/fairchem/core/_cli.py", line 127, in main
    runner_wrapper(config)
  File "/lustre/hpc/kemi/clausen/fairchem/src/fairchem/core/_cli.py", line 57, in runner_wrapper
    Runner()(config)
  File "/lustre/hpc/kemi/clausen/fairchem/src/fairchem/core/_cli.py", line 38, in __call__
    with new_trainer_context(config=config) as ctx:
  File "/groups/kemi/clausen/miniconda3/envs/fairchem/lib/python3.11/contextlib.py", line 137, in __enter__
    return next(self.gen)
           ^^^^^^^^^^^^^^
  File "/lustre/hpc/kemi/clausen/fairchem/src/fairchem/core/common/utils.py", line 1087, in new_trainer_context
    trainer = trainer_cls(**trainer_config)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/lustre/hpc/kemi/clausen/fairchem/src/fairchem/core/trainers/ocp_trainer.py", line 99, in __init__
    super().__init__(
  File "/lustre/hpc/kemi/clausen/fairchem/src/fairchem/core/trainers/base_trainer.py", line 208, in __init__
    self.load()
  File "/lustre/hpc/kemi/clausen/fairchem/src/fairchem/core/trainers/base_trainer.py", line 230, in load
    self.load_datasets()
  File "/lustre/hpc/kemi/clausen/fairchem/src/fairchem/core/trainers/base_trainer.py", line 333, in load_datasets
    self.train_sampler = self.get_sampler(
                         ^^^^^^^^^^^^^^^^^
  File "/lustre/hpc/kemi/clausen/fairchem/src/fairchem/core/trainers/base_trainer.py", line 286, in get_sampler
    return BalancedBatchSampler(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/lustre/hpc/kemi/clausen/fairchem/src/fairchem/core/common/data_parallel.py", line 168, in __init__
    raise error
  File "/lustre/hpc/kemi/clausen/fairchem/src/fairchem/core/common/data_parallel.py", line 165, in __init__
    dataset = _ensure_supported(dataset)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/lustre/hpc/kemi/clausen/fairchem/src/fairchem/core/common/data_parallel.py", line 110, in _ensure_supported
    raise UnsupportedDatasetError(
fairchem.core.datasets.base_dataset.UnsupportedDatasetError: BalancedBatchSampler requires a dataset that has a metadata attributed with number of atoms.
misko commented 2 weeks ago

Hi @cmclausen! On this now! It's possible something is wired slightly wrong in our code, keeping the default behavior from working :'(

I am working on resolving this and adding a test case.

In the meantime, if you are blocked, I think it might be possible for you to add

optim:
  load_balancing_on_error: warn_and_no_balance

to turn this error into just a warning.

misko commented 2 weeks ago

Our code and tests look correct; I think what you are trying should work. LMDB datasets are not deprecated, they are still supported. There were changes to batch balancing plus some code cleanup/reconsolidation, which I think is what is causing your issues. I'm sorry :'(

In order to use batch balancing (balancing systems across multiple simultaneous GPUs), your dataset needs a valid dataset._metadata value that contains the 'natoms' field. Alternatively, you can fully disable it with the following:

optim:
  load_balancing: False

The implementation you have above looks like it should not trigger the error you are getting, because you clearly define ._metadata: https://github.com/FAIR-Chem/fairchem/blob/83fd9d21c4c0430746e7d3b49d99b33f02956660/src/fairchem/core/common/data_parallel.py#L112

Can you try adding some debug statements inside of BalancedBatchSampler._ensure_supported to get the value of dataset._metadata, and most importantly find out why it does not seem to have a 'natoms' field?
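A throwaway probe along these lines could be dropped in just before the sampler is constructed. The function name and the stand-in classes are illustrative; it only uses getattr, so it does not depend on fairchem internals:

```python
from types import SimpleNamespace


def probe_metadata(dataset):
    """Print what the sampler's metadata check would see on this dataset."""
    meta = getattr(dataset, "_metadata", None)
    print("type(_metadata):", type(meta).__name__)
    natoms = getattr(meta, "natoms", None)
    print("natoms present:", natoms is not None)
    return natoms


# Stand-in datasets for demonstration:
class NoMeta:
    pass  # no _metadata at all -> probe returns None


class WithMeta:
    _metadata = SimpleNamespace(natoms=[3, 5])


probe_metadata(NoMeta())
probe_metadata(WithMeta())
```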

Hope this helps, please let us know how it goes!

cmclausen commented 2 weeks ago

Hi @misko, Thanks for your attention on this.

I tried load_balancing and load_balancing_on_error, but in some scenarios BalancedBatchSampler._ensure_supported throws an AttributeError if _metadata is missing entirely, and that currently cannot be bypassed.

Is it correct that _metadata is supposed to originate from an .npz file, as per core.datasets.base_dataset? Previously I just converted my structures using AtomsToGraphs from fairchem.core.preprocessing, added the relevant attributes, and saved them to an LMDB.
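If the .npz route is the intended one, a metadata file with a per-sample atom-count array can be written with plain numpy. The key name natoms is an assumption based on the error message and field name discussed above; check core.datasets.base_dataset for the exact contract:

```python
import os
import tempfile

import numpy as np

# Number of atoms for each entry in the LMDB, in index order (example values)
natoms = np.array([12, 7, 30], dtype=np.int64)

path = os.path.join(tempfile.mkdtemp(), "metadata.npz")
np.savez(path, natoms=natoms)

# Verify the round-trip
loaded = np.load(path)
print(list(loaded.keys()))  # ['natoms']
print(loaded["natoms"])     # [12  7 30]
```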

xianghumeng commented 6 days ago

Hi @cmclausen, I have the same trouble when converting with AtomsToGraphs to LMDB; running main.py --mode predict raises the AttributeError. Have you solved the problem? @misko, I tested load_balancing: false and it does not work. This is my config.yml:

dataset:
  test:
    a2g_args:
      r_energy: false
      r_forces: false
    format: lmdb
    src: /home/train_test_data/graph_input/data.0000.lmdb
evaluation_metrics:
  metrics:
    energy:

xianghumeng commented 1 day ago

The package installed via pip install fair-core is not the latest version, so load_balancing and load_balancing_on_error do not work.