NVIDIA / modulus

Open-source deep-learning framework for building, training, and fine-tuning deep learning models using state-of-the-art Physics-ML methods
https://developer.nvidia.com/modulus
Apache License 2.0

[CorrDiff]: move patching logic to data loaders #448

Open · nbren12 opened 2 months ago

nbren12 commented 2 months ago
> **not for this PR since this code already exists**

Eventually, the loss should not know anything about patches and should just treat them like batches (hey, this rhymes). This can be achieved by moving the patching logic either to the dataloader or to training_loop.py; global_index can then be passed to the loss object. Let's open an issue for this refactor.

_Originally posted by @nbren12 in https://github.com/NVIDIA/modulus/pull/401#discussion_r1566566785_
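
As a rough illustration of the proposed refactor, a dataset wrapper could slice each sample into patches and return a global_index alongside each one, so the loss sees patches as ordinary batch entries. This is only a sketch with hypothetical names (`PatchDataset`, `patch_size`, `global_index`); it is not the current Modulus API:

```python
import torch
from torch.utils.data import Dataset


class PatchDataset(Dataset):
    """Hypothetical wrapper that slices each (input, target) sample into
    fixed-size patches so downstream code, including the loss, treats
    patches like batch entries. Not the actual Modulus API."""

    def __init__(self, base: Dataset, patch_size: int):
        self.base = base
        self.patch_size = patch_size
        # Assume every sample is a (C, H, W) tensor with the same shape.
        _, h, w = base[0][0].shape
        self.ny = h // patch_size
        self.nx = w // patch_size
        self.patches_per_sample = self.ny * self.nx

    def __len__(self):
        return len(self.base) * self.patches_per_sample

    def __getitem__(self, idx):
        sample_idx, patch_idx = divmod(idx, self.patches_per_sample)
        iy, ix = divmod(patch_idx, self.nx)
        x, y = self.base[sample_idx]
        p = self.patch_size
        sl = (slice(iy * p, (iy + 1) * p), slice(ix * p, (ix + 1) * p))
        # global_index records the patch's top-left corner in the full
        # field, so the loss (or a reassembly step) can use it if needed.
        global_index = torch.tensor([iy * p, ix * p])
        return x[(...,) + sl], y[(...,) + sl], global_index
```

With something like this in place, the training loop batches patches as usual and simply forwards global_index to the loss object, which no longer needs any patching logic of its own.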

tge25 commented 2 months ago

The patching operation for the regression model's output could be performed the same way if the dataloader returned coordinate values along with the patched input and target.
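
As a minimal sketch of that suggestion, assuming global_index holds the top-left corner of each patch in the full field (a hypothetical helper, not the Modulus API), the returned coordinates are enough to scatter patched outputs back into place:

```python
import torch


def unpatch(patches: torch.Tensor, global_index: torch.Tensor,
            full_shape: tuple[int, int]) -> torch.Tensor:
    """Scatter (N, C, p, p) patches back into a (C, H, W) field using the
    top-left corners stored in global_index. Hypothetical helper for
    illustration only; assumes non-overlapping patches that tile the field."""
    n, c, p, _ = patches.shape
    out = torch.zeros(c, *full_shape, dtype=patches.dtype)
    for k in range(n):
        iy, ix = global_index[k].tolist()
        out[:, iy:iy + p, ix:ix + p] = patches[k]
    return out
```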