**not for this PR since this code already exists**
Eventually, the loss should not know anything about patches and just treat them like batches (hey this rhymes). This can be achieved by moving the patching logic either to the dataloader or `training_loop.py`. `global_index` can then be passed to the loss object. Let's open an issue for this refactor.
_Originally posted by @nbren12 in https://github.com/NVIDIA/modulus/pull/401#discussion_r1566566785_
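A rough sketch of what the refactor could look like, using NumPy for illustration. Everything here is hypothetical, not existing modulus API: `patchify` stands in for patching logic that would live in the dataloader or `training_loop.py`, and `loss_fn` shows a loss that sees patches as ordinary batch entries and receives `global_index` as an argument rather than computing it itself.

```python
import numpy as np

def patchify(images, patch_size):
    """Split images into non-overlapping patches stacked along the batch
    dimension, so downstream code sees a plain batch. (Illustrative; this
    would live in the dataloader or training loop, not the loss.)"""
    b, c, h, w = images.shape
    p = patch_size
    patches = (
        images.reshape(b, c, h // p, p, w // p, p)
        .transpose(0, 2, 4, 1, 3, 5)   # (b, rows, cols, c, p, p)
        .reshape(-1, c, p, p)          # flatten patches into the batch dim
    )
    # global_index records each patch's (row, col) position in the full
    # image, so a position-aware loss never needs to know about patching.
    ys, xs = np.meshgrid(np.arange(h // p), np.arange(w // p), indexing="ij")
    global_index = np.tile(np.stack([ys, xs], axis=-1).reshape(-1, 2), (b, 1))
    return patches, global_index

def loss_fn(pred, target, global_index=None):
    """Hypothetical loss: treats patches exactly like batch entries.
    global_index is available if positional weighting is ever needed."""
    return np.mean((pred - target) ** 2)
```

With this split, the loss object stays agnostic to whether its inputs were ever patches, and the patching policy (patch size, overlap, ordering) can change without touching loss code.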