The `Normalize` class can divide by zero, e.g. when some of the state or action dimensions have a single fixed value across the dataset (so `max == min`).
This can be fixed if https://github.com/huggingface/lerobot/blob/main/lerobot/common/policies/normalize.py#L150 is changed to
`batch[key] = (batch[key] - min) / (max - min + 1e-8)`
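A minimal standalone sketch of the issue and the proposed epsilon fix (the function name `normalize_min_max` is hypothetical, not the actual LeRobot API; only the normalization formula is taken from the linked line):

```python
import torch

def normalize_min_max(batch: torch.Tensor, min_: torch.Tensor, max_: torch.Tensor) -> torch.Tensor:
    # Min-max normalization with an epsilon in the denominator.
    # When a dimension is constant across the dataset, max_ == min_ and the
    # plain (max_ - min_) denominator is 0, producing NaN; the 1e-8 guards that.
    return (batch - min_) / (max_ - min_ + 1e-8)

# A state vector whose second dimension is fixed (min == max == 1.0):
min_ = torch.tensor([0.0, 1.0])
max_ = torch.tensor([1.0, 1.0])
batch = torch.tensor([[0.5, 1.0]])

# Without the epsilon, the second dimension would be 0/0 = NaN.
out = normalize_min_max(batch, min_, max_)
assert torch.isfinite(out).all()
```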
Information
[ ] One of the scripts in the examples/ folder of LeRobot
[X] My own task or dataset (give details below)
Reproduction
This occurs when I extend the PushT / Diffusion Policy setup to a real robot, where I start by fixing the non-XY states/actions to constant values.
Expected behavior
The loss should be computable even when some state or action dimensions have a single fixed value across the entire dataset.
System Info