Shern: The “Want to know more: Why Normalize?” section by ChatGPT is pretty wrong for this workshop. For RGB data the main reason is #2 and possibly #5. #1 is really the same as #2 (the ML functions are both faster and more stable with inputs in [0, 1]). #3 and #4 aren’t valid for the min-max normalization used here, since that method wouldn’t equalize the variance between the R, G, and B channels if it differed widely in the dataset.
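A quick sketch of the point about variance (my own illustration, not workshop code): min-max scaling of pixel values to [0, 1] divides every channel by the same constant, so the ratio between channel variances is unchanged. The synthetic channel ranges below are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Fake RGB data: R spans the full range, B barely varies at all.
r = rng.uniform(0, 255, 10_000)
g = rng.uniform(50, 200, 10_000)
b = rng.uniform(100, 120, 10_000)
img = np.stack([r, g, b], axis=-1)

# Min-max normalization over the standard 0-255 pixel range.
norm = img / 255.0

var_before = img.var(axis=0)
var_after = norm.var(axis=0)

# Every channel's variance shrinks by the same factor (255**2),
# so channels with very different spreads stay very different.
print(var_before / var_after)
```

Contrast this with per-channel standardization ((x - mean) / std per channel), which is what would actually equalize the channel variances.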