We noticed that the shape of the data samples in CIFAR-10 is 32x32x3, while that in MNIST is 28x28x1. Moreover, AlexNet has far more parameters than LeNet. Combining these two observations, we are puzzled as to why, under the same protocol (e.g., Falcon), training LeNet on MNIST incurs 485.90 GB of communication while training AlexNet on CIFAR-10 incurs only 382.18 GB. Is there any optimization applied to AlexNet?
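For context, here is a rough sketch of the parameter counts behind the comparison, using the textbook LeNet-5 and AlexNet layer shapes (the exact CIFAR-10 variants benchmarked in Falcon may use different layer sizes, so treat these as ballpark figures):

```python
# Ballpark parameter counts for textbook LeNet-5 and AlexNet.
# The variants actually benchmarked in Falcon may differ in layer sizes.

def conv_params(c_in, c_out, k):
    """Weights plus biases for a k x k convolution."""
    return c_out * (k * k * c_in) + c_out

def fc_params(n_in, n_out):
    """Weights plus biases for a fully connected layer."""
    return n_out * n_in + n_out

# LeNet-5: two conv layers, three fully connected layers.
lenet = (conv_params(1, 6, 5) + conv_params(6, 16, 5)
         + fc_params(400, 120) + fc_params(120, 84) + fc_params(84, 10))

# AlexNet: five conv layers, three fully connected layers
# (final layer sized for 10 classes here).
alexnet = (conv_params(3, 96, 11) + conv_params(96, 256, 5)
           + conv_params(256, 384, 3) + conv_params(384, 384, 3)
           + conv_params(384, 256, 3)
           + fc_params(256 * 6 * 6, 4096) + fc_params(4096, 4096)
           + fc_params(4096, 10))

print(f"LeNet:   ~{lenet:,} parameters")    # on the order of 60 thousand
print(f"AlexNet: ~{alexnet:,} parameters")  # on the order of 58 million
```

So AlexNet has roughly three orders of magnitude more parameters than LeNet, which is what makes the reversed communication numbers surprising at first glance.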