Hello, I have been reading your paper "SpENCNN: Orchestrating Encoding and Sparsity for Fast Homomorphically Encrypted Neural Network Inference", but I am not very familiar with sub-block pruning. What do k1, k5, and k9 represent in the following figure? Why does removing one sub-block eliminate 6 rotations? Also, why does (b) in the figure not compute the regions that were not pruned?