SATA_Sim is a sparsity-aware energy estimation framework for Backpropagation-Through-Time (BPTT)-based Spiking Neural Network (SNN) training and inference.
Why does the SDCL architecture not account for LSTM? If we want to include the energy consumed when computing LSTM blocks, how should the code be designed?
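As one possible starting point, below is a minimal sketch of estimating LSTM-block energy by operation counting, assuming the same style of per-operation cost model (per-MAC, per-multiply, per-add energies) that SATA_Sim applies to its other layer types. The function name `lstm_layer_energy` and the default energy values (illustrative 45 nm FP32 figures) are assumptions for this sketch, not part of SATA_Sim's API.

```python
# Hypothetical sketch, not part of SATA_Sim: estimate the forward-pass energy of one
# LSTM layer by counting its arithmetic operations and weighting them with
# per-operation energy costs. Default costs are illustrative 45 nm FP32 numbers.

def lstm_layer_energy(input_size, hidden_size, timesteps, batch_size=1,
                      e_mac=4.6e-12, e_mult=3.7e-12, e_add=0.9e-12):
    """Return an estimated forward-pass energy (Joules) for one LSTM layer."""
    # Gate matmuls per timestep: 4 gates, each producing hidden_size outputs
    # from the concatenated [input, hidden] vector of length (input_size + hidden_size).
    macs_per_step = 4 * hidden_size * (input_size + hidden_size)

    # Elementwise state update per hidden unit per timestep:
    # 3 multiplies (f*c, i*g, o*tanh(c)) and 1 add (new cell state).
    # Activation-function costs are ignored in this coarse estimate.
    mults_per_step = 3 * hidden_size
    adds_per_step = 1 * hidden_size

    steps = timesteps * batch_size
    return (macs_per_step * e_mac
            + mults_per_step * e_mult
            + adds_per_step * e_add) * steps


if __name__ == "__main__":
    # Example: a 128 -> 256 LSTM layer unrolled over 16 timesteps.
    print(f"Estimated LSTM energy: {lstm_layer_energy(128, 256, 16):.3e} J")
```

If the LSTM inputs were binarized spikes, the multiplies in the input-to-gate path could be replaced by cheaper additions and scaled by the measured spike sparsity, which would mirror how the framework discounts sparse activity elsewhere; the sketch above does not apply that discount.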