GPGPU-Sim provides a detailed simulation model of contemporary NVIDIA GPUs running CUDA and/or OpenCL workloads. It includes support for features such as TensorCores and CUDA Dynamic Parallelism, as well as a performance visualization tool, AerialVision, and an integrated energy model, GPUWattch.
Hello, I am trying to use the GPUWattch/McPAT model to estimate the area of GPU SMs with varying numbers of execution units. According to the manual at http://gpgpu-sim.org/gpuwattch/#4_2_Configuration_Options, the (ALU, MUL, FPU)_per_core parameters in the XML represent the SIMD width of the respective units, and they appear to be set to 32, 4, and 32 respectively for most configurations, regardless of each configuration's num(int, sp, dp, sfu)_units settings (as defined in their gpgpusim.config files). However, according to both my measurements and the source code, no other XML option appears to affect the reported area of the corresponding functional units. My question is: when I, for example, double the number of integer execution units, should I also double the ALU_per_core value in the XML to model the change appropriately, or is there a more suitable way? Thank you for your time.
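To make the question concrete, this is roughly the change I have in mind. The numbers are purely illustrative (a hypothetical baseline of 4 INT units doubled to 8), and I am assuming the relevant XML entry is the McPAT-style <param> line shown below:

    # gpgpusim.config (illustrative values only)
    -gpgpu_num_int_units 8          # doubled from a hypothetical baseline of 4

    <!-- gpuwattch *.xml: the matching edit I am unsure about -->
    <param name="ALU_per_core" value="64"/>  <!-- doubled from the usual 32? -->
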