Closed by suyashbakshi 2 years ago
The CSVs provided in set_of_table_templates contain a limited set of placeholder components with specific attributes, which might not match the components in your architecture.
For example, the intmac component in your architecture is 8b, whereas the provided intmac CSV only contains a 16b MAC: https://github.com/Accelergy-Project/accelergy-table-based-plug-ins/blob/master/set_of_table_templates/data/intmac.csv. Since the attributes do not match, the Accelergy backend will not use the numbers from the provided tables. To test whether the plug-in can be triggered, you can try adding entries for your intmac
with the desired attributes (with dummy values if you do not have them available).
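For instance, an added row might look like the sketch below. The column names and values here are illustrative only; the exact schema must match the header row of the shipped intmac.csv, so copy that header and add rows whose attribute columns match your architecture's component attributes (8b in this case), with dummy energy values if real numbers are not yet available.

```csv
technology,datawidth,action,energy
40,8,mac_random,1.0
40,8,mac_gated,0.1
```

Once a row's attributes match the component's attributes in the architecture description, the table-based plug-in should report a valid (non-zero) accuracy for that component and be selected over the dummy table.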
At a high level, the table-based plug-in is not a mature estimator like the CACTI plug-in; it serves more as a template to guide users in populating the tables with their own data.
Thanks. I did not realize the bitwidths differed; after matching the bitwidths of the components in the architecture and in the table, the plug-in did get triggered.
Hello, I installed the table-based plug-in by following the steps in the README. However, when I run the `accelergy` command on the attached architecture, the ERT_summary.yaml reports that the estimates for "RegisterFile" and "MACC" are still being taken from the "dummy_table".
I understand that the plug-in contains classes for "regfile" and "intmac" in the "set_of_table_templates/" directory, which match the class names I've used in this architecture.
Further, the accelergy log shows that Accelergy does identify this plug-in. But as you can see from the attached ERT_summary, the estimator used for "RegisterFile" and "MACC" is the "dummy_table".
Note that I also have the accelergy-cacti plug-in installed, which gets correctly identified by accelergy, and CACTI is used as an estimator for "GlobalBuffer" and "MainMemory".
I'd appreciate it if you could help me with this. Attachments: arch.txt, ERT_summary.txt, accelergy_output.log
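For context, a minimal Accelergy architecture fragment using these class names might look like the sketch below (component names and attribute values are illustrative, not copied from the attached arch.txt; the version number and attribute names should follow your Accelergy release's architecture description format):

```yaml
architecture:
  version: 0.3
  subtree:
    - name: system
      local:
        - name: RegisterFile
          class: regfile          # must match a component class in the plug-in's tables
          attributes:
            datawidth: 8
        - name: MACC
          class: intmac           # attributes must match a row in intmac.csv
          attributes:
            datawidth: 8
```

If the attribute values here (e.g. an 8b datawidth) do not match any row in the corresponding CSV, the table-based plug-in declines to estimate the component and Accelergy falls back to the dummy table.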