TILOS-AI-Institute / MacroPlacement

Macro Placement - benchmarks, evaluators, and reproducible results from leading methods in open source
BSD 3-Clause "New" or "Revised" License

Benchmarking against IBM-MSwPins, Faraday, MMS benchmarks #27

Open adyasaurabhn opened 2 years ago

adyasaurabhn commented 2 years ago

Very interesting project. Have you considered benchmarking against the IBM-MSwPins and Faraday benchmarks released at ICCAD 2004? http://vlsicad.eecs.umich.edu/BK/ICCAD04bench/ Comparisons on the modern mixed-size placement (MMS) benchmarks would be interesting too: https://home.engineering.iastate.edu/~cnchu/pubs/c53.pdf

The mixed-size placement benchmarks have up to 2.4M standard cells and thousands of macros. The macro placement complexity lies not just in the number of macros but also in their varied sizes and aspect ratios. Since the RL cost function is also primarily HPWL, an HPWL/runtime comparison for a fully legal placement (macros + standard cells) across the various techniques should be a good metric. HPWL, as a first-order metric, has been shown to correlate well with final routed wirelength.
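For concreteness, here is a minimal sketch of how the HPWL metric proposed above could be computed over a legal placement. The nets-to-pin-locations mapping is a hypothetical input format, not tied to any particular benchmark parser:

```python
def hpwl(nets):
    """Half-perimeter wirelength summed over all nets of a placement.

    `nets` maps a net name to a list of (x, y) pin locations; this input
    format is a placeholder, not any benchmark's actual file format.
    """
    total = 0.0
    for pins in nets.values():
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        # Half perimeter of the net's pin bounding box.
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Example: two nets with made-up pin locations.
example = {
    "n1": [(0.0, 0.0), (4.0, 2.0), (1.0, 5.0)],
    "n2": [(2.0, 2.0), (3.0, 3.0)],
}
print(hpwl(example))  # (4 + 5) + (1 + 1) = 11.0
```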

ZhiangWang033 commented 2 years ago

Hi adyasaurabhn, we did consider these testcases when we started this project. One problem is that Circuit Training relies on running physical-aware synthesis to generate the clustering, and we are not sure how to run physical-aware synthesis on these benchmarks. If you have any ideas about this, please let us know. Thanks.

abk-tilos commented 2 years ago

@adyasaurabhn -- Thanks for your suggestion. I will discuss with @ZhiangWang033 and others. (Obviously we can get a placement to seed (x,y) locations for macro and stdcell instances by means other than commercial physical synthesis.) The main reason we have not proposed to study ICCAD04 and other "venerable" testcases is that it's not possible to produce "Nature paper Table 1" metrics (WNS/TNS, power, routed WL) with them. This said, your point about proxy cost being a function of HPWL, density and congestion (i.e., no timing or power or detailed routing metrics) has been made in many other discussions. Please hang on while we assess feasibility of this study and resolve other details (e.g., which versions of which benchmarks are best to study with limited compute resources). Thanks again.
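For concreteness, the proxy cost referenced above has the general weighted-sum shape sketched below (HPWL plus density and congestion terms, with no timing, power, or detailed-routing metrics). The weights and the example component values are illustrative placeholders, not Circuit Training's exact settings:

```python
def proxy_cost(wl_cost: float, density_cost: float, congestion_cost: float,
               lam: float = 0.5, gamma: float = 0.5) -> float:
    """Weighted-sum proxy cost: WL + lam * density + gamma * congestion.

    lam and gamma are placeholder weights for illustration only.
    """
    return wl_cost + lam * density_cost + gamma * congestion_cost

# Example with made-up normalized component values.
print(proxy_cost(wl_cost=0.12, density_cost=0.30, congestion_cost=0.08))
# 0.12 + 0.5 * 0.30 + 0.5 * 0.08 = 0.31
```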

adyasaurabhn commented 2 years ago

Thank you @abk-tilos @ZhiangWang033. Looking forward to this study. Agreed that the power of an RL-based solution is in modeling the hard-to-optimize metrics like congestion, WNS, TNS, DRCs, and power.

sakundu commented 1 year ago

We are sorry for the delay in responding, and thank you for your suggestion to run the ICCAD04 testcases.

Here is our current status related to the ICCAD04 testcases:

Please let us know if you have any questions or suggestions.

adyasaurabhn commented 1 year ago

Thank you for the update. Looking forward to the CT results on these benchmarks. A few comments: for ICCAD04, all designs have 20% whitespace. It is possible that the different methods (CT, RePlAce, SA + RePlAce, the CDNS mixed-size placer) perform differently at different whitespace levels. One suggestion would be to vary the amount of whitespace (e.g., 20%, 30%, 40%) for each design by changing its core area, as sketched below. This would show whether there is sensitivity to whitespace for the different algorithms. In addition, the variance of results (QoR) across different starting seeds would tell us the stability of each algorithm on this problem. Thanks
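To make the suggested whitespace sweep concrete: since whitespace = 1 - (total instance area / core area), the core area for a target whitespace fraction follows directly. A minimal sketch (the square-core assumption and the total instance area are ours, purely for illustration):

```python
import math

def core_for_whitespace(total_inst_area: float, whitespace: float):
    """Return (core_area, square_side) for a target whitespace fraction.

    whitespace = 1 - total_inst_area / core_area, so
    core_area = total_inst_area / (1 - whitespace).
    """
    core_area = total_inst_area / (1.0 - whitespace)
    return core_area, math.sqrt(core_area)

# Sweep the suggested 20% / 30% / 40% whitespace points for a design
# whose instances total 1.0e6 area units (a made-up value).
for ws in (0.20, 0.30, 0.40):
    area, side = core_for_whitespace(1.0e6, ws)
    print(f"whitespace={ws:.0%}: core_area={area:.3e}, side={side:.1f}")
```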