Closed master-alileo closed 5 months ago
Dear Alileo,
Thanks a lot for your question.
I'm not quite sure what your question is. The placement results can be obtained directly after each action trajectory; each macro's position can be recovered from its concrete action (x, y).
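A minimal sketch of that recovery step, assuming each action is a flattened index into a square placement grid; the grid size, chip dimensions, and function name below are illustrative, not taken from the repo:

```python
# Sketch: map a flattened grid action back to a macro's chip coordinates.
# Assumptions (illustrative, not from the repo): actions index a
# grid_size x grid_size canvas spanning (0, 0) to (chip_w, chip_h).

def action_to_xy(action: int, grid_size: int,
                 chip_w: float, chip_h: float) -> tuple[float, float]:
    row, col = divmod(action, grid_size)
    x = col / grid_size * chip_w
    y = row / grid_size * chip_h
    return x, y

# Example: action 37 on a 32 x 32 grid over a 10000 x 10000 chip.
print(action_to_xy(37, grid_size=32, chip_w=10000.0, chip_h=10000.0))
```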
Dear author, I would like to obtain macro layout results similar to adaptec1.pl. How can I achieve this?

Hello, I want to ask: have you managed to do this? I was wondering if you could share it.
Dear huichengyu,
I did not save the raw placement results because reinforcement learning is stochastic by nature; its results differ from run to run. But you can obtain similar .pl files, with similar HPWL results, using our code.
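For context, HPWL is half-perimeter wirelength: the sum, over all nets, of the half-perimeter of each net's pin bounding box. A minimal sketch of the metric (the `nets` data structure is illustrative, not the repo's API):

```python
# Sketch: compute half-perimeter wirelength (HPWL) from pin positions.
# `nets` maps each net name to the (x, y) positions of its pins;
# this data structure is a stand-in, not the repo's actual format.

def hpwl(nets: dict[str, list[tuple[float, float]]]) -> float:
    total = 0.0
    for pins in nets.values():
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        # Half-perimeter of the net's bounding box.
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

print(hpwl({"n0": [(0.0, 0.0), (30.0, 10.0)], "n1": [(5.0, 5.0), (5.0, 25.0)]}))
```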
What do I need to change if I want to get a similar file? Following the README, I only get the corresponding log file.
I obtain the position information of the macros through get_remain_returns, then scale and output them into a .pl file. @huichengyu
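A sketch of that scale-and-write step, assuming Bookshelf-style .pl output like adaptec1.pl; `macro_positions` and the scale factors below are hypothetical stand-ins for whatever get_remain_returns actually yields:

```python
# Sketch: scale grid positions and write a Bookshelf-style .pl file.
# `macro_positions` (name -> grid (gx, gy)) and the scale factors are
# illustrative, not the repo's actual output of get_remain_returns.

def write_pl(macro_positions: dict[str, tuple[int, int]],
             scale_x: float, scale_y: float, path: str) -> None:
    with open(path, "w") as f:
        f.write("UCLA pl 1.0\n\n")
        for name, (gx, gy) in macro_positions.items():
            # Scale grid coordinates back to chip coordinates.
            x, y = gx * scale_x, gy * scale_y
            f.write(f"{name}\t{x:.1f}\t{y:.1f}\t: N\n")

write_pl({"o0": (3, 5), "o1": (10, 2)}, scale_x=16.0, scale_y=16.0,
         path="adaptec1_macros.pl")
```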