Chenghao-Yang / QoR-Prediction


Code #1

Open sjyjytu opened 1 year ago

sjyjytu commented 1 year ago

hello~ Thank you for your great work! Did your submission go well? I would appreciate it if you could release your code soon. Thanks a lot!

Chenghao-Yang commented 1 year ago

The related code is available at https://github.com/Chenghao-Yang/awesome-ml4ls. I have also added code that uses reinforcement learning for logic optimization. I hope it is useful for you. If you have any questions, please feel free to ask.

sjyjytu commented 1 year ago

Thank you very much. The code is neat and comfortable to read, and it helps me a lot. I notice that you extend the code of "DRiLLS" and "Exploring Logic Optimizations with Reinforcement Learning and Graph Convolutional Network" (GraphRL) for logic optimization with RL. I have also done some research on these two papers and their code. However, when I reproduce GraphRL, I find that the GCN plays a very small role in improving performance (i.e., removing it gives similar results). I wonder whether you have any new findings from applying the pretrained graph models from the QoR prediction task to the logic optimization task. Thank you very much!

Chenghao-Yang commented 1 year ago

Sorry it took me so long to see this. First, the prevailing explanation for why the graph neural network does not play a significant role is, as you said: logic transformations in abc, such as rewrite, operate on subgraphs, while graph neural networks such as GCN aggregate features over the full graph. As a result, networks like GCN cannot learn a state representation of the DAG that matches these local transformations well. Therefore, if we can build subgraph-level learning, I believe graph neural networks will make it possible for reinforcement learning to generalize in logic synthesis. In addition, based on some of my earlier observations, although graph neural networks do not improve optimization much for an agent that iterates over many episodes on one circuit, they do seem to accelerate exploration of unseen circuits by transferring what was learned from optimizing other circuits. Finally, I have not tried applying the pretrained QoR model to RL.
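To make the "full-graph aggregation" point concrete, here is a minimal NumPy sketch of one GCN propagation step over a toy AIG-like DAG. Everything here (the adjacency matrix, feature sizes, mean readout) is invented for illustration and is not from the QoR-Prediction or GraphRL code; it only shows that each node's embedding, and hence the RL state, mixes information from the whole graph, whereas an abc rewrite touches only a small subgraph.

```python
import numpy as np

# Toy AIG-like DAG with 5 nodes; A[i, j] = 1 means an edge i -> j
# (fanin to fanout). Purely illustrative, not a real circuit.
A = np.array([
    [0, 0, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
], dtype=float)

def gcn_layer(A, H, W):
    """One full-graph GCN step: symmetric-normalized aggregation over
    ALL nodes, followed by a linear transform and ReLU."""
    A_hat = A + A.T + np.eye(A.shape[0])        # undirected + self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))    # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 4))   # initial per-node features
W = rng.normal(size=(4, 4))   # weights (random stand-in for learned ones)

H1 = gcn_layer(A, H, W)
state = H1.mean(axis=0)       # graph-level state vector for the RL agent
print(state.shape)            # (4,)
```

Because the readout averages over every node, a local rewrite of a few AND gates barely moves this state vector, which is one plausible reason the GCN adds little on a single circuit while still helping transfer across circuits.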