alexfrom0815 / Online-3D-BPP-PCT

Code implementation of "Learning Efficient Online 3D Bin Packing on Packing Configuration Trees". We propose to enhance the practical applicability of online 3D Bin Packing Problem (BPP) via learning on a hierarchical packing configuration tree which makes the deep reinforcement learning (DRL) model easy to deal with practical constraints and well-performing even with continuous solution space.

The network structure #9

Closed TD-Jia closed 2 years ago

TD-Jia commented 2 years ago

Thank you very much for your selfless sharing. I noticed that the paper does not show the details of the network structure, but your Online-3D-BPP-DRL (https://github.com/alexfrom0815/Online-3D-BPP-DRL) paper does show a deep reinforcement learning network structure based on the ACKTR algorithm. Is the deep reinforcement learning network structure the same in this PCT paper? Or can you suggest a quick way to see the structure of the PCT network? Thanks in advance for your reply. Best wishes to you!

alexfrom0815 commented 2 years ago

Hi, thanks for your attention. In the PCT work, we use a graph attention network to extract features of the problem. This part of the code is mainly integrated in 'attention_model.py' and 'graph_encoder.py'. Unlike PCT, our previous work mainly uses a CNN to extract problem features.
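For anyone who wants a quick feel for the mechanism inside 'graph_encoder.py' before reading the repo, here is a minimal single-head graph-attention layer in the style of GAT. This is an illustrative, self-contained sketch (plain NumPy, fully connected node set), not the actual code from the repository:

```python
import numpy as np

def graph_attention(h, W, a, leaky_slope=0.2):
    """One single-head GAT-style attention layer over a fully connected node set.

    h: (N, F) node features; W: (F, F') shared projection; a: (2*F',) attention vector.
    Illustrative sketch only -- the repo uses a multi-head PyTorch implementation.
    """
    z = h @ W                                   # (N, F') projected node features
    N = z.shape[0]
    # pairwise attention logits e_ij = LeakyReLU(a^T [z_i || z_j])
    e = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            s = a @ np.concatenate([z[i], z[j]])
            e[i, j] = s if s > 0 else leaky_slope * s
    # softmax over each node's neighbors (here: all nodes)
    e -= e.max(axis=1, keepdims=True)
    alpha = np.exp(e)
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ z                            # (N, F') attention-weighted aggregation

rng = np.random.default_rng(0)
out = graph_attention(rng.normal(size=(5, 4)),
                      rng.normal(size=(4, 3)),
                      rng.normal(size=(6,)))
print(out.shape)  # (5, 3)
```

In PCT this kind of attention is what lets the model consume a variable-sized set of tree nodes instead of the fixed grid a CNN would require.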

TD-Jia commented 2 years ago

Hello, thank you very much for your reply. I now see the difference, and the connection, between the graph neural network in the PCT paper and the CNN in the DRL paper. Regarding the deep reinforcement learning setup in the PCT paper, I noticed that you still use the ACKTR algorithm and training scheme from the DRL paper. Have you tried other reinforcement learning algorithms such as TRPO, SAC, or ACER? Or are there clear reasons these algorithms are inferior to ACKTR, so that they are not worth trying?


JSA-458 commented 2 years ago


Dear author: Thanks for your sharing! I would like to ask about the network structure of the DRL part. Which part of the project is that code mainly integrated into? Is the DRL structure the same as in your previous work from 2021? Thanks for your sharing again! Best wishes to you!

alexfrom0815 commented 2 years ago


In this work, we use a graph attention network (GAT) to extract features of the problem. This part of the code is mainly integrated in 'attention_model.py' and 'graph_encoder.py'. Different from PCT, our previous work from 2021 mainly uses a CNN to extract problem features.

alexfrom0815 commented 2 years ago


Our experimental results show that ACKTR works best among mainstream RL algorithms in this setting. But we also encourage you to try other state-of-the-art RL algorithms; you may get different, or even better, results.
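For readers comparing these algorithms: TRPO, ACER, and ACKTR all build on the same advantage actor-critic gradients and differ mainly in how the update is constrained or preconditioned (ACKTR preconditions with a Kronecker-factored Fisher approximation, K-FAC). A minimal sketch of the shared core, using a hypothetical linear softmax parameterization rather than anything from the repo:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def actor_critic_grads(theta, w, s, action, reward, next_value, gamma=0.99):
    """One advantage actor-critic gradient step for a linear softmax policy.

    theta: (F, A) policy weights; w: (F,) value weights; s: (F,) state features.
    ACKTR would additionally precondition these gradients with K-FAC;
    this sketch returns the plain gradients only.
    """
    value = w @ s
    advantage = reward + gamma * next_value - value   # one-step TD advantage
    probs = softmax(s @ theta)                        # pi(.|s)
    # grad of log pi(action|s) for a linear softmax policy
    grad_logp = np.outer(s, -probs)
    grad_logp[:, action] += s
    policy_grad = advantage * grad_logp               # ascend expected return
    value_grad = advantage * s                        # reduce TD error
    return policy_grad, value_grad

pg, vg = actor_critic_grads(np.zeros((2, 2)), np.zeros(2),
                            np.array([1.0, 0.0]), action=0,
                            reward=1.0, next_value=0.0)
print(pg.shape, vg)  # (2, 2) [1. 0.]
```

Which constraint or preconditioner works best is largely empirical, which is why trying other algorithms on this problem is still worthwhile.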

TD-Jia commented 2 years ago



Thanks for your reply. Best wishes to you!