cvxgrp / cvxpylayers

Differentiable convex optimization layers
Apache License 2.0

How to increase utilization of available computing power? #139

Open dmayfrank opened 1 year ago

dmayfrank commented 1 year ago

Hello, first of all, thank you very much for the very user-friendly package! Great work!

I am currently training a deep reinforcement learning agent that has a differentiable optimization layer as part of its policy. In principle this works fine, but training the agent takes a very long time because the available computing resources are not used efficiently. When I use the GPU as the PyTorch device, only around 10% of its capacity is used; on the CPU it is around 40%. When I run the same code with an agent that does not contain the differentiable optimization layer, utilization is basically 100%. Of course, I tried increasing the batch size, but this does not change anything.
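
For context, my setup follows the usual batched cvxpylayers pattern, roughly like the sketch below (the concrete problem, dimensions, and device are placeholders, not my actual policy):

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# Illustrative DPP-compliant problem with batched parameters.
n, m, batch = 20, 10, 64
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
problem = cp.Problem(cp.Minimize(cp.pnorm(A @ x - b, p=1)), [x >= 0])
layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])

# Parameters live on the GPU, but as far as I can tell the underlying
# conic solves are still dispatched per problem to a CPU solver via diffcp.
device = "cuda" if torch.cuda.is_available() else "cpu"
A_t = torch.randn(batch, m, n, device=device, requires_grad=True)
b_t = torch.randn(batch, m, device=device, requires_grad=True)
x_star, = layer(A_t, b_t)
x_star.sum().backward()
```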

Do you have any ideas what I could do to resolve this issue? I saw that the qpth package (https://github.com/locuslab/qpth) offers batched solving of QPs instead of multiprocessing via a pool, so maybe switching to that package would be an option? However, given the seemingly more active development of cvxpylayers and its user-friendliness, I would like to stick with cvxpylayers if possible.
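
If I understand the qpth README correctly, the whole batch would go through a single QPFunction call, something like the sketch below. The QP data here is made up purely to illustrate the batched interface (and assumes the optimization in my policy could be rewritten as a QP); the empty tensors stand in for absent equality constraints:

```python
import torch
from qpth.qp import QPFunction

# Batched QP:  minimize 0.5 z^T Q z + p^T z   s.t.  G z <= h   (no equality constraints)
batch, n, m = 64, 10, 5
L = torch.randn(batch, n, n)
Q = L @ L.transpose(1, 2) + 1e-3 * torch.eye(n)        # positive definite per batch element
p = torch.randn(batch, n, requires_grad=True)
G = torch.randn(batch, m, n)
z0 = torch.randn(batch, n)
h = (G @ z0.unsqueeze(-1)).squeeze(-1) + torch.rand(batch, m)  # keeps each QP feasible
e = torch.empty(0)                                     # placeholder for A z = b

z_star = QPFunction(verbose=False)(Q, p, G, h, e, e)   # shape (batch, n), differentiable
z_star.sum().backward()
```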

Thank you very much for your help!

gy2256 commented 9 months ago

I'm having the exact same issue. Have you tried using qpth?