bethleegy opened this issue 2 years ago
I have the same problem when I try to allocate 32 GB of data on 2 A30 GPUs (24 GB each), whether using both GPUs or a single one. I worked around it by using the CPU instead, although that is certainly slower.
You can put this at the very beginning of your code:

```python
import jax
jax.config.update('jax_platform_name', 'cpu')
```
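If you'd rather stay on the GPU, another thing worth trying (this is standard JAX behavior, not specific to this repo) is disabling JAX's default memory preallocation, which otherwise reserves most of the GPU's memory up front. It won't help if the model genuinely needs more memory than the card has, but it can avoid spurious OOMs from preallocation. These environment variables must be set before `jax` is imported:

```python
import os

# Must be set BEFORE importing jax, or they have no effect.
os.environ['XLA_PYTHON_CLIENT_PREALLOCATE'] = 'false'  # allocate on demand instead of up front
os.environ['XLA_PYTHON_CLIENT_MEM_FRACTION'] = '0.9'   # allow up to 90% of GPU memory

# import jax  # only import jax after setting the variables above
```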
Thank you! I tried that (I also have Colab Pro), but now it says "Your session crashed after using all available RAM." Is the only workaround to purchase more RAM through Colab?
Unfortunately... unless you get a GPU with a larger amount of memory, there isn't an easy way to reduce memory usage :(
We tried implementing crops during design, but it didn't work too well.
A final option would be to use "design_semigreedy" or "_design_mcmc" (the latter might be broken; it's something I implemented but haven't tested yet). These do not backprop through the model, and backprop requires roughly 2X more memory.
@sokrypton I think you can use partial hallucination with a template to crop the target protein, then design the binder against the resulting protein by providing hotspots away from the hallucinated regions. Note that you need to set template=True in all steps.
Hi there! I found your Colab by coincidence and wanted to try hallucinating some binders with it for the protein "8DYS" on the PDB.
However, when using the recommended settings and initializing with "WEQLARDRSRFARR" (a known natural binder that we're trying to modulate), I get the following error:
Do you have any suggestions on how to get around this issue?
Thank you very much!