Closed · Hhuangsj closed this issue 4 months ago
Hi there, during inference, when you increase the protein length, the parameter to watch is `model.inference.replica_per_batch`, so that you do not exceed CUDA memory. You can simply set a smaller `replica_per_batch`.
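For anyone hitting the same OOM, here is a minimal sketch of the kind of change being suggested. The surrounding keys and the default value below are illustrative, not copied from the repo; only the `replica_per_batch` parameter under the model's inference config is confirmed by the comment above:

```yaml
# configs/model/diffusion.yaml (illustrative excerpt, not the actual file)
inference:
  # Number of replicas sampled in one forward pass. Memory use grows with
  # both this value and the sequence length, so lower it when sampling
  # longer proteins to stay within CUDA memory.
  replica_per_batch: 2
```

Since the repo uses Hydra-style configs, the same value can presumably also be overridden on the command line (e.g. `model.inference.replica_per_batch=2`) instead of editing the YAML file; the exact entry-point script is not named in this thread.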
Hi, thanks for your work! When I attempted to sample a protein with a sequence length of 280 using Str2Str, I encountered a problem similar to this one (https://github.com/lujiarui/Str2Str/issues/4#issue-2237296761). I then changed this parameter (https://github.com/lujiarui/Str2Str/blob/48b9e8fda1de27e6b74a610859dd2eea5115e52f/configs/model/diffusion.yaml#L92). Does changing it cause any problems? Good luck!