Open stan1233 opened 1 month ago

Hi!

I successfully deployed it on my computer and used it to re-score virtual screening results with a simple for loop. With thousands of compounds to predict and each prediction taking a few seconds, the total runtime adds up quickly. Is there a better way to run large-scale predictions?

Thank you for sharing your work.
Batch processing should significantly reduce inference time for large-scale predictions. Please feel free to share your code and I will try to provide suggestions on how I would integrate batch processing.

Best, Gregory
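For illustration, here is a minimal sketch of the kind of batching Gregory describes, assuming the scorer is an ordinary PyTorch `nn.Module` and each complex can be featurised into a fixed-size tensor. `score_model` and `featurise` are placeholders, not this repository's actual API:

```python
# A generic batched-inference sketch, assuming the scorer is a standard
# PyTorch nn.Module and each complex can be featurised into a fixed-size
# tensor. score_model and featurise are placeholders, not this repo's API.
import torch
from torch.utils.data import DataLoader, Dataset

class ComplexDataset(Dataset):
    """Wraps the raw inputs and featurises them lazily, one item at a time."""
    def __init__(self, items, featurise):
        self.items = items          # e.g. (ligand, pocket) pairs
        self.featurise = featurise  # callable: item -> torch.Tensor

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        return self.featurise(self.items[idx])

@torch.no_grad()
def predict_in_batches(score_model, items, featurise,
                       batch_size=64, device="cuda"):
    loader = DataLoader(ComplexDataset(items, featurise),
                        batch_size=batch_size,
                        num_workers=4,   # featurise on CPU in parallel
                        pin_memory=True)
    score_model.eval().to(device)
    scores = []
    for batch in loader:  # one forward pass per batch instead of per ligand
        batch = batch.to(device, non_blocking=True)
        scores.append(score_model(batch).cpu())
    return torch.cat(scores)
```

With this pattern the DataLoader workers featurise on the CPU while the GPU scores the previous batch, which is usually where most of the per-ligand latency disappears.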
Thank you for offering to help.

I'm trying to modify `def add_mol2_charges(pocket_mol2)` in functions.py to use ChargeFW2 in place of the online request, so that charge assignment runs locally and avoids network latency. The current throughput is approximately 2.79 ligands/s.
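For reference, a minimal sketch of the local call that could replace the web request inside `add_mol2_charges`, shelling out to the `chargefw2` binary. The flag names follow ChargeFW2's README and the `eem` default method is an assumption, so both should be verified against the installed version:

```python
# Sketch: assign charges locally with the chargefw2 CLI instead of an HTTP
# request. Flag names follow the ChargeFW2 README; verify them (and the
# output file naming) against your installed version.
import subprocess
import tempfile
from pathlib import Path

def compute_charges_locally(pocket_mol2: str, method: str = "eem") -> Path:
    """Run ChargeFW2 on one mol2 file; return the directory holding its output."""
    out_dir = Path(tempfile.mkdtemp(prefix="chargefw2_"))
    subprocess.run(
        ["chargefw2",
         "--mode", "charges",
         "--input-file", pocket_mol2,
         "--chg-out-dir", str(out_dir),
         "--method", method],
        check=True,              # raise CalledProcessError if ChargeFW2 fails
        capture_output=True, text=True)
    return out_dir
```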
When I try to use multiprocessing to run predictions in parallel, the error below occurs. Is this because a single GPU cannot be shared across multiple processes?
```
CUDA error: initialization error
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```

Best, Guangyuan
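For what it's worth, this particular error is usually not about the GPU itself: on Linux, `multiprocessing` uses the fork start method by default, and a CUDA context cannot be re-initialised in a forked child. A single GPU can serve several processes if each worker first touches CUDA after a `spawn` start. A minimal sketch, where `load_model`, `score_one`, and `items` are hypothetical placeholders for the repo's actual loading and prediction code:

```python
# Sketch: share one GPU across worker processes using the "spawn" start
# method. CUDA must not be touched before the workers are created; each
# worker builds its own model. load_model/score_one/items are placeholders.
import torch
import torch.multiprocessing as mp

_model = None  # one model instance per worker process

def _init_worker():
    global _model
    _model = load_model().eval().cuda()  # first CUDA call happens here

def _predict(item):
    with torch.no_grad():
        return score_one(_model, item)

if __name__ == "__main__":
    items = [...]                     # your ligands/poses (placeholder)
    ctx = mp.get_context("spawn")     # fork inherits a broken CUDA state
    with ctx.Pool(processes=2, initializer=_init_worker) as pool:
        results = pool.map(_predict, items)
```

That said, multiple processes on a single GPU mostly time-share the device, so batching within one process, as Gregory suggests, typically gains more than multiprocessing.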