Open miquelduranfrigola opened 9 months ago
Are you running it from Docker or through conda/S3? I have set Docker to be rebuilt (in case that was what was causing the issue). I can also change it to work with smaller chunks, but I'm not sure I understand how that would be safer; the model can take up to 1000 SMILES at a time.
Could you also tell me the size of the dataset and your machine configuration? On CPU machines it is expected to be slower.
Thanks @DhanshreeA
I fetched the model via S3 and I am running it on a Mac M2. It works perfectly fine with files of 10 rows, but when I try to run it on files of 100 rows it becomes very slow. My input file contains 2500 molecules in total.
I can obviously split it into chunks of 10 outside Ersilia (a rough sketch of that workaround is below), but it is not ideal.
Did you manage to actually run 1000 molecules at a time with this model?
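For reference, a minimal sketch of the external workaround mentioned above, assuming the input is a single-column CSV of SMILES; the per-chunk model invocation is left as a placeholder rather than the actual Ersilia command:

```python
# Minimal sketch: split a single-column SMILES CSV into chunks of 10
# so each chunk can be run through the model separately.
# The per-chunk command is a placeholder, not the actual Ersilia invocation.
import csv
from pathlib import Path

CHUNK_SIZE = 10
INPUT = Path("molecules.csv")   # hypothetical input file with the 2500 SMILES
OUT_DIR = Path("chunks")
OUT_DIR.mkdir(exist_ok=True)

with INPUT.open() as fh:
    smiles = [row[0] for row in csv.reader(fh) if row]

for i in range(0, len(smiles), CHUNK_SIZE):
    chunk_file = OUT_DIR / f"chunk_{i // CHUNK_SIZE:04d}.csv"
    chunk_file.write_text("\n".join(smiles[i:i + CHUNK_SIZE]) + "\n")
    # Placeholder: run the usual Ersilia command/API on chunk_file here.
```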
Hi @DhanshreeA
At least on my computer, when many (more than 100) molecules are provided as input, the model is very slow or doesn't finish the calculations.
Do you think we could modify the `main.py` to work in chunks of 10 molecules? This would be slower but safer in general. Please let me know what you think.
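For illustration, a minimal sketch of what such chunking could look like, assuming the current code calls a single prediction function on the full list of SMILES at once; the `predict()` name and return format below are hypothetical stand-ins, not the real `main.py` API:

```python
# Minimal sketch of batched prediction. predict() is a stand-in for the
# model's existing call in main.py (the real name/signature may differ).
from typing import Iterable, List

CHUNK_SIZE = 10  # process 10 molecules at a time, as suggested above

def predict(smiles_batch: List[str]) -> List[dict]:
    # Stand-in for the actual model call; returns one record per input SMILES.
    return [{"smiles": s, "prediction": None} for s in smiles_batch]

def predict_in_chunks(smiles: Iterable[str], chunk_size: int = CHUNK_SIZE) -> List[dict]:
    """Run predictions in small batches so one large call never stalls."""
    smiles = list(smiles)
    results: List[dict] = []
    for start in range(0, len(smiles), chunk_size):
        results.extend(predict(smiles[start:start + chunk_size]))
    return results
```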