Hi everyone!
I've optimized the code path that is used when a library is used for RT prediction. In my case, with a library containing 6 million lines, I see an 80-90% reduction in the time taken by the full DeepLC prediction workflow, for both small (~10k) and large (~750k) sets of peptides.
The most important part is simply deleting the `idents_in_lib = set(LIBRARY.keys())` line; a sketch of the idea is below. However, the rest of the optimizations could also matter in some specific cases.
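To illustrate why that one line matters: rebuilding a set from all library keys copies every key on each call, while membership checks against the dict itself are already O(1). The sketch below is a simplified, assumed version of the lookup pattern (the `split_known_unknown` helper and the surrounding logic are hypothetical; only the deleted line is taken verbatim from the DeepLC code):

```python
# Hypothetical simplification of the library lookup in DeepLC's prediction path.
LIBRARY = {}  # ident -> predicted RT, potentially millions of entries

def split_known_unknown(idents):
    """Split peptide identifiers into those already in the library and the rest."""
    # Before: materializing a set of all keys costs O(len(LIBRARY)) time and memory
    # on every call, which dominates the runtime for a 6-million-entry library.
    # idents_in_lib = set(LIBRARY.keys())
    # known = [i for i in idents if i in idents_in_lib]

    # After: check membership directly against the dict; each lookup is O(1)
    # and the full key copy is avoided entirely.
    known = [i for i in idents if i in LIBRARY]
    unknown = [i for i in idents if i not in LIBRARY]
    return known, unknown
```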
Regards, Mark