Open · monprin opened this issue on Dec 26, 2016

Currently we have to make many __getitem__ calls to get a list of many terms (such as for enrichment computation). This introduces a significant slowdown to otherwise near-instant functions (such as enrichment).
Can't we put an LRU cache on this?
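In Python terms, that suggestion amounts to something like the following. This is only a sketch: the table and column names are invented, and the real Ontology class is more involved than this.

```python
from functools import lru_cache

class Ontology:
    # Toy stand-in for Camoco's Ontology; the schema and names are assumptions.
    def __init__(self, db):
        self.db = db  # e.g. a sqlite3.Connection

    # Caveat: caching a bound method keys on (self, term_id) and keeps the
    # instance alive for the lifetime of the cache.
    @lru_cache(maxsize=None)
    def _fetch_term(self, term_id):
        # One database round trip per *distinct* term id; repeat hits are free.
        cur = self.db.execute("SELECT id, desc FROM terms WHERE id = ?", (term_id,))
        return cur.fetchone()

    def __getitem__(self, term_id):
        return self._fetch_term(term_id)
```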
There is already, and it does help with subsequent requests, but it doesn't help with the first request. It's not a huge deal: I was able to get the actual enrichment computation down to almost trivial cost, and I have a couple of other optimizations in mind, so any normal query should now take under 20 seconds, which won't cause me to lose sleep if we publicize the server. It is still somewhat prone to accidental DDoS, though.
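One way to attack that cold-start cost is to batch the initial lookups into a single query instead of warming the cache one __getitem__ at a time. A sketch, again assuming a SQLite backend and a hypothetical terms(id, desc) table:

```python
import sqlite3

def fetch_terms(db: sqlite3.Connection, term_ids):
    """Fetch many terms in one round trip instead of one query per __getitem__.

    Assumes a hypothetical terms(id, desc) table; adapt to the real schema.
    """
    placeholders = ", ".join("?" * len(term_ids))
    query = f"SELECT id, desc FROM terms WHERE id IN ({placeholders})"
    # Note: SQLite caps the number of bound parameters (999 by default),
    # so chunk term_ids for very large queries.
    return {row[0]: row for row in db.execute(query, list(term_ids))}
```

The cached __getitem__ would still serve repeat lookups; this just avoids paying one round trip per term on the first request.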
BTW, are you using the built-in enrichment function inside the Ontology object? That one should be pretty fast.
I did, and I actually made some tweaks to make it faster, but the biggest slowdown is still initially building the objects. It's fine if we don't change anything; I just thought I'd bring it up.
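For reference, this kind of term enrichment typically reduces to a hypergeometric test per term. Whether Camoco's built-in Ontology enrichment does exactly this internally is an assumption, but the test itself is cheap, which matches the observation that building the objects, not the math, dominates. A self-contained sketch:

```python
from scipy.stats import hypergeom

def enrichment_pvalue(n_universe, n_annotated, n_query, n_overlap):
    """P(overlap >= observed) for a query gene set against a single term.

    n_universe:  genes in the background set
    n_annotated: genes annotated to the term
    n_query:     genes in the query set
    n_overlap:   genes in both the query set and the term
    """
    # sf(k - 1) gives P(X >= k) under the hypergeometric distribution.
    return hypergeom.sf(n_overlap - 1, n_universe, n_annotated, n_query)
```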