Open · 0o120 opened this issue 1 year ago
I was able to get that same 266-second export down to 47 seconds by disabling the export_manager database access inside the while loop in clouddrive.common.service.export.ExportService.process_pending_changes and saving the items once after the loop.
You can see the changes here: https://github.com/0o120/script.module.clouddrive.common-1/commit/c5a5b7b7c10dff96992cc559fff06718ac62d8c2
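For context, here is a minimal, self-contained sketch of the pattern being changed. The `StubExportManager` interface below is invented for illustration and is not clouddrive's actual API; the point it shows is that moving the save out of the while loop turns one database write per item into a single write at the end.

```python
# Illustrative sketch only: method names (has_pending_changes, next_change,
# save_pending_changes) are assumptions made to show the pattern, not the
# real clouddrive.common interfaces.

class StubExportManager:
    """Stands in for clouddrive.common's export manager in this sketch."""

    def __init__(self, changes):
        self._changes = list(changes)
        self.saves = 0

    def has_pending_changes(self):
        return bool(self._changes)

    def next_change(self):
        return self._changes.pop(0)

    def save_pending_changes(self):
        # In the real addon this hits the on-disk database; here we just count calls.
        self.saves += 1


def process_changes_per_item(manager):
    # Original pattern: one database write per processed item.
    while manager.has_pending_changes():
        manager.next_change()           # process the item
        manager.save_pending_changes()  # persist progress every iteration


def process_changes_batched(manager):
    # Proposed pattern: process everything first, persist once at the end.
    while manager.has_pending_changes():
        manager.next_change()
    manager.save_pending_changes()


if __name__ == "__main__":
    per_item = StubExportManager(range(1000))
    batched = StubExportManager(range(1000))
    process_changes_per_item(per_item)
    process_changes_batched(batched)
    print("per-item saves:", per_item.saves)  # 1000
    print("batched saves:", batched.saves)    # 1
```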
I'm assuming export_manager.save_pending_changes() is only there to save progress in the event of an interruption, so it isn't strictly necessary? Maybe we could add a config option that disables save_pending_changes inside that loop in exchange for the performance gain.
I'm up for adding a config option. Some thoughts on this would be great :)
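One possible shape for that config option, sketched as a standard Kodi addon setting read via xbmcaddon. The setting id `fast_export`, the `handle_change` helper, and the simplified loop are hypothetical stand-ins, not the actual process_pending_changes code.

```python
# Hypothetical sketch, assuming a boolean addon setting named 'fast_export'.
import xbmcaddon


def process_pending_changes(export_manager, changes):
    addon = xbmcaddon.Addon()
    # getSetting returns a string; boolean settings come back as 'true'/'false'.
    fast_export = addon.getSetting('fast_export') == 'true'

    for change in changes:
        handle_change(change)                      # per-item export work
        if not fast_export:
            export_manager.save_pending_changes()  # keep resumability when the option is off

    if fast_export:
        export_manager.save_pending_changes()      # single save after the loop


def handle_change(change):
    # Placeholder for the real per-item processing.
    pass
```

With the option off, behavior stays exactly as today (a save per item, so large catalogs can resume after an interruption); with it on, progress is only persisted once per run.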
Thanks for the improvement. It's been a long time and this project has been in KTLO mode for me, but I will merge your change if you confirm it's been working fine for you. Also, saving the progress is necessary for large catalogs. If you submit your other change with a config option, I could merge it too.
Google Drive Test Directory:

| | Google Drive API requests | Export duration |
|---|---|---|
| Before changes | 2317 | 15 minutes 40 seconds (940 seconds) |
| After changes | 28 | 4 minutes 26 seconds (266 seconds) |
It could be faster, but I think it's slowed down by the way clouddrive handles each Google Drive item; this is what I could do with minimal changes.
It fetches all the files/folders in the directory with minimal API requests and then feeds them back to clouddrive through the provider's instance cache.
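To illustrate the prefetch-then-cache idea, here is a rough sketch using google-api-python-client rather than clouddrive.common's own provider and HTTP client. The `service` object is assumed to be an authenticated Drive v3 service (e.g. from `googleapiclient.discovery.build("drive", "v3", credentials=...)`), and the cache shape is an assumption for illustration; the real change wires the results into clouddrive's provider instance cache instead.

```python
# Sketch: list a whole folder with as few requests as possible (one per 1000
# children), then answer later per-item lookups from the local cache instead
# of issuing another API request each time.

def prefetch_folder(service, folder_id):
    """Return {file_id: metadata} for every child of folder_id."""
    cache = {}
    page_token = None
    while True:
        response = service.files().list(
            q="'%s' in parents and trashed = false" % folder_id,
            pageSize=1000,
            fields="nextPageToken, files(id, name, mimeType, parents, size)",
            pageToken=page_token,
        ).execute()
        for item in response.get("files", []):
            cache[item["id"]] = item
        page_token = response.get("nextPageToken")
        if not page_token:
            return cache


def get_item(cache, service, file_id):
    # Serve from the prefetched cache; fall back to a single API request on a miss.
    if file_id in cache:
        return cache[file_id]
    return service.files().get(
        fileId=file_id,
        fields="id, name, mimeType, parents, size",
    ).execute()
```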