The job list command is currently extremely slow, likely because it iterates through all of the jobs before taking a slice for the results (cf. the demo DB, which is currently ~7 GB).
We should instead apply the offset and limit while iterating over the jobs so we can stop processing as early as possible: decrement a remaining-offset counter and skip each record until it reaches 0, then add records to the results while decrementing the remaining limit, stopping once it reaches 0. This gets much harder once filters have to be applied, but we can at least improve the performance of the default (unfiltered) list.
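
A minimal sketch of the decrement-and-stop idea, in Go. `Job` and the `forEach` callback are placeholders standing in for whatever iteration the job store actually exposes, not the real API:

```go
package main

import "fmt"

// Job is a stand-in for the real job record type.
type Job struct {
	ID int
}

// listJobs walks the store via forEach (a placeholder for however the store
// exposes iteration) and stops as soon as the requested page is full,
// rather than collecting every job and slicing afterwards.
func listJobs(forEach func(yield func(Job) bool), offset, limit int) []Job {
	results := make([]Job, 0, limit)
	if limit <= 0 {
		return results
	}
	forEach(func(j Job) bool {
		if offset > 0 {
			offset-- // still skipping records before the requested page
			return true
		}
		results = append(results, j)
		limit--
		return limit > 0 // stop iterating once the page is full
	})
	return results
}

func main() {
	// Fake in-memory store with 10 jobs to demonstrate the early stop.
	jobs := make([]Job, 10)
	for i := range jobs {
		jobs[i] = Job{ID: i}
	}
	forEach := func(yield func(Job) bool) {
		for _, j := range jobs {
			if !yield(j) {
				return
			}
		}
	}
	fmt.Println(listJobs(forEach, 3, 4)) // [{3} {4} {5} {6}]
}
```

With offset 3 and limit 4, only 7 records are visited instead of all 10; on a ~7 GB DB the saving from never touching records past the requested page should be much larger.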