I noticed that when the error dataset grows (e.g. tens of GB in a daily file), the arki-scan process slows down considerably, while this is not noticed when importing into a regular dataset with daily files of the same size (I cannot say whether the slowdown also affects data that do not end up in error; it could be the case, but I am not sure). Could this be related to how the error dataset is managed?
It is not a serious problem, since of course the error dataset should not grow so much, but if it has a simple solution it would be welcome.