This feature will prevent performance from degrading over time as search times grow with the increasing size of the database.
By default, the retention period must be disabled. Its default value (when enabled) must be determined through performance tests, which should estimate the duration after which performance starts to degrade significantly. Of course, the longer tags can remain active, the better the patch protects the server. These tests must target realistic scenarios.
Once a tag's age exceeds the retention period, it must be deleted automatically. This means a scheduled job must run regularly while the patch is enabled. The job should be dispatched only a few times per day, and delayed if server performance is currently poor (typically TPS below 18 by default, configurable by the server administrator).
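A minimal sketch of what such a cleanup job could look like, using a plain `ScheduledExecutorService` and hypothetical `TagRepository` / TPS-supplier abstractions (none of these names exist in the codebase yet; the actual scheduling mechanism and TPS probe depend on the server API in use):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.DoubleSupplier;

// Hypothetical repository: deletes every tag created before the given instant.
interface TagRepository {
    void deleteTagsCreatedBefore(Instant threshold);
}

/** Periodically purges tags older than the retention period, skipping runs when TPS is low. */
final class TagRetentionJob {

    private final TagRepository repository;
    private final DoubleSupplier tpsSupplier;   // assumed TPS probe provided by the server
    private final Duration retentionPeriod;     // configurable retention period
    private final double minTps;                // default 18, configurable by the administrator
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    TagRetentionJob(TagRepository repository, DoubleSupplier tpsSupplier,
                    Duration retentionPeriod, double minTps) {
        this.repository = repository;
        this.tpsSupplier = tpsSupplier;
        this.retentionPeriod = retentionPeriod;
        this.minTps = minTps;
    }

    /** Dispatches the cleanup only a few times per day (here every 8 hours). */
    void start() {
        scheduler.scheduleAtFixedRate(this::runOnce, 1, 8, TimeUnit.HOURS);
    }

    private void runOnce() {
        if (tpsSupplier.getAsDouble() < minTps) {
            // Server is under load: postpone the purge to the next scheduled run.
            return;
        }
        repository.deleteTagsCreatedBefore(Instant.now().minus(retentionPeriod));
    }

    void stop() {
        scheduler.shutdownNow();
    }
}
```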
Special care is required for the scenario where a tag deletion and a tag creation are dispatched concurrently (i.e. think about SQL transactions). Normally this is already covered by the existing code, but there is no test yet to verify it, so if tests are missing for this part, they should be written as part of this issue. Using a thread weaver can be a great way to force different execution paths to be explored across runs. Repeating the same test several times is also a good way to surface potential bugs.
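As a rough sketch of the "repeat the same test" approach, a JUnit 5 `@RepeatedTest` can race a creation and a deletion against the real database used in integration tests. The `TagRepository` methods and the `TestDatabase` helper below are hypothetical placeholders for whatever the existing code exposes:

```java
import java.time.Instant;
import java.util.UUID;
import java.util.concurrent.CyclicBarrier;
import org.junit.jupiter.api.RepeatedTest;

/** Repeats the race between tag creation and tag deletion to surface concurrency bugs. */
class ConcurrentTagLifecycleTest {

    // Hypothetical repository backed by the real database used in integration tests.
    private final TagRepository repository = TestDatabase.newTagRepository();

    @RepeatedTest(50) // run many times so interleavings vary across executions
    void creationAndDeletionCanRunConcurrently() throws Exception {
        CyclicBarrier barrier = new CyclicBarrier(2);

        Thread creator = new Thread(() -> {
            await(barrier);
            repository.createTag(UUID.randomUUID(), Instant.now());
        });
        Thread deleter = new Thread(() -> {
            await(barrier);
            repository.deleteTagsCreatedBefore(Instant.now());
        });

        creator.start();
        deleter.start();
        creator.join();
        deleter.join();
        // Assert the expected invariant here, e.g. no exception thrown and a consistent row count.
    }

    private static void await(CyclicBarrier barrier) {
        try {
            barrier.await();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

A thread weaver (e.g. Google's Thread Weaver library) would instead instrument the code under test to systematically explore interleavings, at the cost of more setup.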
Performance tests must be written as integration tests. TDD must be followed as well, which means these tests must fail when the performance drop is too large. Enabling the retention period should then make them green.
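A minimal sketch of such an integration test, assuming the same hypothetical `TagRepository` / `TestDatabase` names as above; the data volume and time budget are placeholders to be calibrated by the study in the first step:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.time.Duration;
import java.time.Instant;
import java.util.UUID;
import org.junit.jupiter.api.Test;

/** Fails when tag search time degrades beyond the accepted budget for a realistic data volume. */
class TagSearchPerformanceIT {

    // Hypothetical repository; volume and budget must come from the realistic-scenario study.
    private final TagRepository repository = TestDatabase.newTagRepository();
    private static final int TAG_COUNT = 500_000;
    private static final Duration SEARCH_BUDGET = Duration.ofMillis(50);

    @Test
    void searchStaysFastWithLargeDatabase() {
        for (int i = 0; i < TAG_COUNT; i++) {
            repository.createTag(UUID.randomUUID(), Instant.now());
        }

        Instant start = Instant.now();
        repository.findTag(UUID.randomUUID()); // hypothetical lookup used by the patch
        Duration elapsed = Duration.between(start, Instant.now());

        // With TDD, this assertion is expected to fail until the retention period keeps the table small.
        assertTrue(elapsed.compareTo(SEARCH_BUDGET) <= 0,
                "Tag search took " + elapsed.toMillis() + " ms, budget is "
                        + SEARCH_BUDGET.toMillis() + " ms");
    }
}
```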
This issue is therefore composed of several steps:
[ ] Study of the period after which performance may start to degrade under different scenarios
[ ] Testing concurrent tag creations and deletions
[ ] Retention period implementation with scheduled job
[ ] Configuration of retention period
Of course, if the study concludes that a retention period is pointless, it will not need to be implemented.