On the UI Bootstrap page there is a carousel that causes Landmarks to constantly increase the scan pause timer. This is working well, but it's probably a good idea to see what's causing the mutations to be seen as potential landmark changes.
There's a possible next step: assess whether we're consistently getting a very large number of mutation events (perhaps of a certain type?) over a certain time period and, if so, disconnect the mutation observer (with the intent that this be temporary). Currently the mutation observer is never disconnected; the pause timer serves as a "userspace" solution to the throttling issue, with Landmarks deciding if/when it will investigate a mutation event. Performance seems to be fine, but it's good to have an opportunity to assess this. Further: if a page is totally inappropriate for constant scanning (such as a game?), perhaps this should be considered.
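As a rough illustration of what that could look like (not how Landmarks currently works; the names and thresholds here, such as WINDOW_MS and MAX_MUTATIONS, are made up for the sketch), the observer callback could keep a sliding window of recent mutation timestamps and disconnect itself when the rate gets silly:

```typescript
// Hypothetical sketch only: keep a sliding window of recent mutation
// timestamps and disconnect the observer if the page is churning out too
// many of them. WINDOW_MS and MAX_MUTATIONS are illustrative values.
const WINDOW_MS = 10_000      // look at the last ten seconds
const MAX_MUTATIONS = 1_000   // beyond this, assume constant scanning is futile

let recentMutationTimes: number[] = []

const observer = new MutationObserver(mutations => {
	const now = Date.now()
	mutations.forEach(() => recentMutationTimes.push(now))

	// Drop timestamps that have fallen out of the window
	recentMutationTimes = recentMutationTimes.filter(t => now - t < WINDOW_MS)

	if (recentMutationTimes.length > MAX_MUTATIONS) {
		// The page looks inappropriate for constant scanning; stop observing
		// (temporarily, or until some UI re-enables it)
		observer.disconnect()
		console.log('Landmarks: disconnected mutation observer (too many mutations)')
		return
	}

	// ...otherwise, carry on with the existing pause-timer logic and
	// (eventually) re-scan for landmarks
})

observer.observe(document.body, {
	childList: true,
	subtree: true,
	attributes: true  // an attributeFilter could narrow this further
})
```

Something like this could sit alongside the existing pause timer, or only kick in once the pause has already hit some limit (as mooted below).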
The problems with any situation in which the mutation observer may be disconnected are:
It would stop Landmarks from "just working" and
Some UI and help info may have to be created for this, which adds significant user and development burden.
I was not able to reproduce #154, and I've no reason to think that performance is bad generally, but it would be good to get some decent profile data and actually check, now that I've found a page that seems to offer that chance. It's worth testing on a page that is a browser-based game too (and on Google Docs, as it triggered the previous performance issues documented in #127). Relying on the speed of modern machines is not best practice (nor is obsessing over performance too much) :-).
There's also the elephant in the room: the possibility of re-writing the landmark detection to respond to mutation observer records and work on small parts of the DOM. That would be another complete re-write, and thus very time-consuming and error-prone, and perhaps not even necessary, as it should be possible to get great performance on pages that are appropriate, and maybe just auto-disable on pages that are not.
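For reference, here's a very rough sketch of the "small parts of the DOM" idea (this is not Landmarks code; findLandmarksIn() and LANDMARK_ROLES are hypothetical placeholders for the real detection logic), where only the subtrees mentioned in the mutation records get re-checked:

```typescript
// Rough, hypothetical sketch of incremental scanning: rather than re-scanning
// the whole document on every mutation, only check the subtrees that the
// mutation records say have changed. findLandmarksIn() stands in for the real
// landmark-detection code and is not an actual Landmarks function.
const LANDMARK_ROLES = ['banner', 'navigation', 'main', 'complementary',
	'contentinfo', 'search', 'form', 'region']

function findLandmarksIn(root: Element): Element[] {
	// Placeholder: a real implementation would also handle implicit roles
	// (<nav>, <main>, ...) and the labelling rules that affect some roles
	return Array.from(root.querySelectorAll('[role]'))
		.filter(el => LANDMARK_ROLES.includes(el.getAttribute('role')!))
}

const incrementalObserver = new MutationObserver(mutations => {
	for (const mutation of mutations) {
		if (mutation.type !== 'childList') continue
		for (const node of mutation.addedNodes) {
			if (node instanceof Element) {
				const foundHere = findLandmarksIn(node)
				// ...merge foundHere into the existing landmark list
				console.log('New landmarks in mutated subtree:', foundHere.length)
			}
		}
		// Removed nodes would need pruning from the landmark list too
	}
})

incrementalObserver.observe(document.body, { childList: true, subtree: true })
```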
Finally, I don't like the idea of filtering out pages based on user preference lists: it adds burden on the user and on development, and pages can change. It would be much better to be able to detect, with good performance and on a case-by-case basis, whether a page is suitable for Landmarks to run on.
Some things to consider:
Would a debugging mode where the number of mutation events per second is displayed in the Landmarks badge be of use? (This can't be done in the content script, as it would trigger mutations, as I found out a while back :-).) Or: in the popup, or just the console? There's a rough sketch of this at the end of this section.
Number of events over the last: one second; ten seconds.
Types of events: are DOM tree (childList) mutations a bigger deal than changes that only affect CSS (e.g. class or style attributes)?
Criteria for unplugging the mutation observer:
Temporary or "permanent" (until a metric is met, or the user activates some UI to re-enable it)
Disconnect when the pause hits 60 seconds?
Disconnect when the number of events (or events of a certain type) over n seconds exceeds x?
Sometimes on the UI Bootstrap page, the carousel doesn't manage to trigger a continual increase in the pause timer, but the number of mutations is about the same. It seems that taking into account metrics such as the above, and/or the number of actual scans being carried out per n seconds, may help.
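Here's a hedged sketch of the badge-counter debugging idea from the list above (not existing Landmarks code; the message name, the one-second interval and the use of the browserAction badge API are assumptions for illustration): the content script counts mutation records and reports the rate, and the background script shows it in the badge, so the page itself is never touched.

```typescript
// Hypothetical debugging sketch: the message name, the one-second interval and
// the badge API usage are assumptions, not existing Landmarks code.
declare const browser: any  // WebExtensions API, e.g. via webextension-polyfill

// --- Content script: count mutation records; report once per second ---
let mutationsThisSecond = 0

const countingObserver = new MutationObserver(mutations => {
	mutationsThisSecond += mutations.length
})
countingObserver.observe(document.body,
	{ childList: true, subtree: true, attributes: true })

setInterval(() => {
	browser.runtime.sendMessage(
		{ name: 'mutations-per-second', data: mutationsThisSecond })
	mutationsThisSecond = 0
}, 1_000)

// --- Background script: show the reported rate in that tab's badge ---
browser.runtime.onMessage.addListener((message: any, sender: any) => {
	if (message.name === 'mutations-per-second' && sender.tab) {
		browser.browserAction.setBadgeText({
			text: String(message.data),
			tabId: sender.tab.id
		})
	}
})
```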