shama / on-load

On load/unload events for DOM elements using a MutationObserver

onload performance #13

Open notenoughneon opened 8 years ago

notenoughneon commented 8 years ago

I'm building a chat app using choo. I found that after the app has been running several hours, performance degrades. Profiling showed it is spending a lot of time in the eachMutation function. I think this may be related to onload, which I am using to scroll the message buffer after rendering each new message: https://github.com/notenoughneon/nekocafe/blob/master/lib/client.js#L99
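For reference, the usage is roughly like this (a simplified sketch, not the exact code from client.js; the element construction here is illustrative):

var onload = require('on-load')

var buffer = document.createElement('div')
buffer.className = 'message-buffer'
// Scroll to the newest message once the element is attached to the DOM.
// The second callback runs when the element is removed.
onload(buffer, function () {
  buffer.scrollTop = buffer.scrollHeight
}, function () {
  // nothing to clean up in this sketch
})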

Below is a profile taken from a window with 100 messages, after it's been running for 7 hours. [screenshot: profiler output with most of the time in eachMutation] The profile was captured while receiving a message.

Not sure if this is something I am doing wrong or actually a bug.

shama commented 8 years ago

My guess is that it's because we're not cleaning up the listeners as they are unloaded. I'll try running your example with a delete watch[index] after this line: https://github.com/shama/on-load/blob/master/index.js#L49
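Roughly like this, paraphrasing the surrounding code (only the delete is the actual change; the layout of the handlers in watch is assumed here):

// Sketch of the removedNodes branch of the MutationObserver handler;
// the real index.js differs in detail.
eachMutation(mutation.removedNodes, function (index, el) {
  if (watch[index]) {
    watch[index][1](el)   // assumed: the unload handler lives in slot 1
    delete watch[index]   // proposed cleanup so stale entries don't accumulate
  }
})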

notenoughneon commented 8 years ago

I tried adding the delete, but it didn't fix the performance for me.

notenoughneon commented 8 years ago

I also managed to work around the issue a bit by removing onload from elements that no longer need it, and limiting the rate of updates to the application state.
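For the rate limiting, something along these lines works (a sketch; the 250ms interval and function names are illustrative):

// Collect incoming messages and flush them to application state at most
// once per interval, so each message doesn't trigger its own render.
var pending = []
var flushTimer = null

function queueMessage (msg, applyToState) {
  pending.push(msg)
  if (flushTimer) return
  flushTimer = setTimeout(function () {
    flushTimer = null
    applyToState(pending.splice(0, pending.length))
  }, 250)
}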

pfrazee commented 8 years ago

I'm having the same issue. I'll try what @notenoughneon tried.

[screenshot: profiler capture, 2016-09-07]

paul-veevers commented 7 years ago

I had a similar issue where eachMutation was causing a memory leak when there is a large number of elements in the body. A reduced case that still causes the leak:

function eachMutation(nodes, fn) {
  // Recurses into the children of every mutated node, watched or not.
  for (var i = 0; i < nodes.length; i++) {
    if (nodes[i].childNodes.length > 0) {
      eachMutation(nodes[i].childNodes, fn);
    }
  }
}

Removing eachMutation(nodes[i].childNodes, fn); fixes the leak, but I'm not really sure why it's causing one in the first place. For now, I've done the following to fix it (and it passes the tests):

function eachMutation (nodes, fn) {
  var keys = Object.keys(watch)
  for (var i = 0; i < nodes.length; i++) {
    // Collect the node itself plus any descendants that carry the
    // on-load key attribute, instead of recursing through every child.
    var a = [nodes[i]]
    if (nodes[i].childNodes.length > 0 && nodes[i].querySelectorAll) {
      var c = nodes[i].querySelectorAll('[' + KEY_ATTR + ']')
      a = a.concat(Array.prototype.slice.call(c))
    }
    for (var j = 0; j < a.length; j++) {
      if (a[j] && a[j].getAttribute && a[j].getAttribute(KEY_ATTR)) {
        var onloadid = a[j].getAttribute(KEY_ATTR)
        // Only fire the callback for ids that are actually being watched.
        keys.forEach(function (k) {
          if (onloadid === k) {
            fn(k, a[j])
          }
        })
      }
    }
  }
}

I'm going to assume that's not ideal either :) Any ideas? I can do a pull request with the above if you think it's the best option.

paul-veevers commented 7 years ago

As a note, the implementation below also removes the memory pressure. I was prompted to try it by https://github.com/yoshuawuyts/nanomorph/issues/25.

function eachMutation (nodes, fn) {
  var keys = Object.keys(watch)
  var children = []
  for (var i = 0; i < nodes.length; i++) {
    // Fire the callback for this node if its on-load id is being watched.
    if (nodes[i] && nodes[i].getAttribute && nodes[i].getAttribute(KEY_ATTR)) {
      var onloadid = nodes[i].getAttribute(KEY_ATTR)
      keys.forEach(function (k) {
        if (onloadid === k) {
          fn(k, nodes[i])
        }
      })
    }
    // Queue the children and process them as one batch, so each generation
    // of the tree adds only a single level of recursion.
    var node = nodes[i].firstChild
    while (node) {
      children.push(node)
      node = node.nextSibling
    }
  }
  if (children.length > 0) eachMutation(children, fn)
}