I don't know why we do it the way we do. Maybe it's because we need to scrape the DOM, but I suspect doing this with webRequest listeners would fix issues like #235 (unable to upload big docs) and #239 (other extensions screwing up the HTML).
This was suggested by a Mozilla developer:
> If you want to upload a copy of clean initial HTML for a page, your best bet is probably to use a webRequest listener. That won't get you JS-generated content, but trying to capture JS-generated content is heavily timing-dependent anyway, and you'll never be able to capture it without the possibility of interference from other extensions.
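For what it's worth, here's a rough sketch of what that could look like in a background script, using Firefox's `browser.webRequest.filterResponseData()` (not available in Chrome). It assumes the `webRequest`, `webRequestBlocking`, and `<all_urls>` permissions, and `uploadCleanHtml()` is just a placeholder for whatever our upload step would be:

```js
// Background script sketch (Firefox only: filterResponseData has no Chrome equivalent).
browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    const filter = browser.webRequest.filterResponseData(details.requestId);
    const decoder = new TextDecoder("utf-8"); // assumes UTF-8; a real version should honor the response charset
    let html = "";

    filter.ondata = (event) => {
      // event.data is an ArrayBuffer chunk of the raw response body
      html += decoder.decode(event.data, { stream: true });
      filter.write(event.data); // pass the bytes through unchanged so the page still loads
    };

    filter.onstop = () => {
      filter.close();
      // `html` is the server's original markup, before page scripts or other
      // extensions have touched the DOM.
      // uploadCleanHtml() is a hypothetical stand-in for our existing upload step.
      uploadCleanHtml(details.url, html);
    };
  },
  { urls: ["<all_urls>"], types: ["main_frame"] },
  ["blocking"]
);
```

This would sidestep both problems above: there's no DOM serialization step to blow up on huge documents (#235), and nothing another extension injects into the page ever shows up in what we capture (#239). The trade-off is exactly what the Mozilla dev said: no JS-generated content.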