What is "the viewport" as specified here? Layout viewport or visual viewport? Needs to reference the non-existent CSS Viewport spec (which I'm working on).
+Scott Little sclittle@google.com
Hi! I think it should be configurable, like the Intersection Observer API. In this case, we could use `vh` instead of `px`.
How would you configure it? Would you configure it differently for different situations? What should the default be?
> How would you configure it?
One way is to have attributes that mimic what the IntersectionObserver API provides. That's what a lot of JS-based lazy-loaders do.

I could also imagine a more configurable API that could be media-query-esque but respond to different effective connection speeds.
But at minimum, parity with IntersectionObserver would go a long way.
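For reference, a minimal sketch of what that parity looks like in userland today, assuming the common `data-src` placeholder convention (not any particular library's code):

```js
// Hypothetical lazy-loader core: images are marked up as <img data-src="...">.
const io = new IntersectionObserver((entries, observer) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    img.src = img.dataset.src; // start the real download
    observer.unobserve(img);
  }
}, { rootMargin: '100% 0px' }); // 100% of the root's height ≈ one viewport ahead

document.querySelectorAll('img[data-src]').forEach((img) => io.observe(img));
```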
Ok, but I meant, if it was possible to configure the margin, how big would you make it?
> Ok, but I meant, if it was possible to configure the margin, how big would you make it?
By default, I typically do one viewport height's distance.
> Would you configure it differently for different situations?
If I was trying to be super smart about it, I'd factor in:
Especially numbers 3 and 4. If the image is just below the fold on initial page load and the user hasn't scrolled at all yet, I'd want to load the image at the earlier of two events: the user begins to scroll, or window.onload has fired.
Besides the above-mentioned things, another factor to include is scrolling speed. At the extreme, how fast the user is actually scrolling, but in a simpler form, how fast users typically scroll on a given device.

Just guessing, but I'd expect that users typically scroll more than a viewport height faster on a mobile device than on a desktop/laptop.
I'm not sure whether waiting for the load event before loading any images below the fold is right. Often the slowest outliers in a page are ad frames, and users can read and scroll without waiting for those.
> Often the slowest outliers in a page are ad frames, and users can read and scroll without waiting for those.
Correct, that's why I said to wait for load OR scroll, whichever is earlier. But if ad frames are a sticking point, amend my earlier statement with window.load (net of subresources).
The reason to wait is that while users often do scroll before "above the fold" is completely loaded, it's less likely, and you don't want bandwidth contention between above the fold images, CSS, and JS vs. below the fold images.
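A minimal sketch of the "earlier of first scroll or window load" strategy described above, again assuming a `data-src` placeholder convention:

```js
// Whichever of "first scroll" or "window load" happens first triggers the
// below-the-fold loads; the second call is a no-op.
let started = false;
function loadBelowFoldImages() {
  if (started) return;
  started = true;
  document.querySelectorAll('img[data-src]').forEach((img) => {
    img.src = img.dataset.src;
  });
}
addEventListener('scroll', loadBelowFoldImages, { once: true, passive: true });
addEventListener('load', loadBelowFoldImages, { once: true });
```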
Mechanisms that JS lazy-loading libraries use to decide when to load:

- `getBoundingClientRect()` and "Intelligent prefetch/Intelligent resource prioritization"
- `getBoundingClientRect()` and a configurable threshold; default is 0.
- `loading=lazy`. `threshold`, by default 300. This default is used as `rootMargin`.
- `.offset()`
- `getBoundingClientRect()`. Configurable offset, default 0.
- `getBoundingClientRect()`. Configurable offset, default 100.

I haven't yet looked at httparchive to see how web pages typically configure the `rootMargin` (or equivalent) when using these libraries. My hypothesis is that most use the default or whatever the examples suggest, i.e. 0-300px, but some use other values like 100vh. There won't be many that do something elaborate like different settings for different connection types or change the margin in response to scrolling etc.
See this twitter thread: https://twitter.com/bocoup/status/1243580618811666432
A few points:
I tried to figure out how commonly these libraries are used in httparchive. This was a bit tricky, and the actual usage might be different from this, but I hope it gives an indication.
Row | num | lib |
---|---|---|
1 | 162796 | lazysizes |
2 | 64042 | lazy load |
3 | 17938 | blazy.js |
4 | 13792 | layzr |
5 | 12622 | jquery unveil |
6 | 9722 | lozad.js |
7 | 5485 | vue-lazyload.js |
8 | 248 | echo |
9 | 51 | react-lazyload |
This roughly matches the number of stars on GitHub, though -- lazysizes is the most common, followed by lazyload.
Looking at only pages that configure `expand` for lazysizes, the values they set it to are as follows (rounded to the nearest 100). 6917 pages do this (4.25% of pages using lazysizes).
Row | num | expand |
---|---|---|
1 | 3238 | 300 |
2 | 1299 | 1200 |
3 | 695 | 200 |
4 | 689 | 0 |
5 | 447 | 1000 |
6 | 348 | 100 |
7 | 91 | 500 |
8 | 47 | 400 |
9 | 38 | 800 |
10 | 13 | 700 |
11 | 4 | 1500 |
12 | 3 | 1300 |
13 | 2 | 600 |
14 | 2 | 2000 |
15 | 1 | 8000 |
lazysizes also allows setting `expand` on a per-image basis with the `data-expand` attribute. 59 pages do this (so pretty rare to do at all, 0.03% of pages using lazysizes).

OK, so, what can we conclude?

- The libraries have a `rootMargin` (or equivalent) that can be configured.
- lazysizes adjusts `expand` based on idleness and scrolling, I think. The default `expand` can be configured and set per image.
- ~4% of pages using lazysizes set `expand`, ~0% set `data-expand` per image.

I think the browser is usually in a better position to determine when to load images based on the user's connectivity and scrolling pattern and such. But this should be in the same ballpark as what web developers are doing, and should be consistent between browsers, so that web developers want to use the native feature over JS libraries. Ideally the behavior should be smart enough that users aren't annoyed by seeing images start loading after they scroll (which JS libraries often fail at, as far as I can tell).
I think the browser should have some margin also for images in element scroll containers (for image carousels) and iframes, not just the top-level page scrolling.
In some situations the web developer is in a better position to predict when it's a good time to load an image (because the page might be driving scrolling, e.g. an image carousel). There is an API already for "please load this image now", though -- set `loading="eager"`. Providing a way to override the browser's lazy logic with a per-document or per-image `rootMargin` seems like it could regress the user experience, if we assume that the browser managed to implement something better than a static `rootMargin`.
@zcorpan, what amazing research you've done here. Indeed, lazysizes has the lion's share of usage here, but I'd hesitate to draw any conclusions about its default configuration being a signal to constrain what browsers make available to developers.
What makes lazysizes so awesome is:
> Intelligent prefetch/Intelligent resource prioritization: lazysizes prefetches/preloads near the view assets to improve user experience, but only while the browser network is idling (see also expand, expFactor and loadMode options). This way in view elements are loaded faster and near of view images are preloaded lazily before they come into view.
But what makes it even more awesome is the `expand` attribute. Even though only 4% of sites using lazysizes use it, it feels critical to be able to give the browser additional info as to how sensitive lazy loading should be, and it feels critical to me that you get that control without JS. To me, it seems there really isn't a perfect one-size-fits-all here, no matter how smart the browser default is, and devs will continue to use IntersectionObserver instead (or no lazy loading at all!) when the default fails them.
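For context, per-image sensitivity in lazysizes looks roughly like this (the value is just an example; `data-expand` takes pixels):

```html
<!-- lazysizes: per-image sensitivity via data-expand -->
<img data-src="product.jpg" class="lazyload" data-expand="600" alt="Product photo">
```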
@zcorpan asked me whether I can describe the rationale and the functionality of lazySizes' flexible expand feature.
The rationale of this feature is the idea that lazy-loaded elements that are currently not inside the viewport should not consume network bandwidth while other, in-viewport elements are currently loading. In the end it should give you a better UX. On one hand, we preload things before the user can see them, so the user doesn't have to wait. On the other hand, as soon as the user sees something that needs to load, we don't preload, because that would divert bandwidth to currently unneeded elements.
I can describe some mechanics because they might be interesting for some implementation ideas.
Depending on the loading state of the document and how many lazy elements are currently loading, lazysizes switches between those visibility checks and expand values. For example, until the page has loaded and the user has scrolled (you had the same idea with ad frames as me), we use the shrunken expand and do all visibility checks. After that, we switch between them based on how many elements are currently loading. So first we start with the most conservative check (0 margin + all visibility checks). After that, if we have no currently loading elements, we expand our search.
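A minimal sketch of that switching logic, assuming a `data-src` convention; this is illustrative, not lazysizes' actual code, and all names and values are made up:

```js
let hasScrolled = false;
let loadingCount = 0;
addEventListener('scroll', () => { hasScrolled = true; },
                 { once: true, passive: true });

function currentExpand() {
  // Until load/first scroll: only load what is actually visible.
  if (document.readyState !== 'complete' && !hasScrolled) return 0;
  // While in-view images are downloading, don't compete for bandwidth.
  if (loadingCount > 0) return 0;
  // Network is idle: expand the search area to preload ahead of the user.
  return innerHeight - 1;
}

function checkLazyImages() {
  const margin = currentExpand();
  for (const img of document.querySelectorAll('img[data-src]')) {
    const rect = img.getBoundingClientRect();
    if (rect.top < innerHeight + margin && rect.bottom > -margin) {
      loadingCount++;
      img.addEventListener('load', () => { loadingCount--; }, { once: true });
      img.src = img.dataset.src;
      img.removeAttribute('data-src');
    }
  }
}
// A real loader would call checkLazyImages() (throttled) on scroll, resize,
// load, and whenever loadingCount drops back to 0.
```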
About scroll speed: if a user scrolls extremely fast, it is always impossible to get it right and find the sweet spot of preloading the right amount so the user doesn't see any image loading. Also, if the user scrolls faster than the viewport height, it means they want to jump somewhere without seeing the middle part. You should not preload for this use case. What you can do instead is try not to load so many elements in the middle. lazySizes has a queue in front of the browser's download queue, which makes sure that if there are more than 6/8 elements loading, all checks are idling.
About the scroll container check: I would argue that 99% of carousels have a width of 100vw, so in most situations they are aligned with the page viewport.
I'm currently on nicotine detox, so I really have difficulty concentrating; sorry about that.
Sounds like we need more attributes to make it more configurable and cover more use cases, but with sensible defaults that different browser vendors reach a consensus about.
Warning: assumptions ahead. A lot of ecommerce and news sites have a footer image that is never seen. Lazy loading this and, in practice, never loading this unseen image would be best. I doubt it would affect SEO. There would be other cases like this too, where we don't want the browser to download the image at all unless it's going to be seen soon. DataSaver is a factor too.

Other times we might want an image to download eventually. Sometimes we might even change our mind after DOMContentLoaded and want to set an attribute to say that this image, which was flagged as "only download if about to be seen", should now also be downloaded after onload, and the lazy-load logic would notice this flag.

Product images below the fold might have SEO juice, and so we would want them to be loaded after the onload event, although I am not sure about this. The bots would still have the image URL, alt text, title text, etc.

Responsive images can further complicate matters as to whether the designer needs the image to be there to hold the layout together, although they shouldn't be doing this.

Will the picture element have loading="lazy" for each of its different media query attributes, or just for the tag itself?
Nowadays Microsoft has thousands of HTML/CSS tests. Has anyone heard from them about their defaults, or are they only using what Chromium provides?
Sorry for the rambling, I just want this to be really useful in different cases.
@aFarkas , thank you, that is very useful!
> If a user scrolls extremely fast, it is always impossible to get it right and find the sweet spot of preloading the right amount so the user doesn't see any image loading. Also, if the user scrolls faster than the viewport height, it means they want to jump somewhere without seeing the middle part. You should not preload for this use case. What you can do instead is try not to load so many elements in the middle. lazySizes has a queue in front of the browser's download queue, which makes sure that if there are more than 6/8 elements loading, all checks are idling.
So I think there are two common cases for fast scrolling on touch devices:
For the first case, I think the browser already knows where the scroll position will end up, and could start loading those images as soon as the scrolling momentum is known. For the second case, it seems a bit harder to get right.
On desktop browsers (without touch), the scrolling patterns are probably different. If the user uses the scrollbar thumb to quickly scroll somewhere, there is no scrolling momentum to predict the final scroll position.
> Sounds like we need more attributes to make it more configurable and cover more use cases,
I'm not convinced of this. I think we should improve the defaults first, and then see what the remaining problems are (if any).
> but with sensible defaults that different browser vendors reach a consensus about.
Yes. 🙂
> Warning: assumptions ahead. A lot of ecommerce and news sites have a footer image that is never seen. Lazy loading this and, in practice, never loading this unseen image would be best. I doubt it would affect SEO. There would be other cases like this too, where we don't want the browser to download the image at all unless it's going to be seen soon. DataSaver is a factor too.
Could they remove the entire footer?
> Other times we might want an image to download eventually. Sometimes we might even change our mind after DOMContentLoaded and want to set an attribute to say that this image, which was flagged as "only download if about to be seen", should now also be downloaded after onload, and the lazy-load logic would notice this flag.
When would you want to do this? Do you have a URL where this is done today?
> Product images below the fold might have SEO juice, and so we would want them to be loaded after the onload event, although I am not sure about this. The bots would still have the image URL, alt text, title text, etc.
I think this doesn't change anything for this issue.
> Responsive images can further complicate matters as to whether the designer needs the image to be there to hold the layout together, although they shouldn't be doing this.
You can set the right aspect ratio for the image with the `width` and `height` attributes on `img`. There is still an open issue for when different sources have different aspect ratios, though: https://github.com/whatwg/html/issues/4968
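For example (the dimensions are illustrative):

```html
<!-- width/height reserve a 4:3 box before the lazy image loads -->
<img src="product.jpg" loading="lazy" width="800" height="600" alt="Product photo">
```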
> Will the picture element have loading="lazy" for each of its different media query attributes, or just for the tag itself?
The `loading` attribute on `img` applies to all sources in the `picture`.
> Nowadays Microsoft has thousands of HTML/CSS tests.
Which tests do you mean?
> Has anyone heard from them about their defaults, or are they only using what Chromium provides?
I assume the latter for this case.
> > Warning: assumptions ahead. A lot of ecommerce and news sites have a footer image that is never seen. Lazy loading this and, in practice, never loading this unseen image would be best. I doubt it would affect SEO. There would be other cases like this too, where we don't want the browser to download the image at all unless it's going to be seen soon. DataSaver is a factor too.
>
> Could they remove the entire footer?
There are other things, such as links in the footer that some people want to see and will scroll to the bottom for. I was just giving an example of an image that most of the time wouldn't need to be downloaded, but would be if it is going to be seen soon.
> > Other times we might want an image to download eventually. Sometimes we might even change our mind after DOMContentLoaded and want to set an attribute to say that this image, which was flagged as "only download if about to be seen", should now also be downloaded after onload, and the lazy-load logic would notice this flag.
>
> When would you want to do this? Do you have a URL where this is done today?
I'm not sure, just another scenario I thought of. Maybe something like this: some user interaction would cause the browser to scroll an element that is way down the page into view, while nearby an image was set to lazy load if it's going to be seen soon (and would be loaded if the user manually scrolled down there), but now, because of some earlier interactions, you are confident that the scroll-into-view is likely to happen, and so you want the image to download after onload as a sort of preload. Pretty contrived example, probably not worth worrying about, and I don't have any URL examples.
> > Nowadays Microsoft has thousands of HTML/CSS tests.
>
> Which tests do you mean?
It's so long ago, I can't remember any real details. I think it was when they were working on IE8 and trying to be better with standards, mostly CSS. Around the Acid3 era, I think. Some people there developed lots of tests to check they were following standards, and they found issues with the descriptions/explanations of some of the standards, and in doing so helped make them better. I have never worked for Microsoft, so I don't know any internal details.
> I was just giving an example of an image that most of the time wouldn't need to be downloaded, but would be if it is going to be seen soon.
Ok, then I think a normal `loading=lazy` should handle this case.
> I'm not sure, just another scenario I thought of. Maybe something like this: some user interaction would cause the browser to scroll an element that is way down the page into view, while nearby an image was set to lazy load if it's going to be seen soon (and would be loaded if the user manually scrolled down there), but now, because of some earlier interactions, you are confident that the scroll-into-view is likely to happen, and so you want the image to download after onload as a sort of preload.
You can tell the image to load by changing the `loading` attribute to `eager`.
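For example (the selector is hypothetical):

```js
// Predicting the image will be needed soon? Upgrade it; the pending lazy
// load then proceeds immediately.
const img = document.querySelector('#far-down-image');
img.loading = 'eager';
```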
As for tests, ok. We'll write new tests for this issue in https://github.com/web-platform-tests/wpt when we change the spec. 🙂
Cc @gregwhitworth for any input from MS.
Regarding footers, I think the web platform is missing lazy CSS images
For WebKit the current approach is to use compositor information (https://bugs.webkit.org/show_bug.cgi?id=203557).
On my 15" MacBook Pro this typically gives values around 1800px on my test page (https://mathiasbynens.be/demo/img-loading-lazy), and on an iPhone SE (simulator) around 800px.
Thanks, @rwlbuis . Can you give a summary of the approach taken in your patch, and rationale?
I must reiterate this: no matter whether you have a fixed "margin" of 100px, 300px, 800px or 1800px, a flexible/adaptive value is always much more powerful.
Think of the default situation: during the onload phase you have two images in view, but due to your extended margin value of 100-1800px you are loading, for example, 6 images in parallel. Those 4 unnecessary image downloads are literally cutting the bandwidth in half. Of course, as soon as those two images are loaded, you can start to preload those 4 images.
Also, in earlier versions of lazysizes I had much higher extended margin values than now, and a lot of developers were complaining about it (partially because they did not understand how the adaptive margin speeds up in-view images compared to out-of-view images). By cutting it down to a max of `innerHeight - 1`/`innerWidth - 1`, most complaints went away.
I agree that this needs to be specified more precisely, rather than just saying "it's based on something implemented in WebKit". WebKit changes the compositor coverage for scrollables based on scrolling velocity, in ways that could change in future. I don't think web-facing behavior should be built on top of it (sorry, I did suggest it initially, but now think that was a mistake).
When to start loading a lazy-loaded image is a key aspect of the feature, but the spec doesn't give advice beyond what is quoted above.
Hey folks. I wanted to provide some background for how we arrived on the current thresholds in Chromium in case it helps with alignment on the question "when should we consider an image is about to intersect with the viewport".
Scroll speed: we believe how fast users typically scroll on a given device matters (perhaps similar to @othermaciej's point?).
We attempted to optimize for perceived performance by setting conservative thresholds we believed would minimize how often users would quickly scroll down to an image that has not yet loaded - ideally, you shouldn't be staring at some blank pixels.
Part of this is to work around a platform limitation: you cannot easily configure a placeholder for a natively lazy-loaded image without using JavaScript. JavaScript lazy-loaders often have more flexibility here. It's often possible to, say, use a generic placeholder image, LQIP, SQIP, etc., but the platform doesn't exactly solve for this. We can reserve dimensions for the image, maybe even set some UA-specific background-color, but nothing (yet) as close to what's possible in userland.
Network quality: As captured in our implementation, we adjust thresholds based on the user's effective connection type.
Given how widespread Chromium is used in regions where network quality can be highly variable, we wanted to balance giving users on a fast connection different thresholds (i.e load more images on 4G) while keeping in mind quality and data-plan costs and loading less if you're on say, slow 2G/3G.
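For illustration, this is roughly what connection-aware margins could look like in userland via the Network Information API; the thresholds below are invented, not Chromium's actual values:

```js
function lazyLoadMarginPx() {
  // navigator.connection (Network Information API) is Chromium-only today.
  const ect = navigator.connection ? navigator.connection.effectiveType : '4g';
  switch (ect) {
    case 'slow-2g': return 8000; // slower networks get a bigger head start
    case '2g':      return 6000;
    case '3g':      return 4500;
    default:        return 3000; // '4g' and unknown
  }
}
```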
Now I personally believe Chromium's current thresholds are different enough to what users get by default with libraries like LazySizes that they can sometimes come across as unintuitive. Like @mikesherov, I often configure my JS lazy-loading libraries to use one viewport height's distance for rootMargin. The data savings here can be significant (e.g ~40-50%). In contrast, Chromium's current thresholds might get you ~10-15%.
> it feels critical to be able to give the browser additional info as to how sensitive lazy-loading should be
+1 I would separate this out into two questions: what should the defaults be? what should the API surface for supporting configuration be?
Fwiw, I would personally love to give developers control over lazy-loading sensitivity, whether this is done in a preset manner (e.g. `loading=very-lazy`) or via a model that follows IntersectionObserver and provides very granular customization.

If I was throwing longer-term questions and ideas out there... how would we feel about bringing back `<img lowsrc>` or an `<img placeholder>` attribute to address the "...avoid users who scroll too fast looking at empty pixels" problem?

> Think of the default situation: during the onload phase you have two images in view, but due to your extended margin value of 100-1800px you are loading, for example, 6 images in parallel. Those 4 unnecessary image downloads are literally cutting the bandwidth in half. Of course, as soon as those two images are loaded, you can start to preload those 4 images.
I can echo that feedback. Here's a scenario I'm seeing on a website I currently maintain (and I believe this is a common pattern):
```html
<link rel="stylesheet" href="stylesheet.css">

<!-- In viewport -->
<div style="background-image: url(hero.jpg)">
</div>

<!-- Below viewport -->
<img loading="lazy" src="product1.jpg" alt="">
<img loading="lazy" src="product2.jpg" alt="">
<img loading="lazy" src="product3.jpg" alt="">
<img loading="lazy" src="product4.jpg" alt="">
```
Browsers will start to download the stylesheet and the product images. Once the stylesheet is downloaded and layout performed, hero.jpg will start downloading but it is now competing for bandwidth with images that are irrelevant at the moment. During the initial load, Firefox's current behaviour has my preference.
> How would we feel about bringing back `<img lowsrc>` or an `<img placeholder>` attribute to address the "...avoid users who scroll too fast looking at empty pixels" problem?
Would prefer a `<picture>` element solution, preferably with some styleability based on state.
How much do other vendors care about the empty pixels problem?
We do somewhat, but existing JS solutions show empty pixels often enough that maybe having defaults that match them is good enough. Aggressive fetching seems worse than empty pixels.
I do think that giving authors some customizability of lazy loading would be reasonable, but I'm not sure what that would look like declaratively. Maybe it would be OK for authors who want something more than the default behavior to fall back to Intersection Observer.
To keep this on track, I'd like to scope this issue to getting consistency in the behavior for the feature as-is. New API for placeholder image or customizing the thresholds should be separate issues.
Scrolling vertically & horizontally for:

- Top-level page scrolling
- Element scroll container scrolling
- Nested browsing context scrolling, i.e. `iframe`

The things that an implementation could use as input for the decision:

- current network saturation - images not currently in view can be lower priority than images currently in view (assuming no current fast scroll)
The implemented behavior should not expose information about the user that the page doesn't already have access to otherwise. For example, if the implementation doesn't expose battery levels, the battery level should not be an input to the model. The "typical scrolling speed on the current device" shouldn't be so precise as to help finger-print a user.
> Scrolling vertically & horizontally for:
>
> - Element scroll container scrolling
So, I'm not sure how this would work. In particular, for the image carousel use case, using only the implicit root (which I think browser implementations do now) would mean that there is no threshold for the element scroll container case, so those images would only start loading after they are partially in view.
There is likely a performance hit to observing all scrollable elements when lazy images are used. Is there a good way to make it "do what I want" without adding more API surface? Or is the web developer explicitly setting the root the best way to solve this? Edit: filed https://github.com/w3c/IntersectionObserver/issues/431
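As a userland workaround today, a loader could create one observer per scroll container with an explicit root; a sketch, with an assumed `.carousel` selector and `data-src` convention:

```js
function onIntersect(entries, observer) {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    entry.target.src = entry.target.dataset.src;
    observer.unobserve(entry.target);
  }
}

const carousel = document.querySelector('.carousel');
const io = new IntersectionObserver(onIntersect, {
  root: carousel,
  rootMargin: '0px 100% 0px 100%', // one carousel width ahead on each side
});
carousel.querySelectorAll('img[data-src]').forEach((img) => io.observe(img));
```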
> - Nested browsing context scrolling, i.e. `iframe`
Should lazy images in iframes use the implicit root, or the images' node document as the intersection root? The former takes away the rootMargin if the origins aren't similar-origin, per IntersectionObserver spec.
Edit: in https://github.com/whatwg/html/pull/5510 we've set `root` to ~~the image's node document~~ the implicit root.
It seems that Chrome currently has (for the top-level document) logic such that if a lazy image is within ~3000px of the visible viewport, it starts loading it. Firefox starts to load it once it should already be rendering the first row or column of pixels of the image. Clearly Firefox's logic is always going to cause visibly delayed rendering. On the other hand, Chrome will often load the whole page.
How about keeping track of a preload margin per site instead? Maybe start with 500px, but keep a log of how many pixels of extra margin you had at the time the image was fully loaded; if you had more than, say, 50px of extra margin, reduce the margin. If you had less than 50px of extra margin, increase the margin. How much to change the margin at once? I'd suggest trying to target the 50px extra margin and doing a binary search towards it. For example, start loading an image by default when it's closer than 500px to the viewport. Once that image is fully loaded and the user is still 400px from the image, you can compute that the image took "100px" worth of loading time, and your preload margin should be closer to 150px (including the 50px extra margin above). Split the difference and use (500px + 150px)/2 = 325px as the new preload margin.
This would result in a pretty fast-converging algorithm needing only one integer value of memory per site. I think one value per site is required because different sites have such huge variance in loading speed. Being logically a binary search, it should be able to quickly adjust to scrolling speed changes even within a single infinite-scroll page.

The extra margin above is needed to combat the issue that different images will have different byte sizes even if the pixel dimensions are the same. With suitable tuning, the above algorithm should be pretty good at getting the images just in time, unless the user changes scrolling speed very rapidly.
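A sketch of the converging update described above, using the numbers from this comment (per-site persistence and the scroll/loading plumbing are left out):

```js
const EXTRA = 50;  // target: finish loading 50px before the image is reached
let margin = 500;  // starting preload margin for a site we know nothing about

// Call when a lazy image finishes loading, with the remaining distance (px)
// between the image and the viewport edge at that moment.
function updateMargin(remainingPx) {
  // The image "cost" (margin - remainingPx) pixels of scrolling to load,
  // so the ideal margin is that cost plus the safety extra.
  const ideal = (margin - remainingPx) + EXTRA;
  // Binary-search style: move halfway toward the ideal value.
  margin += (ideal - margin) / 2;
}

// Worked example from above: margin 500, image loads with 400px to spare ->
// cost 100, ideal 150, new margin (500 + 150) / 2 = 325.
```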
If the user is currently scrolling fast, it might be sensible to load only one lazy-loaded image in parallel, to be able to skip more images if the user scrolls so fast that all images cannot be loaded in any case. That should reduce the latency to start loading the visible images once the user slows down enough.
Thanks @mikkorantalainen, that sounds like an interesting approach. It's difficult to evaluate how well it would work in practice without an experimental implementation. It seems to me, though, that it may end up with too small a margin if the user scrolls slowly for a while and then scrolls quickly, for example. If we'd like images to be available when users quickly scroll a screen length or so, I think the implementation needs to work from the assumption that the user can do so at any time.
Sorry for the late reply here, I'd just like to add some more explanation for Chrome's choices of thresholds to what zcorpan and Addy mentioned earlier.
Chrome currently uses relatively conservative thresholds, as other folks have mentioned above - typically 3000px on a fast network, and larger thresholds on slower networks (since the images are expected to need longer to load in). These current thresholds used for loading=lazy are the same ones that were developed for the Automatic LazyLoad behavior that Android Chrome users who've turned on Lite Mode will see, which attempts to lazily load page content where suitable (even if it's not marked loading=lazy) in order to reduce data usage and speed up critical content.
The main regression metric that we've focused on in Chrome for these thresholds is what we're calling image visible load time, which measures how long an image is in the viewport before it finishes loading. The goal was to choose thresholds large enough that we can minimize visible load time regressions, such that typically the user experience would match what they'd see without lazy load, plus the data savings and speedups of critical content.
The initial thresholds are purposely overly conservative, since that way the user experience errs more on the side of matching what users would see without any lazy loading.
I am experimenting with more aggressive thresholds (1250px on 4G-speed networks) that get some additional data savings without any significant regressions in visible load time. I'm hoping to launch these more aggressive thresholds for Chrome soon.
I've also experimented with even more aggressive thresholds (750px on 4G-speed networks), but at that point the visible load time regressions start to become more noticeable. I've also experimented with using less conservative thresholds for slow networks (e.g. 2G-speed networks), but from the data so far, it looks like there isn't much room to get more aggressive for slow networks.
I've implemented my earlier comment in https://github.com/whatwg/html/pull/5917 except not including this:
> current network saturation - images not currently in view can be lower priority than images currently in view (assuming no current fast scroll)
This has to do with fetch priority, and I think it is a bit orthogonal to this issue. Image priority could depend on in-viewportness regardless of the `loading` attribute. But also, the ideal logic for this could be counter-intuitive: if the user scrolls while 3 images that are currently in view are still loading, and the "next screen" will have 3 other images that will be in view when the scroll is done, it would be better to fetch the new images with higher priority.
The issues in IntersectionObserver that are related to this issue include https://github.com/w3c/IntersectionObserver/issues/431 (a `rootMargin` that applies to all scroll containers).
This was discussed a few days ago in the WHATWG TPAC breakout session. Minutes at https://www.w3.org/2020/10/26-whatwg-minutes.html#lazy
zcorpan: want to discuss different approaches between browsers with regard to when they're going to load the image. Specifically, the rootMargin on the IntersectionObserver. E.g. Firefox uses 0 rootMargin. Chromium uses a network-dependent rootMargin, 1250px to 8000px.
zcorpan: Open questions: 1. Are people happy with the Chromium behavior? 2. There are suggestions in the HTML Standard about what information to consider.
emilio: Firefox update: currently shipping 0 margin default (but user-configurable). Actively looking into updated strategies with the performance team for better defaults. Developer feedback is that they like the control JS lazy-loading gives them. Maybe a different topic, but worth discussing… would the value be global, or per image, or what?
vmpstr: IntersectionObserver doesn't apply to nested scrollers. Any way to deal with that?
zcorpan: yes, this is an open issue with the IntersectionObserver spec. No non-hacky way to really ground this on IntersectionObserver. It'd be ideal for an IO to opt in to specifying a rootMargin that applies to all scrollable containers. https://github.com/w3c/IntersectionObserver/issues/431
domenic: a bit surprised we're using IO as the basis given this and many other mismatches.
emilio: agreed, but it's getting better.
zcorpan: browsers do use IO to implement lazyloading, so probably worth keeping this layering and resolving the IO issues.
fantasai: Authors might want to adjust the rootMargin based on their guesses as to the user’s scrolling behavior, but it seems more likely that they need to adjust the timing due to differences in resource sizes. So maybe providing hints as to the size of each resource would be more useful more of the time (and would avoid interfering with user prefs or UA smarts as to scrolling behavior and network speed/latency).
emilio: need more data on what authors need, what they’re doing now
zcorpan: some JS libraries allow per-image customization. Most seem to have small rootMargin values (but that often results in the images not being loaded by the time they're seen).
There will be another TPAC breakout session tomorrow (30 October 14:00–15:00 UTC) to discuss changes to IntersectionObserver to better support lazy-loading use cases.
https://www.w3.org/2020/10/TPAC/breakout-schedule.html#intersectionobserver
https://html.spec.whatwg.org/multipage/images.html#updating-the-image-data:lazy-loading-attribute
https://html.spec.whatwg.org/multipage/rendering.html#intersect-the-viewport
When to start loading a lazy-loaded image is a key aspect of the feature, but the spec doesn't give advice beyond what is quoted above. Right now, different implementations do different things: Chromium starts loading early (I think currently 3000px to 8000px before entering the viewport, depending on effective network speed and latency), Gecko and WebKit start loading late (when at least 1px is visible). See https://www.ctrl.blog/entry/lazy-loading-viewports.html -- they argue that the implemented extremes are too early and too late; nobody has the goldilocks "just right" behavior, yet.
From my experiments, it seems Chromium only applies the "margin" for top-level page scrolling. For images that are in scrollable elements, or in iframes, the loading starts when the element is at least 1px visible. The spec doesn't differentiate between different cases of "about to become visible". The element scroll container case is common for image carousels.
See this demo: https://lazy-img-demo.glitch.me/
To view the same demo in an iframe: https://glitch.com/edit#!/lazy-img-demo - click "Show" and then "Next to The Code".
I'm curious what JS libraries that implement lazy-loaded images do. Have they iterated on this, and know something we could apply here?
Usually, details like this are left to the UA to optimize. However, I think it's important to get some consistency in implementations for web developers to be able to use the feature and know that browsers won't load all images anyway (because their scrollable area is smaller than the browser's lazy margins) and won't load images too late, resulting in users always seeing images load after they're within the viewport.
cc @domfarolino @bengreenstein @emilio @smfr @othermaciej @rwlbuis