Closed: marcoscaceres closed this issue 8 years ago
The requirement is that developers should be able to control which resources are critical and which are not... not the "reading system". The whole notion of a "reading system" should be discarded, IMO, in favor of "user agent" or "browser".
Non-essential content, which is not required to be available in certain states, should have a predefined fallback that will allow the user to continue consumption (even in a potentially degraded, but author-controlled manner) (see also 4. States of a PWP).
It's trivial to do the above in the platform. Looking ahead, please do not be tempted to make this into some declarative format (i.e., some XML or JSON thing).
It is up to the reading system to support different behaviors for essential versus non-essential content, ensuring that an entire package [is] only downloaded when necessary.
Again, the above presupposes packages. Please stop having that as a presupposition as it undermines this whole effort.
It is also NOT "up to the reading system to support different behaviors", it's up to the developer to build those behaviors into the reading experience.
It is also NOT "up to the reading system to support different behaviors", it's up to the developer to build those behaviors into the reading experience.
Agreed—I think statements like this should not be included in the description of the use cases at all. The user does not care what part of the system provides support for different behaviors.
And no matter what, it is ultimately up to developers to meet the needs of their users in whatever way they think best. No new “reading system” needs to be assumed that automatically does some new magic for users. As far as I can see, all of the use cases described in this section can be addressed by developers using existing features of the Web runtime.
I must admit I do not really see the issue with the author providing information (call it a 'hint') about whether some resource is really essential or not. Clearly, the reading system... oops, the user agent (or whatever:-) can make its own (programmatic) decision whether the resource (like a specific font) can be downloaded or not, but I do not believe that it can alone make this decision in all cases without an additional hint. The example of fonts came up several times during our discussions: some of the fonts may have an aesthetic value only and can, hence, be skipped, whereas some fonts (e.g., to display some formulae) are essential and cannot be skipped. I doubt that a user agent can find this out on its own.
I doubt that a user agent can find this out on its own.
That's the point @sideshowbarker and I are trying to make. But this doesn't need a requirement, because the web platform already gives you a way of doing this (via custom cache response using the cache API, for instance).
We are trying to avoid, "oh, we totally need a new manifest format for hinting which files are special - let's get some RDF+JSON+XML all up in here".
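To make that concrete (purely illustrative; the file paths and cache name below are made up, not from any spec): a service worker that pre-caches exactly the resources the author marks as essential, in plain author-written code rather than a declarative format.

```js
// sw.js: a hypothetical service worker for a publication. The author,
// not the "reading system", decides what counts as essential.
const ESSENTIAL_CACHE = 'essential-v1';
const ESSENTIAL_RESOURCES = [
  '/book/index.html',
  '/book/chapter-1.html',
  '/book/fonts/math-formulae.woff2' // essential: formulae are unreadable without it
  // the purely decorative display font is deliberately NOT listed
];

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(ESSENTIAL_CACHE).then((cache) => cache.addAll(ESSENTIAL_RESOURCES))
  );
});
```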
(First of all, nobody is talking about RDF or XML or even JSON. Can we get that off the table?)
At this point, we are collecting use cases and requirements. If it so happens that current technology can solve this requirement, then so be it. Everybody is happy and we can move on, but that does not invalidate the requirement itself.
I must admit, but that may be only my limited understanding of the world, that I do not see how the cache API can decide, by itself, that it can safely drop a resource in a specific environment without jeopardizing the information content that the publication is trying to convey. But again, if that can be solved, fine!
(First of all, nobody is talking about RDF or XML or even JSON. Can we get that off the table?)
I can smell a new format from a mile away :) But ok.
At this point, we are collecting use cases and requirements. If it so happens that current technology can solve this requirement, then so be it. Everybody is happy and we can move on, but that does not invalidate the requirement itself.
Sure, but each requirement seems to strongly hint at a presupposition.
What would be cool would be to add after each requirement: "Currently addressed by: Web Spec X".
Where there is no "Web Spec X" found, then we've hit the standardization jackpot.
I must admit, but that may be only my limited understanding of the world, that I do not see how the cache API can decide, by itself, that it can safely drop a resource in a specific environment without jeopardizing the information content that the publication is trying to convey. But again, if that can be solved, fine!
It can't on its own... but JavaScript code that decides what gets put into, or pulled out of, the cache can make those decisions.
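To sketch the "pulled out of" half (continuing the hypothetical example above, with made-up paths): a fetch handler that serves from the cache and, for a resource the author has decided is skippable, degrades gracefully instead of failing.

```js
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) =>
      cached ||
      fetch(event.request).catch(() => {
        // Offline and not cached: the author decided the decorative font is
        // skippable, so an empty body lets a system font take over.
        if (event.request.url.endsWith('/fonts/decorative.woff2')) {
          return new Response('', { status: 200 });
        }
        return Response.error(); // anything essential fails honestly
      })
    )
  );
});
```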
@marcoscaceres
What would be cool would be to add after each requirement: "Currently addressed by: Web Spec X".
Of course. That is exactly why we suspended the work on the PWP draft: because we realized that we need a stable UCR to go back to it, eventually, to do something like that. But bear with us: one step at a time...
At this point, we are collecting use cases and requirements. If it so happens that current technology can solve this requirement, then so be it. Everybody is happy and we can move on, but that does not invalidate the requirement itself.
Sure, but each requirement seems to strongly hint at a presupposition.
(General comment) yeah, it seems to me also that this document is not just about “collecting use cases and requirements”. As I have said in other comments, it instead seems to be presupposing a very specific solution: a new packaging format—and then sort of back-formulating a set of use cases that align with the presupposition of a new packaging format.
And meanwhile the document never even once mentions Service Workers, despite the fact that we seem to already have wide agreement that Service Workers need to be a core part of the solution—I think to the degree that at this point there is an obligation on the editors of this document to explicitly identify any use cases that cannot be addressed by Service Workers or other existing standard features of the Web runtime.
In other words, it seems very strange for this document to omit any mention of Service Workers while at the same time just assuming the need for a new packaging format—a format that Service Workers may completely obviate the need for.
@sideshowbarker wrote:
And meanwhile the document never even once mentions Service Workers, despite the fact that we seem to already have wide agreement that Service Workers need to be a core part of the solution—I think to the degree that at this point there is an obligation on the editors of this document to explicitly identify any use cases that cannot be addressed by Service Workers or other existing standard features of the Web runtime.
I believe most members of DPUB know that Service Workers will likely be a large part of the solution to these use cases. But we did deliberately try to avoid mentioning specific technologies in this particular use case document. And if we slipped up, Leonard usually reminded us :)
Service workers are a bit scary for us, given that we can only hope that all major browsers will support them. We've been burned before (cough, CSS Regions, cough).
But we did deliberately try to avoid mentioning specific technologies in this particular use case document.
The document is littered with mentions of HTML tho. But I know what you mean.
Service workers are a bit scary for us, given that we can only hope that all major browsers will support them. We've been burned before (cough, CSS Regions, cough).
Understood. But note that CSS Regions is different in that it's not a progressive enhancement (though it could be used as one with CSS's @supports feature query, I guess). Service workers are progressive enhancements: the page should work if they are not there.
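In code, the progressive-enhancement story amounts to little more than a guarded registration (the sw.js path is hypothetical):

```js
// On the page: browsers without Service Worker support skip this block
// entirely and keep loading the publication from the network as usual.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').catch((err) => {
    console.warn('SW registration failed; the page still works online.', err);
  });
}
```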
I believe most members of DPUB know that Service Workers will likely be a large part of the solution to these use cases. But we did deliberately try to avoid mentioning specific technologies in this particular use case document. And if we slipped up, Leonard usually reminded us :)
Service workers are a bit scary for us, given that we can only hope that all major browsers will support them. We've been burned before (cough, CSS Regions, cough).
The problem is that what the document is doing instead puts you at even greater risk of getting burned than mentioning specific existing technologies does—in that it seems to be presupposing native UA support will emerge for an as-yet-non-existing feature of the Web runtime with zero browser-engine support now: a “packaged” publication that UAs need to “pack” and “unpack”.
We already have implementations of Service Workers in multiple browser engines, and I think I’m not going out on a limb by saying we’ll eventually see support for Service Workers in all major browser engines. But I’ve yet to see clear indications that any browser-engine implementors are interested in supporting the new “packaged publication” proposal this document presupposes support for. And if they do not, then this entire effort as currently imagined in this document is going to get burned far worse than it would risk by clearly pinning its solution to Service Workers.
@sideshowbarker Is the term "package" just off-putting? We can find another way of wording this. The idea is that a user needs to be able to download, share, transfer a "thing" (much like an app "thing"). We have been finding the word package convenient. No end user would ever use the word package. Do you have a recommendation for a better term?
@sideshowbarker As mentioned in another thread, service workers won't work for all of our use cases due to some of their design criteria that aren't aligned with ours. Since it's clear that SW's won't move and since we can't move (and achieve all of our objectives) - we need to find some other (additional!) method.
note connection to https://github.com/w3c/dpub-pwp-ucr/issues/99
@lrosenthol wrote:
@sideshowbarker As mentioned in another thread, service workers won't work for all of our use cases due to some of their design criteria that aren't aligned with ours. Since it's clear that SW's won't move and since we can't move (and achieve all of our objectives) - we need to find some other (additional!) method.
Could you elaborate on this? The service worker Jake made for us even supported the creation of a portable zip archive of the publication.
@dauwhe The biggest limitation that we have found with SW's is the requirement for https connections. This precludes the ability to have standard http links inside of a PWP (either entirely or as a mix with https).
@dauwhe The biggest limitation that we have found with SW's is the requirement for https connections. This precludes the ability to have standard http links inside of a PWP (either entirely or as a mix with https).
Being TLS-only is one of SW's most valuable features - it's not a limitation. HTTP puts users at risk and hence it's on its way out - we are trying to deprecate it from the Web.
Please see:
Additionally, iOS has stopped allowing non-secure connections in apps since iOS 9. This is an industry-wide trend.
And I am all for deprecating it - but it will be some time before every single site is able to accomplish that move.
But more important is the issue that when viewing content locally (either via a localhost server OR via file), the origin/host cannot be https! You could (under certain circumstances) have a secure context - but that is NOT the same (as you know).
And if the origin/host is http - then it can't be accessed via a SW.
@sideshowbarker Is the term "package" just off-putting?
Yes, that specific term is not helping. But just coming up with a different term isn’t going to help, because the underlying problem is the emphasis on the claimed requirement to support “ad-hoc distribution mechanisms” and the “email or put on a USB key” use case (which seem to be what create the perceived need for some form of special new package rather than just delivering content to users with the Web itself).
To me, this document in its current form—from the Portable Web Publications title and PWP acronym and dozens of references throughout to packaging—is giving the appearance of elevating that “email or put on a USB key” use case to be what the entire effort is about. And I think that if this document and the rest of the messaging from this group continues to come across that way, it is going to continue to distract reviewers’ attention away from the real problems the group is hoping to solve.
We can find another way of wording this.
As I tried to suggest above, I think a lot more than just a wording change is needed here.
The idea is that a user needs to be able to download, share, transfer a "thing" (much like an app "thing").
Downloading and transferring are very different from just sharing. Clearly a resource can be shared directly on the Web itself without it needing to be specially packaged or otherwise made “portable”.
Anyway, I don’t think there is any issue with reviewers not clearly understanding what this document is getting at, given its current emphasis. I just don’t think implementor reviewers are going to accept the portability need as a hard requirement, or be willing to implement anything to facilitate its use cases in the way they appear to be imagined by some members of the group.
And despite assertions from @lrosenthol in various issue comments here about that “email or put on a USB key” use case and “ad-hoc distribution mechanisms” being a hard requirement, it’s not clear to me that the group actually has consensus about that being true. As you’ve pointed out yourself, “Emailing a PWP to a friend could be emailing a link to a friend, not a file.”
We have been finding the word package convenient. No end user would ever use the word package. Do you have a recommendation for a better term?
I think it might help to start by using “collection of documents” (or something like that) throughout in place of “package” wherever possible.
As you note, users won’t use the word “package” or think in terms of it. Instead they think in terms of problems like “I want to be able to continue to read this Web book on my tablet even when I have no Internet connection.” And they can do that without needing to have the Web book packaged or made “portable” off the Web.
I just don’t think implementor reviewers are going to accept the portability need as a hard requirement, or be willing to implement anything to facilitate its use cases in the way they appear to be imagined by some members of the group.
Can you explain why that is the case? Why is there such animosity to this particular requirement? It seems to me that enabling web technology off the web would be something interesting to the community, as it expands the reach of the technology to more uses.
I don't understand why discussions about the File URL, which has been around since 1994, are for a later day, while discussions about Service Workers, which is a glorified local cache at best but more akin to 90s style vapor-ware, are front and center?
Service Workers, which is a glorified local cache at best but more akin to 90s style vapor-ware
You'll go nowhere with nonsensical comments like that. For the sake of discussion, please stop.
I understand the legitimate hesitation of some of us due to SW not being a final Rec yet, but as was said before, all the major browsers implement SW or are seriously considering it. It has a solid spec. It is part of the Web runtime and follows its security model. It is deployed and functional today on major websites. And it does solve several of our requirements (for instance those related to the offline/online use cases).
File URLs do none of that.
This was the only demo I could get to work - https://serviceworke.rs/offline-status_demo.html. It cycles through a couple of images and that's it. Please provide links to more sophisticated examples. Thanks.
Meanwhile, on the 5doc.org site I developed, there is sample after sample of web pages that are extracted and downloaded on the fly and function fully offline via file URL in all browsers and on iOS in a browser-based app (GoodReader). There are also plenty of examples here on GitHub of complex HTML containers that also work perfectly offline. These files/containers can be zipped and emailed to friends/colleagues. No internet is required to view the contents. This is offline content. When will service workers function like this?
I accept that maybe sometime in the future service workers might possibly be a solution that some indeterminate number of folks might want. But as demonstrated by the huge two decade success of PDF, and the failure of ebooks to even walk in its shadows, people love to have a file and to open that file in the application of their choice and on the platform of their choice, to email it to a friend and to archive it to their personal cloud. We KNOW this based on years of empirical evidence.
As such, I don't understand how all other solutions apart from service workers are so readily dismissed. Maybe this is why there is such a push to shut down all discussion of solutions apart from service workers... Any kind of comparative analysis would reveal putting all the eggs in the service worker basket is a huge risk.
I don't understand why discussions about the File URL, which has been around since 1994, are for a later day, while discussions about Service Workers, which is a glorified local cache at best but more akin to 90s style vapor-ware, are front and center?
@mac2net, I know all too well, from years of doing standards, that it can be extremely frustrating if you feel your proposal is being ignored - trust me, it's not being ignored! We are all jostling for position here, and we politely side-swipe each other as part of the process to reach consensus. When you lash out, it makes it personal and it weakens your position.
You know that at least you and I have discussed your proposal for using file:// at length elsewhere (and I know you've also discussed it with other members too). So you need to refrain from lashing out - especially because there are those of us heavily invested in Service Workers (i.e., millions of dollars and many person-years of investment I'm talking about), and because you are going against consensus about what forms the standard part of the Web Platform.
I believe we all respect your position that file:// is a candidate. And your efforts to show that it can, in fact, meet the requirements, are appreciated. The thing is that you need to give the community a chance for us to come to a consensus on what the requirements are. Your solution may totally meet the requirements - so please be cool, and hold onto your cards until it's time to put them on the table. That's coming really soon. In the meantime, help us make sure we have a great set of requirements for a great book solution for end-users.
It cycles through a couple of images and that's it. Please provide links to more sophisticated examples.
Please see: https://pwa.rocks
When will service workers function like this?
Emailing files around is a massive security risk. How can I trust that you have not modified the contents of the file? (Don't answer this now! but think about it because you will need to answer that later)
I accept that maybe sometime in the future service workers might possibly be a solution that some indeterminate number of folks might want. But as demonstrated by the huge two decade success of PDF, and the failure of ebooks to even walk in its shadows, people love to have a file and to open that file in the application of their choice and on the platform of their choice, to email it to a friend and to archive it to their personal cloud. We KNOW this based on years of empirical evidence.
You will also notice that orders of magnitude more people access sites like Wikipedia, and the rest of the Web, by using search engines. Emailing files around is mostly a workaround for when content can't be found online.
You might also like to see all the security vulnerabilities with Adobe Acrobat Reader.
As such, I don't understand how all other solutions apart from service workers are so readily dismissed.
Try this one:
If you set localStorage using file:// in one book, is that accessible in another book? Right now, yes: https://twitter.com/jiminypan/status/775040727457882112?refsrc=email&s=11
That's pretty serious. Also, as we've discussed previously, file:// breaks the same origin policy, because it can't meet the requirements of "scheme", "host", "port" - because it lacks a "host" and "port". The protocol doesn't have any GET semantics, so it breaks things like XHR, etc.
A lot of us dismiss it out of hand because we know, from 20 years' experience with it (and derivatives, like "app://", and "widget://", and "pack://", and a thousand others), how flawed it can be. It can be tiresome to have discussions about file:// over and over again every few years - and again show that it cannot be used as a solution.
Maybe this is why there is such a push to shut down all discussion of solutions apart from service workers... Any kind of comparative analysis would reveal putting all the eggs in the service worker basket is a huge risk.
This may be true, but the analysis is what we must do. The analysis of file:// would also reveal the security issues I listed above. So know from history that file:// will meet a lot of resistance - and a lot of people will not be interested in having the file:// discussion again (because of the years of effort and sunk costs already spent on it).
However, it's too early for us to be having this discussion. Let's please stay focused on the requirements.
@marcoscaceres far from lashing out, you were the source for this
discussions about Service Workers, which is a glorified local cache at best but more akin to 90s style vapor-ware
You confirmed that at the moment service workers are a cache and anything beyond that needs to be built. Isn't work on this supposed to start next month and report in 2018? http://www.w3.org/WebPlatform/WG/ That would qualify any advanced PWP-applicable implementation of service workers as vapour.
You confirmed that at the moment service workers are a cache and anything beyond that needs to be built. Isn't work on this supposed to start next month and report in 2018?
No. Far from it. Service Workers are "a thing". As are the things around it. Look at what is implemented in browsers, not possibly out-of-date pages hosted by the W3C :)
Emailing files around is a massive security risk. How can I trust that you have not modified the contents of the file? (Don't answer this now! but think about it because you will need to answer that later)
Actually, the ability for me to acquire some content, modify it, and then redistribute it is a feature of content distribution for many users & use cases! It is one of the top 3 things done with document formats such as PDF today.
You might also like to see all the security vulnerabilities with Adobe Acrobat Reader.
How about Chrome, which has three times as many, including a significant number that are still open (unlike Reader!).
And I will point out that NONE of the security vulnerabilities that have ever been found against Reader had any bearing on this conversation.
If you set localStorage using file:// in one book, is that accessible in another book? Right now, yes: https://twitter.com/jiminypan/status/775040727457882112?refsrc=email&s=11
And that's a bug in the implementation of localStorage - it's not by design. There are numerous ways to solve that in the browser/UA - but (I assume) no one has bothered.
Also, as we've discussed previously, file:// breaks the same origin policy, because it can't meet the requirements of "scheme", "host", "port" - because it lacks a "host" and "port".
That's also not true - it can meet those requirements should the implementers choose to allow it - but right now they do not. That's easy to fix, if there is a desire to do so.
The protocol doesn't have any GET semantics, so it breaks things like XHR, etc.
Same point - currently it does not, but there is nothing preventing it from doing so.
Via your suggestion I went here - https://m.flipkart.com/. I turned off my internet and clicked on a product. Hey it reloaded the page template, but alas it also told me "Something's not right! Please try again". I turned the internet back on, returned to https://m.flipkart.com/, reloaded, turned the internet off and the page did not reload.
I want to make my position 100% clear - service workers may be a perfect amazing solution for some use cases sometime in the future. But amazing technologies also fail. Maybe you heard about OpenDoc? https://en.wikipedia.org/wiki/OpenDoc
OpenDoc had several hundred developers signed up but the timing was poor. Apple was rapidly losing money at the time and many in the industry press expected the company to fail.
BTW the last time my personal computer was infected was 28 years ago. A work computer - 26 years ago. Chrome - last month.
And that's a bug in the implementation of localStorage - it's not by design. There are numerous ways to solve that in the browser/UA - but (I assume) no one has bothered.
It is by design - it's a feature: because it depends on the same origin policy. The spec states:
Each top-level browsing context has a unique set of session storage areas, one for each origin.
If file:// is the origin, then all books will be able to access each other's storage.
Given the definition of origin:
The origin of a resource and the effective script origin of a resource are both either opaque identifiers or tuples consisting of a scheme component, a host component, a port component, and optionally extra data.
Please explain to me how file:// can work with that definition? Especially when file:// neither meets the definition of an opaque identifier nor meets the criteria of having a host, port, and scheme.
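To make the tuple problem concrete, here is what the URL parser itself reports (a console sketch; the exact origin serialization for file: URLs varies by browser):

```js
const web = new URL('https://example.com:8080/books/moby-dick/ch1.html');
web.protocol; // "https:"
web.host;     // "example.com:8080"
web.origin;   // "https://example.com:8080", a usable (scheme, host, port) tuple

const local = new URL('file:///Users/me/books/moby-dick/ch1.html');
local.protocol; // "file:"
local.host;     // "" (no host, and no port either)
local.origin;   // browser-dependent; commonly "null", i.e. an opaque origin
```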
That's also not true - it can meet those requirements should the implementers choose to allow it - but right now they do not. That's easy to fix, if there is a desire to do so.
If that was easy to fix, we would have fixed it.
Same point - currently it does not, but there is nothing preventing it from doing so.
file:// doesn't have HTTP semantics. It would mean rewriting large parts of the browser. We've tried to overcome these issues in the past by having custom URL Schemes like: https://www.w3.org/TR/app-uri/
It doesn't seem to help.
If file:// is the origin, then all books will be able to access each other's storage.
ONLY IF the UA limits the definition of origin to the URL and doesn't consider other aspects. It would be quite possible to redefine the handling of file:// by a UA to ensure that each file:// is loaded in a separate (and therefore secure) origin. It simply doesn't do so today.
ONLY IF the UA limits the definition of origin to the URL and doesn't consider other aspects. It would be quite possible to redefine the handling of file:// by a UA to ensure that each file:// is loaded in a separate (and therefore secure) origin. It simply doesn't do so today.
Sure, but that would be a ton of work and massive changes internally to the browser engine: it would mean we would need to special-case these particular URLs. That's not something that Mozilla, at least, would ever be interested in doing. The costs would outweigh the benefits there.
That's not something that Mozilla, at least, would ever be interested in doing. The costs would outweigh the benefits there.
@marcoscaceres I question your financial analysis. Offline is a big market. It's huge! Apparently there are 2.5 trillion PDFs in the world. That should give you some indication of the financial opportunities. http://itextpdf.com/blog/do-you-know-how-many-pdf-documents-exist-world
@mac2net, we (Mozilla) already have a custom PDF renderer. We already have that solved for our users (it's what Firefox uses):
@marcoscaceres sorry, it isn't my intention to put you on the defensive. As I suggested previously to you, I think a systemised analysis is the way forward. In these discussions things are thrown around by everyone, and it's very difficult to follow or draw conclusions.
The Offline URL Community Group I mentioned has been set up (https://www.w3.org/community/offline-url/) and it could be a great vehicle for a comprehensive and timely study undertaken by interested parties that could be completed in 6-9 months.
Suggested agenda for Offline URL Community Group
I think having a hard look at file:// would be awesome - particularly as it relates to publications. I would highly welcome it.
OK I was just passing by but since one of my tweets is in there…
If you set localStorage using file:// in one book, is that accessible in another book? Right now, yes: https://twitter.com/jiminypan/status/775040727457882112?refsrc=email&s=11
You might want to take a look at https://github.com/IDPF/epub-revision/issues/873 and https://github.com/readium/readium-js-viewer/issues/559
P.S.: I know the wording of the epub revision issue (its title) could have been a lot better.
I'm new to this thread, but I would like to comment that I find it astounding that even the idea of "portable" is still being debated. Millions of EPUB files are downloaded or sideloaded to reading systems every day right now. e.g. just in our own relatively obscure Bluefire Reader app, in the last 30 days folks have added over 688,000 documents to their offline storage libraries. Of those, 173,000 documents were "side loaded" meaning they were manually imported via a variety of ways including email attachments. We have 130 Enterprise customers in 37 countries and I can tell you that beyond a shadow of a doubt, the "offline" and "portable" aspects of the file format are indeed requirements. For consumers as well for a wide variety of reasons. If offline and portable are not key components of PWP, then we'll have to go elsewhere to fulfill our requirements into the future.
If offline and portable are not key components of PWP, then we'll have to go elsewhere to fulfill our requirements into the future.
@BluefireMicah: they are. The only question is how to structure the architecture. The current UCR document is indeed not really clear; we are working on a new version that may make things clearer. The separate wiki page, which jotted down the main line of thought at the F2F, gives an idea of the direction.
Sorry if the discussion went all over the place... but I guess that is the nature of the beast:-)
@BluefireMicah: +1 to Ivan's comments. A substantial amount of time was spent on this logical thread in Lisbon two weeks ago. The current UCR is going through a major transformation, both to become more concise with less duplication and also to somewhat bifurcate the PWP term -- the leading P (Portable or even Packaged) from the WP portion -- with the expectation that browsers would eventually render the WP part, but perhaps/likely not natively ingest the leading P version. I agree that the P is essential to supporting the current ecosystem and providing a viable migration path for current EPUBs and their associated workflows.
I think you can fear not. If the integration of the IDPF into W3C proceeds, and this Interest Group becomes a Working Group, I cannot envision a world where the leading P is abandoned, at least for the foreseeable future.
Thanks Garth and Ivan for your comments. I worry about this effort spinning in circles. What you describe is reassuring. BTW, I fully expect our own focus and that of others to continually migrate towards more piecemeal download rather than full package, but we'll need both for many, many years to come.
@BluefireMicah wrote:
Thanks Garth and Ivan for your comments. I worry about this effort spinning in circles. What you describe is reassuring. BTW, I fully expect our own focus and that of others to continually migrate towards more piecemeal download rather than full package, but we'll need both for many, many years to come.
We still need hard justification for creating yet another format: as I mentioned earlier, I already get books I buy as ePub, PDF, Apple's format, etc. I really don't understand why we need yet another format.
Why can't we fix ePub or one of the others? They are already widely supported.
@marcoscaceres: I view a "fixing/enhancing/extending EPUB" as a very good direction -- I don't view "creating something new from whole cloth" anywhere near as favorably! Folks just complain when I pre-suppose solutions, so I just tried to state the requirement! :-)
@marcoscaceres, I agree that EPUB may be one of the starting points towards the package, trying to minimize the changes. What I would like to see, personally, is that "unpacking" such a PWP would be a bona fide WP. We will have to have a more precise definition of what that means and entails.
(By the way, "Apple's format" is EPUB, just with Apple's DRM...)
It's Saturday night so what the *&^%! We really don't know how today's technology will be used tomorrow! https://www.youtube.com/watch?v=S1i5coU-0_Q
@iherman: indeed. Not just Apple, but Google, B&N, Kobo, all Readium-based systems, and on the ingest side, Amazon too.
In the branch, I ended up merging this section with the Constituent Resources sections as they were pretty close as it was. Now the common section addresses the proper set of requirements with combined use cases. Feedback welcome!
Reworded in new draft. See especially http://w3c.github.io/dpub-pwp-ucr/index.html#constituent-resources
This again tries to enforce hard requirements where there are none. There is a business case (in the programming sense) for marking resources as critical - but such hints can be provided programmatically by, for example, putting those resources in the "REALLY_REALLY_IMPORTANT_CACHE".
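A tongue-in-cheek but workable sketch of that (the cache name is, of course, hypothetical): membership in a named cache is itself the criticality marker, so no manifest is needed.

```js
// Author code asks "did I mark this resource as critical?" by checking
// which named cache it lives in.
async function isCritical(request) {
  const cache = await caches.open('REALLY_REALLY_IMPORTANT_CACHE');
  return (await cache.match(request)) !== undefined;
}
```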