Open mreinstein opened 10 years ago
Yes, I often think about it. Give me some time to answer :-D
Unfortunately, I don't have as much time to answer this as I would like. Here are some points: I'm not 100% sure whether I will do an entirely new version with a new API, but if I do, here are some thoughts:
About your two other points, I'm not 100% sure what you want here, especially with your second point. You say, for example: "I dont need to dynamically load my shims over the internet." What do you mean here exactly? Coming from capability-based loading, we always want to / have to conditionally load script A for device y and script B for device x. How would adopting CommonJS/AMD/UMD change this exactly? What would a project setup look like? Actually, I want to create a self-contained library without any dependencies, so only UMD would be possible.
There is a common request to ship only the files and code that are actually used in a project. I will set up a GitHub repo with a new file/directory structure in the future, but I might keep this atomic inside the project setup; I don't know whether I would expose it on the build site. I know that some people see this as overhead, but I have actually packed those features into reasonable groups, and the file-size overhead was measured and found to be mostly irrelevant. Additionally, if I still want to be able to load things conditionally, I have to think about the number of initial requests. A lot of features like fieldset[disabled] are especially important for building complex forms that conditionally validate fields depending on the checkedness/selectedness of an option/radio button/checkbox. It's just good that this is part of the standard form features; people only need to know about it and how to use it.
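The fieldset[disabled] behavior mentioned above can be modeled in a few lines. This is a pure-JS sketch of the standard HTML rule, not webshim code; plain objects stand in for DOM nodes here, and real code would consult the DOM.

```javascript
// Model of the HTML rule: a control inside a disabled fieldset is barred
// from constraint validation, so a conditionally disabled branch of a
// form never blocks submission.
function willValidate(control) {
  return !control.fieldsetDisabled; // disabled subtree: skipped entirely
}

function isMissing(control) {
  // "required but empty" only counts while the control participates
  // in constraint validation
  return willValidate(control) && control.required && control.value === '';
}
```

Toggling the disabled attribute on one fieldset therefore switches validation on and off for a whole branch of the form at once, which is exactly why it is so useful for conditionally validated fields.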
One thing that I really still want to have in the next version is the separation of polyfill <-> extension on the one hand and UI <-> API on the other, in conjunction with a conditional and deferred loading system.
Here is one small example: If you load the following site http://afarkas.github.io/webshim/demos/demos/mediaelement/responsive-mediaelement.html with a modern desktop browser, you will see something like this in your network panel:
As you can see, webshim loads 9 files with a total of 38kb. If you compare this to a solution like videojs (just 1 file with 20kb), you would think, "oh my god".
The filesize difference of 38kb vs. 20kb is mainly due to 3 things:
But if you look deeper at how those things are loaded, you will see that webshim only loads about 8kb initially and defers the rest, mostly until after the onload event. Although those files are deferred, the UI looks "fully ready" to the user and the API is fully implemented within those first 9kb.
But if you use a smartphone to load the exact same page, you will get the following picture in your network panel:
As you see, we now only load 4.5kb initially to have a full unified API for HTML5, flash and youtube.
And if I used the same setup but didn't have a video element on the page, the feature loading would adapt to this scenario as well.
This is one great thing about webshim, among others: a developer simply doesn't have to care about lazy or conditional loading, because webshim does it automatically for him. So I can implement larger feature sets without slowing down network performance, and a developer can simply use those features.
wow, great write-up, thanks Alexander! I'll add notes inline to hopefully clarify some of the things I mentioned.
- Drop jQuery and Modernizr
- Drop IE8 support
- Use defineProperty instead of the current solution ($.prop proxying)
- MutationObserver (and other techniques) instead of the current solution ($.updatePolyfill, $.appendPolyfill)
- Development and documentation have to be done in parallel
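The defineProperty point in that list can be sketched like this. It is illustrative only: webshim would define such getters/setters on element prototypes, while here a plain object stands in for an input element.

```javascript
// Instead of proxying reads/writes through $.prop(), define the missing
// property directly, so plain property access works on the shimmed object.
var el = { value: '' }; // stand-in for an <input type="number"> element

Object.defineProperty(el, 'valueAsNumber', {
  get: function () { return parseFloat(this.value); },
  set: function (n) { this.value = String(n); },
  configurable: true
});
```

The win is that calling code can use `el.valueAsNumber` exactly as in supporting browsers, instead of going through a library-specific accessor.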
All great ideas IMO! :)
"I dont need to dynamically load my shims over the internet." What do you mean here
webshim expects to load things at run time, based on the capabilities of the browser. This is complicated in production, where the code goes through an optimization pipeline (minification, serving assets from a CDN, etc.)
In my experience, the heaviest use of bandwidth on production sites (desktop and mobile) is images. 2-3 images on a page outweigh all of the combined, minified, compressed JavaScript by a lot. Because of this, I think serving optimized code bundles for my specific user agent is overkill: the download-time difference isn't significant because the "elephant in the room" is the images, and it results in much simpler runtime logic. It might even be more performant to fetch one bundled pile of JavaScript all at once, rather than load, check capabilities, and then load the rest. It means 1 request instead of 2 in series.
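The "1 request instead of 2 in series" point is just critical-path arithmetic; the numbers below are illustrative, not measurements.

```javascript
// Rough model: each serial round trip adds a full RTT before the code runs.
function criticalPathMs(rttMs, serialRequests, transferMs) {
  return rttMs * serialRequests + transferMs;
}

var bundled = criticalPathMs(100, 1, 20);  // one bundle, some dead code
var twoPhase = criticalPathMs(100, 2, 15); // detect first, then fetch the rest
// bundled: 120ms, twoPhase: 215ms -- shipping ~20kb of unused shims is often
// cheaper than paying an extra round trip on the critical path.
```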
How would adopting commonjs/AMD/UMD change this exactly?
I'd prefer that any polyfill not declare its own module-loading strategy, and instead use the facilities in the project. This differs from project to project: I've worked on some projects that use requireJS, some that use browserify, and on old-fashioned JavaScript code that puts everything in the global namespace. They pretty much all have their own loading strategy, but webshim defines its own. I think removing this would ease integration and simplify the project architecture.
What would a project setup look like?
I'd love to see webshim split into several (perhaps dozens) of tiny modules. Each would be responsible for doing just one thing, and would be published on npm. They'd use UMD, so regardless of whether your project is commonjs, amd, or window globals, the modules would be usable and interoperable.
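A UMD wrapper for one such tiny module might look like this. The module name shimRequired and its feature test are invented for illustration; only the wrapper pattern itself is the point.

```javascript
(function (root, factory) {
  if (typeof define === 'function' && define.amd) {
    define([], factory);              // AMD (requireJS)
  } else if (typeof module === 'object' && module.exports) {
    module.exports = factory();       // CommonJS (browserify / node)
  } else {
    root.shimRequired = factory();    // plain browser global
  }
}(typeof self !== 'undefined'
    ? self
    : (typeof globalThis !== 'undefined' ? globalThis : this),
  function () {
  // hypothetical capability check: does `required` exist on a fresh <input>?
  function needsShim(doc) {
    return !(doc && 'required' in doc.createElement('input'));
  }
  return { needsShim: needsShim };
}));
```

The same file then works unchanged whether the consuming project uses a CommonJS bundler, an AMD loader, or a plain script tag.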
The first step would be figuring out how to decompose webshim into these small pieces. For example, could form support be split up? If I wanted to only shim the required attribute on form elements, perhaps that could be one module; perhaps the pattern attribute could be another. Maybe both of those modules rely on several other common modules related to form checking/processing. The nice thing about this approach is you can pick and choose various pieces of webshim and include only the features relevant to your app. I suspect it would also make maintenance easier, because small modules that do one thing are much easier to replace, improve, document, etc.
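A sketch of that decomposition: two tiny shims sharing one common helper. All module and function names here are invented for illustration; real webshim internals differ.

```javascript
// Shared helper module both form shims would depend on.
var formCommon = {
  valueOf: function (el) { return el.value || ''; }
};

// One module: checks the `required` attribute.
var shimRequired = {
  check: function (el) {
    return !el.required || formCommon.valueOf(el) !== '';
  }
};

// Another module: checks the `pattern` attribute.
var shimPattern = {
  check: function (el) {
    if (!el.pattern) return true;
    // per the HTML spec, pattern must match the entire value
    return new RegExp('^(?:' + el.pattern + ')$').test(formCommon.valueOf(el));
  }
};
```

An app that only needs pattern support would then pull in just shimPattern and the common helper, nothing else.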
IRT @mreinstein 's clarification, I'd just like to say that, although I use nodejs for my own work, I prefer that Webshim NOT require npm. Not all web developers are willing/able to use Node, and it would be difficult to sell Webshim to those folks. Part of what makes Webshim so great is that a developer can just download and extract the ZIP, include polyfiller.js, and write a single webshim.polyfill() call to include what they need on a page. Anything more complicated than that discourages the noobs and reduces the Webshim audience.
While I'm chiming in, I'd also like Webshim to never drop support for IE6 or any outdated browser. The whole point of a polyfill is to support the dinosaurs. It might be ideal to add an option that specifies which browsers to support, but I feel that the default should be to go back as far as possible.
Anything more complicated than that discourages the noobs and reduces the Webshim audience. I just wanted to echo this sentiment. Maybe there's a way to both satisfy advanced users like @mreinstein and noobs. For me the appeal definitely was the "it just works" aspect and I can see Webshim rise to very high popularity thanks to this.
I prefer that Webshim NOT require npm
@AMA3 I think you misunderstand. No one is saying we should require npm to use webshim, only to build it from source. Webshim should be comprised of small components, each handling one thing. We can then publish those modules on npm for use individually, or you can download a distro off of the main webshim website.
I'd also like Webshim to never drop support for IE6 or any outdated browser.
I think supporting old browsers is fine, but there's also a cutoff point where it becomes too much baggage to support. I think the right way to do this is to do something like jquery does; keep older browser support in 1.x, and have a leaner 2.x branch that supports newer browsers but is much smaller.
The whole point of a polyfill is to support the dinosaurs.
The purpose of polyfilling is to support a feature using the native specification for whatever browsers your audience uses.
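That definition is worth making concrete: detect first, and only implement the spec'd behavior when it is missing. String.prototype.repeat is used below as a neutral example of the pattern; it is not a webshim module.

```javascript
// Standard polyfill shape: guard on a capability check, then implement
// the native specification's observable behavior.
if (!String.prototype.repeat) {
  String.prototype.repeat = function (count) {
    'use strict';
    var n = Math.floor(count);
    if (n < 0 || n === Infinity) throw new RangeError('Invalid count value');
    var str = String(this);
    var out = '';
    // build by doubling: O(log n) concatenations
    while (n > 0) {
      if (n & 1) out += str;
      n >>= 1;
      if (n) str += str;
    }
    return out;
  };
}
```

In a browser that already supports the feature, the guard makes the whole block a no-op, so "dinosaur" support costs modern users almost nothing at runtime.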
@mreinstein You're right, I misunderstood. I'm certainly OK with adding NPM support if it's not required. I also suggest making certain that the easiest path is clearly stated first in all docs :)
I agree about the baggage but I think Webshim already solves that problem by conditionally loading only the necessary polyfills, no separate versions a la jQuery 1.9/2.0 required.
Of course there is no point to supporting browsers that the audience doesn't use, but again, if the support is conditionally loaded and was already there from the beginnings of Webshim, then I don't see a penalty to retaining that support, and I see a certain benefit.
I think Webshim already solves that problem by conditionally loading only the necessary polyfills, no separate versions a la jQuery 1.9/2.0 required
Yes, that's true. My problem with this approach is two-fold:
I'm not the maintainer and it's not my call, but I want to see webshim move away from being a large monolithic package. It's designed the way jquery was 8 years ago. Not to say that's bad, but we shouldn't build apps that way any more; things like ES6 module loading, browserify, and requirejs are a testament to this. I want this library to remain relevant into the future, and I think this simplification/modularization is one way to make the package more useful in modern web development.
Webshim is an amazing polyfill library, but its value comes from the quality of the polyfills, not the plumbing it uses to load itself into memory. I want to see a way to modularize it into small, reusable pieces, each of which does one thing well.
Hmmm I think I disagree: I think the plumbing is what makes Webshim so wonderful. Most task-specific polyfills that I've encountered require a lot of configuration or at least a lot of separate scripts and links to load. The beautiful thing about Webshim is that it requires only one script (in addition to its dependencies on modernizr/jquery) and one single JS command (webshim.polyfill()) and you're done.
Now, under the covers, I'm not opposed to allowing different loader modules and other low-level customizations, and I'm also not opposed to allowing power users to select which Webshim polyfills to include in their own projects and to decide how to include them. I also support the notion that Webshim have a little logic to detect whether a loader is already present and either use it or load its own, and even to load modernizr.js and/or jquery.js if needed.
I will add some ideas, including examples, here over time. But to be clear: Webshim 2.x is really something for 2015. Maybe I will start earlier, but this won't be stable. And don't get too passionate ;-).
Something I can not resist: although the current webshim repo is treated as a monolithic package, the architecture isn't, and webshim 1.x is developed with modules in mind. Webshim doesn't use AMD or CommonJS as a module format, because the normal optimization tools for those module formats are extremely bad when it comes to capability-based/conditional loading. This is something you can't blame webshim for; it's a tool problem. Developing in small pieces and then concatenating everything into one big file is just a bad performance workaround by most build systems. I think webpack seems interesting here, but as you said, webshim should not force a developer to use certain tools; it should simply work. In fact, ES6 packages are another "workaround" for the ES6 modules performance problem, but they would be better suited for webshim, because they can work much more like a combohandler service (which is the way optimizing webshim would work best). By the way, jsDelivr supports combohandling, and it would be interesting to create a polyfiller.js which supports it.
This means I invented a different format, because it's much simpler to pre-optimize. You are right that webshim dictates some loading strategies, but this doesn't mean that it doesn't work with AMD/CommonJS; it simply works side by side. And it works better as it is done now than leaving it to the normal r.js optimizer, for example.
As a small pro tip: there are files in webshim, depending on feature and/or configuration, which are not loaded conditionally and don't need to be loaded deferred. For example: if you use the mediaelement feature, you can simply include or require 'mediaelement-core' and 'swfmini' directly. In case you are using the forms feature, you can directly include 'form-core'. There are some other files, depending on your configuration (especially with replaceUI: true and customMessages: true). You can have a look at optimize-polyfiller.js. If you do this, those files go into your normal "optimization pipeline", which can indeed give you a speed boost (if the optimization is done well), but all other files are better handled the predefined webshim way.
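That tip can be expressed as a simple lookup. The file ids come from the tip itself; the mapping function is purely illustrative and is not the real optimize-polyfiller.js logic, which covers more options.

```javascript
// Which core files can safely go through the normal optimization pipeline,
// given which webshim features a project uses; everything else is left to
// webshim's own conditional/deferred loader.
function filesToBundle(opts) {
  var files = [];
  if (opts.mediaelement) files.push('mediaelement-core', 'swfmini');
  if (opts.forms) files.push('form-core');
  // replaceUI / customMessages pull in further files -- see optimize-polyfiller.js
  return files;
}
```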
Webshim 2.x is really something for 2015. Maybe I will start earlier, but this won't be stable.
I'm not implying any urgency or intend to push a schedule. Just wanted to collect some thoughts on what a 2.x could look like, and give feedback on what I'd personally love to see from my own selfish perspective. :)
webshim 1.x is developed with modules in mind
It was also developed before there was a viable mainstream module system. Now that there are several (es6 modules, commonjs, amd) would you consider moving to one/some of those? requirejs is already very similar to your strategy.
This is something you can't blame webshim for.
I definitely don't mean to imply any blame, at all. Webshim was created before several of these semi-standards were developed. I think it's a really clever system.
Develop in small pieces and then concat everything into one big file is just a bad performance workaround by most build systems
I used to think that too, which is why requirejs/AMD seemed more appealing at first. The idea that my dependencies load at run time as needed is appealing on the surface. But for most production web sites, the total download of images dwarfs the javascript payload. Having a fancy dependency loader that pulls in modules at runtime is overkill when a site has 600kb of image data and 40kb of compressed, minified javascript. In that case, it's far better to just load the extra data up front, even if half of it (20kb?) isn't even used.
but this doesn't mean that it doesn't work with AMD/CommonJS; it simply works side by side. And it works better as it is done now than leaving it to the normal r.js optimizer, for example.
yes, r.js is a pain, but don't you think having to deal with 2 side-by-side loader strategies is kind of painful? Webshim's loader is well designed but I still have to deal with it, and it's solving a problem that I don't have. I don't need all of that infrastructure to conditionally load 40kb of javascript, even on a mobile experience. requirejs and webshim both have opinions about the best loading strategy and I just have to live with them or rip out the guts to work with my stuff.
@mreinstein
Your image argument is indeed a strong one. Will come back to this later. ;-)
@aFarkas just wanted to add I really admire all the work you've put into this library. It's people like you that keep the web running. I'm sure whatever you decide in terms of architecture/changes will be an improvement either way. Just giving my thoughts from a very limited perspective.
Want to weigh in here: having a second module loader, one we don't have control over, is additional complexity for a problem we don't have. It would be much better for us to package the shims we would use, even if they aren't actually needed in each browser.
I'm currently developing something that might be interesting for you, but progress is slow, so don't expect it to be ready soon. I think the first version will be there in 6 months.
This said, webshim doesn't really add much complexity. The only problem I know of is that there are CM systems messing around with URLs. And to be clear, those systems often have workarounds or exceptions for the most commonly used relative asset referencing and loading (i.e. loading background images from a stylesheet). Those systems do not work with AMD modules either, if the modules aren't pre-packaged by another build system.
To be clear, it's quite annoying and a big fail by those "systems": every web standard that needs to load assets uses relative referencing, i.e. it is built upon this standard. And if you think of ES6 modules or HTML imports, you know that there are new standards coming that involve asset loading.
The only problem I know of is that there are CM systems messing around with URLs. And to be clear, those systems often have workarounds or exceptions for the most commonly used relative asset referencing and loading (i.e. loading background images from a stylesheet).
CM systems usually do that for a reason, like far-future caching, which is best practice IMO, and I doubt that this will change anytime soon. Relative asset loading from JS and CSS is a little bit different: scripts may quite easily and safely manipulate URLs in stylesheets (which in 99% of cases are background-image URLs and fonts), while manipulating JS files is much more complex and in some cases impossible (as they may be created dynamically from different variables).
Even with future technologies like HTTP/2 in mind, I'd still prefer a system where everything is served as one package (which may optionally be custom-built with npm, locally or on the server). It might not be a perfect solution, as some JS is served that is never used, but I strongly doubt that further optimizing it (and loading only the necessary parts from the client side) is worth the inflexibility/limitations.
Btw, how many well-established libraries like jQuery have conditional loaders? They are usually not customized at all, as the performance impact is so negligible.
Would be nice if the date inputs automatically put in the separator when needed. Much like the browsers that have this feature built in.
@aFarkas have you given thought to what a next gen version of webshim would look like? A few loose ideas:
Anyway I hope you don't take these as demands, was just hoping to hear your thoughts on a 2.x version, and share some things that I'd find very useful.