str8c / ngc

useful "graphics/game" core, provides portable OpenGL and input

FASM browser proposal #1

Open hsimons opened 9 years ago

hsimons commented 9 years ago

There are dozens of these "browser shells" that just change the UI and do nothing about the core rendering engine itself. So-called "lightweight" browsers based on WebKit appear every now and then (uzbl, vimprobable, xombrero, etc.), and all of them are useless: they use WebKit, which means roughly the same amount of bloat as Chrome and Safari, just with a different interface. And even the ones that attempt to be small, like Dillo or NetSurf, are far from it.

That's why we need a new browser: one that isn't controlled by some large organisation, and one compatible enough to view most information-oriented sites, maybe even some app-sites like YouTube or Fecesbook. Then figure out how to market it so that it appeals to the masses and spreads widely. Encourage forks and customisations (it should be public-domain), so we might end up with several dozen different versions all based around the same core, but with subtly different UIs, extra features, etc.

This is what can start the revolution: when the "web developers" realise that almost everyone's browser is slightly different, and that everyone likes their version because of those differences and is unwilling to change, hopefully they will focus more on content instead of styling, reduce unnecessary use of features for the sake of using them, and make the Internet more accessible for all.

Chrome gained market share because it was faster, easier to use in some ways, and still displayed existing pages well. What will be the "killer feature(s)" of The Next Browser? Being smaller and faster, completely public-domain and open-source, with an easy-to-use but also powerful UI, is what I'm thinking of.

A browser that doesn't handle at least most of HTML5 and JS won't be very interesting even if it's much less bloated, because most people will just see it as a theoretical exercise. A browser that works with most websites out there now, including the JS-heavy stuff, on par with major browsers like FF and Chrome, but is also much smaller and more flexible, will get noticed.

This is not about finding better protocols and web standards but about better implementations of them. I'm more interested in redefining attitudes towards software complexity and engineering than in reinventing the Internet, by taking a relatively complex standard and making a simple implementation of it. HTML/CSS/JS is complex, but my main argument is that the complexity is far less than what contemporary implementations lead one to believe. Or perhaps that complexity comes from attempting to handle edge cases that are not at all important in reality.

What I'm aiming at is 1MB for HTML5 + CSS 2.1 (maybe some of 3, we'll see) + ECMAScript 5.1. This isn't like NetSurf or Dillo or any of the other small browsers out there: it's going to be far smaller than anything else, but at the same time more featured and compatible with more websites. It's designed to be the simplest thing that can possibly work.

This is also going to be the web browser that puts YOU in control. Per-site/per-domain/per-path settings for security and privacy; a UI that doesn't treat users like idiots by hiding everything; total control over the JS execution environment and rendered page contents (although I don't really want to turn it into a full interactive HTML editor); choose which plugins you want to run on which pages, and what they can do.
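
To make the per-site/per-domain/per-path idea concrete, here's a minimal sketch of a most-specific-match settings lookup in C. The rule format and all names are illustrative assumptions, not a finished design:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    const char *domain;   /* "example.com"; NULL matches any domain */
    const char *path;     /* "/app/" prefix; NULL matches any path  */
    bool        allow_js;
    bool        allow_cookies;
} site_rule;

/* Rules ordered most specific first: path > domain > global default. */
static const site_rule rules[] = {
    { "example.com", "/app/", true,  true  },  /* app pages need JS       */
    { "example.com", NULL,    false, true  },  /* rest of the site: no JS */
    { NULL,          NULL,    false, false },  /* global default: lock down */
};

const site_rule *lookup_rule(const char *domain, const char *path)
{
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++) {
        const site_rule *r = &rules[i];
        if (r->domain && strcmp(r->domain, domain) != 0)
            continue;
        if (r->path && strncmp(r->path, path, strlen(r->path)) != 0)
            continue;
        return r;   /* first (most specific) match wins */
    }
    return NULL;    /* unreachable: the last rule matches everything */
}
```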

The core of the complexity is the HTML/CSS rendering engine. HTTP and everything else is auxiliary, so you can use what already exists; most OSs have a network and graphics stack.

JS actually takes up the bulk of the bloat. It is automatic memory management, which means it MUST use some sort of garbage collection. You can still keep memory usage sane by implementing an exponential collection strategy: start off with a small limit like 1MB. Once memory usage hits the limit, run GC; you should have free space left over. If you don't, double the limit to 2MB and allocate. Keep going the same way: when the doubled limit fills up (2MB, then 4MB, ...), run GC again and double again if collection doesn't free enough. If memory usage drops below 1/4 of the current limit after any GC run, shrink (release memory back to the OS). I was thinking of mark-compact: http://en.wikipedia.org/wiki/Mark-compact_algorithm This allows constant-time allocations and doesn't waste half the memory doing so (unlike a semispace copying collector). Essentially the heap becomes a huge dynamic array that gets resized in amortized constant time.

Rendering a document is overall a complex task, but if you can consider all that complexity in large pieces and condense them into a simple algorithm, the code does not need to be complex. In contrast with the traditional notion of breaking problems into simpler pieces and then combining them to create a more complex whole, I'm taking complex pieces of the problem and combining them to create a simple whole. It's somewhat like procedural generation in the demoscene.
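
A minimal sketch of that trigger/resize policy in C, assuming hypothetical gc_collect(), heap_used() and heap_release() hooks into the engine; only the limit bookkeeping is shown, not the collector itself:

```c
#include <stddef.h>

#define INITIAL_LIMIT (1u << 20)           /* start with a 1MB limit */

static size_t heap_limit = INITIAL_LIMIT;

extern size_t heap_used(void);             /* hypothetical: bytes in use */
extern void   gc_collect(void);            /* hypothetical: mark-compact pass */
extern void   heap_release(size_t limit);  /* hypothetical: return pages
                                              above `limit` to the OS */

/* Call before each allocation of `size` bytes. */
void maybe_collect(size_t size)
{
    if (heap_used() + size <= heap_limit)
        return;                            /* still under the limit */

    gc_collect();

    /* Collection didn't free enough? Double until the request fits. */
    while (heap_used() + size > heap_limit)
        heap_limit *= 2;

    /* Usage fell below 1/4 of the limit? Halve it and give pages back. */
    while (heap_limit > INITIAL_LIMIT && heap_used() < heap_limit / 4)
        heap_limit /= 2;

    heap_release(heap_limit);
}
```

Because mark-compact slides all live objects to the bottom of the heap, the allocator itself can stay a constant-time pointer bump into the free region above them.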

Random idea time... configurable relayout/repaint intervals. Those pages full of blingy animated shit are horrible for power consumption because the browser is forced to constantly relayout/repaint. There are even whole JS libraries to batch DOM updates to avoid this, for the stupid apps that can't do it right themselves... How about the browser itself throttling things - you don't need to compute and repaint as fast as the scripts are asking, and since delays up to ~100ms are barely noticeable, script-driven repaints could by default be limited to one every 100ms. Those wanking "web designers" aren't going to like this since it makes animations look jerky, but who gives a shit... that's why it should be configurable - if you really, really want to play some inane JS game or something else that needs ultra-fast repaints, you can turn it up (and watch your power consumption go up, and battery life go down as a result.)

Ditto for "smooth scrolling" (one of the worst ideas ever conceived) - unless I'm grabbing a scrollbar and dragging it, I'm not scrolling one pixel at a time, so why the fuck do they think I want the window repainted every time it scrolls 1 goddamn pixel!? If I'm scrolling by 10 pixels, just move the existing content by 10 pixels and repaint that 10-pixel gap. What a ridiculous waste of (mostly GPU) power. (I could rant on and on about the idiotic trend of making UI elements behave like physical objects - with the exception of "inertial scrolling", which is genuinely useful on a touchscreen but only without that annoying "bounce-back" or "friction" - but that's not so browser-related....)
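
As a rough illustration of both ideas (assuming a single-threaded event loop and hypothetical now_ms(), do_repaint(), blit() and paint_rect() hooks), script-side invalidations just set a dirty flag that the main loop services at most once per interval, and scrolling only paints the exposed strip:

```c
#include <stdbool.h>
#include <stdint.h>

static uint32_t repaint_interval_ms = 100; /* user-configurable throttle */
static uint64_t last_repaint_ms;
static bool     dirty;

extern uint64_t now_ms(void);              /* hypothetical monotonic clock */
extern void     do_repaint(void);          /* hypothetical relayout+paint pass */
extern void     blit(int dx, int dy);      /* hypothetical framebuffer shift */
extern void     paint_rect(int x, int y, int w, int h);

/* Called whenever a script mutates the DOM or styles. Cheap: no layout,
 * no painting, just a note that the page needs redrawing eventually. */
void invalidate(void)
{
    dirty = true;
}

/* Called once per main-event-loop iteration. */
void repaint_if_due(void)
{
    uint64_t now = now_ms();
    if (dirty && now - last_repaint_ms >= repaint_interval_ms) {
        do_repaint();
        last_repaint_ms = now;
        dirty = false;
    }
}

/* Scroll by `dy` pixels: shift what's already on screen and repaint
 * only the newly exposed strip, instead of redrawing the whole view. */
void scroll_by(int dy, int view_w, int view_h)
{
    blit(0, -dy);
    if (dy > 0)
        paint_rect(0, view_h - dy, view_w, dy);  /* gap at the bottom */
    else
        paint_rect(0, 0, view_w, -dy);           /* gap at the top */
}
```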

Time to look at CSS parsing/tokenising again in more detail... http://dev.w3.org/csswg/css-syntax/#tokenization What a mess... I can see no reason for "colon-token", "semicolon-token", and "comma-token", amongst others, to be separated out when they could've just been folded into "delim-token". Ditto for "whitespace", since it gets skipped over anyway - no sense in pushing that up to the parser. This needs to be transformed into a more usable set of states first.
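
A minimal sketch of what that collapsed token set could look like in C (the names and the tokenizer skeleton are my own, not from the spec or any codebase): ':' , ';' and ',' all become plain delimiters, and whitespace is consumed inside the tokenizer so the parser never sees it.

```c
typedef enum {
    TOK_EOF, TOK_IDENT, TOK_STRING, TOK_NUMBER, TOK_HASH,
    TOK_LBRACE, TOK_RBRACE, TOK_LPAREN, TOK_RPAREN,
    TOK_LBRACKET, TOK_RBRACKET,
    TOK_DELIM            /* ':' ';' ',' and every other single char */
} css_token_type;

typedef struct {
    css_token_type type;
    char           delim;  /* the character, when type == TOK_DELIM */
} css_token;

/* Returns the next token, advancing *p past it. Whitespace and the
 * colon/semicolon/comma special cases from the spec collapse away. */
css_token css_next_token(const char **p)
{
    while (**p == ' ' || **p == '\t' || **p == '\n' ||
           **p == '\r' || **p == '\f')
        (*p)++;                              /* parser never sees whitespace */

    switch (**p) {
    case '\0': return (css_token){ TOK_EOF, 0 };
    case '{':  (*p)++; return (css_token){ TOK_LBRACE,   0 };
    case '}':  (*p)++; return (css_token){ TOK_RBRACE,   0 };
    case '(':  (*p)++; return (css_token){ TOK_LPAREN,   0 };
    case ')':  (*p)++; return (css_token){ TOK_RPAREN,   0 };
    case '[':  (*p)++; return (css_token){ TOK_LBRACKET, 0 };
    case ']':  (*p)++; return (css_token){ TOK_RBRACKET, 0 };
    /* identifiers, strings, numbers, hashes etc. elided for brevity */
    default:                                 /* ':' ';' ',' land here too */
        return (css_token){ TOK_DELIM, *(*p)++ };
    }
}
```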

For anyone here: please ignore the W3C. They are asshats. The WHATWG has been doing the proper standardisation work since '04.

hsimons commented 9 years ago

If anyone can give this concept a starting boost, it's you. Cudder is all talk and no action.