Here is a suggestion for future link handling within Crunchy (very much
thoughts in progress):
Rationale:
Currently, handling of external links is adequate, but not perfect. It
would be better if Crunchy had a chance to inspect a page before the user
attempted to load it. However, it would be prohibitively expensive to load
every link on a page before serving it.
Justification:
In the new COMET IO model, server->client communication is cheap and fast.
This makes a much closer tie-in possible between the Python back-end and
the browser front-end.
Method:
When a user clicks on a link, instead of the browser immediately loading
the relevant page, the page sends a notification to the back-end. The
back-end can then load the linked page, judge whether it needs VLAM
processing, and send a message back to the browser indicating whether and
where to redirect the user.
For instance, if the link points to example.com, then Crunchy can download
example.com and see that this particular page doesn't need processing.
Crunchy could then direct the browser straight to example.com.
If, however, the link points to somevlamtutorial.com, then Crunchy can
download that tutorial and direct the user to a locally cached (cached in
Crunchy, that is) copy of the VLAM-processed tutorial.
In this way, links in tutorials need only be analysed at "click-time" and
material could be cached locally for enhanced performance.
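The click-time decision described above could be sketched roughly as follows. This is only an illustration, not Crunchy's actual API: the names handle_link_click and needs_vlam_processing, the injected fetch callable, the /cached/ path scheme, and the markup heuristic are all assumptions.

```python
# Hypothetical sketch of click-time link handling for Crunchy.
# 'fetch' is an injected downloader (e.g. built on urllib) so the
# logic can be exercised without network access.

cache = {}  # url -> local address of the VLAM-processed copy


def needs_vlam_processing(html):
    # Illustrative heuristic only: a page "needs" processing if it
    # appears to carry VLAM-style markup.
    return 'title="py' in html


def handle_link_click(url, fetch):
    """Called when the browser reports a link click.

    Returns the address the browser should be told to load:
    either the original url (no processing needed) or the address
    of a locally cached, VLAM-processed copy.
    """
    if url in cache:
        return cache[url]           # already processed and cached
    html = fetch(url)
    if not needs_vlam_processing(html):
        return url                  # plain page: go there directly
    local = "/cached/%d" % len(cache)  # served by Crunchy itself
    cache[url] = local
    return local
```

A plain page is simply passed through, while a tutorial is redirected to a Crunchy-served copy; the second click on the same tutorial is answered from the cache without re-downloading.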
Further optimisations could include analysing the link and beginning to
cache the tutorial as soon as the user hovers over the link to it.
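The hover-time optimisation might amount to kicking off the download in the background so that the later click can be answered from the cache. Again a sketch under assumptions: the name handle_link_hover, the injected fetch callable, and the in-flight marker scheme are all hypothetical.

```python
import threading

prefetch_cache = {}  # url -> page text fetched ahead of the click
_lock = threading.Lock()


def handle_link_hover(url, fetch):
    """Start fetching a linked page in the background as soon as the
    user hovers over its link; 'fetch' is an injected downloader.

    Returns the worker thread, or None if a fetch for this url is
    already in flight or finished.
    """
    with _lock:
        if url in prefetch_cache:
            return None
        prefetch_cache[url] = None   # mark as in flight
    def worker():
        page = fetch(url)
        with _lock:
            prefetch_cache[url] = page
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

The lock and the in-flight marker keep repeated hover events over the same link from triggering duplicate downloads.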
Caching is a natural progression in the direction Crunchy is currently
taking.
Original issue reported on code.google.com by johannes...@gmail.com on 28 Jan 2007 at 3:12