Closed christopherreay closed 10 years ago
Out of pure curiosity why are you attempting to carry out intensive parallelization on the client side. What kinds of problems are you trying to solve.
I am sure Thrust and eventually Breach can help solve your problems but please educate me :)
The basic rationale is that the browser is, as far as I am concerned, the interface tool of choice. Qt is great, but it cannot really come close to the browser in terms of the speed and breadth of development of tools. Very soon, imo, no one will even think of developing an interface for an application that is not a web interface. Parallelism on a grand scale is the next big thing in the hardware industry: massive amounts of money are being invested in the development of massively parallel architectures and the algorithms to support them. At the moment I feel that the browser is treating the DOM and web workers with unreasonable care. It's all well and good to say that "javascript programmers are experimental and must be protected from themselves", but if I want to develop a plugin for a browser that has multiple threads, then I should be able to choose my concurrency model.
Message passing is all well and good, and in reality it's perfectly possible to manage complex data scenarios with message passing. However, part of the reality of parallelism is to embody the deep concepts of "client and server" as endemic to the process of rendering an application to the user. I want a web application that can literally slide from my desktop to my mobile to my wrist screen; I need multiple threads to be able to dump their content into each other...
Perhaps a functioning model might be to allow variables to be moved into another worker thread upon destruction of the original owner. That might solve all the issues I can see at the moment.
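That "move on destruction of the original owner" idea already has a close cousin in the platform: Transferable objects. As a minimal sketch (using the real `structuredClone` transfer-list API, available in browsers and Node 17+), transferring an `ArrayBuffer` detaches it from the sender, so exactly one side owns the data at any time:

```javascript
// "Move semantics" via Transferable objects: transferring an ArrayBuffer
// detaches it in the sender, so the receiver gets sole ownership and
// no shared-memory races are possible.
const buf = new ArrayBuffer(8);
new Float64Array(buf)[0] = 3.14;

// structuredClone with a transfer list moves the buffer instead of copying it.
const moved = structuredClone(buf, { transfer: [buf] });

console.log(moved.byteLength); // 8 -- receiver owns the data
console.log(buf.byteLength);   // 0 -- sender's buffer is detached
```

The same transfer-list mechanism works with `worker.postMessage(buf, [buf])`, which is how a worker could hand a large buffer to another thread without copying.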
It seems unreasonable for every worker to have to embody some abstraction of the DOM, expressed through message passing into the main thread, rather than just being able to edit it directly. What about carving out portions of the DOM for which responsibility rests with different aspects of an application? It's perfectly reasonable to imagine different threads of an application being responsible for connecting to different servers or whatever, so why on earth should they be forced to use a single service to update the DOM? At the very least, we are looking to have some kind of DOM manager with which we can register a service. Imagine a complex DOM built through the interactions of several threads of an application, something a little deeper than a simple box model with a form: for example, visualisation of a piece of music alongside a sequencing program, perhaps with video footage as well... In this case, single DIVs could very well include content developed by completely different aspects of an application.
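The "DOM manager with which we can register a service" idea could be sketched as an ownership registry: each thread claims a subtree, and the manager rejects writes from anyone else. This is a hypothetical sketch (the class, method names, and selectors are all invented for illustration, not an existing API):

```javascript
// Hypothetical "DOM manager": services register ownership of a DOM region
// (identified here by a selector string), and the manager enforces that
// only the registered owner may write to that region.
class DomManager {
  constructor() {
    this.owners = new Map(); // selector -> ownerId
  }
  register(ownerId, selector) {
    if (this.owners.has(selector)) throw new Error(`${selector} already owned`);
    this.owners.set(selector, ownerId);
  }
  update(ownerId, selector, html) {
    if (this.owners.get(selector) !== ownerId)
      throw new Error(`${ownerId} does not own ${selector}`);
    return { selector, html }; // in a real browser this would patch the DOM
  }
}

const mgr = new DomManager();
mgr.register("audio-thread", "#waveform");
mgr.register("video-thread", "#footage");
console.log(mgr.update("audio-thread", "#waveform", "<canvas></canvas>"));
```

In the music/video example above, the sequencer thread and the video thread would each register their own DIVs, and conflicting writes would fail loudly instead of racing.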
In this case, I guess multithreaded doesn't just have to apply to the idea of many threads in a single browser, but to the entire process of building and migrating applications across devices.
"Pervasive Parrallelism" is the name it has.. and we are on the way there. :)
Ok so a few things. While I definitely appreciate your idea and have thought about it on several occasions:

- Concurrency is not parallelism.
- Not all mobile devices support multiple OS threads.
- Javascript is inherently single threaded and is not designed for a shared memory space.
- WebWorkers are exactly that, workers: they are essentially asynchronous tasks that execute a body of work. If the user's computer has one processor, this will most likely cause issues, depending on how each browser implements them.
Threads have never been memory safe. Threads are a low-level implementation detail that should not necessarily be thought of at a higher level.
We already have a working concurrency model using the internet, it gets the job done very well.
My one and only recommendation may be to remove the stupid limit of 5 concurrent web requests that most current web browsers have, if possible. @spolu see the last line.
As far as migrating applications across devices, Thrust is a step on the way, but not yet. You need a way to deploy the same code base across multiple devices. There must be a handoff protocol. And memory will always need to be serialized and deserialized.
@christopherreay: I do love your optimistic and thoughtful proposition. One of my supervisors once called me a megalomaniac because I have ideas similar to yours. I think ideas such as that need to be thought about, but they also need to be hammered out. We all want to improve the world; unfortunately, improving systems takes thought and care, and worst of all, time and adoption.
If Spolu likes it then that's his decision, however I would recommend learning C/C++, forking this repo, and attempting such an implementation yourself. If you can pull it off, write a few internet articles, get your writing hammered out, and then write a spec. If your spec is accepted and you provide some usable source code, maybe then adoption will occur. At which point you, a single man, will have changed the internet.
Something else that is occurring to me is that a "modern" stack for this kind of space could well include a "client side" node.js, with the browser pulling from that, and the node pulling from whatever server spaces. I was also thinking of using breach/thrust server side for something.
There is no reason why the single main thread in the browser can't implement its own "native" (lol) threading protocol... which is probably what I will end up doing.
Threading is the process of running multiple executions of code across "threads". Most application-level threads are actually virtual constructs that are multiplexed and demultiplexed onto operating system threads. Operating system threads are heavyweight and have a lot of rules associated with them; there is no reason to virtualize threads in a single main thread client side when that client already uses an evented model.
Imagine trying to trigger events from other threads, or two threads interleaving and accessing the same in-memory object.
Also, an extended answer to your idea about passing a memory space across devices: this is generally, if not theoretically, impossible to do directly. Memory is a location; a pointer to memory is a pointer to that location, and even that pointer has an allocated location of its own. Pointers are meaningless on another machine, so the process of moving memory from one location to another is essentially serialization/deserialization.
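Concretely, "moving memory" between devices reduces to serialize, transmit, deserialize. A minimal sketch (the `state` object here is an invented example, loosely echoing the music-app scenario above):

```javascript
// "Moving memory" across devices: pointers don't travel, only serialized data.
// Device A serializes its state; device B reconstructs an equivalent object
// at a completely different memory location.
const state = { playhead: 12.5, tracks: ["drums", "bass"] };

const wire = JSON.stringify(state); // serialize on device A
const restored = JSON.parse(wire);  // deserialize on device B

console.log(restored.tracks[1]);  // "bass" -- same value survives the trip
console.log(restored !== state);  // true  -- but it is a new object, not shared memory
```

Anything that can't be serialized (functions, open sockets, live DOM nodes) can't make the trip, which is exactly why a handoff protocol has to be designed around the data rather than the memory.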
Please remember that while javascript seems to make things very simple, the code that runs under the hood is anything but.
I hope my answers are more helpful than discouraging.
**Note: the browser page can be, and is, considered the UI layer. In almost every language, and in every UI toolkit, the UI layer is a single thread.
@christopherreay If your goal is to have a seamless application experience across devices, I would say you should be looking into a way to create a snapshot of the running browser. It should be browser agnostic. It would also be a very difficult undertaking.
It is probably much easier to use an application like Meteor, with a replicating database.
Exactly, so setting up a virtual threading environment inside the browser, without using web workers, is perfectly normal; and in this sense, web workers are useful because they can actually run in another (heavyweight) (system) process, which is why the message-passing memory model makes sense for them.
Hey, I've been coding since 1984; pointer arithmetic is in my veins. I've also developed some extensions for the Java VM, and am heavily into VM design atm.
javascript scoping is nuts :)
So this is the interesting issue. Running the "main client application" in a Node.js application on the local machine (client side), which communicates with the web, and then having the browser operate as the "UI thread" is a reasonable solution. And actually there is no reason at all why this isn't a "great" solution. It is clear that "message passing" through serialization describes the web as well as any internal concurrency model. I think, in the end, what I'm looking for as a "simple" solution is to implement a scheduler in the main UI thread of the browser, and then break out from there if it becomes necessary.
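A scheduler in the main UI thread can be surprisingly small. This is a minimal sketch of cooperative "virtual threads" on a single JS thread, using generators as tasks (the `run`/`worker` names are invented for illustration); each `yield` hands control back to a round-robin scheduler, no OS threads involved:

```javascript
// Minimal cooperative scheduler: each task is a generator function.
// Every `yield` returns control to the scheduler, which round-robins
// between live tasks on the single main thread.
function run(tasks) {
  const queue = tasks.map((fn) => fn()); // instantiate the generators
  const log = [];
  while (queue.length) {
    const task = queue.shift();
    const { value, done } = task.next(); // run the task to its next yield
    if (value !== undefined) log.push(value);
    if (!done) queue.push(task); // reschedule unfinished tasks
  }
  return log;
}

function* worker(name, steps) {
  for (let i = 0; i < steps; i++) yield `${name}:${i}`;
}

console.log(run([() => worker("a", 2), () => worker("b", 2)]));
// → [ 'a:0', 'b:0', 'a:1', 'b:1' ]
```

In a real UI thread you would interleave the scheduler with rendering (e.g. resume it from `requestAnimationFrame` or `setTimeout` slices) so long-running tasks never block paint.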
wow, Meteor is quite incredible. Not sure if I can use it, it might get in the way, but, wow nonetheless, love it
Yes. The idea of having an "action queue" top-level interaction space is exactly right. I would be looking to have the same application actually running concurrently on more than one device, with various parts of it able to migrate around depending on user choice and current processing requirements. This is my interpretation of "pervasive parallelism". I should hopefully be working on this at Edinburgh University next year.
Very interesting discussion, but this is rather out of scope here. @christopherreay it's up to you to create any programming paradigm over an RPC mechanism of your choice between the UI and backend code of an app running on thrust.
Goal for now is to stick to web standards (embed chromium content API as is) and target goals in the README's roadmap section: https://github.com/breach/thrust#roadmap
Hi, I'm looking to work on a browser platform that allows applications to create web workers with full access to the DOM and to the root namespace. It's all well and good saying there is some great model for parallelisation, and I can see the reality of using the UI space to update the DOM; however, the kinds of applications I have in mind are far more similar to running server-side JS on both sides of the line, with the "client" side having the unimpeachable HTML5/CSS backend to display things. The idea of using Qt or some other obsolete interface technology makes me sad.
There is a difference between a "shared memory model" and access to the DOM. Do we need to implement some atomic access protocol on the DOM for this?
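For the shared-memory half of that question, JavaScript does now have real primitives: `SharedArrayBuffer` plus `Atomics`. A minimal sketch of the kind of atomic access protocol workers would need if they ever shared mutable state (the DOM itself exposes no such API, so this only covers raw memory):

```javascript
// SharedArrayBuffer + Atomics: the existing JS primitives for safe
// shared mutable state between workers.
const sab = new SharedArrayBuffer(4);
const counter = new Int32Array(sab);

// Atomics.add is an indivisible read-modify-write; two workers doing this
// concurrently cannot lose updates the way a plain `counter[0]++` could.
Atomics.add(counter, 0, 1);
Atomics.add(counter, 0, 1);

console.log(Atomics.load(counter, 0)); // 2
```

In a real multi-worker setup the same `SharedArrayBuffer` would be posted to each worker, and `Atomics.wait`/`Atomics.notify` would provide the locking and signalling; note that browsers gate `SharedArrayBuffer` behind cross-origin isolation headers.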