Open emdash opened 5 years ago
Hi there! Thanks for your interest! First, apologies for the inactivity on this. This was originally a university project, and after I finished, real-life things got in the way of making any real progress.
Now, the inspiration. Ultimately it comes back to John Backus's Turing Award lecture "Can Programming be Liberated from the von Neumann Style?" (pdf: http://worrydream.com/refs/Backus-CanProgrammingBeLiberated.pdf). We've been stuck with fundamentally the same processing architecture (albeit with many major improvements) for decades. Now we're hitting performance walls, and the current trend is toward more parallelization rather than faster processors. This project kind of took that idea and ran with it.
At its core this is an implementation of parallel combinator graph reduction (https://en.wikipedia.org/wiki/Graph_reduction#Combinator_graph_reduction), with a specific slant towards being implemented in hardware. My overall vision for this is of many small, cheap processing units that can self-aggregate into a processing "mesh". Each unit can perform some reductions on a segment of a graph, and the resulting reduced form is the program "result" (which may be another program!). In that way you end up with a massively parallel computing fabric.
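For readers new to the idea, here is a minimal sequential sketch of combinator graph reduction using just the S and K combinators. This is purely illustrative, not code from this project: a real parallel implementation would update shared graph nodes in place so that different units can reduce different segments, which this pure-functional toy omits.

```python
# Terms: the strings "S" and "K" are combinators; application is a pair (f, x).
# Reduction rules:  K x y -> x        S f g x -> (f x) (g x)

def reduce_whnf(term):
    """Reduce a combinator term to weak head normal form by repeatedly
    rewriting the outermost redex until the head has too few arguments."""
    while True:
        # Unwind the application spine, collecting arguments.
        spine = []  # spine[0] is the outermost argument, spine[-1] the innermost
        head = term
        while isinstance(head, tuple):
            spine.append(head[1])
            head = head[0]
        if head == "K" and len(spine) >= 2:
            new_head = spine[-1]                  # K x y -> x
            rest = spine[:-2]
        elif head == "S" and len(spine) >= 3:
            f, g, x = spine[-1], spine[-2], spine[-3]
            new_head = ((f, x), (g, x))           # S f g x -> (f x)(g x)
            rest = spine[:-3]
        else:
            return term                           # no redex at the head
        # Re-apply the remaining (outer) arguments to the rewritten head.
        term = new_head
        for arg in reversed(rest):
            term = (term, arg)

# The identity combinator I can be encoded as S K K: (S K K) x -> x.
I = (("S", "K"), "K")
```

Each unit in the envisioned mesh would run a loop like this over its own segment of the shared graph, which is why the in-place, shared-node representation matters in the hardware setting.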
There is prior work in this field, though most of it is from the 70s and 80s. GRIP (https://www.microsoft.com/en-us/research/publication/grip-a-high-performance-architecture-for-parallel-graph-reduction/) was one such hardware implementation; see Simon Peyton Jones's other publications on that site for more applications and experiments conducted on it. More recently, the Reduceron (https://www.cs.york.ac.uk/fp/reduceron/) was another hardware implementation. Most recently, what I've been keeping an eye on is the Formality language (https://docs.formality-lang.org/en/latest/), which isn't (as far as I'm aware) aimed at hardware implementation but is aimed at optimal beta reduction.
Very interesting. I figured there were larger ideas at work. Combinators have been popping up a lot in my research lately, and I'm not sure why.
"Discussions about programming languages often resemble medieval debates about the number of angels that can dance on the head of a pin instead of exciting contests between fundamentally differing concepts..."
I don't think that has really changed since 1977...
I am looking forward to reading through some of these things when I get home. Thanks for taking the time to respond :)
On Mon, Oct 7, 2019 at 3:59 PM Daniel Wilson-Thomas <notifications@github.com> wrote:
I stumbled on this while reading Reddit posts about LLVM. This seems cool, but I am just wondering what the inspiration for all this was, and if you had any thoughts on what it might be good for. I have only a vague idea about combinatory logic, but I am always interested in novel architectures.