Open — talltyler opened this issue 9 years ago

How are you handling graphics? Is it possible yet to create Lua applications that have control over the screen buffer? I don't see any Lua code that is able to interface with OpenGL or related stuff, but maybe I'm missing something? I ask because it is mentioned in https://github.com/jvburnes/node9/blob/master/doc/node9-hackers-guide.txt
Hi Tyler,
Sorry about the confusion. If you read the beginning of the hacker's guide, anything in parens is a future feature. The quote about GL graphics is in parens, which means it's currently unimplemented. We have plans for using Node9 in GPU computing, but if you want to have a whack at it, I'd wait until we tweak the build files for Linux/BSD.
Again, sorry about the confusion.
> How are you handling graphics? Is it possible yet to create Lua applications that have control over the screen buffer? I don't see any Lua code that is able to interface with OpenGL or related stuff, but maybe I'm missing something? I ask because it is mentioned in https://github.com/jvburnes/node9/blob/master/doc/node9-hackers-guide.txt
Thanks for clearing that up. I look forward to it.
Grave-digging after a lab-bench afternoon spent hacking on Node9 to say that perhaps Node9 doesn't need a GUI component itself so much as a standard Lua-based interface to an existing one, which it hosts, with a Lua API construed to glue things together?
I imagine a scenario where building an application on Node9 is a matter of controlling a UI/graphics engine framework, and there are quite a few of these in the Lua VM space which might be well hosted on Node9.
Anyway, sorry for the grave-digging -- Node9 is a delightful project, thanks!
seclorum,
That sounds like a great idea. Any good ideas for UI/graphics engines or interfaces/APIs?
I'm glad you're enjoying playing with it. I know there are a lot of engineers in Russia/Ukraine who are playing with it on and off. Some people at Google also.
I wish I had more time to fully realize its potential, but I'm glad you find it delightful. If you have any questions, let me know.
Best Regards,
Jim Burnes
> Grave-digging after a lab-bench afternoon spent on Node9 hacking to say that, perhaps Node9 doesn't need a GUI component itself, so much as a standard Lua-based interface to an existing one, which it hosts, and for which a Lua API is construed to glue things together?

FWIW -- I really agree with the above. Not sure what particular toolkit I'd pick -- but given the raw power, I'd probably opt for http://tekui.neoscientists.org/
From an architectural point of view, the graphics subsystem would probably be a device that you mount. Something like a GL or Vulkan device that you could open and either send individual commands into or pipeline batches of commands into. The frame buffers could be readable or writable as subdevices, and the Lua APIs could access those devices.
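From the Lua side, that mount-a-device model might read something like this. This is a sketch only: the /dev/gl tree, the ctl command grammar, and the `sys` module binding are all hypothetical -- nothing like this exists in Node9 today.

```lua
-- Hypothetical sketch: a graphics device served over Styx/9P,
-- mounted into the namespace and driven through ctl/data files.
local sys = require("sys")  -- assumed Node9 binding to kernel calls

-- Mount the graphics server's device tree.
sys.mount("tcp!gfxhost!styx", "/dev/gl")

-- Open the control file and pipeline drawing commands into it.
local ctl = sys.open("/dev/gl/ctl", sys.OWRITE)
sys.write(ctl, "clear 0 0 0 1\n")
sys.write(ctl, "rect 10 10 200 100\n")

-- The frame buffer is exposed as a readable/writable subdevice.
local fb = sys.open("/dev/gl/fb0", sys.OREAD)
local pixels = sys.read(fb, 640 * 480 * 4)  -- raw RGBA scanlines
```

The appeal of this shape is that the device can be served locally or remotely with no change to the client code.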
> From an architectural point of view, the graphics subsystem would probably be a device that you mount.

So I guess what you had in mind would be something along the lines of https://github.com/ponty/framebuffer-vncserver (where instead of fb in the Linux sense you'd have a devdraw-style frontend). That sounds very reasonable to me.
However, I'm also wondering whether we can reuse some of the excellent work of http://tekui.neoscientists.org/ when it comes to actually drawing widgets on a device like that.
Roman,
Consider Plan9's Single System Image (SSI) perspective. When a client node mounts 9p device resources en masse, the result is as if you were transparently using a system with the combined resources. For example, by bind/mounting the distributed /prog resources from 20 different systems, you appear to have a system with all of those processes locally -- for process launch, monitoring, debugging, shutdown, etc.
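In Inferno terms (which Node9 inherits), the aggregation described above is just a few namespace operations. Sketched below with an assumed Lua binding of the Inferno-style Sys calls; the `sys` module name and exact signatures are illustrative, though MAFTER is Inferno's real union-mount flag.

```lua
local sys = require("sys")  -- assumed binding to Inferno-style Sys calls

-- Union the /prog trees of several remote nodes after the local one.
-- Afterwards, process tools (ps, kill, debug) see every remote
-- process as if it were local.
for _, host in ipairs({ "node1", "node2", "node3" }) do
  sys.mount("tcp!" .. host .. "!styx", "/n/" .. host)
  sys.bind("/n/" .. host .. "/prog", "/prog", sys.MAFTER)
end
```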
What's the equivalent for a graphics subsystem? Something that transparently manages GPU (instead of CPU) resources: memory, frame buffers, shader units, stream processors. The main difference is that GPUs leverage data parallelism (SIMD) -- extremely high throughput from a single instruction executed over a bulk data pipeline.
I'd think something like an OpenCL/Vulkan/SIMD-oriented device interface would make it possible to create concurrent GPU systems that are very powerful and easily constructed (via Lua scripts). The lowest layers would make it transparent where the SIMD processing was occurring, and the GPU scheduler could distribute workload based on underlying factors like speed, bandwidth, etc.
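One way such a compute device might read from Lua. Again, purely a sketch: the /dev/gpu file tree, its protocol, and the buffer names are invented for illustration.

```lua
local sys = require("sys")  -- assumed Node9 binding to kernel calls

-- Ask the (hypothetical) GPU scheduler for a compute context; it
-- picks a local or remote GPU based on speed, bandwidth, and load.
local ctl = sys.open("/dev/gpu/new", sys.ORDWR)
local ctx = sys.read(ctl, 32)  -- e.g. "7", a context directory name

-- Load one kernel, then stream bulk data through it: classic SIMD,
-- one instruction stream applied over a large data pipeline.
local kern = sys.open("/dev/gpu/" .. ctx .. "/kernel", sys.OWRITE)
sys.write(kern, saxpy_binary)  -- assume a precompiled kernel blob

local data = sys.open("/dev/gpu/" .. ctx .. "/data", sys.ORDWR)
sys.write(data, input_buffer)  -- assume a packed float array
local result = sys.read(data, #input_buffer)
```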
The graphics APIs you're talking about are several levels up in the application layer.
Let me know what you think.
Jim
> Consider Plan9's Single System Image (SSI) perspective. When a client node mounts 9p device resources en masse the result is as if you are transparently using a system with the combined resources. For example, by bind / mounting the distributed /prog resources from 20 different systems you appear to have a system with all of those processes locally, for process launch, monitoring, debug, shut down etc.

No disagreement there, but I just wanted to point out that this is mostly a cluster view of the system. What I was talking about was more of a terminal one -- more on that later (and, frankly, both can happily coexist, I think).

> What's the equivalent for a graphics subsystem? Something that transparently manages GPU (instead of CPU) resources.

Correct, but only if your point is managing a compute (or otherwise headless) resource. I thought the OP of this issue had a different use case in mind -- namely, how to build a terminal-centric view of a single Node9 instance.
If you run with that other use case instead, what you have in front of you is the problem of building "a concise and powerful way to construct graphical user interfaces without directly using the Draw module primitives". Exactly what Inferno/Limbo had with http://doc.cat-v.org/inferno/4th_edition/limbotk/ (and exactly what http://tekui.neoscientists.org/ has built for Lua).

> Those are memory, frame buffers, shader units, stream processors. The main difference is that GPUs leverage data parallelism (SIMD). Extremely high throughput of a single instruction executed on a bulk data pipeline.

> I'd think something like an OpenCL/Vulkan/SIMD oriented device interface would make it possible to create concurrent GPU systems that are very powerful and easily constructed (via Lua scripts). The lowest layers would make it transparent where the SIMD processing was occurring. The GPU scheduler could distribute workload via underlying factors like speed, bandwidth etc.

Yup -- that's exactly what you need for the cluster-centric view.
The real question is: can you actually layer one on top of the other? If I were to take the latest developments in Unix land as any indication, I'd say they remain parallel to each other (sort of like how Wayland and the things above it are parallel to OpenCL/etc.: https://en.wikipedia.org/wiki/Nouveau_(software)#/media/File:The_Linux_Graphics_Stack_and_glamor.svg) and shouldn't really be "stacked" in any meaningful sense.

> The graphics APIs you're talking about are several levels up in the application layer.

It is definitely a higher-level API, that's for sure, but my point was somewhat more subtle -- I wanted to make sure we agree on how things are stacked in this kind of setup. So far, I feel this issue needs to be split into three parts:
- GUI/widget API
- 2D drawing API/compositing
- 3D rendering/full GPU capabilities
Hope all this makes sense
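For flavor, here is roughly what the GUI/widget layer looks like in tekUI today (paraphrased from tekUI's published hello-world example; the exact class names and fields should be checked against its documentation before relying on them):

```lua
-- Approximate tekUI "hello world"; tekUI builds the UI as a tree
-- of declaratively constructed elements.
local ui = require "tek.ui"

ui.Application:new {
  Children = {
    ui.Window:new {
      Title = "Hello",
      Children = {
        ui.Text:new {
          Mode = "button", Class = "button",
          Text = "_Hello, World!",
        },
      },
    },
  },
}:run()
```

Something in this spirit, rendering onto a devdraw-style device underneath, is the kind of layering being discussed.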
This is great, thank you! I'll be catching up on the other issues you raised later tonight. (If I can remember :)
Agreed, Roman. It depends on your perspective, I suppose -- whether Node9 has more immediate applicability in cluster / scripted-OS duties or whether it's more useful at the moment in desktop operations.
Since it's a hosted OS, the underlying system takes care of most of the desktop duties; and since it's largely a research project, it really depends on where your interests lie.
If you wanted to stay with the underlying desktop / client architecture of Inferno, I don't see a problem with playing with that. I do think all kinds of cool stuff can be created with the underlying single-system-image architecture, per-process namespaces, etc., when it comes to SIMD stream processing for graphics and other things.
Yes, there are some good Lua libs for graphics. The important question is what kinds of abstractions you'd want to expose to your garden-variety graphics programmer, vs. GUI-level programming, vs. low-level SIMD/GPU shader/stream-processor integration. OpenGL is nice for the first case, and Vulkan too. Lots of exciting possibilities.
BTW: things are a bit crazy at work right now, so I'll follow up on your Docker build-chain work as soon as I can. It's really refreshing to talk to another person who sees the power in this architecture. The more I program and do security work (and the more I'm exposed to sloppy event-oriented programming), the more I see a need for something better. During the early days of Plan9 / Inferno, I really think the new technology wasn't taken up because the enemy of great is "good enough", and that's what Linux and the BSDs supplied.
With all the challenges in event-oriented programming environments and the spaghetti code they create (especially in the hands of inexperienced programmers), something like Plan9 is more important than ever. Perhaps there's an opportunity.
Jim
> Agreed Roman. It depends on your perspective I suppose -- whether Node9 has more immediate applicability in cluster / scripted OS duties or whether it's more useful at the moment in desktop operations.

Why not both? ;-)

> Yes, there are some good Lua libs for graphics. Important question would be what kinds of abstractions you would want to expose to your garden variety graphics programmer, vs GUI level programming vs low-level SIMD/GPU shader stream processor integration.

And that's where the rubber meets the road, so to speak: we need folks with real use cases to drive those decisions. If we revive node9 to the extent I have in mind (and it therefore becomes my personal tool for a lot of things), I may step up to do some of that later this year.

> BTW: Things are a bit crazy at work right now, so I'll follow up your docker build chain work as soon as I can.

Thank you @jvburnes -- there's also a more critical question (e.g. node9 is sort of broken as-is) in the last PR I submitted.

> It's really refreshing to talk to another person who sees the power in this architecture. The more I program and do security work the more I see a need for something better. During the early days of Plan9 / Inferno I really think the new technology wasn't taken up because the enemy of great is "good enough" and that's what Linux and the BSDs supplied.

100% agreed -- and funny you should mention it -- we've just had a similar discussion here: https://twitter.com/rhatr/status/1344391699926110208 and https://twitter.com/rhatr/status/1344745933842444290

> With all the challenges in event-oriented programming environments and the spaghetti code they create (especially with inexperienced programmers), something like Plan9 is more important than ever. Perhaps there's an opportunity.

Personally, I have a very selfish motivation -- in the context of my passion project https://github.com/lf-edge/eve I need something that is:
To basically answer question #2 on the following "decision tree" of:
To your point, there's a huge gap in #2 these days.