Closed: magik6k closed this issue 9 years ago
Losing persistence should be a "feature" in this case, as, for example, a glContext in the real world is lost when you switch apps on a phone, or hibernate a normal computer. You just need to be ready to reinitialize your context.
Some random thoughts:
- math
- math.random

Overall I think it's a very interesting approach to the problem, but we'll have to be very careful as to how this might affect client performance.
assembly! probably a stripped 65816?
The issue with Java bytecode: you always need a full class, and you cannot reliably unload a class once it's been loaded. Using that, an attacker could make the server run out of PermGen quite easily. If anything, compile to Lua bytecode and let it run in LuaJ on the client.
Edit: it could even happen in a regular environment without any malicious intent, which would lead to very odd and hard-to-reproduce memory leaks.
@fnuecke - Have you considered http://fscript.sourceforge.net/ ? If not, I might be up for porting the picoc C interpreter.
Hmm, FScript looks nice and simple, but I don't immediately see a way to limit execution "steps"? I think a way to limit the number of consecutive instructions is essential here (as could be done with the count hook in Lua), to avoid blocking / tick lag.
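For illustration, limiting consecutive instructions via Lua's count hook could look roughly like this (the shader body is a placeholder; `debug.sethook` with a count mask is standard Lua, though behavior in LuaJ would need checking):

```lua
-- A misbehaving "shader" that never yields, run in its own coroutine.
local shader = coroutine.create(function()
  while true do end
end)

-- Abort the shader once it has executed 10000 VM instructions
-- without yielding, so it can never block the client.
debug.sethook(shader, function()
  error("too long without yielding", 2)
end, "", 10000)

local ok, err = coroutine.resume(shader)
-- ok is false here; err carries the "too long without yielding" message.
```

This is the same mechanism OC already uses for sandboxed machine code, which is part of why a minimal Lua environment looks attractive compared to bolting step counting onto FScript.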
Have you considered brainfuck?
Talking about how it should work:
It would work more like a geometry shader in OpenGL than a fragment shader, allowing it to save on transfer.
Giving shaders access to a timestamp in milli/nanoseconds would allow creating fluent animations (IMHO even 10 FPS is fluent).
@fnuecke FScript is so small that adding a limit shouldn't be a hassle.
The FScript codebase is small, yes, but adding statefulness to what's essentially a parser could still be quite a bit of effort. I'm not sure that'd be worth it. It also would mean OC would have to ship yet another non-standard library, which is a bit of a minus. A minimal Lua env sounds better to me, tbh.
As recap, here's what I'd currently suggest, opinion subject to change:
- setShader(s:string), getShader():string
- setUniforms(t:table), getUniforms():table
- setData(t:table), getData(t:table), which is sortakinda like uniforms, but allows more data and is much slower. Consider this the (very very very rough) equivalent of VBOs in OpenGL.

Am I missing something?
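As a rough sketch, a program might drive that proposed surface like so (every gpu method here is hypothetical; nothing in this snippet exists yet):

```lua
local component = require("component")
local gpu = component.gpu

-- Hypothetical: upload a tiny client-side shader as source text.
gpu.setShader([[
  -- runs on the client for each drawn primitive
  char = "X"
  fg = uniforms.color
]])

-- Hypothetical: small, cheap per-frame parameters.
gpu.setUniforms({ color = 0xFF00FF })

-- Hypothetical: bulkier, slower storage, the rough "VBO" analogue.
gpu.setData({ sprite = { "XX", "XX" } })
```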
What about allowing shaders to run over a series of data? Then you could write an engine that creates descriptions of objects and runs the shader over them. This would allow easier rendering of an unspecified number of objects, like bullets or creatures. If we limit ourselves to only primitive->primitive, this approach is hardly possible. It would also make OC's shaders more similar to real-world ones, as you have uniforms and data on which you work.
The other question is whether a higher GPU tier should give shaders access to a secondary image buffer for z-testing or stenciling.
I want a BFSL (brainfuck shader language) (it's just about a couple hundred lines)
Can you give me a more concrete example of what you mean by "series of data"? Do you mean texture storage? That could make sense, I guess.
As for depth buffer and such... that would require a concept of depth, first. Which currently doesn't exist. And introducing that just for shaders... I'll need some convincing this will see enough use to justify the changes/overhead :P When you do have depth sensitive rendering, it would probably be feasible to sort them in advance, in the limited context that is OC?
Since you made the point of only primitive-to-primitive with a uniform table, it is not possible to send an arbitrary number of objects. So either we make a primitive-to-table mapping possible, or, more interestingly, we make shaders work as they do in the real world: a normal shader is run multiple times with the same uniforms but with different data as input. That is the series of data.
The additional buffer (z-buffer) would be there to control what gets rendered over what: you would like to render a character over a background. Even CSS has z-index to control what's on top.
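The "series of data" idea above could be sketched like this: one shader, constant uniforms, one invocation per object (all names here are invented for illustration):

```lua
-- Constant per-frame parameters, shared by every invocation.
local uniforms = { tick = 42, color = 0xFFFFFF }

-- A variable-length list of object descriptions produced by the engine.
local data = {
  { x = 10, y = 5, glyph = "*" },  -- a bullet
  { x = 3,  y = 7, glyph = "@" },  -- a creature
}

-- The client runs the same shader once per data entry, so the number
-- of objects no longer has to be fixed when the shader is written.
for _, item in ipairs(data) do
  shader(uniforms, item)
end
```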
I would recommend having a buffer in Lua that's swapped, presumably all implemented inside the kernel. This would mean the only Scala part you have to write is replacing the entire screen buffer with the new one, so there should be essentially no jumping between Scala and Lua, which should be a lot faster.
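A minimal sketch of that double-buffering scheme, assuming a hypothetical `gpu.swapBuffer` as the single Scala<->Lua crossing:

```lua
local w, h = 80, 25

-- Back buffer lives entirely on the Lua side: one string per row,
-- so multibyte characters are handled naturally.
local back = {}
for y = 1, h do
  back[y] = string.rep(" ", w)
end

-- ... the shader draws into `back` here, pure Lua, no component calls ...

-- Hypothetical: hand the finished frame over in one call, replacing
-- the whole screen buffer at once instead of per-cell crossings.
gpu.swapBuffer(back)
```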
Updated the summary above based on discussion on IRC with:
- setData(t:table), getData(t:table), which is sortakinda like uniforms, but allows more data and is much slower. Consider this the (very very very rough) equivalent of VBOs in OpenGL.

As for the buffer in Lua + swapping: the buffer may be a table of strings (for multibyte chars) or of ints. Possibly an array of arrays (speed concerns by @Pwootage, anyone care to benchmark?). Alternatively, possibly a thin userdata proxy for the real buffer? Again, benchmarking would be required to see how expensive the call forwarding in LuaJ would be.
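For anyone who wants to take the benchmarking question up, a rough micro-benchmark comparing the two buffer layouts could start from something like this (plain standard Lua; results in LuaJ specifically would be what actually matters):

```lua
local w, h, runs = 160, 50, 100

local function bench(name, fn)
  local t0 = os.clock()
  for _ = 1, runs do fn() end
  print(name, os.clock() - t0)
end

-- Layout 1: one string per row.
bench("row strings", function()
  local buf = {}
  for y = 1, h do buf[y] = string.rep("X", w) end
end)

-- Layout 2: nested tables, one entry per cell.
bench("nested tables", function()
  local buf = {}
  for y = 1, h do
    local row = {}
    for x = 1, w do row[x] = "X" end
    buf[y] = row
  end
end)
```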
Going to close this as sort of a reverse-duplicate of #779, as that was one of the suggestions that seemed to get the most approval, and seemed most feasible. Further discussion about this topic, if desired, should take place in that issue.
So someone on IRC asked about a function to put the whole buffer on the screen at once. It is a bad idea, like most of the other ideas that would allow achieving "high" frame rates in OC apps.
The idea
I think it would be nice (and epic) to allow users to create very small programs that run on the client side and are invoked by server-side programs, running on the GPU.
Initial thoughts:
Implementation ideas:
Pros:
Cons:
Little code example:
Program:
Shader:
This very basic program draws a 10x10 square filled with X characters in some color (I know there is a fill function, but this is just an example; imagine e.g. a sprite manager based on it, or even an advanced window manager).
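The snippet itself isn't reproduced here, but a hedged guess at the shape such a program/shader pair could take, borrowing the setShader/setUniforms names discussed in this thread (everything hypothetical):

```lua
-- Program (server side): configure the shader once, then let the client run it.
gpu.setShader([[
  -- Shader (client side): fill a size-by-size square with X characters.
  for y = 1, uniforms.size do
    for x = 1, uniforms.size do
      set(x, y, "X", uniforms.color)
    end
  end
]])
gpu.setUniforms({ size = 10, color = 0x00FF00 })
```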
I feel the idea is worth discussing. I was thinking of starting on the implementation myself, but I'm currently creating two libraries, starting a small addon mod, and playing Minecraft (by playing Minecraft I mean creating an OS in OC).