paulmthompson opened 8 years ago
This is definitely possible. I've been trying to avoid it, since the shader code gets a bit confusing and people would actually have to write GLSL code.
I did some experiments with a map implementation, which was pretty much implemented like this... I haven't published it yet, though, because I didn't really have a good use case yet (and it was very restrictive on older hardware). If you have something cool, I'd definitely like to see it ;)
I had a few plans to make this nicer, but nowadays I'd rather put my hope in SPIR-V, which would allow writing shaders completely in Julia :) https://github.com/JuliaLang/julia/pull/11516
It will take quite some time to get there, though... But it's definitely the nicest way!
I agree that keeping GLSL code away from the user is a good thing. I've seen some WebGL strategies where a library ships a collection of basic shader snippets and uses a parser to stitch them together for a specific purpose. Maybe something similar would work in Julia, where shader snippets are selected based on the operations being applied to what is rendered and then stitched together. I'll take a look into it and let you know if I have something working.
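To make that concrete, here is a minimal sketch of the stitching idea (the snippet table and the `build_shader` helper are hypothetical, not from any existing library):

```julia
# Hypothetical lookup from a rendering operation to a GLSL snippet.
const SHADER_SNIPPETS = Dict(
    :translate => "vec3 apply_translate(vec3 p, vec3 t){ return p + t; }",
    :scale     => "vec3 apply_scale(vec3 p, vec3 s){ return p * s; }",
)

# Stitch the snippets for the requested operations into one shader source.
function build_shader(operations)
    body = join([SHADER_SNIPPETS[op] for op in operations], "\n")
    """
    #version 130
    $body
    """
end

# build_shader([:translate, :scale]) yields GLSL source containing both helpers.
```

The interesting part would be deciding which operations apply to a given render object, so the user never touches the GLSL directly.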
Great job on the next2 branch, by the way. It's going to be incredibly useful.
Thanks :)
I was thinking of creating something like this in the near future:
```julia
# Marker type selecting the shading model (sketch).
immutable Phong end

immutable Material{ShaderType, ColorType, ReflectanceType}
    color::ColorType
    reflectance::ReflectanceType
end

immutable Instances{PositionType, ScaleType, RotationType}
    position::PositionType
    scale::ScaleType
    rotation::RotationType
end

function call{C, R}(m::Material{Phong, C, R}, backend::OpenGL)
    """
    in $(to_gltype(m.color)) color;
    in $(to_gltype(m.reflectance)) reflectance;
    vec3 phong_lighting(...){
        ...
    }
    """
end

function call(instances::Instances, backend::OpenGL)
    """
    in $(to_gltype(instances.position)) position;
    in $(to_gltype(instances.scale)) scale;
    in $(to_gltype(instances.rotation)) rotation;
    vec3 do_position_transform(...){
        ...
    }
    """
end

function visualize(instances::Instances, material::Material, backend)
    # the `call` overloads above turn each value into its GLSL kernel
    vertex_kernel   = call(instances, backend)
    fragment_kernel = call(material, backend)
    compile(vertex_kernel, fragment_kernel)
end
```
This is obviously not fully thought out yet, but it would scale nicely to Vulkan and offer the user the possibility to override and customize the whole pipeline right away! It would also make it easier to create shaders for different OpenGL and WebGL versions...
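Usage would then look roughly like this (all names here refer to the sketch above and are just as hypothetical):

```julia
# Per-instance transform data and a Phong material (hypothetical constructors;
# the ShaderType parameter has to be given explicitly since it isn't inferable).
instances = Instances(positions, scales, rotations)
material  = Material{Phong, typeof(color), typeof(reflectance)}(color, reflectance)

# Each value emits its GLSL kernel for the backend; compile links them into a program.
program = visualize(instances, material, OpenGL())
```

The nice property is that swapping `Phong` for another shading marker type, or `OpenGL()` for another backend, selects different kernels without any change to user code.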
I've been going through your next2 branch and was wondering if you have tried any experiments with creating vertex shaders on the fly to push some of the work to the GPU. When I look at your bouncy.jl example, am I correct in thinking that the Reactive package is using a timer to continuously update all of the circle positions, and that these are pushed to the GPU with every new timestamp?
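If I'm reading the example right, the pattern is roughly this (using Reactive's `fps` and `lift`; the names `circles` and `update_positions` are my guesses, not the actual bouncy.jl code):

```julia
using Reactive

# A timer signal ticking ~60 times per second.
timer = fps(60)

# On every tick, all circle positions are recomputed on the CPU...
positions = lift(t -> update_positions(circles, t), timer)

# ...and each new value of `positions` is uploaded to the GPU as a
# fresh vertex buffer, once per frame.
```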
It might be more efficient to use a macro or some other metaprogramming technique to create shaders on the fly that apply the position transformations on the GPU, rather than have the CPU do it and push the results there. Very roughly, it could look something like this to change the x and y positions:
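A rough sketch of what I mean, generating the vertex shader source from Julia strings (the uniform name `t` and the whole helper are made up for illustration):

```julia
# Build a vertex shader that applies the position update on the GPU,
# instead of recomputing every position on the CPU each frame.
function position_shader(dx_expr::AbstractString, dy_expr::AbstractString)
    """
    #version 130
    in vec2 position;
    uniform float t;          // only the time is pushed each frame
    void main(){
        float x = position.x + $dx_expr;
        float y = position.y + $dy_expr;
        gl_Position = vec4(x, y, 0.0, 1.0);
    }
    """
end

# E.g. bouncing motion: only the scalar `t` travels to the GPU per frame,
# while the per-vertex work happens in the shader.
shader_source = position_shader("0.0", "abs(sin(t))")
```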
As computations get more complex, this would become increasingly beneficial. Have you tried anything like this in the past? I'd be happy to make a fork and get a working prototype started.