MaikKlein / rlsl

Rust to SPIR-V compiler
Apache License 2.0

Some questions about future plans for RLSL #64

mitchmindtree opened this issue 6 years ago

mitchmindtree commented 6 years ago

I've already professed my excitement for RLSL in #40, but here I'd like to get a stronger idea about the kind of future you envisage for the project and the ways in which it might be able to inter-operate with regular Rust code.

Also, I apologise ahead of time if any of the following questions seem silly - I'm still new to SPIR-V and have a very limited understanding of what's possible and what is not!

cargo

In the README you mention that one of the goals is to support cargo - could you elaborate on your plans for this? In particular, I'm curious where the boundaries lie. For example, could I write a crate that may be depended upon downstream by both RLSL and regular Rust code? I can imagine it would be particularly useful to be able to use a math crate (something like cgmath) both in regular CPU code and in RLSL.

Further, is it necessary to have separate, specific files for RLSL code? E.g. could I one day write my RLSL shaders within my regular .rs rust modules, perhaps using a decorator of some sort to indicate entry/exit points?

compute shaders

I would love to be able to use something like RLSL for doing general compute - is this something you would like to see at some point?

I think one of the things I love about the idea of RLSL is the reduction of friction between programming for the CPU and programming for the GPU. I've had a dream for a little while that one day we might be able to do something like pass a closure to a function which executes the closure in parallel on the GPU, returning either a Future or the result directly. I realise it is not possible to execute arbitrary Rust code on the GPU, but perhaps there would be some way to constrain the kinds of functions that could be executed on the GPU using a trait of some sort that extended Fn? I guess a similar/easier alternative might be to pass an instance of some type that implements some GpuFn trait?
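
To make the idea a bit more concrete, here's a purely hypothetical sketch of the kind of `GpuFn` trait and executor I'm imagining - none of these names exist anywhere, they're only meant to show the shape of the API:

    // Purely hypothetical sketch -- none of these names exist in RLSL today.
    // The idea: only values that carry a compiled SPIR-V module (plus a CPU
    // fallback) may be handed to a GPU executor.
    trait GpuFn<In, Out> {
        /// SPIR-V module compiled ahead of time for this function.
        fn spirv(&self) -> &'static [u8];
        /// CPU implementation, e.g. for testing or as a fallback.
        fn call_cpu(&self, input: In) -> Out;
    }

    fn dispatch<F: GpuFn<Vec<f32>, Vec<f32>>>(f: &F, input: Vec<f32>) -> Vec<f32> {
        // A real executor would upload `input`, bind `f.spirv()` as a compute
        // pipeline and read the result back; this sketch just calls the CPU path.
        f.call_cpu(input)
    }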

RLSL & Rust

I wonder if at some point down the track yourself and the rust-lang team might be interested in including (lower tier) support for something like RLSL out of the box, treating SPIR-V as an alternative target to LLVM or something along these lines? I realise I'm getting way ahead of myself here, but it can be fun to theorize and plant seeds :)

Anyway, thanks again for all your work on this - looking forward to watching (and hopefully one day taking part in) RLSL's progress!

MaikKlein commented 6 years ago

I can imagine it would be particularly useful to be able to use a math crate (something like cgmath) both in regular cpu code and in RLSL.

Yes, that will be possible, but at the moment I only work on my own math library. I still need a nice way to map specific intrinsic types in external math libs; it is a bit awkward right now and relies on custom attributes. The rlsl-math library, by the way, is also just an external library and can be used as an example.

Further, is it necessary to have separate, specific files for RLSL code? E.g. could I one day write my RLSL shaders within my regular .rs rust modules, perhaps using a decorator of some sort to indicate entry/exit points?

No, rlsl is basically just Rust, and only the entry points are somewhat custom. As you mention, you just annotate a function with fragment, vertex, compute, etc. At the moment you can only define entry points in non-lib files, for example in files under the /bin folder. Every entry point in the same file will be included in the same SPIR-V module.
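
Roughly, an entry-point file looks something like the sketch below (treat the attribute name and the empty parameter lists as placeholders rather than exact syntax; the square.rs example linked further down shows the real thing):

    // src/bin/example.rs -- rough sketch only; the #[spirv(...)] attribute and
    // the empty parameter lists are placeholders, not confirmed rlsl syntax.
    #[spirv(vertex)]
    fn vertex(/* vertex inputs/outputs */) {
        // ordinary Rust, compiled into a SPIR-V vertex entry point
    }

    #[spirv(fragment)]
    fn fragment(/* fragment inputs/outputs */) {
        // a second entry point in the same file lands in the same SPIR-V module
    }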

I think one of the things I love about the idea of RLSL is the reduction of friction between programming for the CPU and programming for the GPU. I've had a dream for a little while that one day we might be able to do something like pass a closure to a function which executes the closure in parallel on the GPU, returning either a Future or the result directly. I realise it is not possible to execute arbitrary Rust code on the GPU, but perhaps there would be some way to constrain the kinds of functions that could be executed on the GPU using a trait of some sort that extended Fn? I guess a similar/easier alternative might be to pass an instance of some type that implements some GpuFn trait?

I don't see why this wouldn't be possible. You probably want to compile the "closure" ahead of time though, maybe in some kind of proc macro/custom derive, and then just execute the .spv file at runtime. I actually want to do something super similar in my test suite. It is kind of hacky at the moment, but you can look at https://github.com/MaikKlein/rlsl/blob/master/rlsl-test/src/lib.rs#L240-L256

    quickcheck! {
        fn compute_u32_add(input: Vec<f32>) -> TestResult {
            compute("compute", input, "../.shaders/u32-add.spv", issues::u32_add)
        }
        fn compute_square(input: Vec<f32>) -> TestResult {
            compute("compute", input, "../.shaders/square.spv", issues::square)
        }

        fn compute_single_branch(input: Vec<f32>) -> TestResult {
            compute("compute", input, "../.shaders/single-branch.spv", issues::single_branch)
        }

        fn compute_single_branch_glsl(input: Vec<f32>) -> TestResult {
            compute("main", input, "../issues/.shaders-glsl/single-branch.spv", issues::single_branch)
        }
    }

And the .spv files are generated by rlsl from sources like https://github.com/MaikKlein/rlsl/blob/master/issues/src/bin/square.rs

Essentially I execute the same function on the CPU and the GPU and compare the results. It is a bit awkward at the moment: I do this manually for now, but I'm definitely going to automate it soon.
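
Conceptually the check is just this (a minimal sketch rather than the actual rlsl-test code, with `gpu_run` standing in for whatever uploads the input, dispatches the compiled .spv and reads the result back):

    // Minimal sketch, not the actual rlsl-test code. `gpu_run` stands in for
    // whatever uploads the input, dispatches the .spv compute shader and reads
    // the results back.
    fn outputs_match<F, G>(input: &[f32], cpu_fn: F, gpu_run: G) -> bool
    where
        F: Fn(f32) -> f32,
        G: Fn(&[f32]) -> Vec<f32>,
    {
        let cpu: Vec<f32> = input.iter().copied().map(cpu_fn).collect();
        let gpu = gpu_run(input);
        cpu.len() == gpu.len()
            && cpu
                .iter()
                .zip(&gpu)
                .all(|(a, b)| (a - b).abs() <= f32::EPSILON * a.abs().max(1.0))
    }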

I wonder if at some point down the track yourself and the rust-lang team might be interested in including (lower tier) support for something like RLSL out of the box, treating SPIR-V as an alternative target to LLVM or something along these lines? I realise I'm getting way ahead of myself here, but it can be fun to theorize and plant seeds :)

I definitely would be open to it, but I haven't contacted anyone so far.