halx99 closed this issue 1 year ago.
Please remove glsl-optimizer only as the final step. Really, it is not good that axmol does not support GLSL 1.0 at all. Is there any other game engine that supports only the GLSL 3.0 specification? Why such a limitation? There were users on the forum who moved from cocos to axmol, and I am one of them. Migration is much simpler when there is an opportunity to migrate custom shaders one by one and check that everything works after each shader. Please keep in mind that shaders cannot easily be debugged. So if you have really decided to drop GLSL 1.0, please don't do it right away. Besides, can I have sub-directories inside project_root/Source/shaders? Another question: how will GLSL 1.0 be restricted on Android? glslcc cannot compile it? And guys, why is it so important to delete things that worked for many years? Does it somehow interfere with new platforms? The embedded-in-C++ shader feature was very good and was achieved using glsl-optimizer for iOS/Mac. And one more thing: how can shaders be compiled offline if they depend on macros that are passed at runtime? Or did I miss something?
@solan-solan I don't think we're dropping support for GLSL 1.0, just creating a new version of the engine that supports GLES 3.0. If you want to migrate right away, just pick the GLSL 1.0 version; otherwise you have to manually change your project to work with GLES 3.0.
@DelinWorks @halx99 Please clarify the following. If I understand correctly, glslcc will compile shaders to SPIR-V at build time. What will happen with macros? What if certain macros should be configured for some shaders at runtime in the game, like in my case? Will this SPIR-V be converted back to the platform shader language at runtime, where it could be configured?
glslcc uses spirv-cross to decompile SPIR-V bytecode to MSL (Metal), ESSL300 (GLES), GLSL330 (OpenGL).
glslcc compile workflow:

```
input shader (ESSL310/GLSL450)
  --> SPIR-V bytecode
    --> MSL for ios/macos/tvos
    --> ESSL300 for GLES3
    --> GLSL330 for DesktopGL
```
> What if certain macros should be configured for some shaders at runtime in the game, like in my case?
Any chance you can show an example of your current usage related to macros and shaders?
> @DelinWorks @halx99 Please clarify the following. If I understand correctly, glslcc will compile shaders to SPIR-V at build time. What will happen with macros? What if certain macros should be configured for some shaders at runtime in the game, like in my case? Will this SPIR-V be converted back to the platform shader language at runtime, where it could be configured?
You can still use defines in your shaders; pass your shader as a custom shader and it will be compiled as a user shader WITH your defines set by the C compiler. Use `R"(...)"` raw string literals when embedding shaders as preprocessor strings.
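That approach can be sketched as follows (the helper and define names are illustrative, not axmol API): the shader body lives in a C++ raw string literal, and runtime-chosen defines are prepended before the source is handed to the engine as a custom shader.

```cpp
#include <string>

// Illustrative only: a fragment shader body kept in a raw string
// literal, with a section guarded by a preprocessor define.
static const std::string kFragBody = R"(
#ifdef USE_WIND
    // wind-specific code, compiled only when the define is present
#endif
void main() {}
)";

// Prepend the defines the game decides on at runtime, then hand the
// resulting string to whatever accepts custom shader source.
std::string buildShaderSource(bool useWind)
{
    std::string defines;
    if (useWind)
        defines += "#define USE_WIND 1\n";
    return defines + kFragBody;
}
```

The same pattern extends to any number of feature toggles: each boolean or value from the game's configuration becomes one prepended `#define` line.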
@DelinWorks Thanks, I was asking exactly about defines in custom shaders.
@halx99 OK, it looks like my understanding of this workflow matches your description. I just wanted to clarify whether I can adjust shaders at runtime with defines, like before.
@rh101 My usage is similar to what axmol does now for 3D models. For example, there is one PBR shader for 3D models in my project. It has some texture slots (like ambient occlusion, etc.) that are guarded with a define, since they should not be used for some models (there are no such textures for these models, and I do not want to create them). This way, my code constructs the final shader after checking some properties of the model from JSON. I can provide this shader code as-is if you are interested, since I found it on the internet and just adapted it to axmol.
> I can provide this shader code as-is if you are interested, since I found it on the internet and just adapted it to axmol
That would be good. I was just curious to know how you were using the shaders with macros in order to understand why it would be an issue to move to the new shaders.
> in order to understand why it would be an issue to move to the new shaders
I was confused by the word "compile". It looks like defines can be preserved in the SPIR-V intermediate language, so no issue exists. OK, I will attach this shader to the discussion a little later, when I am near a PC.
> Looks like defines could be preserved in the SPIR-V intermediate
Do you know how to do that? Currently, compiled shaders don't contain any preprocessor checks; I don't know how to preserve `#if` checks.
Looking through the glslcc usage docs, there is this command line argument:

```
-D --defines(=Defines) - Preprocessor definitions, seperated by comma or ';'
```

Doesn't that apply to the input shader file? The output file would then be whatever results after the defines are applied. Is that how it works?
For example, the input shader would have a section like this:

```glsl
#ifdef USE_NORMAL_MAPPING
attribute vec3 a_tangent;
attribute vec3 a_binormal;
#endif
```

If `USE_NORMAL_MAPPING` is passed to glslcc via the `-D` switch, then the code inside the define block would be processed into the output shader, and if `USE_NORMAL_MAPPING` is not passed, then it would not be included in the output shader. This should work, right? If it does work, then we just need to have a way to pass defines to the glslcc processor via cmake for the custom shaders.
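Assuming glslcc's documented `--defines` flag behaves as described, the build step might look something like this (the exact invocation is illustrative, not taken from axmol's build scripts):

```sh
# Compile a vertex/fragment pair twice: once with normal mapping
# baked in, once without (flag usage is illustrative).
glslcc --vert=model.vert --frag=model.frag \
       --defines=USE_NORMAL_MAPPING \
       --lang=gles --output=model_normalmap

glslcc --vert=model.vert --frag=model.frag \
       --lang=gles --output=model_plain
```

Each desired define combination would produce its own precompiled output, and the app would pick among them at runtime.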
@solan-solan is this what you require?
@halx99 SPIR-V is an intermediate bytecode. I doubt that this is possible.
@rh101
> is this what you require?

This argument will be applied at build time anyway. I do not know which shaders I need at build time; this data should be parsed from JSON according to my architecture.
Please consider keeping the old shader approach alongside the new one. Shader compilation does not affect performance; it only affects application init time.
> This argument will be applied at build time anyway. I do not know which shaders I need at build time; this data should be parsed from JSON according to my architecture.
You mention that you do not know which shaders to use at build time, but I don't see that as a problem at all. If all shaders ship with the application, then at runtime you select which one to use based on some condition (for instance, your JSON data). Isn't this what you need?
If that is not what you mean, then I will wait until you show an example so I can understand what is going on.
I do notice that in your previous post you mention you "construct" the shader at runtime, but I've never seen this use case before. Is this something that is commonly done? Is there any reason why you cannot provide all the shader combinations at build time, and then select the one you need at runtime?
@rh101 I can adapt to a lot of things, and maybe I will do so after these changes too. But please take into account that my code (good or bad) was based on the engine architecture as it was, and on the features the engine provided. At some point I decided that this usage of shaders was good and convenient, and I spent a lot of time on the implementation. And I asked a simple question: why was it necessary to remove the old shader approach? Do you know every usage a customer can imagine?!
Really, no complaints ) It is all up to you, but this change killed one of my use cases.
> Is this something that is commonly done?
I do not know.
> Is there any reason why you cannot provide all the shader combinations at build time, and then select which one you need at runtime?
There are many different settings in my shaders, and many different models with many different settings. It is a general system that is meant to be extended and controlled with JSON, to simplify the creation of each specific shader.
why not use uniforms?
@halx99
> OpenGL UBO support by @delin

maybe you mean @DelinWorks? )
@DelinWorks Defines can change code, but uniforms only change data. For example, my shader can apply a wind effect to some objects. The wind effect is a couple of additional lines that can be compiled out with a define, and if you have many different settings like this, it is good to think about some general system. The next example: I have one level with 3 point lights and a second level with 4 point lights. Should I pass the count via a uniform? In fact that would hurt performance, since a for statement with a non-constant bound optimizes badly on the GPU. Note that the previous shaders had the number of light sources as a define.
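A sketch of the pattern being described (identifiers are illustrative): the light count is a compile-time define, so the loop bound is a constant that the GPU compiler can unroll, instead of a uniform-driven dynamic bound.

```glsl
// Illustrative fragment-shader excerpt: the per-level light count is a
// compile-time define (e.g. the app prepends "#define NR_POINT_LIGHTS 4"),
// so the loop bound is constant and the compiler can unroll it.
#ifndef NR_POINT_LIGHTS
#define NR_POINT_LIGHTS 3
#endif

uniform vec3 u_lightColor[NR_POINT_LIGHTS];

vec3 applyPointLights(vec3 base)
{
    vec3 result = base;
    for (int i = 0; i < NR_POINT_LIGHTS; ++i)  // constant bound
        result += u_lightColor[i];
    return result;
}
```

With a uniform-driven count, the same loop would be a dynamic branch on many GPUs, which is the performance concern raised above.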
Then I don't see why @rh101's suggestion doesn't work for you. You're effectively combining shaders into a single file; why not make each shader do one thing and load them at runtime based on what you need? Since you now no longer have to write a shader for each platform, you can just make shaders like wind1.frag, wind2.frag, wind3.frag that are written in GLES 310 compliant shader code, and that's it.
Although I do agree with you that branching is bad on GPUs, I think that splitting them is way better. But doesn't glslcc support the defines that @rh101 found? Maybe it could be easy to implement.
> Although I do agree with you that branching is bad on GPUs, I think that splitting them is way better. But doesn't glslcc support the defines that @rh101 found? Maybe it could be easy to implement.
What if I have five light sources on the third level, and so on?
> You're effectively combining shaders into a single file; why not make each shader do one thing and load them at runtime based on what you need? Since you now no longer have to write a shader for each platform, you can just make shaders like wind1.frag, wind2.frag, wind3.frag that are written in GLES 310 compliant shader code, and that's it.
"Effectively combining" is a restriction on game design and implementation approach, imho.
And now I also do not need to write a shader for each platform. It is not clear why this well-tested feature should be removed.
> Engine auto syncs the compiled runtime shader folder (`${CMAKE_BINARY_DIR}/runtime/axslc`) to the target app's `app_res_root/axslc`, and `app_res_root/axslc` will be added to the search path by the engine FileUtils implementation. Start app, load compiled shaders from `app_res_root/axslc` by shader name.
The following is just a suggestion:
Is it at all possible to modify this to add a sub-directory to the shader file lookup? What I mean is that the converted shaders would be stored in `app_res_root/axslc/shaders` instead of `app_res_root/axslc`. The `app_res_root/axslc` path is still added to the search list, so, as an example, to load a shader you would use `shaders/shaderfile_fs` etc.
This way the filenames would not pollute the root search path, and it also makes it easier to override existing shaders at runtime. For example, if all shaders exist in `app_res_root/axslc/shaders` with `app_res_root/axslc` in the search path, and we want to override an existing shader (for example, `3D_color_fs`), then the app does the following:

First, the app creates a directory in the writable path (let's say it's called "dlc" for downloadable content, with a "shaders" sub-dir in it), and then we download a new file named `3D_color_fs` to it. In this example, the new file will be `dlc/shaders/3D_color_fs`.
Next the app calls `FileUtils::addSearchPath(std::string_view searchpath, const bool front)` with `front` set to `true`, so it would be:

```cpp
FileUtils::getInstance()->addSearchPath("dlc", true);
```

This way, when code tries to load `shaders/3D_color_fs`, it will find the new version of it in the `dlc/shaders/` folder instead of using the one that exists in `app_res_root/axslc/shaders/`.
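The lookup order this relies on can be sketched with a toy resolver (this is not axmol's FileUtils, just a self-contained illustration of why a front-inserted search path wins):

```cpp
#include <string>
#include <vector>

// Toy sketch of ordered search-path resolution (not axmol's FileUtils):
// the first search path under which the relative filename "exists" wins,
// which is how a front-inserted "dlc" path overrides the shipped shaders.
struct FakeFS {
    std::vector<std::string> files;        // absolute paths that exist
    std::vector<std::string> searchPaths;  // checked in order

    void addSearchPath(const std::string& p, bool front) {
        if (front) searchPaths.insert(searchPaths.begin(), p);
        else       searchPaths.push_back(p);
    }

    std::string fullPathForFilename(const std::string& name) const {
        for (const auto& sp : searchPaths) {
            std::string candidate = sp + "/" + name;
            for (const auto& f : files)
                if (f == candidate) return candidate;
        }
        return "";  // not found
    }
};
```

With `app_res_root/axslc` registered first and `dlc` later inserted at the front, resolving `shaders/3D_color_fs` returns the `dlc/shaders/` copy.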
This is actually more for custom shaders than the default shaders contained in Axmol. It saves us from forcing the user to download an entirely new release (APK/Installer etc) just to update a few resources, since the app can do it at runtime by using downloadable content.
I realise this can be done without the `shaders` sub-directory too, but it means all shader lookups would be at the root level, and I'm not sure if that would cause any issues in the future.
The user can download shader files to `writeablepath/any-folder-name` and add it to the front of the search paths.
> The user can download shader files to `writeablepath/any-folder-name` and add it to the front of the search paths
I understand, so all shaders will be referenced at root level as `shaderfilename_[fs|vs]`. As I mentioned in my previous post, it was just a suggestion to add them to a `shaders` sub-folder purely as a way of organising them, but it's definitely not necessary in any way.
Builtin shaders use the root level; app shaders can use a relative path, no limitation, e.g. `axslc/custom`. Then the user can register shaders by the names `custom/xxx_vs`, `custom/xxx_fs` without adding a search path at development time.
> Builtin shaders use the root level; app shaders can use a relative path, no limitation, e.g. `axslc/custom`

That would be good!
> Builtin shaders use the root level; app shaders can use a relative path, no limitation, e.g. `axslc/custom`. Then the user can register shaders by the names `custom/xxx_vs`, `custom/xxx_fs` without adding a search path at development time.
done by: https://github.com/axmolengine/axmol/commit/ac073ee8c189866f4a38665e888603ed7715c316
FYI Ready to preview for anyone who is interested: https://github.com/axmolengine/axmol/releases/tag/v1.1.0-preview2
Released
UPDATE
GLES3/OpenGL3 support was moved to milestone 2.0; GLES 2.0 is still supported for old Android devices and the Android-simulator marketplaces of China.
workflow

1. Write shaders in `ESSL 310` or `GLSL 450` and put them in `project_root/Source/shaders`; default shader file extensions: vertex (`.vert`, `.vsh`), fragment (`.frag`, `.fsh`)
2. `glslcc` compiles the source shaders for the target platforms: Desktop GL (`GLSL330`), GLES3 (`ESSL300`), GLES2 (`GLSL100`), Apple Metal (`MSL`)
3. Engine auto syncs the compiled runtime shader folder (`${CMAKE_BINARY_DIR}/runtime/axslc`) to the target app's `app_res_root/axslc`, and `app_res_root/axslc` will be added to the search path by the engine FileUtils implementation.
4. Start app, load compiled shaders from `app_res_root/axslc` by shader name.

checklist
- `axmolengine/glslcc` 1.9.0 by @halx99
- `AXGLSLCC.cmake` cmake tool by @halx99
- migrate builtin shaders in branch `amol-migrate-1.1` by @delin and @halx99, because glslcc (spirv-cross) only accepts ESSL310 or GLSL450 as input shaders
- switch the `GLES` loader to `glad` for auto loading of GLES3 APIs, i.e. `glDrawElementsInstanced`
- `glsl-optimizer` can be removed; it does not support compiling ESSL 3.1 to MSL and is not compatible with the new shader workflow
- auto sync `${CMAKE_BINARY_DIR}/runtime/axslc` to apps by @halx99 (draft works, waiting for the cmake patch by @halx99 to be merged by kitware)
- `ProgramManager`: support loading custom shader programs immediately and improve the register mechanism

Compatibility
`main` is the `lts` for 1.0.x currently. Even though we want to merge dev into main in the future, before that we will create a `lts` branch `1.0.x`.
New shader looks like

vertex shader:

```glsl
layout(location = TEXCOORD0) out vec2 v_texCoord;
layout(std140) uniform vs_ub { mat4 u_MVPMatrix; };
void main() { gl_Position = u_MVPMatrix * a_position; v_texCoord = a_texCoord; }
```

fragment shader:

Notes
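A minimal matching fragment shader under the same conventions might look like this (illustrative only, not the engine's exact builtin source; the semantic names in `layout(...)` follow the glslcc style used in the vertex sample):

```glsl
#version 310 es
precision highp float;

layout(location = TEXCOORD0) in vec2 v_texCoord;
layout(binding = 0) uniform sampler2D u_tex0;
layout(location = SV_Target0) out vec4 FragColor;

void main() { FragColor = texture(u_tex0, v_texCoord); }
```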
Shader syntax migrate summary

- add `#version 310 es` in the header
- `attribute` in the vertex shader changes to `in`
- `varying` in the vertex shader changes to `out`
- `varying` in the fragment shader changes to `in`
- `gl_FragColor` needs to be replaced by a defined `out vec4` variable in the fragment shader

Shader restrictions or recommendations
performance preview (release build)
axmol-dev:
axmol-main:
cocos2dx-3.17.1