nairaner closed this issue 6 years ago.
@doitsujin been playing with these demos and found that he has built a new Mandel version with an AVX512 demo (http://users.skynet.be/fquake/Mandel_AVX512.zip).. seems no source code is shared, but what's interesting about this demo is that it uses HLSL doubles.. maybe one of the few codes/demos using them.. running it with your latest DXVK shows a popup window saying the GPU doesn't support doubles (FP64), which is expected, as DXVK doesn't support that optional D3D11 cap.. as said, it's not really needed, only pointing it out in case you someday feel really bored.. see doubles in use:
```hlsl
void CSMandelJulia_scalarDouble1( uint3 Gid : SV_GroupID, uint3 DTid : SV_DispatchThreadID, uint3 GTid : SV_GroupThreadID, uint GI : SV_GroupIndex )
{
    int counter = max_iterations & (~(UNROLL - 1));
    double u, v; u = (double)(a0 + da * DTid.x); v = (double)(b0 + db * DTid.y);
    double a, b; a = (julia) ? (double)ja0 : u; b = (julia) ? (double)jb0 : v;
    bool inside; double ur, vr, t;
    // ...
```
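Just for illustration (my own sketch, not anything from the demo or from DXVK): supporting that cap would roughly mean requiring the Vulkan shaderFloat64 feature and emitting 64-bit float types in the generated SPIR-V, something like this fragment (ids made up):

```
; sketch only: what HLSL 'double' implies on the SPIR-V side
              OpCapability Float64        ; needs the shaderFloat64 device feature
   %double = OpTypeFloat 64               ; 64-bit float type for u, v, a, b, ...
 %double_2 = OpConstant %double 2.0
; inside the shader's function body, the math then uses double-precision opcodes, e.g.
;   %t = OpFMul %double %a %b
```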
Two comments from previous posts: please don't waste time with http://users.skynet.be/fquake/MandelDX11_10.zip, I already wasted mine and it doesn't even work natively on Windows.. Note also that Wine D3D11 doesn't support doubles either, from my testing.. @jozefkucia, right?

> Note also that Wine D3D11 doesn't support doubles either, from my testing.. @jozefkucia, right?
Yes, wined3d doesn't support doubles yet.
One funny side-effect of https://github.com/doitsujin/dxvk/commit/4c693fc2624a8d37288815125aba19fa0a3aef0b is that the Superposition GUI almost works as it should.. It's still no better than wine-devel, as the text in the menus is gone once you click on them, but compared to the black box we had before the spirv-tools auto-optimization, it's an improvement :)
Now if only @pdaniell-nv would come up with an idea why #90 happens to me and another user with GTX 970, but not for one with GTX 750, that would be great :)
Nothing springs to mind. Do you see any validation errors?
There are a few physical device feature and property differences between the GTX 750 and GTX 970. There is this new LunarG layer that fakes the physical device properties (https://www.lunarg.com/new-vulkan-dev-sim-layer/). It might be worth simulating the GTX 750 on your GTX 970 and seeing if it produces the same result. If so, then I guess DXVK is reacting to some physical device property.
If you want me to look more into it, would you mind making it easy to reproduce on Windows? Perhaps some instructions or a .zip package with some binaries. Thanks.
Or if there are specific shaders you suspect, I'd be happy to look at them if you attach them here.
@pdaniell-nv Since you have hopefully kept up with the dxvk<->spirv problems regarding the different validation behaviour on nVidia: things work "out-of-the-box" for AMD, but nVidia needs a different spirv-opt pass (like --legalize-hlsl) to not crash. After doing this, there are no validation errors on the shaders and the driver does not crash... but for me at least it produces this weird coloring.
I will look into this simulation thing, and see if I can "fake" a 750.
I have not tested this on "real" Windows, but what I have tested is Unigine Superposition https://benchmark.unigine.com/superposition?lang=en (free version), both wine-staging-2.21 and wine-vulkan-3.2 with DXVK. Tested the 387.34 and 390.25 drivers on Linux. The driver revisions are different on Windows, as you are quite aware of, but what 390.25 corresponds to on Windows I don't know. In theory, if it is the driver and the same bug with dxvk, it should happen on Windows too. Binary builds of DXVK: https://haagch.frickel.club/files/dxvk/
There SHOULD not be a difference between using the Ubuntu binary .deb driver and downloading the .run file from nVidia.com, right? I mean, the "driver" part is a closed-source binary blob and not really what is being compiled, afaik?
@pdaniell-nv Did some fiddling with the simulation layer, and saw that both the "DXVK HUD" and VulkanCapsViewer showed changes when I used hardware .json files. I downloaded a few for testing from https://vulkan.gpuinfo.org/
Tried various 750 Ti driver versions for Linux/Windows and more, and tested a few other profiles different from the 970. The problem did not change, although I had a couple of instances with random weird artifacts. I also tested with Intel/AMD profiles, but not surprisingly that ended in a crash.
It does seem to have something to do with surface lighting of some sort.. It might be hard to see from the couple of screenshots I have put up, but when (in Superposition) I move to dark spots with low light, the surfaces look mostly normal... it's when the light shines "too bright" on the surface that it looks that bad. I guess it's the same in Unigine Valley, as some darker scenes don't look that bad, but when the well-lit hillsides show up, it's just "too bright".
@pdaniell-nv Also take a look at my screenshots https://github.com/doitsujin/dxvk/issues/90#issuecomment-368323734
And as per my comment above in the same thread, the bug does not show in "The Talos Principle", either with Wine Vulkan or with Wine DXVK. Two different graphics engines, though.
@pdaniell-nv: As you have asked for shader examples which you can check, I already mentioned the shaders which @SveSop found for Unigine Heaven, Unigine Valley and Unigine Superposition, which still crash the Nvidia Linux driver 390.25. To make it much easier for you, here is a summary of all the links and information spread across different threads. Really looking forward to your findings:
Unigine Heaven:
The bad shaders which crash the driver: bad shader heaven
The corresponding fixed ones after using `spirv-opt --inline-entry-points-exhaustive`: fixed shader heaven

Unigine Valley:
The bad shaders which crash the driver: bad shader valley
The corresponding fixed ones after using `spirv-opt --inline-entry-points-exhaustive`: fixed shader valley

Unigine Superposition:
The bad shaders which crash the driver: bad shader superposition
The corresponding fixed ones after using `spirv-opt --inline-entry-points-exhaustive --skip-validation --eliminate-dead-code-aggressive`: fixed shader superposition
If you need any other information, please let us know. It really would be fantastic to be able to use dxvk on nvidia without needing the SPIR-V optimizer, either by nvidia fixing possible bugs in the driver or by dxvk fixing possible bugs in its code.
Many thanks! Christian
@pingubot Thanks for breaking it down for me, it really helps.
For the Heaven shaders and the Valley shaders they hit a bug in our compiler in that we don't handle OpBitField{S,U}Extract correctly when the "Offset" and "Count" parameters are something other than signed integer. If possible, modify the generator to use int parameters here. We have implemented a compiler fix for this already and it should be released in an upcoming driver.
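In other words (a made-up fragment, not one of the actual shader dumps), emitting the Offset/Count operands as signed int constants avoids the path that crashes:

```
; made-up fragment: Offset and Count passed as signed ints
      %uint = OpTypeInt 32 0
       %int = OpTypeInt 32 1
%uint_value = OpConstant %uint 305419896   ; stand-in for the real base value
     %int_8 = OpConstant %int 8            ; Offset as a signed int
     %int_4 = OpConstant %int 4            ; Count as a signed int
    %field  = OpBitFieldUExtract %uint %uint_value %int_8 %int_4
```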
I didn't finish looking at all the Superposition shaders. For the first one, PS_1c31935ff36f56ad8a195a03b959cf58e4ee0bec, the problem is that OpTypeImage is specifying a non-float depth (or shadow) image. This isn't something we support. It's not clear to me if this is our bug or not; I think it is, I guess nobody has tried creating an integer depth image before now. You should be able to work around it by using a float type for depth images, if possible. In the meantime I'll investigate a fix for this.
For the PS_8c0e1a4293e345daf650578b35851da3fa78cd86 shader I'm seeing an OpTypeVector with a component count of 5. This should be 4 or less.
Shaders PS_55dfe220ecfc27eb1c452ecffbff3397e52ec94a and PS_c38d348a092ddb481d51a49f03c0bc0c8096bc85 have the same problem as PS_1c31935ff36f56ad8a195a03b959cf58e4ee0bec in that they're using a non-float depth image.
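For clarity (again a made-up fragment, not the actual shader dumps), the difference comes down to the sampled type in the OpTypeImage declaration when the Depth operand is 1:

```
; made-up fragment: depth (Depth = 1) image declarations
     %uint = OpTypeInt 32 0
    %float = OpTypeFloat 32
; integer sampled type with Depth = 1 is what the driver currently rejects
  %img_int = OpTypeImage %uint  2D 1 0 0 1 Unknown
; float sampled type for depth/shadow images is the form it accepts
%img_float = OpTypeImage %float 2D 1 0 0 1 Unknown
```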
@pdaniell-nv
I don't think declaring Integer depth images makes any sense, so I guess I'll change the compiler to only emit depth image declarations when they are actually needed - which is a bit of a hassle, but should prevent bugs like this one in the future. For now I've filtered out the integer depth image declarations.
As for the signed integer issue, are you aware of any other instructions which make this assumption? I changed the compiler to use signed integers for those instructions (as well as OpBitFieldInsert).
I also disabled the workaround that packs the reference value from depth-compare operations into the coordinate vector, which was why the five-dimensional vectors were generated. Hopefully this workaround is actually not needed (anymore) for those instructions.
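For anyone following along, the invalid declarations the old work-around could produce looked roughly like this (made-up fragment): packing the Dref into an already four-component cube-array coordinate pushes the vector past the allowed size:

```
; made-up fragment: what the old packing work-around could end up declaring
  %float   = OpTypeFloat 32
  %v4float = OpTypeVector %float 4   ; cube-array coordinate (u, v, w, layer)
  %v5float = OpTypeVector %float 5   ; invalid: Vulkan SPIR-V vectors allow only 2-4 components
```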
In any case, thanks again for your input, this has been really helpful so far..
@pdaniell-nv
Thank you for help on these issues. We really appreciate it. Your OpBitField suggestion helped compile those shaders from Heaven's benchmark successfully.
There is another issue in Heaven's benchmark that we need help with. These shaders are crashing in the Nvidia driver. This happened after we removed the packing of the Dref value into the coordinate vectors in the latest commit of DXVK. These shaders work fine on AMD.
@doitsujin Somewhat minor issue with that commit https://github.com/doitsujin/dxvk/commit/8bfd12067aaaaab34a1cd9036be1e66b99f12434#commitcomment-27846864
@SveSop I'm fully aware that the commit breaks pretty much everything on Nvidia. But the old workaround was fundamentally broken as it caused the compiler to generate invalid SPIR-V in a lot of cases.
What I need is clarification as to why depth-compare operations don't work properly on their driver and what I can do to work around it.
@pdaniell-nv, it would be fantastic if you could answer @doitsujin's question to make dxvk usable on nvidia again..
@doitsujin Since you were fully aware, a note like "Will break nVidia rendering" in the commit message would have been nice :)
@doitsujin
> I'm fully aware that the commit breaks pretty much everything on Nvidia.
Not sure about this one, but recent commits fix invisible characters' and NPCs' heads in The Witcher 3. Heads are a bit out of sync with the body while the characters are moving (it looks like the moment in a scary movie when a changing face reveals the villain's real nature), but they are visible.
@ZeroFault Thanks for attaching the broken shaders. I investigated the issue and it appears to have hit another bug in our compiler. I'm confirming with our compiler guys to be sure. In this case it appears we're not handling OpImageSampleDrefExplicitLod correctly. Our bug here is probably what drove the work-around DXVK implemented for NVIDIA that packs the depth parameter with the coordinate. FWIW, you shouldn't need this work-around for cube, cube-array and 2d-array depth images. I think those are probably the cases that generate the OpTypeVector(5), so you can fix that too.
A slightly simpler work-around is to simply use a +1 larger Coordinate vector for the Dref ops. The last component is ignored; the proper Dref is the one actually passed as the Dref operand. The larger Coord vector is needed because internally we move the Dref into that last slot, and our bug was that we neglected to increase the size of the vector to take the Dref. So your work-around of putting the Dref into the Coord effectively made it +1 bigger and worked around the bug.
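Put differently (made-up ids, just to show the shape of the work-around): for a 2D depth image a vec2 coordinate would be enough, but padding it to a vec3 leaves room for the driver to move the Dref into the last slot internally:

```
; made-up fragment: Dref sample with a padded coordinate
      %float = OpTypeFloat 32
    %v3float = OpTypeVector %float 3
    %float_0 = OpConstant %float 0
  %float_0_5 = OpConstant %float 0.5
      %coord = OpConstantComposite %v3float %float_0_5 %float_0_5 %float_0  ; third component is ignored padding
; %shadow is assumed to be an OpSampledImage of a 2D depth image
    %result  = OpImageSampleDrefExplicitLod %float %shadow %coord %float_0_5 Lod %float_0
```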
I'll see what our compiler folks say and try to get the fix in the driver asap.
Thinking about this more, it seems crazy we have this bug, because surely the Vulkan CTS would have caught it; it definitely tests all the Dref image ops.
Turns out a bug in glslang is masking it, and most Vulkan CTS tests are written in GLSL.
With GLSL you have to pack the Dref into the coord. The prototype is "textureLod(sampler2DShadow, vec3, float)"; the Dref is in the last slot of the vec3 coordinate. When glslang generates the SPIR-V, what it should do is shrink the coord down to a vec2, since OpImageSampleDrefExplicitLod with a 2D depth image only needs a vec2, but it doesn't; it leaves it as a vec3. This masked our bug.
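Roughly, the glslang output for that prototype looks like this (made-up ids, reconstructed from the description above), with the coordinate left as a full vec3:

```
; made-up fragment: glslang output for textureLod(sampler2DShadow, vec3 P, float lod)
   %float   = OpTypeFloat 32
 %v3float   = OpTypeVector %float 3
; %P, %shadow and %lod are assumed inputs
    %dref   = OpCompositeExtract %float %P 2                  ; Dref pulled out of P.z
  %result   = OpImageSampleDrefExplicitLod %float %shadow %P %dref Lod %lod
; the Coordinate stays a vec3 instead of being shrunk to a vec2, which is
; legal per the spec quote below and happened to mask the driver bug
```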
I don't really think it's a bug in glslang, it's just being lazy and exploiting the SPIR-V spec:
> Coordinate must be a scalar or vector of floating-point type. It contains (u[, v] … [, array layer]) as needed by the definition of Sampled Image. It may be a vector larger than needed, but all unused components will appear after all used components.
Anyway, I'll implement a new workaround.
0e9b7d7ccdcc6ca457576f44c2e2201c59e4a968 adds a new workaround for the dref issue.
@pdaniell-nv, while you're watching this thread: https://forums.geforce.com/default/topic/1014038/nvidia-driver-bug-with-glsl-mat3-in-vulkan-/
I came across this while googling some time ago. Maybe it's not related, or not a bug at all.
Hi all, @doitsujin: about fixing Intel HD Windows driver support.. the issue is that calling vkCreateInstance on that driver ends up calling your dxgi CreateDXGIFactory or CreateDXGIFactory1 method, which in turn ends up calling vkCreateInstance again. I created a very lame patch to solve the issue.. (please see the last post on issue https://github.com/doitsujin/dxvk/issues/62 with the diff I apply) would be nice if a somewhat "more correct" patch gets merged eventually.. of course you also have to disable shaderStorageImageReadWithoutFormat, and with that I just completed testing DXVK on the last remaining desktop Vulkan driver, the Intel HD Windows Vulkan driver.. the results seem pretty good:
Closing because the original issue has been resolved.
Software information
Unigine Superposition launcher crashes in kernel32
System information
Apitrace file(s)
https://mega.nz/#!5YI2VQAB!p0069ISZdqoeMSXdPX20fJXlQQM8-C36ZjWhtYl48To
Log files
d3d11.log: launcher_d3d11.log
dxgi.log: launcher_dxgi.log
wine.log