The camera provides an "inverseProjection" vector that is documented with:
"A vector to invert projection calculations: (1/p(0,0), 1/p(1,1), 1/p(2,2), -p(3,2)/p(2,2); Usage: pos.xyz * invProj.xyz + vec3(0,0,invProj.w)"
This is valid only before the perspective division! That means the incoming xyz coordinates were never really in view space, but in projection space (just without w).
However, this is simply not true: we have proven that you get a valid view direction with normalize(vec3(devCord.xy * inverseProjection.xy, 1)).
The voxel geometry shader code needs the inverseProjection vector in its current form.
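For completeness, a sketch of that view-direction reconstruction, assuming devCoord holds normalized device coordinates in [-1, 1] and that the +z of the original expression matches the engine's view-space convention (the function name is illustrative):

```glsl
uniform vec4 inverseProjection; // (1/p00, 1/p11, 1/p22, -p32/p22)

// Builds a view-space ray direction from normalized device coordinates;
// only the xy components of inverseProjection are needed here.
vec3 viewDirFromDevCoord(vec2 devCoord) {
    return normalize(vec3(devCoord * inverseProjection.xy, 1.0));
}
```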