Closed: jhonphilipps closed this issue 4 years ago
Source: Ihor Dykhta – Rendering Engineer (dykhta.ihor@gmail.com), sunandblackcat.com
[21 Mar 2013] | Parallax Occlusion Mapping in GLSL
This lesson shows how to implement different Parallax Mapping techniques in GLSL and OpenGL (the same techniques are used to implement Parallax Mapping in DirectX). The following techniques are described: Parallax Mapping, Parallax Mapping with Offset Limiting, Steep Parallax Mapping, Relief Parallax Mapping and Parallax Occlusion Mapping (POM). The article also shows how to implement self-shadowing (soft shadows) in combination with Parallax Mapping. The accompanying images depict the results of the Parallax Mapping techniques in comparison to simple lighting and Normal Mapping.

Basics of Parallax Mapping

In computer graphics, Parallax Mapping is an enhanced version of Normal Mapping: it not only changes the behavior of lighting, but creates the illusion of 3D details on flat polygons. No additional geometry is generated. The previous images show Parallax Mapping compared to Normal Mapping. You might think that Parallax Mapping offsets the initial geometry, but instead it offsets the texture coordinates that are then used to access the textures with diffuse colors and normals.

To implement Parallax Mapping you need a heightmap texture. Each pixel of the heightmap contains information about the height of the surface. The height from the texture may instead be interpreted as the amount of depth into the surface; in that case you have to invert the values in the initial heightmap. Parallax Mapping in this tutorial uses values from the heightmap as depth values: black (0) in the heightmap represents the absence of holes, and white (1) represents holes of maximum depth.

The following examples of Parallax Mapping use three textures: a heightmap, a diffuse texture and a normal map. Usually the normal map is generated from the heightmap. In our example the heightmap is treated as a depth map, so before generating the normal map you have to invert the heightmap. You can combine the normal map and the heightmap into one texture (height in the alpha channel), but for convenience this lesson uses three separate textures.
An example of the textures used for Parallax Mapping is shown here:
The main task of all Parallax Mapping techniques is to modify the texture coordinates in such a way that a flat surface looks three-dimensional. The effect is calculated in the fragment shader for each visible fragment of the object. Look at the following image. Level 0.0 represents the absence of holes. Level 1.0 represents holes of maximum depth. The real geometry of the object is unchanged and always lies at level 0.0. The curve represents the values stored in the heightmap and how those values are interpreted.

The current fragment is highlighted with a yellow square. The fragment has texture coordinates T0. Vector V is the vector from the camera to the fragment. Sample the heightmap with texture coordinates T0: you get the value H(T0) = 0.55. The value isn't equal to 0.0, so the fragment doesn't lie on the surface; there is a hole below the fragment. So you have to extend vector V to the closest intersection with the surface defined by the heightmap. That intersection is at depth H(T1) and at texture coordinates T1. T1 is then used to sample the diffuse texture and the normal map. So the main purpose of every Parallax Mapping technique is to precisely calculate the intersection point between the camera vector V and the surface defined by the heightmap.

Basic shaders for Parallax Mapping

The calculation of Parallax Mapping (similarly to Normal Mapping) is performed in tangent space, so the vectors to the light (L) and to the camera (V) should be transformed to tangent space. After the new texture coordinates are calculated by a Parallax Mapping technique, you can use them to calculate self-shadowing, to get the color of the fragment from the diffuse texture, and for Normal Mapping. In this tutorial the implementation of Parallax Mapping is in the shader function parallaxMapping(), self-shadowing is in parallaxSoftShadowMultiplier(), and lighting by the Blinn-Phong model with Normal Mapping is in normalMappingLighting().
The following vertex and fragment shaders may be used as a base for Parallax Mapping and self-shadowing. The vertex shader transforms the vectors to the light and to the camera into tangent space. The fragment shader calls the Parallax Mapping technique, then the calculation of the self-shadowing factor, and finally the calculation of lighting:
// Basic vertex shader for parallax mapping
#version 330
// attributes
layout(location = 0) in vec3 i_position; // xyz - position
layout(location = 1) in vec3 i_normal; // xyz - normal
layout(location = 2) in vec2 i_texcoord0; // xy - texture coords
layout(location = 3) in vec4 i_tangent; // xyz - tangent, w - handedness
// uniforms
uniform mat4 u_model_mat;
uniform mat4 u_view_mat;
uniform mat4 u_proj_mat;
uniform mat3 u_normal_mat;
uniform vec3 u_light_position;
uniform vec3 u_camera_position;
// data for fragment shader
out vec2 o_texcoords;
out vec3 o_toLightInTangentSpace;
out vec3 o_toCameraInTangentSpace;
///////////////////////////////////////////////////////////////////
void main(void)
{
// transform to world space
vec4 worldPosition = u_model_mat * vec4(i_position, 1);
vec3 worldNormal = normalize(u_normal_mat * i_normal);
vec3 worldTangent = normalize(u_normal_mat * i_tangent.xyz);
// calculate vectors to the camera and to the light
vec3 worldDirectionToLight = normalize(u_light_position - worldPosition.xyz);
vec3 worldDirectionToCamera = normalize(u_camera_position - worldPosition.xyz);
// calculate bitangent from normal and tangent
vec3 worldBitangent = cross(worldNormal, worldTangent) * i_tangent.w;
// transform direction to the light to tangent space
o_toLightInTangentSpace = vec3(
dot(worldDirectionToLight, worldTangent),
dot(worldDirectionToLight, worldBitangent),
dot(worldDirectionToLight, worldNormal)
);
// transform direction to the camera to tangent space
o_toCameraInTangentSpace = vec3(
dot(worldDirectionToCamera, worldTangent),
dot(worldDirectionToCamera, worldBitangent),
dot(worldDirectionToCamera, worldNormal)
);
// pass texture coordinates to fragment shader
o_texcoords = i_texcoord0;
// calculate screen space position of the vertex
gl_Position = u_proj_mat * u_view_mat * worldPosition;
}
// basic fragment shader for Parallax Mapping
#version 330
// data from vertex shader
in vec2 o_texcoords;
in vec3 o_toLightInTangentSpace;
in vec3 o_toCameraInTangentSpace;
// textures
// note: explicit uniform locations for samplers require GL 4.3 or
// ARB_explicit_uniform_location; with #version 330 bind the texture
// units from the application with glUniform1i instead
uniform sampler2D u_diffuseTexture;
uniform sampler2D u_heightTexture;
uniform sampler2D u_normalTexture;
// color output to the framebuffer
out vec4 resultingColor;
////////////////////////////////////////
// scale for size of Parallax Mapping effect
uniform float parallaxScale; // ~0.1
//////////////////////////////////////////////////////
// Implements Parallax Mapping technique
// Returns modified texture coordinates, and last used depth
vec2 parallaxMapping(in vec3 V, in vec2 T, out float parallaxHeight)
{
// ...
}
//////////////////////////////////////////////////////
// Implements self-shadowing technique - hard or soft shadows
// Returns shadow factor
float parallaxSoftShadowMultiplier(in vec3 L, in vec2 initialTexCoord,
in float initialHeight)
{
// ...
}
//////////////////////////////////////////////////////
// Calculates lighting by Blinn-Phong model and Normal Mapping
// Returns color of the fragment
vec4 normalMappingLighting(in vec2 T, in vec3 L, in vec3 V, float shadowMultiplier)
{
// restore normal from normal map
vec3 N = normalize(texture(u_normalTexture, T).xyz * 2 - 1);
vec3 D = texture(u_diffuseTexture, T).rgb;
// ambient lighting
float iamb = 0.2;
// diffuse lighting
float idiff = clamp(dot(N, L), 0, 1);
// specular lighting
float ispec = 0;
if(dot(N, L) > 0.2)
{
vec3 R = reflect(-L, N);
ispec = pow(max(dot(R, V), 0.0), 32.0) / 1.5;
}
vec4 resColor;
resColor.rgb = D * (iamb + (idiff + ispec) * pow(shadowMultiplier, 4));
resColor.a = 1;
return resColor;
}
/////////////////////////////////////////////
// Entry point for Parallax Mapping shader
void main(void)
{
// normalize vectors after vertex shader
vec3 V = normalize(o_toCameraInTangentSpace);
vec3 L = normalize(o_toLightInTangentSpace);
// get new texture coordinates from Parallax Mapping
float parallaxHeight;
vec2 T = parallaxMapping(V, o_texcoords, parallaxHeight);
// get self-shadowing factor for elements of parallax
float shadowMultiplier = parallaxSoftShadowMultiplier(L, T, parallaxHeight - 0.05);
// calculate lighting
resultingColor = normalMappingLighting(T, L, V, shadowMultiplier);
}
Parallax Mapping and Parallax Mapping with Offset Limiting

The simplest of the Parallax Mapping techniques is a one-step approximation of the new texture coordinates from the original texture coordinates; it is called simply Parallax Mapping. Parallax Mapping gives more or less valid results only when the heightmap is smooth and doesn't contain a lot of small details. Otherwise, at large angles between the vector to the camera (V) and the normal (N), the parallax effect breaks down. The main idea of the Parallax Mapping approximation is the following:
- Get the height H(T0) from the heightmap at the original texture coordinates T0.
- Offset the original texture coordinates, taking into account the vector to the camera V and the height at the initial texture coordinates H(T0).
The offset of the texture coordinates is computed as follows. The vector to the camera V is in tangent space, and tangent space is built along the gradient of the texture coordinates, so the x and y components of V can be used without any transformation as the direction of the offset of the texture coordinates along V. The z component of V is the normal component, perpendicular to the surface. You can divide the x and y components by the z component; this is the original calculation of texture coordinates in the Parallax Mapping technique. Alternatively, you can leave the x and y components as they are; that implementation is called Parallax Mapping with Offset Limiting. Offset Limiting reduces the number of weird results when the angle between the vector to the camera (V) and the normal (N) is high. So if you add the x and y components of V to the original texture coordinates, you get new texture coordinates shifted along vector V.
You also have to take into account the depth H(T0) at the original texture coordinates: simply multiply V.xy by H(T0).
You can control the strength of the Parallax Mapping effect with a scale variable, by which V.xy is multiplied once more. The most useful values of scale are from just above 0 to roughly 0.5; with a higher scale the results of the Parallax Mapping approximation are wrong in most cases (as in the image). You can also make the scale negative, but then you have to invert the z components of the normals from the normal map. So the final formula for the shifted texture coordinates TP is: TP = T0 - parallaxScale * (V.xy / V.z) * H(T0), where V is the tangent-space vector from the fragment to the camera (the subtraction matches the shader function below).
The following image depicts how the depth value H(T0) from the heightmap determines the amount of shift of the texture coordinates T0 along vector V. In this case the result TP is wrong, since Parallax Mapping is only an approximation that isn't intended to find the exact intersection of vector V and the surface.
The main advantage of this method is that it performs only one additional sample of the heightmap, which gives the GLSL shader great performance. Here is the implementation of the shader function for simple Parallax Mapping:
vec2 parallaxMapping(in vec3 V, in vec2 T, out float parallaxHeight)
{
// get depth for this fragment
float initialHeight = texture(u_heightTexture, T).r;
// calculate amount of offset for Parallax Mapping
vec2 texCoordOffset = parallaxScale * V.xy / V.z * initialHeight;
// calculate amount of offset for Parallax Mapping With Offset Limiting
// (keep only one of the two variants above)
texCoordOffset = parallaxScale * V.xy * initialHeight;
// this technique doesn't search for the real intersection depth
parallaxHeight = initialHeight;
// return modified texture coordinates
return T - texCoordOffset;
}
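As a quick sanity check of the offset math, here is a minimal CPU-side sketch in Python (not shader code; the function name and the sample values are illustrative). It evaluates both variants of the formula for a view direction 45 degrees off the surface normal:

```python
# CPU-side sketch of the two Parallax Mapping offset formulas.
# V is the normalized tangent-space vector to the camera, H0 the sampled depth.

def parallax_offset(V, T0, H0, scale=0.1, limit_offset=False):
    """Return shifted texture coordinates TP = T0 - offset."""
    vx, vy, vz = V
    if limit_offset:
        # Parallax Mapping with Offset Limiting: no division by V.z
        ox, oy = scale * vx * H0, scale * vy * H0
    else:
        # classic Parallax Mapping: divide by V.z
        ox, oy = scale * vx / vz * H0, scale * vy / vz * H0
    return (T0[0] - ox, T0[1] - oy)

# view direction 45 degrees off the normal, depth H(T0) = 0.55 as in the article
V = (0.707, 0.0, 0.707)
classic = parallax_offset(V, (0.5, 0.5), 0.55)
limited = parallax_offset(V, (0.5, 0.5), 0.55, limit_offset=True)
```

At grazing angles V.z shrinks, so the classic variant produces a larger shift than the limited one; that growth is exactly what Offset Limiting suppresses.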
Steep Parallax Mapping

Steep Parallax Mapping, unlike the simple Parallax Mapping approximation, doesn't just offset the texture coordinates without any checks for validity; it checks whether the result is close to a valid value. The main idea of this method is to divide the depth of the surface into a number of layers of the same height. Then, starting from the topmost layer, you sample the heightmap, each time shifting the texture coordinates along the view vector V. If the point is under the surface (the depth of the current layer is greater than the depth sampled from the texture), stop the checks and use the last texture coordinates as the result of Steep Parallax Mapping.

An example of how Steep Parallax Mapping works is shown in the following image. The depth is divided into 8 layers, each of height 0.125. The shift of the texture coordinates for each layer is V.xy / V.z * scale / numLayers. The checks start from the topmost layer, where the fragment is located (yellow square). Here are the manual calculations:
- The depth of the layer is 0. The depth H(T0) is ~0.75. The depth from the heightmap is greater than the depth of the layer (the point is above the surface), so start the next iteration.
- Shift the texture coordinates along vector V. Select the next layer, with depth 0.125. The depth H(T1) is ~0.625. The depth from the heightmap is greater than the depth of the layer (the point is above the surface), so start the next iteration.
- Shift the texture coordinates along vector V. Select the next layer, with depth 0.25. The depth H(T2) is ~0.4. The depth from the heightmap is greater than the depth of the layer (the point is above the surface), so start the next iteration.
- Shift the texture coordinates along vector V. Select the next layer, with depth 0.375. The depth H(T3) is ~0.2. The depth from the heightmap is less than the depth of the layer, so the current point on vector V lies below the surface. We have found the texture coordinates Tp = T3 that are close to the real intersection point.
As you can see from the image above, the texture coordinates T3 are still quite far from the intersection point of vector V and the surface, but they are closer to valid than the results of simple Parallax Mapping. Increase the number of layers if you want more precise results.
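The four-step walkthrough above can be reproduced with a small CPU-side sketch (Python; `steep_parallax_layers` and the list of sampled depths are illustrative stand-ins for the shader loop and the successive heightmap fetches):

```python
# Steep Parallax Mapping layer walk, mirroring the 8-layer example above.
# sampled_depths stands in for successive heightmap fetches H(T0), H(T1), ...

def steep_parallax_layers(sampled_depths, num_layers=8):
    """Return (step_index, layer_depth) where the walk first goes under the surface."""
    layer_height = 1.0 / num_layers
    current_layer_depth = 0.0
    for i, h in enumerate(sampled_depths):
        if h <= current_layer_depth:          # point is under the surface: stop
            return i, current_layer_depth
        current_layer_depth += layer_height   # descend to the next layer
    return len(sampled_depths) - 1, current_layer_depth

# depths from the manual walkthrough: H(T0)=0.75, H(T1)=0.625, H(T2)=0.4, H(T3)=0.2
idx, depth = steep_parallax_layers([0.75, 0.625, 0.4, 0.2])
```

The walk stops at the fourth sample (index 3), at layer depth 0.375, matching the manual calculation.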
The main disadvantage of Steep Parallax Mapping is that it divides the depth into a finite number of layers. If the number of layers is large, performance will be low; if it is too small, you will notice an aliasing effect (visible steps), as in the image on the right. You can determine the number of layers dynamically by interpolating between a minimum and a maximum number of layers based on the angle between vector V and the normal of the polygon. The performance/aliasing tradeoff can be improved with Relief Parallax Mapping or Parallax Occlusion Mapping (POM), which are covered in the following parts of the tutorial.
Here is the implementation of Steep Parallax Mapping:
vec2 parallaxMapping(in vec3 V, in vec2 T, out float parallaxHeight)
{
// determine number of layers from angle between V and N
const float minLayers = 5;
const float maxLayers = 15;
float numLayers = mix(maxLayers, minLayers, abs(dot(vec3(0, 0, 1), V)));
// height of each layer
float layerHeight = 1.0 / numLayers;
// depth of current layer
float currentLayerHeight = 0;
// shift of texture coordinates for each iteration
vec2 dtex = parallaxScale * V.xy / V.z / numLayers;
// current texture coordinates
vec2 currentTextureCoords = T;
// get first depth from heightmap
float heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
// while point is above surface
while(heightFromTexture > currentLayerHeight)
{
// to the next layer
currentLayerHeight += layerHeight;
// shift texture coordinates along vector V
currentTextureCoords -= dtex;
// get new depth from heightmap
heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
}
// return results
parallaxHeight = currentLayerHeight;
return currentTextureCoords;
}
Relief Parallax Mapping

Relief Parallax Mapping enhances Steep Parallax Mapping and allows the GLSL shader to find the new texture coordinates more precisely. First you perform Steep Parallax Mapping. After that, the shader has the depths of the two layers between which the intersection of vector V and the surface is located; in the following image those layers are at texture coordinates T3 and T2. Now you can improve the result with a binary search, which doubles the precision of the result with each iteration. The following image shows the principle of Relief Parallax Mapping:
- After Steep Parallax Mapping we know texture coordinates T2 and T3 between which intersection of vector V and the surface is located. Real intersection point is marked with green dot.
- Divide current shift of texture coordinates and current height of the layer by two.
- Shift texture coordinates T3 in direction opposite to vector V (in direction of T2) by current shift. Decrease depth of layer by current height of layer.
- (*) Sample the heightmap. Divide current shift of texture coordinates and current height of the layer by two.
- If depth from texture is larger than depth of layer, then increase depth of layer by current height of layer, and shift texture coordinates along vector V by amount of current shift.
- If depth from texture is less than depth of layer, then decrease depth of layer by current height of layer, and shift texture coordinates in direction opposite to vector V by amount of current shift.
- Repeat binary search from step (*) for specified number of times.
- The texture coordinates from the last step of the search are the result of Relief Parallax Mapping.
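The binary search described in these steps can be sketched on a 1D slice (Python; `height` is a hypothetical analytic surface standing in for the heightmap, and the ray depth is taken equal to the parameter t along the shift, so "above the surface" means the sampled depth exceeds t):

```python
# Binary-search refinement from Relief Parallax Mapping, on a 1D slice.
# t_above / t_below bracket the intersection, as delivered by the layer walk.

def relief_refine(height, t_above, t_below, num_searches=5):
    """Halve the bracketing interval num_searches times; return its midpoint."""
    for _ in range(num_searches):
        mid = 0.5 * (t_above + t_below)
        if height(mid) > mid:    # sampled depth greater than ray depth: above surface
            t_above = mid
        else:                    # under the surface
            t_below = mid
    return 0.5 * (t_above + t_below)

# hypothetical surface: depth falls linearly from 0.5 to 0 over t in [0, 1];
# the ray depth t intersects it exactly at t = 1/3
surface = lambda t: 0.5 * (1.0 - t)
t = relief_refine(surface, 0.25, 0.375)
```

Five iterations shrink the initial bracket of width 0.125 by a factor of 32, so the returned midpoint lands within about 0.002 of the true intersection at t = 1/3.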
Here is the implementation of Relief Parallax Mapping:
vec2 parallaxMapping(in vec3 V, in vec2 T, out float parallaxHeight)
{
// determine required number of layers
const float minLayers = 10;
const float maxLayers = 15;
float numLayers = mix(maxLayers, minLayers, abs(dot(vec3(0, 0, 1), V)));
// height of each layer
float layerHeight = 1.0 / numLayers;
// depth of current layer
float currentLayerHeight = 0;
// shift of texture coordinates for each iteration
vec2 dtex = parallaxScale * V.xy / V.z / numLayers;
// current texture coordinates
vec2 currentTextureCoords = T;
// depth from heightmap
float heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
// while point is above surface
while(heightFromTexture > currentLayerHeight)
{
// go to the next layer
currentLayerHeight += layerHeight;
// shift texture coordinates along V
currentTextureCoords -= dtex;
// new depth from heightmap
heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
}
///////////////////////////////////////////////////////////
// Start of Relief Parallax Mapping
// decrease shift and height of layer by half
vec2 deltaTexCoord = dtex / 2;
float deltaHeight = layerHeight / 2;
// return to the mid point of previous layer
currentTextureCoords += deltaTexCoord;
currentLayerHeight -= deltaHeight;
// binary search to increase precision of Steep Parallax Mapping
const int numSearches = 5;
for(int i=0; i<numSearches; i++)
{
// decrease shift and height of layer by half
deltaTexCoord /= 2;
deltaHeight /= 2;
// new depth from heightmap
heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
// shift along or against vector V
if(heightFromTexture > currentLayerHeight) // below the surface
{
currentTextureCoords -= deltaTexCoord;
currentLayerHeight += deltaHeight;
}
else // above the surface
{
currentTextureCoords += deltaTexCoord;
currentLayerHeight -= deltaHeight;
}
}
// return results
parallaxHeight = currentLayerHeight;
return currentTextureCoords;
}
Parallax Occlusion Mapping (POM)

Parallax Occlusion Mapping (POM) is another improvement of Steep Parallax Mapping. Relief Parallax Mapping uses a binary search to improve the results, but that search decreases performance. Parallax Occlusion Mapping aims for better performance than Relief Parallax Mapping while providing better results than Steep Parallax Mapping; its results are, however, a bit worse than those of Relief Parallax Mapping.

Parallax Occlusion Mapping simply interpolates between the results of Steep Parallax Mapping. Look at the following image. For the interpolation POM uses the depth of the layer after the intersection (0.375, where Steep Parallax Mapping stopped) and the previous H(T2) and next H(T3) depths from the heightmap. As the image shows, the result of the Parallax Occlusion Mapping interpolation lies on the intersection of the view vector V and the line between the heights H(T3) and H(T2). That intersection is close enough to the real intersection point (marked in green). The manual calculations from the image:
- nextHeight = H(T3) - currentLayerHeight
- prevHeight = H(T2) - (currentLayerHeight - layerHeight)
- weight = nextHeight / (nextHeight - prevHeight)
- TP = T2 * weight + T3 * (1.0 - weight)
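Plugging the numbers from the Steep Parallax Mapping walkthrough (layer depth 0.375, layer height 0.125, H(T2) = 0.4, H(T3) = 0.2) into these formulas gives a quick numeric check (Python sketch; the texture coordinates are treated as scalars along the shift direction, with hypothetical values):

```python
# POM interpolation with the walkthrough values: the layer walk stopped at
# depth 0.375 (layer height 0.125), with H(T2)=0.4 above and H(T3)=0.2 below.
current_layer_height = 0.375
layer_height = 0.125
h_t3, h_t2 = 0.2, 0.4

next_height = h_t3 - current_layer_height                   # below surface: negative
prev_height = h_t2 - (current_layer_height - layer_height)  # above surface: positive
weight = next_height / (next_height - prev_height)

# interpolate between the last two texture coordinates (scalars here,
# hypothetical positions along the shift direction)
t2, t3 = 0.75, 0.625
tp = t2 * weight + t3 * (1.0 - weight)
```

Because next_height and prev_height have opposite signs, the weight always lands in (0, 1), so TP always lies between T2 and T3.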
Parallax Occlusion Mapping produces good results with a relatively small number of samples from the heightmap, but it may skip small details of the heightmap more than Relief Parallax Mapping does, and it may produce incorrect results for abrupt changes of values in the heightmap.
Here is a shader function that implements Parallax Occlusion Mapping (POM):
vec2 parallaxMapping(in vec3 V, in vec2 T, out float parallaxHeight)
{
// determine optimal number of layers
const float minLayers = 10;
const float maxLayers = 15;
float numLayers = mix(maxLayers, minLayers, abs(dot(vec3(0, 0, 1), V)));
// height of each layer
float layerHeight = 1.0 / numLayers;
// current depth of the layer
float curLayerHeight = 0;
// shift of texture coordinates for each layer
vec2 dtex = parallaxScale * V.xy / V.z / numLayers;
// current texture coordinates
vec2 currentTextureCoords = T;
// depth from heightmap
float heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
// while point is above the surface
while(heightFromTexture > curLayerHeight)
{
// to the next layer
curLayerHeight += layerHeight;
// shift of texture coordinates
currentTextureCoords -= dtex;
// new depth from heightmap
heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
}
///////////////////////////////////////////////////////////
// previous texture coordinates
vec2 prevTCoords = currentTextureCoords + dtex;
// heights for linear interpolation
float nextH = heightFromTexture - curLayerHeight;
float prevH = texture(u_heightTexture, prevTCoords).r
- curLayerHeight + layerHeight;
// proportions for linear interpolation
float weight = nextH / (nextH - prevH);
// interpolation of texture coordinates
vec2 finalTexCoords = prevTCoords * weight + currentTextureCoords * (1.0-weight);
// interpolation of depth values
parallaxHeight = curLayerHeight + prevH * weight + nextH * (1.0 - weight);
// return result
return finalTexCoords;
}
Parallax Mapping and self-shadowing

You can determine whether a fragment is in shadow by applying an algorithm very similar to Steep Parallax Mapping. You have to search not down into the surface, but up out of it, and the shifts of the texture coordinates should be along the vector from the fragment to the light (L), not along the view vector V. The vector to the light L must be in tangent space, like V, and can be used directly to define the direction of the shift of the texture coordinates. The result of the self-shadowing calculation is a shadowing factor: a value in the [0, 1] range that is later used to modulate the intensity of diffuse and specular lighting.

To calculate shadows with hard edges (hard shadows), you check the values along the light vector L until the first point under the surface. If a point is under the surface, the shadowing factor is 0; otherwise it is 1. For example, in the next image, H(TL1) is less than the depth of the layer Ha, so the point is under the surface and the shadowing factor is 0. If no point is under the surface before the light vector L rises above level 0.0, the fragment is in the light and the shadowing factor is 1. The quality of the shadows depends greatly on the number of layers, on the value of the scale modifier, and on the angle between the light vector L and the normal of the polygon. With wrong settings, the shadows suffer from aliasing or worse.

Soft shadows take into account multiple values along the light vector L. Only the points under the surface are considered. A partial shadowing factor is calculated as the difference between the depth of the current layer and the depth from the texture. You also have to take into account the distance from the point to the fragment, so the partial factor is multiplied by (1.0 - stepIndex / numberOfSteps). To calculate the final shadowing factor, you select the maximum of the calculated partial shadow factors.
Here is the calculation of the shadowing factor for soft shadows (as in the image):
- Set shadow factor to 0, number of steps to 4.
- Make a step along L to Ha. H(TL1) is less than Ha, so the point is under the surface. Calculate the partial shadowing factor as Ha - H(TL1). This is the first check, and the total number of checks is 4, so to account for the distance to the fragment, multiply the partial shadowing factor by (1.0 - 1.0/4.0). Save the partial shadowing factor.
- Make a step along L to Hb. H(TL2) is less than Hb, so the point is under the surface. Calculate the partial shadowing factor as Hb - H(TL2). This is the second check, and the total number of checks is 4, so multiply the partial shadowing factor by (1.0 - 2.0/4.0). Save the partial shadowing factor.
- Make step along L. Point is above the surface.
- Make another step along L. Point is above the surface.
- Point is above layer 0.0. Stop movement along vector L.
- Select the maximum of the partial shadow factors as the final shadow factor.
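The walkthrough can be condensed into a CPU-side sketch (Python; the lists of layer depths and sampled heights are hypothetical values consistent with the figure, standing in for the heightmap fetches along L):

```python
# Soft self-shadow factor, following the steps above.
# layer_depths: depth of each check along L; sampled: H at the shifted coords.

def soft_shadow_factor(layer_depths, sampled, num_checks):
    """Return the final shadow multiplier in [0, 1]."""
    partial_max = 0.0
    under_surface = 0
    for step, (layer_d, h) in enumerate(zip(layer_depths, sampled), start=1):
        if h < layer_d:                       # point is under the surface
            under_surface += 1
            # partial factor, attenuated by distance to the fragment
            partial = (layer_d - h) * (1.0 - step / num_checks)
            partial_max = max(partial_max, partial)
    # fully lit if no point was under the surface
    return 1.0 if under_surface == 0 else 1.0 - partial_max

# walkthrough: Ha=0.375 vs H(TL1)=0.2, Hb=0.25 vs H(TL2)=0.15, then two
# checks above the surface (hypothetical numbers)
factor = soft_shadow_factor([0.375, 0.25, 0.125, 0.0],
                            [0.2, 0.15, 0.3, 0.4], num_checks=4)
```

The first check dominates here: its attenuated partial factor (0.175 * 0.75) exceeds the second one (0.1 * 0.5), so the final multiplier is 1 - 0.13125.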
The following code snippet contains the shader function with the implementation of self-shadowing:
float parallaxSoftShadowMultiplier(in vec3 L, in vec2 initialTexCoord,
in float initialHeight)
{
float shadowMultiplier = 1;
const float minLayers = 15;
const float maxLayers = 30;
// calculate lighting only for surface oriented to the light source
if(dot(vec3(0, 0, 1), L) > 0)
{
// calculate initial parameters
float numSamplesUnderSurface = 0;
shadowMultiplier = 0;
float numLayers = mix(maxLayers, minLayers, abs(dot(vec3(0, 0, 1), L)));
float layerHeight = initialHeight / numLayers;
vec2 texStep = parallaxScale * L.xy / L.z / numLayers;
// current parameters
float currentLayerHeight = initialHeight - layerHeight;
vec2 currentTextureCoords = initialTexCoord + texStep;
float heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
int stepIndex = 1;
// while point is below depth 0.0
while(currentLayerHeight > 0)
{
// if point is under the surface
if(heightFromTexture < currentLayerHeight)
{
// calculate partial shadowing factor
numSamplesUnderSurface += 1;
float newShadowMultiplier = (currentLayerHeight - heightFromTexture) *
(1.0 - stepIndex / numLayers);
shadowMultiplier = max(shadowMultiplier, newShadowMultiplier);
}
// offset to the next layer
stepIndex += 1;
currentLayerHeight -= layerHeight;
currentTextureCoords += texStep;
heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
}
// Shadowing factor should be 1 if there were no points under the surface
if(numSamplesUnderSurface < 1)
{
shadowMultiplier = 1;
}
else
{
shadowMultiplier = 1.0 - shadowMultiplier;
}
}
return shadowMultiplier;
}
Parallax mapping will be developed by Blender-BF directly. We will support it. Closing the issue. Thanks for reporting.
The code to apply the self-shadow on parallax is attached here ...
Hi, basically I'm trying to use the lamp data node, but I can't find a logical way to use it to apply shadows inside a shader with parallax mapping... only a baked shadow map inside the shader.
Is it possible to add this soft shadow option? In that case, if we shade the shadow, we need to remove the first real shadow, or we'd have two shadows on screen... but it's too complex for me, I'm a simple user, not a dev...
Best regards ;)