OpenGL 4 Shading Language Cookbook(Second Edition)

Implementing diffuse, per-vertex shading with a single point light source

One of the simplest shading techniques is to assume that the surface exhibits purely diffuse reflection. That is to say, the surface appears to scatter light equally in all directions. Incoming light strikes the surface and penetrates slightly before being re-radiated in all directions. Of course, the incoming light interacts with the surface before it is scattered, causing some wavelengths to be fully or partially absorbed and others to be scattered. A typical example of a diffuse surface is one that has been painted with a matte paint: it has a dull look with no shine at all.

The following screenshot shows a torus rendered with diffuse shading:

[Figure: a torus rendered with diffuse shading]

The mathematical model for diffuse reflection involves two vectors: the direction from the surface point to the light source (s), and the normal vector at the surface point (n). The vectors are represented in the following diagram:

[Figure: the s and n vectors at a point on the surface]

The amount of incoming light (or radiance) that reaches the surface is partially dependent on the orientation of the surface with respect to the light source. The physics of the situation tells us that the amount of radiation that reaches a point on a surface is maximal when the light arrives along the direction of the normal vector, and zero when the light is perpendicular to the normal. In between, it is proportional to the cosine of the angle between the direction towards the light source and the normal vector. So, since the dot product is proportional to the cosine of the angle between two vectors, we can express the amount of radiation striking the surface as the product of the light intensity and the dot product of s and n.

Ld (s · n)

Where Ld is the intensity of the light source, and the vectors s and n are assumed to be normalized.

Note

The dot product of two unit vectors is equal to the cosine of the angle between them.
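This fact is easy to verify numerically. The following standalone Python sketch (the `dot` and `normalize` helpers are illustrative, not part of the book's code) builds two unit vectors separated by a known angle and checks that their dot product matches the cosine of that angle:

```python
import math

def dot(a, b):
    """Dot product of two 3-component vectors."""
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(dot(v, v))
    return [x / length for x in v]

# Two unit vectors separated by 60 degrees in the x-y plane.
theta = math.radians(60.0)
n = [0.0, 1.0, 0.0]                          # surface normal
s = [math.sin(theta), math.cos(theta), 0.0]  # direction toward the light

# The dot product of the unit vectors equals cos(60 degrees) = 0.5.
print(dot(normalize(s), normalize(n)))  # prints a value very close to 0.5
```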

As stated previously, some of the incoming light is absorbed before it is re-emitted. We can model this interaction by using a reflection coefficient (Kd), which represents the fraction of the incoming light that is scattered. This is sometimes referred to as the diffuse reflectivity, or the diffuse reflection coefficient. The diffuse reflectivity becomes a scaling factor for the incoming radiation, so the intensity of the outgoing light can be expressed as follows:

L = Kd Ld (s · n)

Because this model depends only on the direction towards the light source and the normal to the surface, not on the direction towards the viewer, we have a model that represents uniform (omnidirectional) scattering.

In this recipe, we'll evaluate this equation at each vertex in the vertex shader and interpolate the resulting color across the face.

Note

In this and the following recipes, light intensities and material reflectivity coefficients are represented by three-component (RGB) vectors. Therefore, the equations should be treated as component-wise operations, applied to each of the three components separately. Luckily, GLSL makes this nearly transparent because the relevant operators work component-wise on vector variables.
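To make the component-wise interpretation concrete, here is a small CPU-side Python sketch of the diffuse equation. The RGB values for `Ld` and `Kd` are made up purely for illustration; the per-channel multiplication mirrors what GLSL's `vec3 * vec3` operator does implicitly:

```python
import math

def dot(a, b):
    """Dot product of two 3-component vectors."""
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(dot(v, v))
    return [x / length for x in v]

# Illustrative values (not from the book): a slightly warm light
# and a reddish diffuse reflectivity, both as RGB triples.
Ld = [1.0, 1.0, 0.9]   # light source intensity
Kd = [0.8, 0.2, 0.2]   # diffuse reflectivity

n = normalize([0.0, 0.0, 1.0])   # surface normal
s = normalize([0.0, 1.0, 1.0])   # direction toward the light, 45 degrees off n

# The scalar part of the equation: max(s . n, 0)
sDotN = max(dot(s, n), 0.0)

# The RGB part is applied to each channel separately, exactly as
# GLSL's component-wise vec3 multiplication would do it.
intensity = [ld * kd * sDotN for ld, kd in zip(Ld, Kd)]
print(intensity)
```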

Getting ready

Start with an OpenGL application that provides the vertex position in attribute location 0, and the vertex normal in attribute location 1 (refer to the Sending data to a shader using vertex attributes and vertex buffer objects recipe in Chapter 1, Getting Started with GLSL). The OpenGL application also should provide the standard transformation matrices (projection, modelview, and normal) via uniform variables.

The light position (in eye coordinates), Kd, and Ld should also be provided by the OpenGL application via uniform variables. Note that Kd and Ld are of type vec3. We can use vec3 to store an RGB color as well as a vector or point.

How to do it...

To create a shader pair that implements diffuse shading, use the following steps:

  1. Use the following code for the vertex shader:
    layout (location = 0) in vec3 VertexPosition;
    layout (location = 1) in vec3 VertexNormal;
    
    out vec3 LightIntensity;
    
    uniform vec4 LightPosition; // Light position in eye coords.
    uniform vec3 Kd;           // Diffuse reflectivity
    uniform vec3 Ld;           // Light source intensity
    
    uniform mat4 ModelViewMatrix;
    uniform mat3 NormalMatrix;
    uniform mat4 ProjectionMatrix;
    uniform mat4 MVP;           // Projection * ModelView
    
    void main()
    {
        // Convert normal and position to eye coords
        vec3 tnorm = normalize( NormalMatrix * VertexNormal);
        vec4 eyeCoords = ModelViewMatrix *
                         vec4(VertexPosition,1.0);
        vec3 s = normalize(vec3(LightPosition - eyeCoords));
    
        // The diffuse shading equation
        LightIntensity = Ld * Kd * max( dot( s, tnorm ), 0.0 );
    
        // Convert position to clip coordinates and pass along
        gl_Position = MVP * vec4(VertexPosition,1.0);
    }
  2. Use the following code for the fragment shader:
    in vec3 LightIntensity;
    
    layout( location = 0 ) out vec4 FragColor;
    
    void main() {
        FragColor = vec4(LightIntensity, 1.0);
    }
  3. Compile and link both shaders within the OpenGL application, and install the shader program prior to rendering. See Chapter 1, Getting Started with GLSL, for details about compiling, linking, and installing shaders.

How it works...

The vertex shader does all of the work in this example. The diffuse reflection is computed in eye coordinates by first transforming the normal vector using the normal matrix, normalizing, and storing the result in tnorm. Note that the normalization here may not be necessary if your normal vectors are already normalized and the normal matrix does not do any scaling.

Note

The normal matrix is typically the inverse transpose of the upper-left 3 x 3 portion of the model-view matrix. We use the inverse transpose because normal vectors transform differently than vertex positions. For a more thorough discussion of the normal matrix, and the reasons why, see any introductory computer graphics textbook (a good choice is Computer Graphics with OpenGL by Hearn and Baker). If your model-view matrix does not include any non-uniform scaling, then you can use the upper-left 3 x 3 of the model-view matrix in place of the normal matrix to transform your normal vectors. However, if your model-view matrix does include (uniform) scaling, you'll still need to (re)normalize your normal vectors after transforming them.
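The need for the inverse transpose can be demonstrated with a tiny numeric check. This Python sketch uses a diagonal matrix representing a non-uniform scale (a simplified case whose inverse transpose can be written down directly), and shows that transforming a normal with the matrix itself destroys its perpendicularity to the surface, while the inverse transpose preserves it:

```python
# Non-uniform scale: x is stretched by 2, y and z are unchanged.
# For a diagonal matrix, the inverse transpose is just the
# reciprocal of each diagonal entry.
M      = (2.0, 1.0, 1.0)   # diagonal of the (model-view) matrix
M_invT = (0.5, 1.0, 1.0)   # diagonal of its inverse transpose

def transform(diag, v):
    """Apply a diagonal 3x3 matrix to a 3-component vector."""
    return [d * x for d, x in zip(diag, v)]

def dot(a, b):
    """Dot product of two 3-component vectors."""
    return sum(x * y for x, y in zip(a, b))

# A surface tangent and its normal, perpendicular by construction.
t = [1.0, 1.0, 0.0]
n = [1.0, -1.0, 0.0]
assert dot(t, n) == 0.0

t2   = transform(M, t)        # tangent directions transform with M
bad  = transform(M, n)        # transforming the normal with M itself...
good = transform(M_invT, n)   # ...versus with the inverse transpose

print(dot(t2, bad))   # 3.0: no longer perpendicular to the surface
print(dot(t2, good))  # 0.0: still perpendicular
```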

The next step converts the vertex position to eye (camera) coordinates by transforming it via the model-view matrix. Then we compute the direction towards the light source by subtracting the vertex position from the light position and storing the result in s.

Next, we compute the scattered light intensity using the equation described previously and store the result in the output variable LightIntensity. Note the use of the max function here. If the dot product is less than zero, then the angle between the normal vector and the light direction is greater than 90 degrees. This means that the incoming light is coming from inside the surface. Since such a situation is not physically possible (for a closed mesh), we use a value of 0.0. However, you may decide that you want to properly light both sides of your surface, in which case the normal vector needs to be reversed for those situations where the light is striking the back side of the surface (refer to the Implementing two-sided shading recipe in this chapter).

Finally, we convert the vertex position to clip coordinates by multiplying it by the model-view-projection matrix (that is, projection * modelview), and store the result in the built-in output variable gl_Position.

gl_Position = MVP * vec4(VertexPosition,1.0);

Note

The subsequent stage of the OpenGL pipeline expects that the vertex position will be provided in clip coordinates in the output variable gl_Position. This variable does not directly correspond to any input variable in the fragment shader, but is used by the OpenGL pipeline in the primitive assembly, clipping, and rasterization stages that follow the vertex shader. It is important that we always provide a valid value for this variable.
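Precomputing MVP in the OpenGL application saves one matrix multiply per vertex, and gives the same result as applying the two matrices in sequence. A minimal Python sketch with made-up matrices (a translation standing in for the model-view, and a simple scale standing in for the projection, purely for illustration):

```python
def mat_mul(A, B):
    """Multiply two 4x4 matrices stored as row-major nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(A, v):
    """Apply a 4x4 matrix to a 4-component vector."""
    return [sum(A[i][k] * v[k] for k in range(4)) for i in range(4)]

# Illustrative matrices, not a real camera setup.
ModelView = [[1, 0, 0,  3],
             [0, 1, 0,  0],
             [0, 0, 1, -5],
             [0, 0, 0,  1]]
Projection = [[2, 0, 0, 0],
              [0, 2, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]]

MVP = mat_mul(Projection, ModelView)   # precomputed once per frame

v = [1.0, 2.0, 3.0, 1.0]               # a vertex position, w = 1

# Applying the combined matrix matches applying the two in sequence.
print(mat_vec(MVP, v))
print(mat_vec(Projection, mat_vec(ModelView, v)))
```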

Since LightIntensity is an output variable from the vertex shader, its value is interpolated across the face and passed into the fragment shader. The fragment shader then simply assigns the value to the output fragment.

There's more...

Diffuse shading is a technique that models only a very limited range of surfaces. It is best used for surfaces that have a "matte" appearance. Additionally, with the technique used previously, the dark areas may look a bit too dark. In fact, those areas that are not directly illuminated are completely black. In real scenes, there is typically some light that has been reflected about the room that brightens these surfaces. In the following recipes, we'll look at ways to model more surface types, as well as provide some light for those dark parts of the surface.

See also

  • The Sending data to a shader using uniform variables recipe in Chapter 1, Getting Started with GLSL
  • The Compiling a shader recipe in Chapter 1, Getting Started with GLSL
  • The Linking a shader program recipe in Chapter 1, Getting Started with GLSL
  • The Sending data to a shader using vertex attributes and vertex buffer objects recipe in Chapter 1, Getting Started with GLSL