Lighting and shading
We need to introduce a light source into the scene and provide a shader that will use it. For this, the cube needs additional data, defining normal vectors and colors at each vertex.
Note
Vertex colors aren't always required for shading, but in our case the gradient is very subtle, and the different face colors will help you distinguish the edges of the cube. We will also be doing the shading calculations in the vertex shader, which is faster (there are fewer vertices than raster pixels) but works less well for smooth objects, such as spheres. Since per-vertex lighting requires vertex colors in the pipeline anyway, it also makes sense to do something with those colors; in this case, we choose a different color for each face of the cube. Later in this book, you will see an example of per-pixel lighting and the difference it makes.
We'll now build the app to handle our lighted cube. We'll do this by performing the following steps:
- Write and compile a new shader for lighting
- Generate and define cube vertex normal vectors and colors
- Allocate and set up data buffers for rendering
- Define and set up a light source for rendering
- Generate and set up transformation matrices for rendering
Adding shaders
Let's write an enhanced vertex shader that can use a light source and vertex normals from a model.
Right-click on the app/res/raw folder in the project hierarchy, go to New | File, and name it light_vertex.shader. Add the following code:
uniform mat4 u_MVP;
uniform mat4 u_MVMatrix;
uniform vec3 u_LightPos;

attribute vec4 a_Position;
attribute vec4 a_Color;
attribute vec3 a_Normal;

const float ONE = 1.0;
const float COEFF = 0.00001;

varying vec4 v_Color;

void main() {
    // Transform the vertex position and normal into eye space
    vec3 modelViewVertex = vec3(u_MVMatrix * a_Position);
    vec3 modelViewNormal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));

    // Distance and direction from the vertex to the light
    float distance = length(u_LightPos - modelViewVertex);
    vec3 lightVector = normalize(u_LightPos - modelViewVertex);

    // Diffuse term with a 0.5 floor, attenuated by distance
    float diffuse = max(dot(modelViewNormal, lightVector), 0.5);
    diffuse = diffuse * (ONE / (ONE + (COEFF * distance * distance)));

    v_Color = a_Color * diffuse;
    gl_Position = u_MVP * a_Position;
}
Without going through all the details of writing a lighting shader, you can see that the vertex color is calculated from a formula based on the angle between the light ray and the surface, and on how far the light source is from the vertex. Note that we are also bringing in the ModelView matrix as well as the MVP matrix. This means you'll need access to both matrices in your draw code; you can't overwrite or throw away the ModelView matrix once you've built the MVP from it.
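To build intuition for that formula, here is a plain-Java sketch of the same diffuse calculation done on the CPU. The class and method names are hypothetical and the constant mirrors the shader's COEFF; this is for illustration only, not project code:

```java
// Sketch of the shader's diffuse calculation, run on the CPU for intuition.
// Hypothetical class; not part of the project code.
public class DiffuseSketch {
    static final float COEFF = 0.00001f;

    // Matches the shader: max(dot(N, L), 0.5) attenuated by 1/(1 + COEFF * d^2)
    static float diffuse(float[] normal, float[] toLight, float distance) {
        float dot = normal[0] * toLight[0]
                  + normal[1] * toLight[1]
                  + normal[2] * toLight[2];
        float d = Math.max(dot, 0.5f); // 0.5 floor: faces never go fully black
        return d * (1.0f / (1.0f + (COEFF * distance * distance)));
    }

    public static void main(String[] args) {
        // Light directly along the normal, 2 units away: near-full brightness
        System.out.println(diffuse(new float[]{0, 0, 1}, new float[]{0, 0, 1}, 2.0f));
        // Light behind the face: clamped to the 0.5 floor (then attenuated)
        System.out.println(diffuse(new float[]{0, 0, 1}, new float[]{0, 0, -1}, 2.0f));
    }
}
```

Running this shows why the cube's unlit faces stay visibly colored rather than black: the dot product is clamped at 0.5 before attenuation.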
Notice that we used a small optimization. Numeric literals (for example, 1.0) use uniform space, and on certain hardware this can cause problems, so we declare constants instead (refer to http://stackoverflow.com/questions/13963765/declaring-constants-instead-of-literals-in-vertex-shader-standard-practice-or).
There are more variables to be set in this shader, as compared to the earlier simple one, for the lighting calculations. We'll send these over to the draw methods.
We also need a slightly different fragment shader. Right-click on the raw folder in the project hierarchy, go to New | File, and name it passthrough_fragment.shader. Add the following code:
precision mediump float;
varying vec4 v_Color;

void main() {
    gl_FragColor = v_Color;
}
The only difference from the simple fragment shader is that uniform vec4 u_Color is replaced with varying vec4 v_Color, because colors are now passed down the pipeline from the vertex shader. The vertex shader, in turn, now receives an array buffer of colors, which is a new issue that we'll need to address in our setup/draw code.
Then, in MainActivity, add these variables:
// Rendering variables
private int lightVertexShader;
private int passthroughFragmentShader;
Compile the shaders in the compileShaders method:
lightVertexShader = loadShader(GLES20.GL_VERTEX_SHADER,
        R.raw.light_vertex);
passthroughFragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER,
        R.raw.passthrough_fragment);
Cube normals and colors
Each face of a cube points outward in a different direction, perpendicular to the face. A vector is an XYZ coordinate; a vector normalized to a length of 1 can be used to indicate this direction, and is called a normal vector.
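As a quick illustration of what "normalized to a length of 1" means, here is a hypothetical plain-Java helper (not part of the project code) that scales a vector to unit length:

```java
// Hypothetical helper, for illustration only: scale a vector to length 1.
public class NormalSketch {
    static float[] normalize(float x, float y, float z) {
        float len = (float) Math.sqrt(x * x + y * y + z * z); // vector length
        return new float[] { x / len, y / len, z / len };     // unit-length direction
    }

    public static void main(String[] args) {
        // (3, 0, 4) has length 5, so its unit vector is (0.6, 0, 0.8)
        float[] n = normalize(3f, 0f, 4f);
        System.out.println(n[0] + ", " + n[1] + ", " + n[2]);
    }
}
```

The cube's face normals below are already unit-length, so no runtime normalization is needed for them.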
The geometry we pass to OpenGL is defined as vertices, not faces. Therefore, we need to provide a normal vector for each vertex of the face, as shown in the following diagram. Strictly speaking, the normals at the vertices of a given face don't all have to point in the same direction. Varying them is used in a technique called smooth shading, where the lighting calculations give the illusion of a curved face instead of a flat one. We will be using the same normal for every vertex of a face (hard edges), which also saves us time while specifying the normal data: our array only needs to specify six vectors, which can be expanded into a buffer of 36 normal vectors. The same applies to the color values.
Each vertex also has a color. Assuming that each face of the cube is a solid color, we can assign each vertex of that face the same color. In the Cube.java file, add the following code:
public static final float[] CUBE_COLORS_FACES = new float[] {
    // Front, green
    0f, 0.53f, 0.27f, 1.0f,
    // Right, blue
    0.0f, 0.34f, 0.90f, 1.0f,
    // Back, also green
    0f, 0.53f, 0.27f, 1.0f,
    // Left, also blue
    0.0f, 0.34f, 0.90f, 1.0f,
    // Top, red
    0.84f, 0.18f, 0.13f, 1.0f,
    // Bottom, also red
    0.84f, 0.18f, 0.13f, 1.0f,
};

public static final float[] CUBE_NORMALS_FACES = new float[] {
    // Front face
    0.0f, 0.0f, 1.0f,
    // Right face
    1.0f, 0.0f, 0.0f,
    // Back face
    0.0f, 0.0f, -1.0f,
    // Left face
    -1.0f, 0.0f, 0.0f,
    // Top face
    0.0f, 1.0f, 0.0f,
    // Bottom face
    0.0f, -1.0f, 0.0f,
};
For each face of the cube, we defined a solid color (CUBE_COLORS_FACES) and a normal vector (CUBE_NORMALS_FACES).
Now, write a reusable method, cubeFacesToArray, to generate the float arrays actually needed in MainActivity. Add the following code to your Cube class:
/**
 * Utility method for generating float arrays for cube faces
 *
 * @param model - float[] array of values per face.
 * @param coords_per_vertex - int number of coordinates per vertex.
 * @return - Returns float array of coordinates for triangulated cube faces.
 *           6 faces X 6 points X coords_per_vertex
 */
public static float[] cubeFacesToArray(float[] model, int coords_per_vertex) {
    float[] coords = new float[6 * 6 * coords_per_vertex];
    int index = 0;
    for (int iFace = 0; iFace < 6; iFace++) {
        for (int iVertex = 0; iVertex < 6; iVertex++) {
            for (int iCoord = 0; iCoord < coords_per_vertex; iCoord++) {
                coords[index] = model[iFace * coords_per_vertex + iCoord];
                index++;
            }
        }
    }
    return coords;
}
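To see what this method produces, here is a self-contained sketch (hypothetical class name, logic copied from the method above) that expands six per-face normals into the 36 per-vertex normals OpenGL needs:

```java
// Standalone sketch of how cubeFacesToArray expands per-face data into
// per-vertex data (6 faces x 6 vertices per face). Hypothetical class;
// the loop logic is copied from the Cube method so it runs on its own.
public class FaceExpansionSketch {
    static float[] cubeFacesToArray(float[] model, int coordsPerVertex) {
        float[] coords = new float[6 * 6 * coordsPerVertex];
        int index = 0;
        for (int iFace = 0; iFace < 6; iFace++)
            for (int iVertex = 0; iVertex < 6; iVertex++)
                for (int iCoord = 0; iCoord < coordsPerVertex; iCoord++)
                    coords[index++] = model[iFace * coordsPerVertex + iCoord];
        return coords;
    }

    public static void main(String[] args) {
        // One normal per face, as in CUBE_NORMALS_FACES
        float[] faceNormals = {
            0, 0, 1,   1, 0, 0,   0, 0, -1,
            -1, 0, 0,  0, 1, 0,   0, -1, 0 };
        float[] vertexNormals = cubeFacesToArray(faceNormals, 3);
        // 6 faces x 6 vertices x 3 coords = 108 floats
        System.out.println(vertexNormals.length);
    }
}
```

Each face's single vector is simply repeated six times, once per triangle vertex of that face.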
Add this data to MainActivity with the other variables, as follows:
// Model variables
private static float cubeCoords[] = Cube.CUBE_COORDS;
private static float cubeColors[] =
    Cube.cubeFacesToArray(Cube.CUBE_COLORS_FACES, 4);
private static float cubeNormals[] =
    Cube.cubeFacesToArray(Cube.CUBE_NORMALS_FACES, 3);
You can also delete the declaration of private float cubeColor[], as it's no longer needed.
Armed with a normal and color, the shader can calculate the values of each pixel occupied by the object.
Preparing the vertex buffers
The rendering pipeline requires that we set up memory buffers for the vertices, normals, and colors. We already have a vertex buffer from before; now we need to add buffers for the normals and colors.
Add the variables, as follows:
// Rendering variables
private FloatBuffer cubeVerticesBuffer;
private FloatBuffer cubeColorsBuffer;
private FloatBuffer cubeNormalsBuffer;
Prepare the buffers by adding the following code to the prepareRenderingCube method (called from onSurfaceCreated). (This is the first half of the full prepareRenderingCube method):
private void prepareRenderingCube() {
    // Allocate buffers
    ByteBuffer bb = ByteBuffer.allocateDirect(cubeCoords.length * 4);
    bb.order(ByteOrder.nativeOrder());
    cubeVerticesBuffer = bb.asFloatBuffer();
    cubeVerticesBuffer.put(cubeCoords);
    cubeVerticesBuffer.position(0);

    ByteBuffer bbColors = ByteBuffer.allocateDirect(cubeColors.length * 4);
    bbColors.order(ByteOrder.nativeOrder());
    cubeColorsBuffer = bbColors.asFloatBuffer();
    cubeColorsBuffer.put(cubeColors);
    cubeColorsBuffer.position(0);

    ByteBuffer bbNormals = ByteBuffer.allocateDirect(cubeNormals.length * 4);
    bbNormals.order(ByteOrder.nativeOrder());
    cubeNormalsBuffer = bbNormals.asFloatBuffer();
    cubeNormalsBuffer.put(cubeNormals);
    cubeNormalsBuffer.position(0);

    // Create GL program
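As an aside, the allocate/put/rewind pattern repeated three times above can be factored into a small helper. This is an optional refactor sketch using only java.nio, not part of the book's code:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Optional refactor sketch (hypothetical helper, not part of the project):
// the allocate/put/rewind pattern from prepareRenderingCube in one method.
public class BufferSketch {
    static FloatBuffer makeFloatBuffer(float[] data) {
        ByteBuffer bb = ByteBuffer.allocateDirect(data.length * 4); // 4 bytes per float
        bb.order(ByteOrder.nativeOrder()); // use the device's native byte order
        FloatBuffer fb = bb.asFloatBuffer();
        fb.put(data);
        fb.position(0); // rewind so GL reads from the start
        return fb;
    }

    public static void main(String[] args) {
        FloatBuffer fb = makeFloatBuffer(new float[] { 1f, 2f, 3f });
        System.out.println(fb.capacity() + " " + fb.get(0));
    }
}
```

With such a helper, each buffer becomes a one-liner, for example cubeNormalsBuffer = makeFloatBuffer(cubeNormals).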
Preparing the shaders
Having defined the light_vertex shader, we need to add the param handles to use it. At the top of the MainActivity class, add three more variables for the lighting shader params:
// Rendering variables
private int cubeNormalParam;
private int cubeModelViewParam;
private int cubeLightPosParam;
In the prepareRenderingCube method (which is called by onSurfaceCreated), attach the lightVertexShader and passthroughFragmentShader shaders instead of the simple ones, get the shader params, and enable the arrays so that they now read as follows. (This is the second half of prepareRenderingCube, continuing from the preceding section):
    // Create GL program
    cubeProgram = GLES20.glCreateProgram();
    GLES20.glAttachShader(cubeProgram, lightVertexShader);
    GLES20.glAttachShader(cubeProgram, passthroughFragmentShader);
    GLES20.glLinkProgram(cubeProgram);
    GLES20.glUseProgram(cubeProgram);

    // Get shader params
    cubeModelViewParam = GLES20.glGetUniformLocation(cubeProgram, "u_MVMatrix");
    cubeMVPMatrixParam = GLES20.glGetUniformLocation(cubeProgram, "u_MVP");
    cubeLightPosParam = GLES20.glGetUniformLocation(cubeProgram, "u_LightPos");
    cubePositionParam = GLES20.glGetAttribLocation(cubeProgram, "a_Position");
    cubeNormalParam = GLES20.glGetAttribLocation(cubeProgram, "a_Normal");
    cubeColorParam = GLES20.glGetAttribLocation(cubeProgram, "a_Color");

    // Enable arrays
    GLES20.glEnableVertexAttribArray(cubePositionParam);
    GLES20.glEnableVertexAttribArray(cubeNormalParam);
    GLES20.glEnableVertexAttribArray(cubeColorParam);
If you refer to the shader code that we wrote earlier, you'll notice that these calls to glGetUniformLocation and glGetAttribLocation correspond to the uniform and attribute parameters declared in those scripts, including the change of cubeColorParam from u_Color to a_Color. This renaming is not required by OpenGL, but it helps us distinguish between vertex attributes and uniforms.
Shader attributes that reference array buffers must be enabled.
Adding a light source
Next, we'll add a light source to our scene and tell the shader its position when we draw. The light will be positioned just above the user.
At the top of MainActivity, add variables for the light position:
// Scene variables
// light positioned just above the user
private static final float[] LIGHT_POS_IN_WORLD_SPACE =
    new float[] { 0.0f, 2.0f, 0.0f, 1.0f };
private final float[] lightPosInEyeSpace = new float[4];
Calculate the position of the light by adding the following code to onDrawEye:
// Apply the eye transformation to the camera
Matrix.multiplyMM(view, 0, eye.getEyeView(), 0, camera, 0);

// Calculate position of the light
Matrix.multiplyMV(lightPosInEyeSpace, 0, view, 0, LIGHT_POS_IN_WORLD_SPACE, 0);
Note that we're using the view matrix (the eye view * camera) to transform the light position into the current view space using the Matrix.multiplyMV function.
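The android.opengl.Matrix class is only available on Android, but the operation itself is a plain column-major 4x4 matrix-vector multiply. Here is a hypothetical stand-in (illustration only, not project code) showing what Matrix.multiplyMV computes for our light position:

```java
// Plain-Java sketch of the column-major 4x4 matrix-vector multiply that
// Matrix.multiplyMV performs. Hypothetical class, for reasoning off-device.
public class MultiplyMVSketch {
    // m is a column-major 4x4 matrix (16 floats), v is a 4-element vector
    static float[] multiplyMV(float[] m, float[] v) {
        float[] r = new float[4];
        for (int row = 0; row < 4; row++) {
            r[row] = m[row]      * v[0]   // column 0
                   + m[4 + row]  * v[1]   // column 1
                   + m[8 + row]  * v[2]   // column 2
                   + m[12 + row] * v[3];  // column 3 (translation)
        }
        return r;
    }

    public static void main(String[] args) {
        // An identity view matrix leaves the world-space light position unchanged
        float[] identity = {
            1, 0, 0, 0,   0, 1, 0, 0,   0, 0, 1, 0,   0, 0, 0, 1 };
        float[] lightPos = { 0.0f, 2.0f, 0.0f, 1.0f };
        float[] eye = multiplyMV(identity, lightPos);
        System.out.println(eye[0] + " " + eye[1] + " " + eye[2] + " " + eye[3]);
    }
}
```

Because the light position has w = 1.0, the view matrix's translation column affects it; a direction vector with w = 0.0 would only be rotated.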
Now, we just tell the shader about the light position and the viewing matrices it needs. Modify the drawCube method (called by onDrawEye), as follows:
private void drawCube() {
    GLES20.glUseProgram(cubeProgram);

    // Set the light position in the shader
    GLES20.glUniform3fv(cubeLightPosParam, 1, lightPosInEyeSpace, 0);

    // Set the ModelView in the shader, used to calculate lighting
    GLES20.glUniformMatrix4fv(cubeModelViewParam, 1, false, cubeView, 0);
    GLES20.glUniformMatrix4fv(cubeMVPMatrixParam, 1, false,
            modelViewProjection, 0);

    GLES20.glVertexAttribPointer(cubePositionParam, COORDS_PER_VERTEX,
            GLES20.GL_FLOAT, false, 0, cubeVerticesBuffer);
    GLES20.glVertexAttribPointer(cubeNormalParam, 3,
            GLES20.GL_FLOAT, false, 0, cubeNormalsBuffer);
    GLES20.glVertexAttribPointer(cubeColorParam, 4,
            GLES20.GL_FLOAT, false, 0, cubeColorsBuffer);

    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, cubeVertexCount);
}
Building and running the app
We are now ready to go. When you build and run the app, you will see a screen similar to the following screenshot: