Cardboard VR Projects for Android

Hello, triangle!

Let's add a triangle to the scene. Yeah, I know that a triangle isn't even a box. However, we're going to start super simple. Triangles are the building blocks of all 3D graphics and the simplest shapes that OpenGL can render (that is, in triangle mode).

Introducing geometry

Before moving on, let's talk a little about geometry.

Virtual reality is largely about creating 3D scenes. Complex models are organized as three-dimensional data with vertices, faces, and meshes, forming objects that can be hierarchically assembled into more complex models. For now, we're taking a really simple approach—a triangle with three vertices, stored as a simple Java array.

The triangle is composed of three vertices (that's why it's called a tri-angle!). We're going to define our triangle as top (0.0, 0.6), bottom-left (-0.5, -0.3), bottom-right (0.5, -0.3). The first vertex is the topmost point of the triangle: X=0.0 puts it at the horizontal center, and Y=0.6 places it above the origin.

The order of the vertices, or triangle winding, is very important as it indicates the front-facing direction of the triangle. OpenGL drivers expect it to wind in a counter-clockwise direction, as shown in the following diagram:

[Diagram: triangle vertices wound in a counter-clockwise direction]

If the vertices are defined clockwise, the shader will assume that the triangle is facing the other direction, away from the camera, and it will thus not be rendered. This optimization is called culling, and it allows the rendering pipeline to readily throw away geometry that is on the back side of an object. That is, if it is not visible to the camera, don't even bother trying to draw it. Having said this, you can set various culling modes to render only front faces, only back faces, or both.

Refer to the Creative Commons-licensed source at http://learnopengl.com/#!Advanced-OpenGL/Face-culling.
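Winding can also be checked numerically off-device: the signed area of a 2D triangle (via the shoelace formula) is positive when its vertices run counter-clockwise in a y-up coordinate system. The following is a minimal sketch of that check; the class and method names are ours for illustration and are not part of OpenGL or the project code:

```java
public class WindingCheck {
    // Shoelace formula: positive result means counter-clockwise winding
    // (in a conventional y-up coordinate system), negative means clockwise.
    static float signedArea(float x0, float y0, float x1, float y1,
                            float x2, float y2) {
        return 0.5f * (x0 * (y1 - y2) + x1 * (y2 - y0) + x2 * (y0 - y1));
    }

    public static void main(String[] args) {
        // Our triangle: top, bottom-left, bottom-right -- counter-clockwise
        float area = signedArea(0.0f, 0.6f, -0.5f, -0.3f, 0.5f, -0.3f);
        System.out.println(area > 0 ? "CCW" : "CW");  // prints "CCW"

        // Reversing the vertex order flips the sign -- now clockwise
        float reversed = signedArea(0.5f, -0.3f, -0.5f, -0.3f, 0.0f, 0.6f);
        System.out.println(reversed > 0 ? "CCW" : "CW");  // prints "CW"
    }
}
```

A check like this is handy when importing model data whose winding convention you aren't sure about.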

Tip

As The OpenGL Programming Guide by Dave Shreiner, Graham Sellers, John M. Kessenich, and Bill Licea-Kane puts it: "By convention, polygons whose vertices appear in a counter-clockwise order on the screen are called front-facing." This is determined by a global state mode, and the default value is GL_CCW (https://www.opengl.org/wiki/Face_Culling).

Three-dimensional points, or vertices, are defined with x, y, and z coordinate values. A triangle, for example, in 3D space is made up of three vertices, each having an x, y, and z value.

Our triangle lies on a plane parallel to the screen. When we add 3D viewing to the scene (later in this chapter), we'll need a z coordinate to place it in 3D space. In anticipation, we'll set the triangle on the Z=-1 plane. The default camera in OpenGL is at the origin (0,0,0) and looks down the negative z axis. In other words, objects in the scene look up the positive z axis toward the camera. We put the triangle one unit away from the camera, at Z=-1.0, so that we can see it.

Triangle variables

Add the following code snippet to the top of the MainActivity class:

    // Model variables
    private static final int COORDS_PER_VERTEX = 3;
    private static float triCoords[] = {
        // in counter-clockwise order
        0.0f,  0.6f, -1.0f, // top
       -0.5f, -0.3f, -1.0f, // bottom left
        0.5f, -0.3f, -1.0f  // bottom right
    };

    private final int triVertexCount = triCoords.length / COORDS_PER_VERTEX;
    // yellow-ish color
    private float triColor[] = { 0.8f, 0.6f, 0.2f, 0.0f }; 
    private FloatBuffer triVerticesBuffer;

Our triangle coordinates are assigned to the triCoords array. All the vertices are in 3D space with three coordinates (x, y, and z) per vertex (COORDS_PER_VERTEX). The triVertexCount variable is precalculated as the length of the triCoords array divided by COORDS_PER_VERTEX. We also define an arbitrary triColor value for our triangle, which is composed of R, G, B, and A values (red, green, blue, and alpha (transparency)). The triVerticesBuffer variable will be used in the draw code.
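The vertex-count arithmetic is easy to verify off-device. This little sketch (the class name is ours, for illustration only) mirrors the declarations above:

```java
public class VertexCountCheck {
    static final int COORDS_PER_VERTEX = 3;
    static float[] triCoords = {
        0.0f,  0.6f, -1.0f, // top
       -0.5f, -0.3f, -1.0f, // bottom left
        0.5f, -0.3f, -1.0f  // bottom right
    };

    public static void main(String[] args) {
        // 9 floats / 3 coordinates per vertex = 3 vertices
        int triVertexCount = triCoords.length / COORDS_PER_VERTEX;
        System.out.println(triVertexCount);  // prints 3

        // A useful sanity check when you edit coordinate arrays by hand:
        // the length should always be an exact multiple of COORDS_PER_VERTEX.
        System.out.println(triCoords.length % COORDS_PER_VERTEX == 0);  // prints true
    }
}
```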

For those who are new to Java programming, you might also wonder about the variable types. Integers are declared int and floating point numbers are declared float. All the variables here are declared private, which means that they'll only be visible and used within this class definition. The ones that are declared static share a single copy of their data across all instances of the class. The ones that are declared final are not expected to change once they are initialized.

onSurfaceCreated

The purpose of this activity code is to draw stuff on the Android device display. We do this through the OpenGL graphics library, which draws onto a surface, a memory buffer onto which you can draw graphics via a rendering pipeline.

After the activity is created (onCreate), a surface is created and onSurfaceCreated is called. It has several responsibilities, including initializing the scene and compiling the shaders. It also prepares for rendering by allocating memory for vertex buffers, binding textures, and initializing the render pipeline handles.

Here's the method, which we've broken into several private methods that we're going to write next:

    @Override
    public void onSurfaceCreated(EGLConfig eglConfig) {
        initializeScene();
        compileShaders();
        prepareRenderingTriangle();
    }

There's nothing to initialize in the scene at this point:

private void initializeScene() {
}

Let's move on to the shaders and rendering discussions.

Introducing OpenGL ES 2.0

Now is a good time to introduce the graphics pipeline. When a Cardboard app draws 3D graphics on the screen, it hands the rendering to a separate graphics processor (GPU). Android and our Cardboard app use the OpenGL ES 2.0 standard graphics library.

OpenGL is a specification for how applications interact with graphics drivers. You could say that it's a long list of function calls that do things in graphics hardware. Hardware vendors write their drivers to conform to the latest specification, and some intermediary, in this case Google, creates a library that hooks into driver functions in order to provide method signatures that you can call from whatever language you're using (generally, Java, C++, or C#).

OpenGL ES is the mobile, or Embedded Systems, version of OpenGL. It follows the same design patterns as OpenGL, but its version history is very different. Different versions of OpenGL ES and even different implementations of the same version will require different approaches to drawing 3D graphics. Thus, your code might differ greatly between OpenGL ES 1.0, 2.0, and 3.0. Thankfully, most major changes happened between Version 1 and 2, and the Cardboard SDK is set up to use 2.0. The CardboardView interface also varies slightly from a normal GLSurfaceView.

To draw graphics on the screen, OpenGL needs two basic things:

  • The graphics programs, or shaders (the terms are sometimes used interchangeably), which define how to draw shapes
  • The data, or buffers, which define what is being drawn

There are also some parameters that specify transformation matrices, colors, vectors, and so on. You might be familiar with the concept of a game loop, which is a basic pattern to set up the game environment and then initiate a loop that runs some game logic, renders the screen, and repeats at a semi-regular interval until the game is paused or the program exits. The CardboardView sets up the game loop for us, and basically, all that we have to do is implement the interface methods.

A bit more on shaders: at the bare minimum, we need a vertex shader and a fragment shader. The vertex shader is responsible for transforming the vertices of an object from world space (where they are in the world) to screen space (where they should be drawn on the screen).

The fragment shader is called on each pixel that the shape occupies (determined by the raster function, a fixed part of the pipeline) and returns the color that is drawn. Every shader is a single function, accompanied by a number of attributes that can be used as inputs.

A collection of functions (that is, a vertex and a fragment) is compiled by OpenGL into a program. Sometimes, whole programs are referred to as shaders, but this is a colloquialism that assumes the basic knowledge that more than one function, or shader, is required to fully draw an object. The program and the values for all its parameters will sometimes be referred to as a material, given that it completely describes the material of the surface that it draws.

Shaders are cool. However, they don't do anything until your program sets up the data buffers and makes a bunch of draw calls.

A draw call consists of a Vertex Buffer Object (VBO), the shaders that will be used to draw it, a number of parameters that specify the transformation applied to the object, the texture(s) used to draw it, and any other shader parameters.

The VBO refers to any and all data used to describe the shape of an object. A very basic object (for example, a triangle) only needs an array of vertices. The vertices are read in order, and every three positions in space define a single triangle. Slightly more advanced shapes use an array of vertices and an array of indices, which define which vertices to draw in what order. Using an index buffer, multiple vertices can be re-used.
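To make the vertex/index distinction concrete, here is a sketch of a quad defined with four shared vertices and six indices (two counter-clockwise triangles), in the same flat-array layout an OpenGL ES vertex and index buffer would use. The arrays and names are ours for illustration, not part of the project code:

```java
public class QuadData {
    static final int COORDS_PER_VERTEX = 3;

    // Four unique vertices of a quad on the Z=-1 plane
    static float[] quadCoords = {
        -0.5f,  0.5f, -1.0f, // 0: top left
        -0.5f, -0.5f, -1.0f, // 1: bottom left
         0.5f, -0.5f, -1.0f, // 2: bottom right
         0.5f,  0.5f, -1.0f  // 3: top right
    };

    // Six indices describing two counter-clockwise triangles;
    // vertices 0 and 2 are re-used by both triangles.
    static short[] quadIndices = { 0, 1, 2, 0, 2, 3 };

    public static void main(String[] args) {
        int vertexCount = quadCoords.length / COORDS_PER_VERTEX;
        System.out.println(vertexCount);             // prints 4
        System.out.println(quadIndices.length / 3);  // prints 2 (triangles)
    }
}
```

Without the index buffer, the same quad would need six full vertices (two per-triangle copies of vertices 0 and 2), which is why indexing pays off quickly for larger meshes.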

While OpenGL can draw a number of primitive types (points, lines, and triangles; older desktop OpenGL also had quads), we will assume that all are triangles. This is both a performance optimization and a matter of convenience. If we want a quad, we can draw two triangles. If we want a line, we can draw a really long, skinny quad. If we want a point, we can draw a tiny triangle. This way, not only can we leave OpenGL in triangle mode, but we can also treat all VBOs in exactly the same manner. Ideally, you want your render code to be completely agnostic to what it is rendering.

To summarize:

  • The purpose of the OpenGL graphics library is to give us access to the GPU hardware, which then paints pixels on the screen based on the geometry in a scene. This is achieved through a rendering pipeline, where data is transformed and passed through a series of shaders.
  • A shader is a small program that takes certain inputs and generates corresponding outputs, depending on the stage of the pipeline.
  • As a program, shaders are written in a special C-like language. The source code is compiled to be run very efficiently on the Android device's GPU.

For example, a vertex shader handles processing individual vertices, outputting a transformed version of each one. Another step rasterizes the geometry, after which a fragment shader receives a raster fragment and outputs colored pixels.

Note

We'll be discussing the OpenGL rendering pipeline later on, and you can read about it at https://www.opengl.org/wiki/Rendering_Pipeline_Overview.

You can also review the Android OpenGL ES API Guide at http://developer.android.com/guide/topics/graphics/opengl.html.

For now, don't worry too much about it and let's just follow along.

Note: GPU drivers actually implement the entire OpenGL library on a per-driver basis. This means that someone at NVIDIA (or in this case, probably Qualcomm or ARM) wrote the code that compiles your shaders and reads your buffers. OpenGL is a specification for how this API should work. In our case, the API is exposed through the GLES20 class that's part of Android.

Simple shaders

Presently, we'll write a couple of simple shaders. Our shader code will be written in separate files, which are loaded and compiled by our app. Add the following functions at the end of the MainActivity class:

    /**
     * Utility method for compiling an OpenGL shader.
     *
     * @param type - Vertex or fragment shader type.
     * @param resId - int containing the resource ID of the shader code file.
     * @return - Returns an ID for the compiled shader.
     */
    private int loadShader(int type, int resId){
        String code = readRawTextFile(resId);
        int shader = GLES20.glCreateShader(type);

        // add the source code to the shader and compile it
        GLES20.glShaderSource(shader, code);
        GLES20.glCompileShader(shader);

        return shader;
    }

    /**
     * Converts a raw text file into a string.
     *
     * @param resId The resource ID of the raw text file about to be turned into a shader.
     * @return The content of the text file, or null in case of error.
     */
    private String readRawTextFile(int resId) {
        InputStream inputStream = getResources().openRawResource(resId);
        try {
            BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream));
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line).append("\n");
            }
            reader.close();
            return sb.toString();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null;
    }

We will call loadShader to load shader source code (via readRawTextFile) and compile it. This code will be useful in other projects as well.
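Although readRawTextFile depends on Android resources, the same stream-reading logic can be exercised off-device with any InputStream. The helper below is a hypothetical stand-in (our name, not part of the project code) showing that each line is re-joined with a trailing newline, which matters for shader line numbers in compile error messages:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ShaderTextReader {
    // Same logic as readRawTextFile, but taking any InputStream
    static String readText(InputStream inputStream) {
        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(inputStream))) {
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line).append("\n");
            }
            return sb.toString();
        } catch (IOException e) {
            e.printStackTrace();
            return null;
        }
    }

    public static void main(String[] args) {
        String source = "attribute vec4 a_Position;\n"
                      + "void main() {\n"
                      + "    gl_Position = a_Position;\n"
                      + "}";
        String result = readText(
            new ByteArrayInputStream(source.getBytes(StandardCharsets.UTF_8)));
        System.out.println(result.endsWith("}\n"));  // prints true
    }
}
```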

Now, we'll write a couple of simple shaders in the res/raw/simple_vertex.shader and res/raw/simple_fragment.shader files.

In the Project Files hierarchy view, on the left-hand side of Android Studio, locate the app/res/ resource folder, right-click on it, and go to New | Android Resource Directory. In the New Resource Directory dialog box, from Resource Type:, select Raw, and then click on OK.

Right-click on the new raw folder, go to New | File, and name it simple_vertex.shader. Add the following code:

attribute vec4 a_Position;
void main() {
    gl_Position = a_Position;
}

Similarly, for the fragment shader, right-click on the raw folder, go to New | File, and name it simple_fragment.shader. Add the following code:

precision mediump float;
uniform vec4 u_Color;
void main() {
    gl_FragColor = u_Color;
}

Basically, these are identity functions. The vertex shader passes through the given vertex, and the fragment shader passes through the given color.

Notice the names of the parameters that we declared: an attribute named a_Position in simple_vertex and a uniform variable named u_Color in simple_fragment. We'll set these up from the MainActivity onSurfaceCreated method. Attributes are properties of each vertex, and when we allocate buffers for them, they must all be arrays of equal length. Other attributes that you will encounter are vertex normals, texture coordinates, and vertex colors. Uniforms will be used to specify information that applies to the whole material, such as in this case, the solid color applied to the whole surface.

Also, note that the gl_FragColor and gl_Position variables are built-in variable names that OpenGL is looking for you to set. Think of them as the returns on your shader function. There are other built-in output variables, which we will see later.

The compileShaders method

We're now ready to implement the compileShaders method that onSurfaceCreated calls.

Add the following variables on top of MainActivity:

    // Rendering variables
    private int simpleVertexShader;
    private int simpleFragmentShader;

Implement compileShaders, as follows:

    private void compileShaders() {
        simpleVertexShader = loadShader(GLES20.GL_VERTEX_SHADER, R.raw.simple_vertex);
        simpleFragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER, R.raw.simple_fragment);
    }

The prepareRenderingTriangle method

The onSurfaceCreated method prepares for rendering by allocating memory for vertex buffers, creating OpenGL programs, and initializing the render pipeline handles. We will do this for our triangle shape now.

Add the following variables on top of MainActivity:

    // Rendering variables
    private int triProgram;
    private int triPositionParam;
    private int triColorParam;

Here's a skeleton of the function:

    private void prepareRenderingTriangle() {
        // Allocate buffers
        // Create GL program
        // Get shader params
    }

We need to prepare some memory buffers that will be passed to OpenGL when each frame is rendered. This is the first go-round for our triangle and simple shaders; for now, we only need a vertex buffer:

        // Allocate buffers
        // initialize vertex byte buffer for shape coordinates (4 bytes per float)
        ByteBuffer bb = ByteBuffer.allocateDirect(triCoords.length * 4);
        // use the device hardware's native byte order
        bb.order(ByteOrder.nativeOrder());

        // create a floating point buffer from the ByteBuffer
        triVerticesBuffer = bb.asFloatBuffer();
        // add the coordinates to the FloatBuffer
        triVerticesBuffer.put(triCoords);
        // set the buffer to read the first coordinate
        triVerticesBuffer.position(0);

These five lines of code set up the triVerticesBuffer value, as follows:

  • A ByteBuffer is allocated that is big enough to hold our triangle coordinate values
  • The binary data is arranged to match the hardware's native byte order
  • The buffer is formatted for a floating point and assigned to our FloatBuffer vertex buffer
  • The triangle data is put into it, and then we reset the buffer cursor position to the beginning
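The steps above can be verified off-device with plain java.nio, since no GL calls are needed just to observe the byte-order and cursor behavior. This is a sketch with our own illustrative class and method names:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class VertexBufferSketch {
    // Same allocation pattern as prepareRenderingTriangle: 4 bytes per float,
    // native byte order, cursor rewound to the first coordinate.
    static FloatBuffer makeVertexBuffer(float[] coords) {
        ByteBuffer bb = ByteBuffer.allocateDirect(coords.length * 4);
        bb.order(ByteOrder.nativeOrder());
        FloatBuffer fb = bb.asFloatBuffer();
        fb.put(coords);    // put() advances the cursor to the end...
        fb.position(0);    // ...so rewind it before OpenGL reads the buffer
        return fb;
    }

    public static void main(String[] args) {
        float[] triCoords = {
            0.0f,  0.6f, -1.0f,
           -0.5f, -0.3f, -1.0f,
            0.5f, -0.3f, -1.0f
        };
        FloatBuffer buffer = makeVertexBuffer(triCoords);
        System.out.println(buffer.remaining());  // prints 9 (floats left to read)
        System.out.println(buffer.get());        // prints 0.0 (first coordinate)
    }
}
```

Forgetting the position(0) rewind is a classic bug: the buffer cursor sits at the end after put(), so OpenGL would see zero remaining elements.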

Next, we build the OpenGL ES program executable. Create an empty OpenGL ES program using glCreateProgram, and assign its ID to triProgram. This ID will be used in other methods as well. We attach our shaders to the program, and then build the executable with glLinkProgram:

        // Create GL program
        // create empty OpenGL ES Program
        triProgram = GLES20.glCreateProgram();
        // add the vertex shader to program
        GLES20.glAttachShader(triProgram, simpleVertexShader);
        // add the fragment shader to program
        GLES20.glAttachShader(triProgram, simpleFragmentShader);
        // build OpenGL ES program executable
        GLES20.glLinkProgram(triProgram);
        // set program as current
        GLES20.glUseProgram(triProgram);

Lastly, we get handles into the render pipeline. A call to glGetAttribLocation on a_Position retrieves the location of the vertex buffer parameter, glEnableVertexAttribArray enables access to it, and a call to glGetUniformLocation on u_Color retrieves the location of the color parameter. We'll be happy that we did this once we get to onDrawEye:

        // Get shader params
        // get handle to vertex shader's a_Position member
        triPositionParam = GLES20.glGetAttribLocation(triProgram, "a_Position");
        // enable a handle to the triangle vertices
        GLES20.glEnableVertexAttribArray(triPositionParam);
        // get handle to fragment shader's u_Color member
        triColorParam = GLES20.glGetUniformLocation(triProgram, "u_Color");

So, we've isolated the code needed to prepare a drawing of the triangle model in this function. First, it sets up buffers for the vertices. Then, it creates a GL program, attaching the shaders it'll use. Then, we get handles to the parameters in the shaders that we'll use to draw.

onDrawEye

Ready, Set, and Go! If you think of what we've written so far as the "Ready Set" part, now we do the "Go" part! That is, the app starts and creates the activity, calling onCreate. The surface is created and calls onSurfaceCreated to set up the buffers and shaders. Now, as the app runs, for each frame, the display is updated. Go!

The CardboardView delegates these calls to the methods of its CardboardView.StereoRenderer interface. We can handle onNewFrame (and will later on). For now, we'll just implement the onDrawEye method, which draws the contents from the point of view of an eye. This method gets called twice per frame, once for each eye.

All that onDrawEye needs to do for now is render our lovely triangle. Nonetheless, we'll split it into a separate function (that'll make sense later):

    @Override
    public void onDrawEye(Eye eye) {
        drawTriangle();
    }
    
    private void drawTriangle() {
        // Add program to OpenGL ES environment
        GLES20.glUseProgram(triProgram);

        // Prepare the coordinate data
        GLES20.glVertexAttribPointer(triPositionParam, COORDS_PER_VERTEX,
                GLES20.GL_FLOAT, false, 0, triVerticesBuffer);

        // Set color for drawing
        GLES20.glUniform4fv(triColorParam, 1, triColor, 0);

        // Draw the model
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, triVertexCount);
    }

We need to specify which shader program we are using by calling glUseProgram. A call to glVertexAttribPointer passes our vertex buffer to the pipeline. We also set the color using glUniform4fv (4fv refers to the fact that our uniform is a vector of four floats). Then, we actually draw using glDrawArrays.

Building and running

That's it. Yee haa! That wasn't so bad, was it? Actually, if you're familiar with Android development and OpenGL, you might have breezed through this.

Let's build and run it. Go to Run | Run 'app', or simply use the green triangle Run icon on the toolbar.

Gradle will do its build thing. Select the Gradle Console tab at the bottom of the Android Studio window to view the Gradle build messages. Then, assuming that all goes well, the APK file will be installed on your connected phone (it's connected and turned on, right?). Select the Run tab at the bottom to view the upload and launch messages.

This is what it displays:

[Screenshot: the triangle rendered in left and right stereo views]

Actually, it kind of looks like a Halloween pumpkin carving! Spooky. But in VR you'll see just a single triangle.

Notice that while the triangle vertex coordinates define edges with straight lines, the CardboardView renders it with barrel distortion to compensate for the lens optics in the headset. Also, the left image is different from the right, one for each eye. When you insert the phone in a Google Cardboard headset, the left and right stereoscopic views appear as one triangle floating in space with straight edges.

That's great! We just built a simple Cardboard app for Android from scratch. Like any Android app, there are a number of different pieces that need to be defined just to get a basic thing going, including the AndroidManifest.xml, activity_main.xml, and MainActivity.java files.

Hopefully everything went as planned. Like a good programmer, you've probably been building and running the app after making incremental changes to account for syntax errors and unhandled exceptions. A little bit later, we will call the GLError function to check error information from OpenGL. As always, pay close attention to errors in logcat (try filtering for the running application) and to variable names. You might have a syntax error in your shader, causing its compilation to fail, or you might have a typo in the attribute/uniform name when trying to access the handles. These kinds of things will not result in any compile-time errors (shaders are compiled at runtime), and your app will run but may not render anything as a result.