Adding 3D models to AndEngine
One of the things to note about AndEngine is that it is, first and foremost, a 2D game engine. Although meshes were added fairly recently, they are essentially meant to be used as polygon models, meaning 2D-only. This makes our life somewhat hard when we try to add functionality similar to what was covered in the preceding Android OpenGL application.
Fortunately, we do not have to use all the existing features in AndEngine. We can implement much of our functionality in parallel, interacting with the AndEngine code only where necessary or desirable.
Our starting point is, of course, the same as with any general AndEngine-based application: using a BaseGameActivity as the foundation. From here, we have to implement our own Shape-derived class as well as our own VertexBufferObject- and ShaderProgram-based implementations.
We create the following files in our project:
MainActivity.java
ModelData.java
Scene3D.java
Camera3D.java
Actor3D.java
Mesh3D.java
ShaderProgram3D.java
We also pull in the Assimp-based JNI project from the example OpenGL application to import model data.
The ModelData class is identical to the one in the previous OpenGL sample application. Actor3D, Camera3D, and Mesh3D are identical to their counterparts in the sample application, with only the 3D addition omitted. This leaves us with only three classes to define, starting with MainActivity.
MainActivity
This MainActivity class is reminiscent of the one in the example application, mixed with the standard AndEngine callback functions:
public class MainActivity extends BaseGameActivity {
    private Scene mScene;
    private static final int mCameraHeight = 480;
    private static final int mCameraWidth = 800;
    private ModelData mModelData;
    private Scene3D mScene3D;
    private Camera3D mCamera3D;
    private Actor3D mActor3D;
    private ShaderProgram3D mShaderProgram;
    private String mVertexShaderCode;
    private String mFragmentShaderCode;
    private AssetManager mAssetManager;
    private float[] mViewToProjectionMatrix = new float[16];
    private final String TAG = "AndEngineOnTour";

    static {
        System.loadLibrary("assimpImporter");
    }

    private native boolean getModelData(ModelData model, AssetManager manager,
            String filename);
All of this should look familiar. To the basic AndEngine application, we add variables for our own ModelData, Scene3D, Camera3D, Actor3D, and ShaderProgram3D classes; for the shader code; and, of course, for the asset manager.
We also load the importer library and declare its exported function. The first AndEngine callback in which we have to change something is onCreateResources:
public void onCreateResources(
        OnCreateResourcesCallback pOnCreateResourcesCallback)
        throws IOException {
    mAssetManager = getAssets();
    mModelData = new ModelData();
    if (!getModelData(mModelData, mAssetManager, "models/teapot.obj")) {
        mModelData = null;
    }

    try {
        mVertexShaderCode = readFile(mAssetManager.open("shaders/vertex.glv"));
        mFragmentShaderCode = readFile(mAssetManager.open("shaders/fragment.glf"));
    } catch (IOException e) {
        e.printStackTrace();
        return;
    }

    if (mVertexShaderCode == null || mFragmentShaderCode == null) {
        return;
    }

    pOnCreateResourcesCallback.onCreateResourcesFinished();
}
Instead of loading the texture, we must now load the model data and the code for our vertex and fragment shaders.
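The readFile() method used here is a helper carried over from the example application and is not shown in this chapter. A minimal sketch of what such a helper might look like follows; the enclosing class name (ShaderLoader) is ours for illustration only:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ShaderLoader {
    // Reads an entire InputStream (for example, an asset opened through the
    // AssetManager) into a single String, preserving line breaks so that GLSL
    // compiler error messages still refer to the right lines.
    public static String readFile(InputStream stream) {
        StringBuilder builder = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(stream, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                builder.append(line).append('\n');
            }
        } catch (IOException e) {
            e.printStackTrace();
            return null; // onCreateResources() checks for a null result
        }
        return builder.toString();
    }
}
```

Returning null on failure matches the null checks performed in onCreateResources() above.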
The onCreateScene callback has been stripped down to its basics:
public void onCreateScene(OnCreateSceneCallback pOnCreateSceneCallback)
        throws IOException {
    mScene = new Scene();
    pOnCreateSceneCallback.onCreateSceneFinished(mScene);
}
We only need to create a new Scene instance before finishing this callback. The real meat is in the onPopulateScene callback:
public void onPopulateScene(Scene pScene,
        OnPopulateSceneCallback pOnPopulateSceneCallback) throws IOException {
    pScene.getBackground().setColor(0.8f, 0.8f, 0.8f);
We set the background to the same neutral gray color before creating the objects to place in the scene:
    if (mModelData != null) {
        Mesh3D mMesh = new Mesh3D();
        mMesh.setVertices(mModelData.getVertexArray(),
                mModelData.getNormalArray(), mModelData.getUvArray());
        mMesh.setIndices(mModelData.getIndexArray());
        mShaderProgram = new ShaderProgram3D(mVertexShaderCode,
                mFragmentShaderCode);
First, we create a new Mesh3D instance, to which we assign the data from the ModelData object that we got back from the importer. This includes the interleaved array data (vertex, normal, and UV) and the index data. Finally, we create a ShaderProgram3D instance, to which we assign the vertex and fragment shader code that we loaded earlier:
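The contents of shaders/vertex.glv and shaders/fragment.glf themselves are not shown in this chapter. Purely as an illustration, a minimal OpenGL ES 2.0 shader pair for an untextured mesh could look like the following, here expressed as Java string constants; the attribute and uniform names are assumptions and would have to match whatever ShaderProgram3D actually binds:

```java
public class DefaultShaders {
    // Hypothetical minimal ES 2.0 vertex shader: transforms each vertex by a
    // combined model-view-projection matrix and passes the normal through.
    public static final String VERTEX =
            "uniform mat4 u_mvpMatrix;\n" +
            "attribute vec4 a_position;\n" +
            "attribute vec3 a_normal;\n" +
            "varying vec3 v_normal;\n" +
            "void main() {\n" +
            "    v_normal = a_normal;\n" +
            "    gl_Position = u_mvpMatrix * a_position;\n" +
            "}\n";

    // Hypothetical fragment shader: a simple directional diffuse term with a
    // small ambient floor so back faces are not rendered fully black.
    public static final String FRAGMENT =
            "precision mediump float;\n" +
            "varying vec3 v_normal;\n" +
            "void main() {\n" +
            "    float diffuse = max(dot(normalize(v_normal)," +
            " vec3(0.0, 0.0, -1.0)), 0.2);\n" +
            "    gl_FragColor = vec4(vec3(diffuse), 1.0);\n" +
            "}\n";
}
```

Keeping the sources in asset files rather than in constants like these has the advantage that shaders can be edited without recompiling the application.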
        float[] position = new float[3];
        float[] at = new float[3];
        float[] up = new float[3];
        float[] side = new float[3];
We prepare the usual orientation and position arrays before we set up the actual objects:
        mActor3D = new Actor3D();
        mActor3D.setShaderProgram(mShaderProgram);
        mActor3D.setMesh(mMesh);
        position[0] = 0.0f;
        position[1] = 0.0f;
        position[2] = 0.0f;
        mActor3D.setPosition(position);
        at[0] = 0.0f;
        at[1] = 0.0f;
        at[2] = 1.0f;
        up[0] = 0.0f;
        up[1] = 1.0f;
        up[2] = 0.0f;
        side[0] = 1.0f;
        side[1] = 0.0f;
        side[2] = 0.0f;
        mActor3D.setOrientation(at, up, side);
Actor3D is identical to the similarly named class in the previous example OpenGL application. We still need to assign a mesh and shader program to it, followed by the setting of its position at (0, 0, 0) and orientation (facing forward):
        mCamera3D = new Camera3D();
        position[0] = 0.0f;
        position[1] = 0.0f;
        position[2] = -50.0f;
        mCamera3D.setPosition(position);
        at[0] = 0.0f;
        at[1] = 0.0f;
        at[2] = 1.0f;
        up[0] = 0.0f;
        up[1] = 1.0f;
        up[2] = 0.0f;
        side[0] = 1.0f;
        side[1] = 0.0f;
        side[2] = 0.0f;
        mCamera3D.setOrientation(at, up, side);
The camera is still the same despite its new name. We place it 50 units away from the actor instance that we just created, facing towards it:
        mScene3D = new Scene3D(0.0f, 0.0f, mShaderProgram);
        mScene3D.addActor(mActor3D);
        mScene3D.setCamera(mCamera3D);
        pScene.attachChild(mScene3D);
    } else {
        Log.e(TAG, "ModelData was null");
    }

    pOnPopulateSceneCallback.onPopulateSceneFinished();
}
The first major change is visible in the setting up of the Scene3D object. Here, we can see that we set Actor3D and Camera3D on our own Scene3D instead of on the AndEngine Scene object.
The reason for this is that we have implemented a more or less parallel rendering path for 3D objects. While AndEngine's rendering cycle still triggers it, the actual rendering is done by our own code. The practical argument against extending AndEngine classes directly is that AndEngine's Camera and related classes are limited to rendering flat, 2D elements.
While this offers no discernible difference to the end user of the game, it does mean a bit more code and management on our side. As a benefit, it also offers us many opportunities that are hard or impossible to realize with the straightforward AndEngine framework, as we'll see later.
In this implementation, we can see that the MainActivity class has essentially taken over the role of the Renderer class in our sample application. MainActivity extends the BaseGameActivity class, which itself implements the IRendererListener interface.
Then, when we look at the EngineRenderer (org.andengine.opengl.view) class of AndEngine, we can see that it implements the very GLSurfaceView.Renderer interface that we also used for the Renderer class in the sample application. This EngineRenderer class calls a registered IRendererListener instance when onSurfaceCreated and onSurfaceChanged get called. For this reason, we need to override these functions in our MainActivity implementation to tie into the AndEngine framework:
@Override
public synchronized void onSurfaceCreated(final GLState pGLState) {
    super.onSurfaceCreated(pGLState);
    GLES20.glClearDepthf(1.0f);
    GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT);
    GLES20.glDepthFunc(GLES20.GL_LEQUAL);
}

@Override
public synchronized void onSurfaceChanged(final GLState pGLState,
        final int pWidth, final int pHeight) {
    float ratio = (float) pWidth / (float) pHeight;
    Matrix.frustumM(mViewToProjectionMatrix, 0, -ratio, ratio, -1, 1, 1, 1000);
    mScene3D.setViewToProjectionMatrix(mViewToProjectionMatrix);
}
Here, we can now do the same things that we did in the Renderer class earlier: clear the depth buffer and set the depth function to use when the OpenGL rendering surface is created, and create the frustum-based view-to-projection matrix when the surface changes.
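For reference, Matrix.frustumM fills its 16-element array in column-major order with the standard OpenGL perspective frustum. The following plain-Java sketch reproduces that standard computation (it is not the Android source itself), which can be handy when reasoning about projection issues off-device:

```java
public class Frustum {
    // Builds the standard OpenGL perspective projection matrix for the given
    // clip planes, in column-major order (the layout android.opengl.Matrix uses).
    public static float[] frustum(float left, float right, float bottom,
                                  float top, float near, float far) {
        float[] m = new float[16]; // all other elements remain 0
        m[0]  = 2.0f * near / (right - left);
        m[5]  = 2.0f * near / (top - bottom);
        m[8]  = (right + left) / (right - left);
        m[9]  = (top + bottom) / (top - bottom);
        m[10] = -(far + near) / (far - near);
        m[11] = -1.0f;
        m[14] = -2.0f * far * near / (far - near);
        return m;
    }
}
```

With the symmetric frustum used above (ratio = 800f / 480f, near and far planes at 1 and 1000), the off-center terms m[8] and m[9] become zero, m[0] works out to 0.6, and m[5] to 1.0.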
It is important to note here that, since this is called from the renderer implementation, everything in MainActivity runs on the GLThread. Thus, we don't have to use Runnables or other cross-thread techniques to pass data to any objects used in the rendering. This means that we can, for example, directly set a new view-to-projection matrix on the Scene3D instance, and also manipulate meshes, shader programs, and actor/camera positions and orientations in our MainActivity.
Scene3D
To integrate our code with the AndEngine classes, some changes have to be made. Aside from more or less replacing the Renderer class with the MainActivity class, most of these changes are in the Scene3D class:
public class Scene3D extends Shape {
    private ArrayList<Actor3D> actors;
    private Camera3D camera;
    private float[] mViewToProjectionMatrix;

    public Scene3D(final float pX, final float pY,
            ShaderProgram3D shaderProgram) {
        super(pX, pY, shaderProgram);
        actors = new ArrayList<Actor3D>();
        camera = null;
    }
We saw earlier in the MainActivity class that we changed the constructor of the Scene3D class from the default constructor. The reason for this is that we have to extend the Shape class to make the Scene3D class compatible with the AndEngine Scene class. This also requires us to call the constructor of the superclass. While none of this is used during the program's life cycle, it has to be added for compatibility reasons.
Finally, we also see that we have added a class variable for our view-to-projection matrix here. We add a getter and setter for this as well:
public float[] getViewToProjectionMatrix() {
    return mViewToProjectionMatrix;
}

public void setViewToProjectionMatrix(float[] pViewToProjectionMatrix) {
    mViewToProjectionMatrix = pViewToProjectionMatrix;
}
We further have to implement our draw function in the way the Shape class expects:
@Override
protected void draw(GLState pGLState, Camera pCamera) {
    if (camera == null) {
        return;
    }

    GLES20.glEnable(GLES20.GL_DEPTH_TEST);
    for (int i = 0; i < actors.size(); i++) {
        actors.get(i).draw(camera, mViewToProjectionMatrix);
    }

    GLES20.glDisable(GLES20.GL_DEPTH_TEST);
}
We don't have to use any of the parameters, so we ignore them. We then have to explicitly enable and disable the depth test function of OpenGL, since AndEngine doesn't have it enabled and we don't wish to interfere with its drawing process:
@Override
public IVertexBufferObject getVertexBufferObject() {
    return null;
}

@Override
protected void onUpdateVertices() {
}
We also have to implement these functions as they are defined by the interface, but we can leave them empty, because they are not used otherwise.
ShaderProgram3D
This is the briefest section of this chapter by far, as we only have to make two changes to the ShaderProgram class to turn it into an AndEngine-compatible ShaderProgram3D class:
public class ShaderProgram3D extends ShaderProgram {
As the ShaderProgram class of AndEngine doesn't implement any interface, we just have to extend it to indicate that it is compatible with AndEngine and can be passed to a Shape constructor:
public ShaderProgram3D(final String pVertexShaderSource,
        final String pFragmentShaderSource) {
    super(pVertexShaderSource, pFragmentShaderSource);
    vertexShaderCode = pVertexShaderSource;
    fragmentShaderCode = pFragmentShaderSource;
}
Finally, we adapt the constructor to accept the two string parameters containing the vertex and fragment shader source, as we need to pass these on to the ShaderProgram constructor. In the MainActivity class, we can see this change as well.