
Adding TensorFlow to your own Android app

It turns out that adding TensorFlow to your own Android app is easier than adding it to your own iOS app. Let's jump right to the steps:

  1. If you have an existing Android app, skip this. Otherwise, in Android Studio, select File | New | New Project... and accept all the defaults before clicking Finish.
  2. Open the build.gradle (Module: app) file and add compile 'org.tensorflow:tensorflow-android:+' at the end of the dependencies {...} block.
  3. Sync and build the project, and you'll find libtensorflow_inference.so, the TensorFlow native library that the Java code talks to, inside the subfolders of app/build/intermediates/transforms/mergeJniLibs/debug/0/lib in your app directory.
  4. If this is a new project, you can create the assets folder by first switching the Project panel from Android to the Packages view, then right-clicking the app and selecting New | Folder | Assets Folder, as shown in the following screenshot, and finally switching back from Packages to Android:

Figure 2.13 Adding Assets Folder to a new project

  5. Drag and drop the two retrained model files and the label file, as well as a couple of test images, to the assets folder, as shown here:

Figure 2.14 Adding model files, the label file and test images to assets

  6. Hold down the Option key, then drag and drop TensorFlowImageClassifier.java and Classifier.java from tensorflow/examples/android/src/org/tensorflow/demo to your project's Java folder, as shown:

Figure 2.15 Adding TensorFlow classifier files to the project

  7. Open MainActivity and first create the constants related to the retrained MobileNet model – the input image size, node names, model filename, and label filename:
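// TensorFlowImageClassifier (copied in step 6) feeds each pixel to the model as 
// (value - IMAGE_MEAN) / IMAGE_STD, so a mean and std of 128 map the 0-255 channel values to roughly [-1, 1]. 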
private static final int INPUT_SIZE = 224; 
private static final int IMAGE_MEAN = 128; 
private static final float IMAGE_STD = 128; 
private static final String INPUT_NAME = "input"; 
private static final String OUTPUT_NAME = "final_result"; 
 
private static final String MODEL_FILE = "file:///android_asset/dog_retrained_mobilenet10_224.pb"; 
private static final String LABEL_FILE = "file:///android_asset/dog_retrained_labels.txt"; 
private static final String IMG_FILE = "lab1.jpg";
  8. Now, inside the onCreate method, first create a Classifier instance:
Classifier classifier = TensorFlowImageClassifier.create( 
                getAssets(), 
                MODEL_FILE, 
                LABEL_FILE, 
                INPUT_SIZE, 
                IMAGE_MEAN, 
                IMAGE_STD, 
                INPUT_NAME, 
                OUTPUT_NAME);  

Then read our test image from the assets folder, resize it as specified by the model, and call the inference method, recognizeImage:

try { 
    Bitmap bitmap = BitmapFactory.decodeStream(getAssets().open(IMG_FILE)); 
    Bitmap croppedBitmap = Bitmap.createScaledBitmap(bitmap, INPUT_SIZE, INPUT_SIZE, true); 
    final List<Classifier.Recognition> results = classifier.recognizeImage(croppedBitmap); 
} catch (IOException e) { e.printStackTrace(); } 
 

For simplicity, we haven't added any UI-related code to the Android app, but you can set a breakpoint at the line after you get the results and debug-run the app (or log the results, as sketched after Figure 2.16); you'll see the results shown in the following screenshot:

Figure 2.16 Recognition results using the MobileNet retrained model
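
If you would rather see the results in Logcat than at a breakpoint, a minimal sketch like the following can be added right after the recognizeImage call; it only assumes the Recognition class defined in the Classifier.java copied in step 6, which exposes getTitle() and getConfidence(), plus an import of android.util.Log:

for (Classifier.Recognition result : results) { 
    // "DogBreed" is just an arbitrary log tag; each line prints a label and its confidence score. 
    Log.i("DogBreed", result.getTitle() + ": " + result.getConfidence()); 
}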

If you switch to the Inception v3 retrained model by changing MODEL_FILE to quantized_stripped_dogs_retrained.pb, INPUT_SIZE to 299, and INPUT_NAME to Mul, then debug-run the app again, you will get the results shown here:

Figure 2.17 Recognition results using the Inception v3 retrained model
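
For reference, switching to the Inception v3 retrained model amounts to changing just these three constants (the asset path below simply keeps the file:///android_asset/ prefix used for the MobileNet model):

private static final int INPUT_SIZE = 299; 
private static final String INPUT_NAME = "Mul"; 
private static final String MODEL_FILE = "file:///android_asset/quantized_stripped_dogs_retrained.pb"; 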

Now that you've seen how to add TensorFlow and retrained models to your own iOS and Android apps, it shouldn't be difficult to add the non-TensorFlow-related features, such as using your phone's camera to take a picture of a dog and recognize its breed.
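
For example, the camera part can be as simple as the following sketch. It is not taken from the book's sample code: takeAndClassifyPicture and REQUEST_IMAGE_CAPTURE are names chosen here for illustration, and it assumes you keep the Classifier created in onCreate in a field named classifier (you will also need imports for Intent, MediaStore, and Bitmap). The thumbnail returned in the data extra is small, but good enough for a quick test:

private static final int REQUEST_IMAGE_CAPTURE = 1; 
 
private void takeAndClassifyPicture() { 
    // Launch the default camera app to take a picture. 
    Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE); 
    if (intent.resolveActivity(getPackageManager()) != null) { 
        startActivityForResult(intent, REQUEST_IMAGE_CAPTURE); 
    } 
} 
 
@Override 
protected void onActivityResult(int requestCode, int resultCode, Intent data) { 
    super.onActivityResult(requestCode, resultCode, data); 
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) { 
        // The "data" extra holds a small thumbnail Bitmap of the photo just taken. 
        Bitmap photo = (Bitmap) data.getExtras().get("data"); 
        Bitmap scaled = Bitmap.createScaledBitmap(photo, INPUT_SIZE, INPUT_SIZE, true); 
        final List<Classifier.Recognition> results = classifier.recognizeImage(scaled); 
        // Set a breakpoint or log the results here, exactly as before. 
    } 
} 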