Mastering TensorFlow 1.x

TF Estimator - previously TF Learn

TF Estimator is a high-level API that makes it simple to create and train models by encapsulating the functionality for training, evaluating, predicting, and exporting. TensorFlow recently re-branded and released the TF Learn package within TensorFlow under the new name TF Estimator, probably to avoid confusion with the TFLearn package from tflearn.org. The TF Estimator API makes significant enhancements to the original TF Learn package, which are described in the research paper presented at the KDD '17 conference, available at the following link: https://doi.org/10.1145/3097983.3098171.

The design of the TF Estimator interface is inspired by the popular machine learning library SciKit Learn. It allows you to create an estimator object from the different kinds of available models, and then provides four main functions on any kind of estimator (a usage sketch follows the list):

  • estimator.fit()
  • estimator.evaluate()
  • estimator.predict()
  • estimator.export()
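
As an illustration only, and not the book's notebook code, the following minimal sketch shows these four functions on one of the pre-built estimators; the toy regression data, the choice of LinearRegressor, and the step count are assumptions made purely for this example:

import numpy as np
import tensorflow as tf

# Toy data for illustration: y = 2x + 1
x_train = np.random.rand(100, 1).astype(np.float32)
y_train = (2.0 * x_train[:, 0] + 1.0).astype(np.float32)

# Infer a single real-valued feature column from the input data
feature_columns = tf.contrib.learn.infer_real_valued_columns_from_input(x_train)
estimator = tf.contrib.learn.LinearRegressor(feature_columns=feature_columns)

estimator.fit(x=x_train, y=y_train, steps=200)        # train the model
metrics = estimator.evaluate(x=x_train, y=y_train)    # evaluate and return metrics
predictions = list(estimator.predict(x=x_train[:5]))  # predict on new data
print(metrics, predictions)
# estimator.export(export_dir) writes a servable copy of the trained model to disk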

The names of the functions are self-explanatory. The estimator object represents the model, but the model itself is created from the model definition function provided to the estimator. 

We can depict the estimator object and its interface in the following diagram:

Using the Estimator API instead of building everything in core TensorFlow has the benefit of not having to worry about graphs, sessions, initializing variables, or other low-level details. At the time of writing this book, TensorFlow provides the following pre-built estimators:

  • tf.contrib.learn.KMeansClustering
  • tf.contrib.learn.DNNClassifier
  • tf.contrib.learn.DNNRegressor
  • tf.contrib.learn.DNNLinearCombinedRegressor
  • tf.contrib.learn.DNNLinearCombinedClassifier
  • tf.contrib.learn.LinearClassifier
  • tf.contrib.learn.LinearRegressor
  • tf.contrib.learn.LogisticRegressor

The simple workflow in the TF Estimator API is as follows (a code sketch of these steps is shown after the list):

  1. Find the pre-built Estimator that is relevant to the problem you are trying to solve.
  2. Write the function to import the dataset.
  3. Define the columns in the data that contain the features.
  4. Create the instance of the pre-built estimator that you selected in step 1.
  5. Train the estimator.
  6. Use the trained estimator to do evaluation or prediction.
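
The book's notebook contains the complete code; as a hedged sketch only, the six steps for the MNIST example might look like the following, where the hidden-layer sizes, batch size, and data directory are illustrative assumptions rather than the notebook's exact settings:

import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Step 2: import the dataset
mnist = input_data.read_data_sets('./mnist', one_hot=False)
x_train, y_train = mnist.train.images, mnist.train.labels.astype(np.int32)
x_test, y_test = mnist.test.images, mnist.test.labels.astype(np.int32)

# Step 3: define the feature columns (784 pixel values per image)
feature_columns = [tf.contrib.layers.real_valued_column('', dimension=784)]

# Steps 1 and 4: select and instantiate a pre-built estimator
classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
                                            hidden_units=[256, 32],  # illustrative sizes
                                            n_classes=10)

# Step 5: train the estimator
classifier.fit(x=x_train, y=y_train, batch_size=100, steps=1000)

# Step 6: evaluate the trained estimator (predict() works the same way)
results = classifier.evaluate(x=x_test, y=y_test)
print(results)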

The Keras library, discussed in the next chapter, provides a convenience function to convert Keras models to Estimators: keras.estimator.model_to_estimator().
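
For instance, a compiled tf.keras model can be wrapped as sketched below; the layer sizes are illustrative, and note that the object returned by this helper is a tf.estimator.Estimator, so it is trained with train() rather than fit():

import tensorflow as tf

# Build and compile a simple Keras model (architecture chosen for illustration)
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(256, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Convert the compiled Keras model into an Estimator
keras_estimator = tf.keras.estimator.model_to_estimator(keras_model=model)

# The converted object follows the tf.estimator interface:
# keras_estimator.train(input_fn=...), .evaluate(input_fn=...), .predict(input_fn=...)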

The complete code for the MNIST classification example is provided in the notebook ch-02_TF_High_Level_Libraries. The output from the TF Estimator MNIST example is as follows:

INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmprvcqgu07
INFO:tensorflow:Using config: {'_save_checkpoints_steps': None, '_task_type': 'worker', '_save_checkpoints_secs': 600, '_service': None, '_task_id': 0, '_master': '', '_session_config': None, '_num_worker_replicas': 1, '_keep_checkpoint_max': 5, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7ff9d15f5fd0>, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_is_chief': True, '_save_summary_steps': 100, '_model_dir': '/tmp/tmprvcqgu07', '_num_ps_replicas': 0, '_tf_random_seed': None}
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Saving checkpoints for 1 into /tmp/tmprvcqgu07/model.ckpt.
INFO:tensorflow:loss = 2.4365, step = 1
INFO:tensorflow:global_step/sec: 597.996
INFO:tensorflow:loss = 1.47152, step = 101 (0.168 sec)
INFO:tensorflow:global_step/sec: 553.29
INFO:tensorflow:loss = 0.728581, step = 201 (0.182 sec)
INFO:tensorflow:global_step/sec: 519.498
INFO:tensorflow:loss = 0.89795, step = 301 (0.193 sec)
INFO:tensorflow:global_step/sec: 503.414
INFO:tensorflow:loss = 0.743328, step = 401 (0.202 sec)
INFO:tensorflow:global_step/sec: 539.251
INFO:tensorflow:loss = 0.413222, step = 501 (0.181 sec)
INFO:tensorflow:global_step/sec: 572.327
INFO:tensorflow:loss = 0.416304, step = 601 (0.174 sec)
INFO:tensorflow:global_step/sec: 543.99
INFO:tensorflow:loss = 0.459793, step = 701 (0.184 sec)
INFO:tensorflow:global_step/sec: 687.748
INFO:tensorflow:loss = 0.501756, step = 801 (0.146 sec)
INFO:tensorflow:global_step/sec: 654.217
INFO:tensorflow:loss = 0.666772, step = 901 (0.153 sec)
INFO:tensorflow:Saving checkpoints for 1000 into /tmp/tmprvcqgu07/model.ckpt.
INFO:tensorflow:Loss for final step: 0.426257.
INFO:tensorflow:Starting evaluation at 2017-12-15-02:27:45
INFO:tensorflow:Restoring parameters from /tmp/tmprvcqgu07/model.ckpt-1000
INFO:tensorflow:Finished evaluation at 2017-12-15-02:27:45
INFO:tensorflow:Saving dict for global step 1000: accuracy = 0.8856, global_step = 1000, loss = 0.40996

{'accuracy': 0.88559997, 'global_step': 1000, 'loss': 0.40995964}

You will see in Chapter 5 how to create such models using core TensorFlow.