Invoking CPU/GPU devices
TensorFlow supports both CPUs and GPUs, as well as distributed computation, so we can use TensorFlow across multiple devices in one or more computer systems. TensorFlow names the supported devices "/device:CPU:0" (or "/cpu:0") for the CPU and "/device:GPU:N" (or "/gpu:N") for the Nth GPU device, with numbering starting at 0.
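A minimal sketch of how to enumerate these device names, assuming TensorFlow 2.x is installed (on a machine without a GPU, only the CPU entry appears):

```python
import tensorflow as tf

# List the logical devices TensorFlow can see; CPU entries are named
# "/device:CPU:0" and GPUs (if any) "/device:GPU:0", "/device:GPU:1", ...
for device in tf.config.list_logical_devices():
    print(device.name, device.device_type)
```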
As mentioned earlier, GPUs are much faster than CPUs for parallel workloads because they have many small cores. However, GPUs are not an advantage for every type of computation: the overhead of moving data to and from the GPU can outweigh the speedup from parallelism, especially for small or inherently sequential operations. To deal with this, TensorFlow lets you place a computation on a particular device. By default, if both a CPU and a GPU are present, TensorFlow gives priority to the GPU.
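Explicit placement can be sketched with the `tf.device` context manager, again assuming TensorFlow 2.x; here the matrix multiplication is pinned to the CPU, overriding the default GPU priority:

```python
import tensorflow as tf

# Pin this computation to the CPU; without the context manager,
# TensorFlow would place it on the GPU if one is available.
with tf.device("/CPU:0"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
    c = tf.matmul(a, b)

print(c.numpy())
```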