
Why neural networks?
Neural networks have been around for many years, and they have gone through several periods of falling in and out of favor. In recent years, however, they have steadily gained ground over many competing machine learning algorithms. The reason is that advanced neural network architectures have achieved accuracy on many tasks that far surpasses that of other algorithms. For example, in the field of image recognition, accuracy is often measured against ImageNet, a database of about 16 million labeled images.
Prior to the introduction of deep neural networks, accuracy on ImageNet had been improving slowly; after their introduction, the error rate dropped from about 40% in 2010 to less than 7% in 2014, and it is still falling. The human error rate is still lower, at about 5%. Given the success of deep neural networks, all entrants to the ImageNet competition in 2013 used some form of deep neural network. In addition, deep neural networks "learn" a representation of the data: they not only learn to recognize an object, but also learn which features uniquely define that object. Because they identify features automatically, deep neural networks can also be used for unsupervised learning, naturally grouping objects with similar features together without the need for laborious human labeling.

Similar advances have been made in other fields, such as signal processing. Deep learning is now ubiquitous; it is used, for example, in Apple's Siri, and when Google introduced a deep learning algorithm for voice recognition on its Android operating system, it achieved a 25% reduction in word recognition error. Another dataset used for image recognition is MNIST, which comprises examples of handwritten digits. Deep neural networks can now recognize these digits with an accuracy of 99.79%, comparable to human accuracy. In addition, deep neural network algorithms are the closest artificial analogue of how the human brain works. Although they are still probably a much simplified and elementary version of our brain, they contain, more than any other algorithm, the seed of human intelligence. The rest of this book is dedicated to studying different neural networks, and several examples of their applications will be provided.
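As a small taste of what later chapters develop in detail, the following is a minimal sketch of a neural network for MNIST digit recognition. It assumes the Keras API (here via TensorFlow), which is not part of this section's discussion; a simple fully connected model like this typically reaches around 97-98% test accuracy after a few epochs, while results such as the 99.79% quoted above require considerably more advanced architectures.

import tensorflow as tf

# Load the MNIST dataset of 28x28 grayscale images of handwritten digits
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

# A small fully connected network: 784 inputs, one hidden layer, 10 outputs
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
test_loss, test_acc = model.evaluate(x_test, y_test)
print("Test accuracy:", test_acc)

The network learns its own internal representation of the digit images from the raw pixels, which is exactly the representation-learning property described above.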