Supervised learning
As we discussed previously, for supervised learning we have some information attached to each data point, the label, and we can train a model to learn from it. For example, if we want to build a model that tells us whether there is a dog or a cat in a picture, then the picture is the data point and the information about whether it shows a dog or a cat is the label. Another example is predicting the price of a house: the description of the house is the data point, and the price is the label.
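To make this idea concrete, here is a minimal sketch in plain Java of what a labeled data point could look like; the class and field names are purely illustrative and not part of any library:

```java
// Illustrative sketch: a labeled data point pairs a feature vector (the
// description of the object) with a label (the information we want to predict).
public class LabeledPoint<L> {
    public final double[] features; // e.g. pixel values of a picture, or house attributes
    public final L label;           // e.g. "cat"/"dog", or the price of the house

    public LabeledPoint(double[] features, L label) {
        this.features = features;
        this.label = label;
    }
}
```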
We can group supervised learning algorithms into classification and regression algorithms based on the nature of the label.
In classification problems, the labels come from some fixed finite set of classes, such as {cat, dog}, {default, not default}, or {office, food, entertainment, home}. Depending on the number of classes, the classification problem can be binary (only two possible classes) or multi-class (more than two classes).
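A fixed finite set of classes maps naturally onto an enum. The sketch below (illustrative names only, not tied to any library) shows a binary label set and a multi-class label set side by side:

```java
public class ClassificationLabels {
    // Binary classification: the label comes from a set of exactly two classes.
    enum Animal { CAT, DOG }

    // Multi-class classification: the label comes from a set of several classes.
    enum ExpenseCategory { OFFICE, FOOD, ENTERTAINMENT, HOME }

    public static void main(String[] args) {
        Animal pictureLabel = Animal.DOG;                        // label of one picture
        ExpenseCategory transactionLabel = ExpenseCategory.FOOD; // label of one transaction
        System.out.println(pictureLabel + ", " + transactionLabel);
    }
}
```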
Examples of classification algorithms are Naive Bayes, logistic regression, perceptron, Support Vector Machine (SVM), and many others. We will discuss classification algorithms in more detail in the first part of Chapter 4, Supervised Learning - Classification and Regression.
In regression problems, the labels are real numbers. For example, a person's salary can range from $0 to several billion dollars per year, so predicting a salary is a regression problem.
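In code, a regression label is simply a real-valued number attached to the feature vector. The example below uses made-up feature values and labels purely for illustration:

```java
import java.util.Arrays;

public class RegressionLabels {
    public static void main(String[] args) {
        // In regression the label is a real number, so a double is a natural fit.
        double[] houseFeatures = { 120.0, 3.0, 5.5 }; // e.g. area, rooms, distance to the center
        double housePrice = 250_000.0;                // label: price in dollars (made-up value)

        double[] personFeatures = { 34.0, 10.0 };     // e.g. age, years of experience
        double yearlySalary = 85_000.0;               // label: salary in dollars per year (made-up value)

        System.out.println(Arrays.toString(houseFeatures) + " -> " + housePrice);
        System.out.println(Arrays.toString(personFeatures) + " -> " + yearlySalary);
    }
}
```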
Examples of regression algorithms are linear regression, LASSO, Support Vector Regression (SVR), and others. These algorithms will be described in more detail in the second part of Chapter 4, Supervised Learning - Classification and Regression.
Some of the supervised learning methods are universal and can be applied to both classification and regression problems. For example, decision trees, random forests, and other tree-based methods can tackle both types. We will discuss one such algorithm, gradient boosting machines, in Chapter 7, Extreme Gradient Boosting.
Neural networks can also deal with both classification and regression problems, and we will talk about them in Chapter 8, Deep Learning with DeepLearning4J.