What is a decision tree?
A decision tree represents a procedure for classifying objects, on the basis of which decisions can be made. Decision trees are mainly used in stochastics to represent conditional probabilities, but also in machine learning and in decision theory.
Decision trees can be illustrated graphically with so-called tree diagrams, which clearly display all decision options from the root node down to the leaves.
The clear interpretability and transparency of the presentation are among the advantages of decision trees. In addition, they can be expanded relatively flexibly. The good clarity of small trees can, however, be lost with large and complex trees: large trees, especially deep ones, quickly become confusing and increase the computational effort of the analysis. Furthermore, it is often difficult to represent and evaluate all attributes or decision options of each individual node.
Every decision tree has its origin in the root node mentioned above. Starting from the root node, the tree branches out to further nodes, which finally end in a leaf, the final classification. A distinction is made between decision nodes and chance nodes. Decision nodes are shown as rectangles in the graphical illustration and represent a query about an attribute; the answer determines the further course along the tree and the next node.
Chance nodes are represented as circles and describe the further course of the tree with certain probabilities. The end of the decision tree is called a leaf, often represented as a triangle, and holds the result of the respective classification task. If each node of the tree has exactly two attributes, i.e. two possible branches, it is called a binary decision tree. Every decision tree can also be represented as a binary tree.
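The node types just described can be sketched as small Python classes. This is a minimal illustration only; the attribute names, thresholds, and probabilities are invented for the example and do not come from any particular library:

```python
# Minimal sketch of the three node types of a binary decision tree
# (illustrative attributes and values; no specific library assumed).

class Leaf:
    """End of the tree: holds the result of the classification."""
    def __init__(self, label):
        self.label = label

class DecisionNode:
    """Queries one attribute; the answer selects the next node (binary split)."""
    def __init__(self, attribute, threshold, left, right):
        self.attribute = attribute
        self.threshold = threshold
        self.left = left       # followed if the value is <= threshold
        self.right = right     # followed if the value is >  threshold

class ChanceNode:
    """Continues along a branch with a certain probability."""
    def __init__(self, probability, left, right):
        self.probability = probability  # probability of the left branch
        self.left = left
        self.right = right

# A small binary decision tree built from these node types
tree = DecisionNode("age", 30,
                    left=Leaf("group A"),
                    right=ChanceNode(0.7, Leaf("group B"), Leaf("group C")))
```

Each inner node has exactly two successors here, so the sketch is a binary decision tree in the sense described above.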
In the analysis or classification of data objects, the process always starts at the root node. At each decision node, the input vector is then queried and compared with the node's attributes. The matching attribute determines the path to the next node in the decision tree.
In the case of chance nodes, the further path, and thus the next node, is determined on the basis of the existing or assumed probabilities. The process continues in this way until the end of the decision tree, the last leaf, is reached. This leaf represents the result of the tree, i.e. the classification of the data object. The result can be a monetary, numerical, cardinal, or nominal value, which can be recoded beforehand for better mathematical processing if necessary.
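The traversal just described, starting at the root and following the branch that matches the input vector until a leaf is reached, can be sketched in a few lines. The tree below is a toy example with invented attributes (fruit weight and a color code), not a real dataset:

```python
# Minimal sketch of classifying an input vector with a binary decision
# tree (illustrative data). Decision nodes are dicts that name the
# attribute to query; leaves are plain labels.
tree = {
    "attribute": "weight", "threshold": 150,
    "left": "lemon",
    "right": {
        "attribute": "color", "threshold": 1,
        "left": "apple",
        "right": "orange",
    },
}

def classify(node, sample):
    # Start at the root; at each decision node compare the sample's
    # attribute value with the threshold and follow the matching
    # branch until a leaf is reached.
    while isinstance(node, dict):
        branch = "left" if sample[node["attribute"]] <= node["threshold"] else "right"
        node = node[branch]
    return node  # the leaf, i.e. the final classification

print(classify(tree, {"weight": 120, "color": 0}))  # -> lemon
```

The loop mirrors the procedure in the text exactly: each comparison selects the successor node, and the first leaf encountered is the classification result.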
Example of a decision tree in machine learning
A concrete example of the application of decision trees in machine learning is the Random Forest. In this algorithm, developed by the statistician Leo Breiman, decision trees are used to solve classification and regression tasks.
As the name Random Forest suggests, not just one decision tree is used but a large number, creating a "forest" of decision trees. With Random Forest, the trees are generated randomly and uncorrelated: during their creation, so-called bagging (bootstrap aggregating) ensures that the individual trees do not correlate with each other and thus do not negatively influence the quality of the result.
When the algorithm is applied for prediction, the input data is passed to several decision trees and their outputs are combined by the so-called ensemble method (ensemble learning) in order to achieve a high-quality result. Compared with considering only a single decision tree, this procedure has the advantage that outliers in individual trees are counteracted, since the result corresponds to the average, or majority vote, of the individual trees.
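The combination of bagging and majority voting can be sketched in plain Python. To keep the sketch self-contained, each "tree" here is a deliberately trivial stand-in (a single threshold rule on one feature) rather than a full decision-tree learner; the data, the stub learner, and all function names are invented for illustration:

```python
import random
from collections import Counter

# Sketch of the Random Forest idea: train many trees on bootstrap
# samples (bagging) and combine their predictions by majority vote.

def train_stub_tree(data):
    # Hypothetical stand-in for a decision-tree learner: split the
    # bootstrap sample at its mean feature value and label each side
    # by its majority class.
    threshold = sum(x for x, _ in data) / len(data)
    left = [y for x, y in data if x <= threshold]
    right = [y for x, y in data if x > threshold]
    left_label = Counter(left).most_common(1)[0][0] if left else 0
    right_label = Counter(right).most_common(1)[0][0] if right else 1
    return lambda x: left_label if x <= threshold else right_label

def random_forest(data, n_trees=25, seed=0):
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        # Bagging: each tree sees its own bootstrap sample,
        # drawn with replacement, which decorrelates the trees.
        sample = [rng.choice(data) for _ in data]
        trees.append(train_stub_tree(sample))
    def predict(x):
        # Ensemble step: every tree votes; the majority wins,
        # so outliers in individual trees are averaged out.
        votes = Counter(t(x) for t in trees)
        return votes.most_common(1)[0][0]
    return predict

# Toy data: label 0 for small feature values, 1 for large ones
data = [(x, 0) for x in range(10)] + [(x, 1) for x in range(20, 30)]
predict = random_forest(data)
print(predict(2), predict(25))
```

In practice one would use a real implementation such as scikit-learn's `RandomForestClassifier` instead of the stub learner; the bootstrap-then-vote structure, however, is the same.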