A decision tree is constructed by looking for regularities in data.
From Data to Trees: Quinlan's ID3 Algorithm for Constructing a Decision Tree
General Form
Until each leaf node is populated by as homogeneous a sample set as possible:
- Select a leaf node with an inhomogeneous sample set.
- Replace that leaf node by a test node that divides the inhomogeneous sample set into minimally inhomogeneous subsets, according to an entropy calculation.
Specific Form
- Examine the attributes to add at the next level of the tree using an entropy calculation.
- Choose the attribute that minimizes the entropy.
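This loop can be made concrete. Below is a minimal Python sketch, not from the original notes: the data representation (each example as a dict of attribute values, with class labels in a parallel list) and the helper names are illustrative choices, and the entropy and average-entropy calculations anticipate the formulas developed in the sections that follow.

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy of a collection of class labels: -sum of p * log2(p) over classes."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def average_entropy(examples, labels, attribute):
    """Average entropy of the subsets produced by testing `attribute`,
    weighted by the fraction of the examples that fall into each subset."""
    n = len(labels)
    branches = {}
    for example, label in zip(examples, labels):
        branches.setdefault(example[attribute], []).append(label)
    return sum((len(b) / n) * entropy(b) for b in branches.values())

def id3(examples, labels, attributes):
    """Grow a tree by repeatedly replacing an inhomogeneous leaf with the test
    that minimizes average entropy, until leaves are homogeneous (or no
    attributes remain, in which case the majority class is used)."""
    if len(set(labels)) == 1:                        # homogeneous sample set
        return labels[0]
    if not attributes:                               # no tests left: majority class
        return Counter(labels).most_common(1)[0][0]
    best = min(attributes, key=lambda a: average_entropy(examples, labels, a))
    remaining = [a for a in attributes if a != best]
    tree = {best: {}}
    for value in {example[best] for example in examples}:
        subset = [(e, l) for e, l in zip(examples, labels) if e[best] == value]
        sub_examples, sub_labels = zip(*subset)
        tree[best][value] = id3(list(sub_examples), list(sub_labels), remaining)
    return tree
```

In this sketch each internal node of the returned tree is a dict keyed by the chosen attribute's values, and each leaf is a class label.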
Tests Should Minimize Entropy
It is computationally impractical to find the smallest possible decision tree, so we use a procedure that tends to build small trees.
Choosing an Attribute using the Entropy Formula
The central focus of the ID3 algorithm is selecting which attribute to test at each node in the tree.
Procedure:
- See how the attribute distributes the instances.
- Calculate the average entropy of the subsets each candidate attribute produces.
- Choose the attribute with the lowest average entropy.
Review of Log2
On a calculator:
eg) (0)log2(0): log2(0) is undefined (the formula breaks down for a probability of 0), so by convention the term (0)log2(0) is taken to be 0
eg) (1)log2(1) = 0
eg) log2(2) = 1
eg) log2(4) = 2
eg) log2(1/2) = -1
eg) log2(1/4) = -2
eg) (1/2)log2(1/2) = (1/2)(-1) = -1/2
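These values are easy to check in code. A small sketch using Python's math.log2, with the (0)log2(0) case handled explicitly by the convention above:

```python
import math

def p_log2_p(p):
    """Return p * log2(p), using the convention that 0 * log2(0) = 0."""
    return 0.0 if p == 0 else p * math.log2(p)

print(math.log2(2))      # 1.0
print(math.log2(4))      # 2.0
print(math.log2(1/2))    # -1.0
print(math.log2(1/4))    # -2.0
print(p_log2_p(1/2))     # -0.5
print(p_log2_p(1))       # 0.0  (1 * log2(1) = 0)
print(p_log2_p(0))       # 0.0  by convention
```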
Entropy Formulae
Entropy, a measure from information theory, characterizes the (im)purity, or homogeneity, of an arbitrary collection of examples.
Given:
- nb, the number of instances in branch b.
- nbc, the number of instances in branch b of class c. Of course, nbc is less than or equal to nb.
- nt, the total number of instances in all branches.
Probability
The probability that an instance in branch b belongs to class c is Pbc = nbc / nb. For a two-class (positive/negative) problem, let Pb be the fraction of positive instances in branch b:
- If all the instances on the branch are positive, then Pb = 1 (homogeneous positive).
- If all the instances on the branch are negative, then Pb = 0 (homogeneous negative).
Entropy
The entropy of branch b is the sum, over the classes c, of -Pbc log2(Pbc):
Entropy(b) = - Σc Pbc log2(Pbc) = - Σc (nbc / nb) log2(nbc / nb)
- The entropy is zero when the set is perfectly homogeneous (all instances belong to one class).
- The entropy is one when the set is perfectly inhomogeneous (for two classes, half the instances in each).
- Between perfect homogeneity and perfect balance, the entropy varies smoothly between zero and one.
Figure: the disorder (entropy) in a set containing members of two classes, A and B, as a function of the fraction of the set belonging to class A; shown once for a set of two instances in total and once for a set of eight instances in total.
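The curve described by that figure can be reproduced with a short function for the two-class case (a sketch, not part of the original notes), where x is the fraction of the set belonging to class A:

```python
import math

def two_class_entropy(x):
    """Entropy of a two-class set as a function of the fraction x in class A."""
    if x in (0.0, 1.0):          # perfectly homogeneous: entropy is zero
        return 0.0
    return -(x * math.log2(x) + (1 - x) * math.log2(1 - x))

print(two_class_entropy(0.0))    # 0.0   (all instances in class B)
print(two_class_entropy(1.0))    # 0.0   (all instances in class A)
print(two_class_entropy(0.5))    # 1.0   (perfectly balanced)
print(two_class_entropy(1/8))    # ~0.544 (e.g. 1 of 8 instances in class A)
```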
Average Entropy
The quality of a test is the average entropy of the branches it produces, with each branch weighted by the fraction of the instances that reach it:
Average Entropy = Σb (nb / nt) × Entropy(b) = Σb (nb / nt) × ( - Σc (nbc / nb) log2(nbc / nb) )
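As a concrete, made-up illustration: suppose a test splits nt = 8 instances into two branches, one holding 3 positive and 1 negative instance and the other holding 4 negative instances. The weighted average entropy can be computed as follows:

```python
import math

def entropy(counts):
    """Entropy of one branch, computed from its per-class instance counts nbc."""
    n = sum(counts)
    return sum(-(c / n) * math.log2(c / n) for c in counts if c > 0)

branches = [(3, 1), (0, 4)]                    # (positive, negative) counts per branch
n_total = sum(sum(b) for b in branches)        # nt = 8

average = sum((sum(b) / n_total) * entropy(b) for b in branches)
print(entropy((3, 1)))   # ~0.811
print(entropy((0, 4)))   # 0.0 -- a homogeneous branch contributes nothing
print(average)           # ~0.406 = (4/8) * 0.811 + (4/8) * 0.0
```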
Comments on the ID3 Algorithm
Unlike the version space candidate-elimination algorithm:
- ID3 searches a completely expressive hypothesis space (ie. one capable of expressing any finite discrete-valued function), and thus avoids the difficulties associated with restricted hypothesis spaces.
- ID3 searches incompletely through this space, from simple to complex hypotheses, until its termination condition is met (eg. until it finds a hypothesis consistent with the data).
- ID3's inductive bias is based on the ordering of hypotheses by its search strategy (ie. follows from its search strategy).
- ID3's hypothesis space introduces no additional bias.
Inductive bias of ID3: shorter trees are preferred over larger trees, and trees that place high-information-gain attributes (those whose tests most decrease entropy) closer to the root are preferred over those that do not.
ID3's capabilities and limitations stem from its hypothesis space and search strategy:
- ID3 searches a complete hypothesis space, meaning that every finite discrete-valued function can be represented by some decision tree.
- Therefore, ID3 avoids the possibility that the target function might not be contained within the hypothesis space (a risk associated with restriction bias algorithms).
- As ID3 searches through the space of decision trees, it maintains only a single current hypothesis.
- This contrasts with the candidate-elimination algorithm, which finds all hypotheses consistent with the training data.
- By learning only a single hypothesis, ID3 loses the benefits associated with explicitly representing all consistent hypotheses.
- For instance, it cannot determine how many alternative decision trees are consistent with the data, or select the best hypothesis from among them.
- In its pure form, ID3 performs no backtracking in its search (it is a greedy algorithm).
- Once an attribute has been chosen as the node for a particular level of the tree, ID3 does not reconsider this choice.
- It is subject to risks associated with hill-climbing searches that do not backtrack. It may converge to a locally optimal solution that is not globally optimal.
- (Although the pure ID3 algorithm does not backtrack, we discuss an extension that adds a form of backtracking: post-pruning the decision tree. See decision rules.)
- At every step, ID3 refines its current hypothesis based on a statistical analysis of all the training examples.
- This differs from methods (such as candidate-elimination) that consider each training example individually, thus making decisions incrementally.
- Using statistical properties of all the examples is what helps make ID3 a robust algorithm. It can be made insensitive to errors in individual training examples, and it can handle noisy data by modifying its termination criterion to accept hypotheses that imperfectly fit the training data.