Data discretization (discretization)
Continuous features in the data can be discretized using a uniform discretization method. Discretization considers only continuous features and replaces them in the new data set with corresponding categorical features:
import Orange

iris = Orange.data.Table("iris.tab")
disc_iris = Orange.data.discretization.DiscretizeTable(iris,
    method=Orange.feature.discretization.EqualFreq(n=3))

print "Original data set:"
for e in iris[:3]:
    print e

print "Discretized data set:"
for e in disc_iris[:3]:
    print e
Discretization introduces new categorical features with discretized values:
Original data set:
[5.1, 3.5, 1.4, 0.2, 'Iris-setosa']
[4.9, 3.0, 1.4, 0.2, 'Iris-setosa']
[4.7, 3.2, 1.3, 0.2, 'Iris-setosa']
Discretized data set:
['<=5.45', '>3.15', '<=2.45', '<=0.80', 'Iris-setosa']
['<=5.45', '(2.85, 3.15]', '<=2.45', '<=0.80', 'Iris-setosa']
['<=5.45', '>3.15', '<=2.45', '<=0.80', 'Iris-setosa']
Data discretization uses feature discretization classes from Feature discretization (discretization) and applies them to the entire data set. The supported discretization methods are:
- equal-width discretization, where the domain of a continuous feature is split into intervals of equal width (uses Orange.feature.discretization.EqualWidth),
- equal-frequency discretization, where each interval contains an equal number of data instances (uses Orange.feature.discretization.EqualFreq),
- entropy-based discretization, as originally proposed by [FayyadIrani1993], which infers intervals that minimize the within-interval entropy of class distributions (uses Orange.feature.discretization.Entropy),
- bi-modal discretization, which uses three intervals and maximizes the difference between the class distribution in the middle interval and the distribution outside it (uses Orange.feature.discretization.BiModal),
- fixed, with the user-defined cut-off points.
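The first two methods are easy to picture with a plain-Python sketch. The helpers below are illustrative only and not part of Orange; they compute the cut-off points that equal-width and equal-frequency binning would place for a list of values:

```python
def equal_width_cuts(values, n):
    # n - 1 cut-off points splitting [min, max] into n equal-width bins
    lo, hi = min(values), max(values)
    step = (hi - lo) / float(n)
    return [lo + step * i for i in range(1, n)]

def equal_freq_cuts(values, n):
    # cut-off points chosen so each bin holds roughly the same
    # number of data instances
    ordered = sorted(values)
    size = len(ordered) / float(n)
    return [ordered[int(size * i)] for i in range(1, n)]
```

For example, `equal_freq_cuts([1, 2, 3, 4, 5, 6], 2)` returns `[4]`: half of the instances fall below the cut-off and half at or above it, regardless of how the values are spread.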
The default discretization method (equal frequency with three intervals) can be replaced with other discretization approaches, as demonstrated below:
disc = Orange.data.discretization.DiscretizeTable()
disc.method = Orange.feature.discretization.EqualFreq(numberOfIntervals=2)
disc_iris = disc(iris)
Entropy-based discretization is special, as it may infer new features that are constant and have only one value. Such features are redundant and provide no information about the class. By default, DiscretizeTable removes them, in a way performing feature subset selection. The effect of removing non-informative features is demonstrated in the following script:
data = Orange.data.Table(Orange.data.Table("heart_disease.tab")[:100])
d_data = Orange.data.discretization.DiscretizeTable(data,
    method=Orange.feature.discretization.Entropy(forced=False))

old = set(data.domain.features)
new = set(x.get_value_from.variable if x.get_value_from else x
          for x in d_data.domain.features)
diff = old.difference(new)

print "Redundant features (%d of %d):" % (len(diff), len(data.domain.features))
print ", ".join(sorted(x.name for x in diff))
In the sampled data set above, three features were discretized to a constant and thus removed:
Redundant features (3 of 13): cholesterol, rest SBP, age
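The cleaning step itself is simple to picture. The following plain-Python sketch (a hypothetical helper, not Orange's implementation) shows the idea behind clean=True: drop any discretized column whose values collapsed to a single interval.

```python
def clean_constant(columns):
    # columns: dict mapping feature name -> list of discretized values;
    # keep only features that still take more than one distinct value
    return {name: vals for name, vals in columns.items()
            if len(set(vals)) > 1}
```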
Entropy-based and bi-modal discretization require class-labeled data sets.
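Fayyad and Irani's method applies an MDL-based stopping criterion recursively; the sketch below is a simplification with hypothetical helper names that shows only the core step, choosing a single cut-off that minimizes the instance-weighted class entropy of the two resulting intervals. This is why class labels are required.

```python
from math import log

def class_entropy(labels):
    # Shannon entropy (in bits) of the class distribution in one interval
    n = float(len(labels))
    counts = {}
    for l in labels:
        counts[l] = counts.get(l, 0) + 1
    return -sum(c / n * log(c / n, 2) for c in counts.values())

def best_binary_cut(values, labels):
    # try a cut between each pair of adjacent distinct values and keep
    # the one with the lowest weighted entropy of the two sides
    pairs = sorted(zip(values, labels))
    n = float(len(pairs))
    best_score, best_cut = class_entropy(labels), None
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # cannot cut between identical values
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        score = (len(left) / n * class_entropy(left)
                 + len(right) / n * class_entropy(right))
        if score < best_score:
            best_score = score
            best_cut = (pairs[i - 1][0] + pairs[i][0]) / 2.0
    return best_cut
```

On perfectly separated data such as values [1, 2, 3, 10, 11, 12] with classes ['a', 'a', 'a', 'b', 'b', 'b'], the cut lands at 6.5 and both intervals have zero class entropy.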
Data discretization classes
- class Orange.data.discretization.DiscretizeTable(features=None, discretize_class=False, method=EqualFreq(n=3), clean=True)
Discretizes all continuous features of the data table.
- data (Orange.data.Table) – Data to discretize.
- features (list of Orange.feature.Descriptor) – Data features to discretize. None (default) to discretize all features.
- method (Orange.feature.discretization.Discretization) – Feature discretization method.
- clean (bool) – Clean the data domain after discretization. If True, features discretized to a constant will be removed. Useful only for discretizers which infer the number of discretization intervals from data, like Orange.feature.discretization.Entropy (default: True).
[FayyadIrani1993] UM Fayyad and KB Irani. Multi-interval discretization of continuous-valued attributes for classification learning. In Proc. 13th International Joint Conference on Artificial Intelligence, pages 1022–1029, Chambery, France, 1993.