source: orange/docs/widgets/rst/data/rank.rst @ 11797:840029d005bb

Revision 11797:840029d005bb, 2.1 KB checked in by blaz <blaz.zupan@…>, 4 months ago (diff)

Enabled custom documentation enumeration (stamper CSS style).

.. _Rank:

Rank
====

.. image:: ../../../../Orange/OrangeWidgets/Data/icons/Rank.svg

Ranking of attributes in classification or regression data sets.

Signals
-------

Inputs:

   - Data
      Input data set.

Outputs:

   - Reduced Data
      Data set with the selected attributes.

Description
-----------

The Rank widget considers class-labeled data sets (classification or
regression) and scores the attributes according to their correlation with
the class.

.. image:: images/Rank-stamped.png

.. rst-class:: stamp-list

   1. Attributes (rows) and their scores by different scoring methods
      (columns).
   #. Scoring techniques and their (optional) parameters.
   #. For scoring techniques that require discrete attributes, this is the
      number of intervals into which continuous attributes will be discretized.
   #. Number of decimals used in reporting the score.
   #. Toggles the bar-based visualisation of the feature scores.
   #. Adds a score table to the current report.

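As an illustration of what such a score measures, here is a minimal,
self-contained sketch of information gain, one of the scoring methods the
widget offers. This is illustrative only, not Orange's own implementation,
and the toy data are made up::

.. code-block:: python

   from collections import Counter
   from math import log2

   def entropy(labels):
       """Shannon entropy of a list of class labels, in bits."""
       n = len(labels)
       return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

   def info_gain(values, labels):
       """Reduction in class entropy from splitting on a discrete attribute."""
       n = len(labels)
       conditional = sum(
           (count / n) * entropy([l for v, l in zip(values, labels) if v == val])
           for val, count in Counter(values).items()
       )
       return entropy(labels) - conditional

   # Toy data: the attribute perfectly predicts the class, so the gain
   # equals the full class entropy (1 bit here).
   attr = ["a", "a", "b", "b"]
   cls = ["yes", "yes", "no", "no"]
   print(info_gain(attr, cls))  # 1.0

An uninformative attribute (one whose value distribution is the same in every
class) would score close to zero instead.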
Example: Attribute Ranking and Selection
----------------------------------------

Below, Rank is used immediately after the :ref:`File` widget to reduce the
set of data attributes and include only the most informative ones:

.. image:: images/Rank-Select-Schema.png

Notice how the widget outputs a data set that includes only the best-scored
attributes:

.. image:: images/Rank-Select-Widgets.png
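Conceptually, the Reduced Data output keeps only the best-scored columns of
the input table. A minimal sketch of that step, with made-up attribute names
and scores (this is not Orange's API)::

.. code-block:: python

   def select_best(rows, names, scores, k):
       """Keep the k highest-scoring columns of a row-major table."""
       keep = sorted(range(len(names)), key=lambda i: scores[i], reverse=True)[:k]
       keep.sort()  # preserve the original column order
       return [[row[i] for i in keep] for row in rows], [names[i] for i in keep]

   rows = [[5.1, 3.5, 1.4], [6.2, 2.9, 4.3]]
   names = ["sepal length", "sepal width", "petal length"]
   scores = [0.31, 0.12, 0.87]  # hypothetical scores for illustration
   reduced, kept = select_best(rows, names, scores, 2)
   print(kept)  # ['sepal length', 'petal length']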
Example: Feature Subset Selection for Machine Learning
------------------------------------------------------

The following is a slightly more complicated example. In the workflow below
we first split the data into training and test sets. In the upper branch
the training data passes through the Rank widget, which selects the most
informative attributes, while in the lower branch there is no feature
selection. Both the feature-selected and the original data sets are passed
to their own :ref:`Test Learners` widget, which develops a
:ref:`Naive Bayes <Naive Bayes>` classifier and scores it on the test set.

.. image:: images/Rank-and-Test.png

For data sets with many features, feature selection combined with a naive
Bayesian classifier, as shown above, will often yield better predictive
accuracy.
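The key discipline of this workflow can be sketched in a few lines: the
feature subset is chosen from the training split only, and the very same
columns are then applied to the test split. The scorer below is a
deliberately crude stand-in for the Rank widget, and all data are made up::

.. code-block:: python

   def select_features(train_rows, train_labels, k):
       """Pick the k columns that best match the label (stand-in scorer)."""
       n_cols = len(train_rows[0])
       scores = [
           sum(row[i] == lab for row, lab in zip(train_rows, train_labels))
           for i in range(n_cols)
       ]
       # Highest score first; Python's sort is stable, so ties keep column order.
       return sorted(range(n_cols), key=lambda i: scores[i], reverse=True)[:k]

   train = [[1, 0, 1], [0, 1, 0], [1, 1, 1]]
   labels = [1, 0, 1]
   test = [[0, 0, 0], [1, 0, 1]]

   cols = select_features(train, labels, k=1)
   train_reduced = [[row[i] for i in cols] for row in train]
   test_reduced = [[row[i] for i in cols] for row in test]  # same columns, no peeking

Scoring on the training split alone matters: letting the test set influence
feature selection would leak information and inflate the measured accuracy.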