Timestamp:
01/25/13 18:48:50
Author:
markotoplak
Branch:
default
Message:

Code in tutorial: classification now appears (added a colon after literalinclude).

File:
1 edited

  • docs/tutorial/rst/classification.rst

--- docs/tutorial/rst/classification.rst (r11058)
+++ docs/tutorial/rst/classification.rst (r11084)

@@ -36,5 +36,5 @@
 Above, we read the data, constructed a `naive Bayesian learner <http://en.wikipedia.org/wiki/Naive_Bayes_classifier>`_, gave it the data set to construct a classifier, and used it to predict the class of the first data item. We also use these concepts in the following code, which predicts the classes of the first five instances in the data set:

-.. literalinclude: code/classification-classifier1.py
+.. literalinclude:: code/classification-classifier1.py
    :lines: 4-

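The included script (not visible in this changeset) builds a naive Bayesian classifier and predicts the classes of the first five instances. As a rough from-scratch sketch of what such a classifier does, under the assumption of discrete features and with a toy data set standing in for the tutorial's voting data, one might write:

```python
# From-scratch illustration of naive Bayesian classification; this is NOT
# the Orange API, only the underlying idea the tutorial code demonstrates.
from collections import Counter, defaultdict

def train_naive_bayes(rows, labels):
    """Learn P(class) and smoothed P(feature value | class) from the data."""
    class_counts = Counter(labels)
    cond = defaultdict(Counter)  # (feature index, class) -> value counts
    for row, label in zip(rows, labels):
        for i, value in enumerate(row):
            cond[(i, label)][value] += 1
    n = len(labels)

    def predict(row):
        best, best_p = None, -1.0
        for c, cc in class_counts.items():
            p = cc / n  # class prior
            for i, value in enumerate(row):
                # Laplace smoothing over two possible feature values
                p *= (cond[(i, c)][value] + 1) / (cc + 2)
            if p > best_p:
                best, best_p = c, p
        return best

    return predict

# Toy stand-in for the voting data set used in the tutorial.
rows = [("y", "n"), ("y", "y"), ("n", "y"), ("n", "n"), ("y", "n")]
labels = ["rep", "rep", "dem", "dem", "rep"]
classifier = train_naive_bayes(rows, labels)
predictions = [classifier(r) for r in rows[:5]]
```

On this tiny training set the classifier reproduces the training labels, which is expected but, as the tutorial stresses later, not evidence of real accuracy.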
     
@@ -56,5 +56,5 @@
 additional parameter that specifies the output type. If this is ``Orange.classification.Classifier.GetProbabilities``, the classifier will output class probabilities:

-.. literalinclude: code/classification-classifier2.py
+.. literalinclude:: code/classification-classifier2.py
    :lines: 4-

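The idea behind asking for probabilities rather than a single label (what ``GetProbabilities`` selects in Orange) is that the same per-class scores are normalised into a distribution instead of being argmaxed. A minimal sketch, with invented names and hand-set toy parameters:

```python
# Illustrative only: priors and likelihood tables are made up by hand.
def predict_proba(priors, likelihoods, row):
    """Compute P(c) * prod_i P(x_i | c) per class, normalised to sum to 1."""
    scores = {}
    for c, prior in priors.items():
        p = prior
        for i, value in enumerate(row):
            p *= likelihoods[c][i].get(value, 1e-6)
        scores[c] = p
    total = sum(scores.values())
    return {c: p / total for c, p in scores.items()}

priors = {"rep": 0.6, "dem": 0.4}
likelihoods = {
    "rep": [{"y": 0.8, "n": 0.2}, {"y": 0.4, "n": 0.6}],
    "dem": [{"y": 0.25, "n": 0.75}, {"y": 0.5, "n": 0.5}],
}
probs = predict_proba(priors, likelihoods, ("y", "n"))
```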
     
@@ -75,5 +75,5 @@
 Validating the accuracy of classifiers on the training data, as we did above, serves demonstration purposes only. Any performance measure that assesses accuracy should be estimated on an independent test set. One such procedure is `cross-validation <http://en.wikipedia.org/wiki/Cross-validation_(statistics)>`_, which averages performance estimates across several runs, each time considering different training and test subsets sampled from the original data set:

-.. literalinclude: code/classification-cv.py
+.. literalinclude:: code/classification-cv.py
    :lines: 3-

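The cross-validation procedure described above can be sketched from scratch: split the data into k folds, train on k-1 of them, score on the held-out fold, and average the per-fold accuracies. This is not Orange's implementation, only the procedure it performs:

```python
def cross_validate(learn, rows, labels, k=5):
    """k-fold cross-validation; `learn` is a function (rows, labels) -> predict."""
    folds = [list(range(i, len(rows), k)) for i in range(k)]
    accuracies = []
    for test_idx in folds:
        test = set(test_idx)
        train_rows = [r for j, r in enumerate(rows) if j not in test]
        train_labels = [l for j, l in enumerate(labels) if j not in test]
        predict = learn(train_rows, train_labels)
        correct = sum(predict(rows[j]) == labels[j] for j in test_idx)
        accuracies.append(correct / len(test_idx))
    return sum(accuracies) / k

# A trivial "majority class" learner, just to exercise the procedure.
def majority_learner(rows, labels):
    from collections import Counter
    majority = Counter(labels).most_common(1)[0][0]
    return lambda row: majority

rows = [(i,) for i in range(10)]
labels = ["a"] * 8 + ["b"] * 2
acc = cross_validate(majority_learner, rows, labels, k=5)
```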
     
@@ -111,5 +111,5 @@
    single: classification; k-nearest neighbors

-.. literalinclude: code/classification-other.py
+.. literalinclude:: code/classification-other.py

 For these five data items, there are no major differences between the predictions of the observed classification algorithms::
     
@@ -125,5 +125,5 @@
 The following code cross-validates several learners. Notice the difference between this and the code above: cross-validation requires learners, while in the script above the learners were immediately given the data, and the calls returned classifiers.

-.. literalinclude: code/classification-cv2.py
+.. literalinclude:: code/classification-cv2.py

 Logistic regression wins in area under the ROC curve::
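The learner/classifier distinction the quoted text relies on can be made concrete: a learner is a factory that, given data, returns a classifier (a callable mapping an instance to a class). Cross-validation needs the factory so it can retrain on every fold. Names here are illustrative, not Orange's:

```python
def majority_learner(rows, labels):
    """Learner: takes data, returns a trained classifier."""
    from collections import Counter
    majority = Counter(labels).most_common(1)[0][0]
    def classifier(row):
        return majority
    return classifier

learner = majority_learner                  # pass THIS to cross-validation
classifier = learner([(1,), (2,), (3,)], ["a", "a", "b"])  # already trained
label = classifier((9,))
```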
     
@@ -138,5 +138,5 @@
 Classification models are objects that expose every component of their structure. For instance, one can traverse a classification tree in code and observe the associated data instances, probabilities and conditions. It is often sufficient, however, to provide a textual output of the model. For logistic regression and trees, this is illustrated in the script below:

-.. literalinclude: code/classification-models.py
+.. literalinclude:: code/classification-models.py

    The logistic regression part of the output is:
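To make the traversal point concrete, here is a toy tree represented as nested tuples and walked recursively to produce the kind of indented textual output the tutorial prints for classification trees. The data structure and attribute names are invented for illustration; Orange's tree objects differ:

```python
def print_tree(node, indent=""):
    """Return the tree as indented text, one line per node."""
    if isinstance(node, str):              # leaf: a class label
        return indent + "--> " + node + "\n"
    condition, yes, no = node              # internal node: (condition, yes-branch, no-branch)
    return (indent + condition + "?\n"
            + print_tree(yes, indent + "   ")
            + print_tree(no, indent + "   "))

tree = ("physician-fee-freeze=y",
        ("synfuels-cutback=n", "republican", "democrat"),
        "democrat")
text = print_tree(tree)
```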