Timestamp:
03/02/12 00:05:35 (2 years ago)
Author:
janezd <janez.demsar@…>
Branch:
default
Message:

Half-polished Orange.evaluation documentation

File:
1 edited

  • docs/reference/rst/Orange.evaluation.rst

r9372 → r10414

 #############################
 
+Evaluation of prediction models is split into two parts. The module
+:obj:`Orange.evaluation.testing` contains procedures that sample data,
+train learning algorithms and test the resulting models. All procedures
+return their results as an instance of
+:obj:`~Orange.evaluation.testing.ExperimentResults`, described below.
+The module :obj:`Orange.evaluation.scoring` uses such data to compute
+various performance scores, such as classification accuracy and AUC.
+
+A third module, :obj:`Orange.evaluation.reliability`, is unrelated to
+this scheme; it assesses the reliability of individual predictions.
+
 .. toctree::
    :maxdepth: 1
 
+   Orange.evaluation.testing
    Orange.evaluation.scoring
-   Orange.evaluation.testing
    Orange.evaluation.reliability
 
+Classes for storing the experimental results
+--------------------------------------------
+
+The following two classes store the results of experiments run by
+:obj:`Orange.evaluation.testing` and are used by
+:obj:`Orange.evaluation.scoring` to compute scores. Instances of these
+classes seldom need to be constructed or used outside of these two
+modules.
+
+.. py:currentmodule:: Orange.evaluation.testing
+
+.. autoclass:: ExperimentResults(iterations, classifier_names, class_values=None, weights=None, base_class=-1)
+    :members:
+
+.. autoclass:: TestedExample
+    :members:
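
The introduction added in this changeset describes a two-part scheme: :obj:`Orange.evaluation.testing` samples data, trains and tests, returning an ExperimentResults object, and :obj:`Orange.evaluation.scoring` turns that object into scores. A minimal sketch of that flow, assuming the Orange 2.x names cross_validation, CA, AUC and the bundled "voting" data set, none of which are confirmed by the changeset itself:

    import Orange

    # Assumed Orange 2.x API; these names are not confirmed by this changeset.
    data = Orange.data.Table("voting")                  # any classification data set
    learners = [Orange.classification.bayes.NaiveLearner(),
                Orange.classification.majority.MajorityLearner()]

    # Part one: sample the data, train the learners and test the models.
    # The result is an Orange.evaluation.testing.ExperimentResults instance.
    res = Orange.evaluation.testing.cross_validation(learners, data, folds=5)

    # Part two: compute performance scores from the same ExperimentResults.
    print "CA: ", Orange.evaluation.scoring.CA(res)     # one score per learner
    print "AUC:", Orange.evaluation.scoring.AUC(res)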
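
The ExperimentResults / TestedExample pair documented at the bottom of the file can also be inspected directly. A sketch under the assumption that Orange 2.x exposes the attributes results, actual_class, classes and probabilities; the changeset only confirms the two class names and the ExperimentResults constructor signature:

    import Orange

    # Assumed attribute names; only ExperimentResults and TestedExample are
    # confirmed by the autoclass directives above.
    data = Orange.data.Table("voting")
    bayes = Orange.classification.bayes.NaiveLearner()

    res = Orange.evaluation.testing.cross_validation([bayes], data, folds=5)

    # res.results is assumed to hold one TestedExample per tested instance.
    for tested in res.results[:5]:
        print "actual:", tested.actual_class,
        print "predicted:", tested.classes[0],          # prediction of the single learner
        print "P(c):", tested.probabilities[0]          # its class distribution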