.. index:: Testing, Sampling
.. automodule:: Orange.evaluation.testing

==================================
Sampling and Testing (``testing``)
==================================

Module :obj:`Orange.evaluation.testing` contains methods for
cross-validation, leave-one-out, random sampling and learning
curves. These procedures split the data into training and testing sets
and use the training data to induce models; the models then make
predictions for the testing data. Predictions are collected in
:obj:`ExperimentResults`, together with the actual classes and some
other data. The results can be passed to functions from
:obj:`~Orange.evaluation.scoring` that compute the performance scores
of models.

.. literalinclude:: code/testing-example.py

The following call makes 100 iterations of a 70:30 test and stores all the
induced classifiers. ::

    res = Orange.evaluation.testing.proportion_test(learners, iris, 0.7, 100, store_classifiers=1)

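The collected results can then be handed to the scoring functions. A
minimal sketch, assuming ``learners`` and ``iris`` are defined as in the
example script above and that classification accuracy is computed with
:obj:`Orange.evaluation.scoring.CA`::

    # One score per learner, averaged over the 100 repetitions.
    accuracies = Orange.evaluation.scoring.CA(res)
    for learner, accuracy in zip(learners, accuracies):
        print learner.name, accuracy
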
Different evaluation techniques are implemented as instance methods of
the :obj:`Evaluation` class. For ease of use, an instance of this class is
created at module loading time and its instance methods are exposed as
functions in :obj:`Orange.evaluation.testing`.

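Calling a module-level function is therefore equivalent to calling the
corresponding method on an :obj:`Evaluation` instance. The sketch below
illustrates this; it assumes that :obj:`Evaluation` can be constructed
without arguments and uses the iris data set with a naive Bayesian
learner purely as an example::

    import Orange

    iris = Orange.data.Table("iris")
    learners = [Orange.classification.bayes.NaiveLearner()]

    # Module-level convenience function ...
    res1 = Orange.evaluation.testing.cross_validation(learners, iris, folds=5)

    # ... and the instance method it exposes (assumes a no-argument constructor).
    evaluation = Orange.evaluation.testing.Evaluation()
    res2 = evaluation.cross_validation(learners, iris, folds=5)
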
Randomness in tests
===================

If an evaluation method uses random sampling, the parameter
``random_generator`` can be used to provide either a random seed or an
instance of :obj:`~Orange.misc.Random`. If omitted, a new random
generator with seed 0 is constructed for each call of the method.

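Both forms can be sketched as follows; the seed value 42 is arbitrary,
and the snippet assumes that :obj:`~Orange.misc.Random` takes the seed
as its constructor argument::

    # An integer is interpreted as a random seed ...
    res = Orange.evaluation.testing.proportion_test(
        learners, iris, 0.7, 100, random_generator=42)

    # ... or an already constructed generator can be passed instead.
    rg = Orange.misc.Random(42)
    res = Orange.evaluation.testing.proportion_test(
        learners, iris, 0.7, 100, random_generator=rg)
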
.. note::

    Running the same script twice will generally give the same
    results.

To conduct a repeatable set of experiments, construct an instance of
:obj:`~Orange.misc.Random` and pass it to all evaluation methods. This
way, the methods will use different random numbers, but these numbers
will be the same for each run of the script.

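A sketch of such a setup, assuming both methods accept the
``random_generator`` argument described above and using an arbitrary
fixed seed of 42::

    rg = Orange.misc.Random(42)

    # The same generator instance drives both evaluations, so re-running
    # the script reproduces both results.
    res_cv = Orange.evaluation.testing.cross_validation(
        learners, iris, folds=10, random_generator=rg)
    res_prop = Orange.evaluation.testing.proportion_test(
        learners, iris, 0.7, 100, random_generator=rg)
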
For truly random behaviour, set the seed to a number generated with
Python's random generator. Since Python's generator is seeded with the
current system time when Python is loaded, the results of the script
will differ each time you run it.

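For instance (a sketch; the range of the seed is arbitrary)::

    import random

    # Python's generator is seeded from the system time at start-up,
    # so this seed, and hence the sampling, differs between runs.
    seed = random.randint(0, 2 ** 31 - 1)
    res = Orange.evaluation.testing.proportion_test(
        learners, iris, 0.7, 100, random_generator=seed)
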
.. autoclass:: Evaluation

   .. automethod:: cross_validation

   .. automethod:: leave_one_out

   .. automethod:: proportion_test

   .. automethod:: test_with_indices

   .. automethod:: one_fold_with_indices

   .. automethod:: learn_and_test_on_learn_data

   .. automethod:: learn_and_test_on_test_data

   .. automethod:: learning_curve(learners, examples, cv_indices=None, proportion_indices=None, proportions=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], preprocessors=(), random_generator=0, callback=None)

   .. automethod:: learning_curve_n(learners, examples, folds=10, proportions=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], stratification=StratifiedIfPossible, preprocessors=(), random_generator=0, callback=None)

   .. automethod:: learning_curve_with_test_data(learners, learn_set, test_set, times=10, proportions=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], stratification=StratifiedIfPossible, preprocessors=(), random_generator=0, store_classifiers=False, store_examples=False)

   .. automethod:: test_on_data
