Timestamp:
02/23/12 15:18:45
Author:
Lan Zagar <lan.zagar@…>
Branch:
default
rebase_source:
5933dac607a60a8d48b5ea78791c0029bba44b5b
Message:

Improved multitarget scoring.

File:
1 edited

  • docs/reference/rst/Orange.evaluation.scoring.rst

    r10340 → r10343:

     multiple target classes. They can be used with standard
     :obj:`~Orange.evaluation.testing` procedures (e.g.
    -:obj:`~Orange.evaluation.testing.Evaluation.cross_validation`), but require special
    -scoring functions to compute a single score from the obtained
    +:obj:`~Orange.evaluation.testing.Evaluation.cross_validation`), but require
    +special scoring functions to compute a single score from the obtained
     :obj:`~Orange.evaluation.testing.ExperimentResults`.
    +Since different targets can vary in importance depending on the experiment,
    +some methods have options to indicate this, e.g. through weights or customized
    +distance functions. These can also be used for normalization in case target
    +values do not have the same scales.

     .. autofunction:: mt_flattened_score
     .. autofunction:: mt_average_score

    -The whole procedure of evaluating multi-target methods and computing the scores
    -(RMSE errors) is shown in the following example (:download:`mt-evaluate.py <code/mt-evaluate.py>`):
    +The whole procedure of evaluating multi-target methods and computing
    +the scores (RMSE errors) is shown in the following example
    +(:download:`mt-evaluate.py <code/mt-evaluate.py>`). Because we consider
    +the first target to be more important and the last less so, we will
    +indicate this using appropriate weights.

     .. literalinclude:: code/mt-evaluate.py
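The paragraph added in this changeset describes averaging per-target scores with weights. As a minimal, self-contained sketch of that idea (this is an illustration only, not Orange's `mt_average_score` implementation; the function and variable names here are hypothetical), a weighted average of per-target RMSE values could look like:

```python
import math

def rmse(actual, predicted):
    """Root mean squared error for a single target."""
    return math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    )

def weighted_average_score(actuals, predictions, score=rmse, weights=None):
    """Score each target separately, then combine with a weighted average.

    `actuals` and `predictions` are lists of per-target value sequences.
    Illustrative stand-in for the idea behind mt_average_score, not the
    Orange API itself.
    """
    if weights is None:
        weights = [1.0] * len(actuals)  # equal importance by default
    per_target = [score(a, p) for a, p in zip(actuals, predictions)]
    return sum(w * s for w, s in zip(weights, per_target)) / sum(weights)

# Three targets; the first weighted most heavily, the last least,
# mirroring the changeset's "first target more important" example.
actuals = [[1.0, 2.0, 3.0], [0.0, 1.0, 0.0], [5.0, 5.0, 5.0]]
predictions = [[1.0, 2.0, 3.0], [0.0, 0.0, 0.0], [4.0, 5.0, 6.0]]
print(weighted_average_score(actuals, predictions, weights=[5, 2, 1]))
```

Weights can double as a normalization device when targets are measured on different scales, which is the second use the changeset's paragraph mentions.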