Files: 1 added, 9 edited
r10354 → r10355
+recursive-include Orange/testing *
+recursive-include Orange/doc *
+recursive-include Orange/OrangeWidgets *.png *.gs *.vs *.obj *.html
+recursive-include Orange/OrangeCanvas *.png *.pyw *.txt
+recursive-include Orange/orng *.cfg *.c
 recursive-include source *.bat *.c *.cpp *.h *.hpp *.mak COPYRIGHT *.py *.txt *.sip *.defs *.cmake
-recursive-include docs *.rst *.py *.png *.css
+recursive-include docs *.rst *.py *.png *.css *.txt Makefile
 graft docs/sphinx-ext
 graft distribute
 include COPYING
 include LICENSES
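In a MANIFEST.in, `recursive-include dir pat1 pat2 …` includes every file under `dir` whose name matches any of the listed patterns, so the added `Makefile` pattern above picks up `docs/Makefile` in the source distribution. A rough sketch of that matching rule; the helper and file list are illustrative (not setuptools code), and matching on the basename is an approximation of the distutils behaviour:

```python
import fnmatch
import os.path

def recursive_include(files, directory, patterns):
    """Approximate MANIFEST.in recursive-include: keep files under
    `directory` whose basename matches any of `patterns`."""
    prefix = directory.rstrip("/") + "/"
    return [f for f in files
            if f.startswith(prefix)
            and any(fnmatch.fnmatch(os.path.basename(f), p)
                    for p in patterns)]

# Illustrative file list, not the actual repository contents:
files = ["docs/Makefile", "docs/conf.py", "docs/index.rst",
         "docs/img/logo.svg", "source/setup.bat"]
print(recursive_include(files, "docs",
                        ["*.rst", "*.py", "*.png", "*.css", "*.txt", "Makefile"]))
# → ['docs/Makefile', 'docs/conf.py', 'docs/index.rst']
```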
  • Orange/evaluation/scoring.py

r10285 → r10343
-def mt_average_scores(res, score, weights=None):
-    """
-    Average the scores of individual targets.
-    :param score: Single-target scoring method.
+def mt_average_score(res, score, weights=None):
+    """
+    Compute individual scores for each target and return the (weighted) average.
+    One method can be used to compute scores for all targets or a list of
+    scoring methods can be passed to use different methods for different
+    targets. In the latter case, care has to be taken if the ranges of scoring
+    methods differ.
+    For example, when the first target is scored from -1 to 1 (1 best) and the
+    second from 0 to 1 (0 best), using `weights=[0.5,-1]` would scale both
+    to a span of 1, and invert the second so that higher scores are better.
+    :param score: Single-target scoring method or a list of such methods
+                  (one for each target).
     :param weights: List of real weights, one for each target,
                     for a weighted average.

     if res.number_of_learners < 1:
         return []
+    n_classes = len(res.results[0].actual_class)
     if weights is None:
-        weights = [1. for _ in res.results[0].actual_class]
+        weights = [1.] * n_classes
+    if not hasattr(score, '__len__'):
+        score = [score] * n_classes
+    elif len(score) != n_classes:
+        raise ValueError, "Number of scoring methods and targets do not match."
     # save original classes
     clsss = [te.classes for te in res.results]

     # compute single target scores
     single_scores = []
-    for i in range(len(clsss[0][0])):
+    for i in range(n_classes):
         for te, clss, aclss in zip(res.results, clsss, aclsss):
             te.classes = [cls[i] for cls in clss]
             te.actual_class = aclss[i]
-        single_scores.append(score(res))
+        single_scores.append(score[i](res))
     # restore original classes
     for te, clss, aclss in zip(res.results, clsss, aclsss):

 def mt_flattened_score(res, score):
     """
-    Flatten the predictions of multiple targets
-    and compute a single-target score.
+    Flatten (concatenate into a single list) the predictions of multiple
+    targets and compute a single-target score.
     :param score: Single-target scoring method.
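The dispatch added in this hunk — replicating a single scoring method across all targets, validating a list of methods, and weighting the per-target results — can be sketched standalone, without Orange's ExperimentResults plumbing. The function below takes per-target (actual, predicted) pairs directly; the final normalization by the number of targets is an assumption, since the hunk does not show the return statement:

```python
def average_score(targets, score, weights=None):
    """Weighted average of per-target scores.

    `targets` is a list of (actual, predicted) pairs, one per target;
    `score` is a single scoring function or a list of functions.
    """
    n = len(targets)
    if weights is None:
        weights = [1.0] * n
    if not hasattr(score, '__len__'):     # one method used for all targets
        score = [score] * n
    elif len(score) != n:
        raise ValueError("Number of scoring methods and targets do not match.")
    singles = [s(actual, predicted)
               for s, (actual, predicted) in zip(score, targets)]
    # Assumed normalization: divide by the number of targets.
    return sum(w * v for w, v in zip(weights, singles)) / n

def rmse(actual, predicted):
    return (sum((a - p) ** 2
                for a, p in zip(actual, predicted)) / len(actual)) ** 0.5

targets = [([1.0, 2.0], [1.0, 2.0]),   # first target: perfect predictions
           ([0.0, 1.0], [1.0, 0.0])]   # second target: RMSE of 1.0
print(average_score(targets, rmse, weights=[2.0, 1.0]))  # → 0.5
```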
  • Orange/orng/orange2to25.py

r9671 → r10341
     return int(bool(rt.errors))

-sys.exit(main("fixes", sys.argv))
+sys.exit(main("Orange.fixes", sys.argv))
  • docs/reference/rst/Orange.evaluation.scoring.rst

r10282 → r10343
 .. autofunction:: split_by_iterations
-=====================================
-Scoring for multilabel classification
-=====================================
+.. _mt-scoring:
+
+:doc:`Multi-target <Orange.multitarget>` classifiers predict values for
+multiple target classes. They can be used with standard
+:obj:`~Orange.evaluation.testing` procedures (e.g.
+:obj:`~Orange.evaluation.testing.Evaluation.cross_validation`), but require
+special scoring functions to compute a single score from the obtained

+Since different targets can vary in importance depending on the experiment,
+some methods have options to indicate this e.g. through weights or customized
+distance functions. These can also be used for normalization in case target
+values do not have the same scales.
+
+.. autofunction:: mt_flattened_score
+.. autofunction:: mt_average_score
+
+The whole procedure of evaluating multi-target methods and computing
+the scores (RMSE errors) is shown in the following example
+(:download:`mt-evaluate.py <code/mt-evaluate.py>`). Because we consider
+the first target to be more important and the last not so much we will
+indicate this using appropriate weights.
+
+.. literalinclude:: code/mt-evaluate.py
+
+Which outputs::
+
+    Weighted RMSE scores:
+        Majority    0.8228
+          MTTree    0.3949
+             PLS    0.3021
+           Earth    0.2880

+Multi-label classification
 Multi-label classification requires different metrics than those used in
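The documentation above describes `mt_flattened_score` as concatenating the predictions of all targets into a single list and scoring it once. A standalone sketch of that flattening step; the toy data and the `rmse` helper are illustrative, not Orange code:

```python
def rmse(actual, predicted):
    return (sum((a - p) ** 2
                for a, p in zip(actual, predicted)) / len(actual)) ** 0.5

def flattened_score(targets, score):
    """Concatenate per-target values into single lists and score once,
    in the spirit of mt_flattened_score."""
    actual = [a for acts, _ in targets for a in acts]
    predicted = [p for _, preds in targets for p in preds]
    return score(actual, predicted)

targets = [([1.0, 2.0], [1.0, 2.0]),
           ([0.0, 1.0], [1.0, 0.0])]
print(flattened_score(targets, rmse))  # → sqrt(0.5) ≈ 0.7071
```

Note how this differs from averaging per-target scores: all residuals are pooled before the root is taken, so targets with more values weigh more.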
  • docs/reference/rst/Orange.evaluation.testing.rst

r10192 → r10339
 Different evaluation techniques are implemented as instance methods of
-:obj:`Evaluation` class. For ease of use, an instance of this class in
+:obj:`Evaluation` class. For ease of use, an instance of this class is
 created at module loading time and instance methods are exposed as functions
 with the same name in Orange.evaluation.testing namespace.
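The convention the corrected sentence describes — a singleton created at import time, with its bound methods re-exported as module-level functions — can be sketched as follows; the method body is a stand-in, not Orange's actual implementation:

```python
class Evaluation(object):
    """Collects evaluation techniques as instance methods."""
    def cross_validation(self, learners, data, folds=10):
        # Stand-in body; the real method runs the learners on CV folds.
        return "ran %d learners with %d-fold CV" % (len(learners), folds)

# Created once at module load time...
_default_evaluation = Evaluation()
# ...and its bound methods exposed as module-level functions:
cross_validation = _default_evaluation.cross_validation

print(cross_validation(["lr", "tree"], data=None, folds=5))
# → ran 2 learners with 5-fold CV
```

Callers can then write `testing.cross_validation(...)` without constructing an `Evaluation` themselves.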
  • docs/reference/rst/Orange.multitarget.rst

r10332 → r10339
 Multi-target prediction tries to achieve better prediction accuracy or speed
-through prediction of multiple dependent variable at once. It works on
+through prediction of multiple dependent variables at once. It works on
 :ref:`multi-target data <multiple-classes>`, which is also supported by
 Orange's tab file format using :ref:`multiclass directive <tab-delimited>`.

    Orange.regression.earth
+
+For evaluation of multi-target methods, see the corresponding section in
+:ref:`Orange.evaluation.scoring <mt-scoring>`.

 .. automodule:: Orange.multitarget
  • install-scripts/mac/dailyrun-bundleonly-hg.sh

r10270 → r10345
 # Should be run as: sudo ./dailyrun-bundleonly-hg.sh

+export PATH=$HOME/bin:$PATH

 MAC_VERSION=`sw_vers -productVersion | cut -d '.' -f 2`
  • install-scripts/mac/dailyrun.sh

r10337 → r10345
 test -r /sw/bin/init.sh && . /sw/bin/init.sh

+export PATH=$HOME/bin:$PATH

 STABLE_REVISION_1=`svn info --non-interactive http://orange.biolab.si/svn/orange/branches/ver1.0/ | grep 'Last Changed Rev:' | cut -d ' ' -f 4`
  • install-scripts/mac/update-all-scripts.sh

r10274 → r10342
 curl --silent --output bundle-daily-build-hg.sh https://bitbucket.org/biolab/orange/raw/tip/install-scripts/mac/bundle-daily-build-hg.sh
 curl --silent --output bundle-inject-hg.sh https://bitbucket.org/biolab/orange/raw/tip/install-scripts/mac/bundle-inject-hg.sh
+curl --silent --output bundle-inject-pypi.sh https://bitbucket.org/biolab/orange/raw/tip/install-scripts/mac/bundle-inject-pypi.sh
 curl --silent --output dailyrun-bundleonly-hg.sh https://bitbucket.org/biolab/orange/raw/tip/install-scripts/mac/dailyrun-bundleonly-hg.sh