# Changes in [10354:806a16e991b5:10355:0f8db234be5f] in orange

Files: 9 edited

## MANIFEST.in

r10354:

```diff
 recursive-include Orange/testing *
 recursive-include Orange/doc *
 recursive-include Orange/OrangeWidgets *.png *.gs *.vs *.obj *.html
 recursive-include Orange/OrangeCanvas *.png *.pyw *.txt
 recursive-include Orange/orng *.cfg *.c
 recursive-include source *.bat *.c *.cpp *.h *.hpp *.mak COPYRIGHT *.py *.txt *.sip *.defs *.cmake
-recursive-include docs *.rst *.py *.png *.css
+recursive-include docs *.rst *.py *.png *.css *.txt Makefile
 graft docs/sphinx-ext
 graft distribute
 include COPYING
 include LICENSES
```
## Orange/evaluation/scoring.py

r10285:

```diff
-def mt_average_scores(res, score, weights=None):
+def mt_average_score(res, score, weights=None):
     """
-    Average the scores of individual targets.
+    Compute individual scores for each target and return the (weighted)
+    average.
+
+    One method can be used to compute scores for all targets or a list of
+    scoring methods can be passed to use different methods for different
+    targets. In the latter case, care has to be taken if the ranges of
+    scoring methods differ. For example, when the first target is scored
+    from -1 to 1 (1 best) and the second from 0 to 1 (0 best), using
+    `weights=[0.5,-1]` would scale both to a span of 1, and invert the
+    second so that higher scores are better.
 
-    :param score: Single-target scoring method.
+    :param score: Single-target scoring method or a list of such methods
+        (one for each target).
+    :param weights: List of real weights, one for each target, for a
+        weighted average.
     """
     if res.number_of_learners < 1:
         return []
+    n_classes = len(res.results[0].actual_class)
     if weights is None:
-        weights = [1. for _ in res.results[0].actual_class]
+        weights = [1.] * n_classes
+    if not hasattr(score, '__len__'):
+        score = [score] * n_classes
+    elif len(score) != n_classes:
+        raise ValueError, "Number of scoring methods and targets do not match."
     # save original classes
     clsss = [te.classes for te in res.results]
     # compute single target scores
     single_scores = []
-    for i in range(len(clsss[0][0])):
+    for i in range(n_classes):
         for te, clss, aclss in zip(res.results, clsss, aclsss):
             te.classes = [cls[i] for cls in clss]
             te.actual_class = aclss[i]
-        single_scores.append(score(res))
+        single_scores.append(score[i](res))
     # restore original classes
     for te, clss, aclss in zip(res.results, clsss, aclsss):
 
 def mt_flattened_score(res, score):
     """
-    Flatten the predictions of multiple targets and compute a single-target
-    score.
+    Flatten (concatenate into a single list) the predictions of multiple
+    targets and compute a single-target score.
 
     :param score: Single-target scoring method.
```
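The core of the patched function — broadcasting a single scoring method to all targets and taking a weighted average of the per-target scores — can be sketched in isolation. This is a simplified, hypothetical stand-in that works on plain lists of `(actual, predicted)` pairs rather than Orange's `ExperimentResults`, and it uses a plain weighted mean; the exact normalization in Orange's implementation is not shown in the diff.

```python
def mt_average_score_sketch(per_target_results, score, weights=None):
    """Weighted average of per-target scores (standalone sketch).

    per_target_results: one list of (actual, predicted) pairs per target.
    score: a scoring function, or a list with one function per target.
    weights: one real weight per target; defaults to equal weights.
    """
    n_targets = len(per_target_results)
    if weights is None:
        weights = [1.0] * n_targets
    # A single scoring method is broadcast to all targets, mirroring the
    # hasattr(score, '__len__') check added in the patch.
    if not hasattr(score, '__len__'):
        score = [score] * n_targets
    elif len(score) != n_targets:
        raise ValueError("Number of scoring methods and targets do not match.")
    single_scores = [s(r) for s, r in zip(score, per_target_results)]
    # Plain weighted mean of the per-target scores.
    return sum(w * s for w, s in zip(weights, single_scores)) / sum(weights)

def rmse(pairs):
    # Root mean squared error over (actual, predicted) pairs.
    return (sum((a - p) ** 2 for a, p in pairs) / len(pairs)) ** 0.5

target_a = [(1.0, 1.0), (2.0, 2.0)]   # perfect predictions: RMSE 0
target_b = [(0.0, 1.0), (0.0, 1.0)]   # off by one everywhere: RMSE 1
print(mt_average_score_sketch([target_a, target_b], rmse))  # → 0.5
```

Passing `[rmse, rmse]` instead of `rmse` gives the same result, which is the point of the broadcast: one method or one method per target.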
## Orange/orng/orange2to25.py

r9671:

```diff
     return int(bool(rt.errors))
 
-sys.exit(main("fixes", sys.argv))
+sys.exit(main("Orange.fixes", sys.argv))
```
## docs/reference/rst/Orange.evaluation.scoring.rst

r10282:

```diff
 .. autofunction:: split_by_iterations
 
-=====================================
-Scoring for multilabel classification
-=====================================
+.. _mt-scoring:
+
+============
+Multi-target
+============
+
+:doc:`Multi-target ` classifiers predict values for multiple target
+classes. They can be used with standard :obj:`~Orange.evaluation.testing`
+procedures (e.g. :obj:`~Orange.evaluation.testing.Evaluation.cross_validation`),
+but require special scoring functions to compute a single score from the
+obtained :obj:`~Orange.evaluation.testing.ExperimentResults`.
+Since different targets can vary in importance depending on the experiment,
+some methods have options to indicate this e.g. through weights or
+customized distance functions. These can also be used for normalization in
+case target values do not have the same scales.
+
+.. autofunction:: mt_flattened_score
+.. autofunction:: mt_average_score
+
+The whole procedure of evaluating multi-target methods and computing the
+scores (RMSE errors) is shown in the following example
+(:download:`mt-evaluate.py `). Because we consider the first target to be
+more important and the last not so much we will indicate this using
+appropriate weights.
+
+.. literalinclude:: code/mt-evaluate.py
+
+Which outputs::
+
+    Weighted RMSE scores:
+        Majority    0.8228
+          MTTree    0.3949
+             PLS    0.3021
+           Earth    0.2880
+
+==========================
+Multi-label classification
+==========================
 
 Multi-label classification requires different metrics than those used in
```
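The difference between the two scoring functions this section documents — score each target separately and average, versus concatenate all targets' predictions and score once — can be illustrated on plain lists. A minimal sketch assuming RMSE as the single-target score; the data here is made up for illustration:

```python
from math import sqrt

def rmse(pairs):
    # Root mean squared error over (actual, predicted) pairs.
    return sqrt(sum((a - p) ** 2 for a, p in pairs) / len(pairs))

# Two targets, each a list of (actual, predicted) pairs.
target_1 = [(1.0, 1.0), (2.0, 4.0)]   # errors 0 and 2
target_2 = [(0.0, 0.0), (0.0, 0.0)]   # perfect predictions

# mt_average_score-style: score each target, then average the scores.
averaged = (rmse(target_1) + rmse(target_2)) / 2

# mt_flattened_score-style: concatenate the predictions, score once.
flattened = rmse(target_1 + target_2)

print(averaged, flattened)  # averaged ≈ 0.707, flattened = 1.0
```

The two disagree whenever errors are distributed unevenly across targets, which is why the averaged variant, with its per-target weights, is the one used when targets differ in importance or scale.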
## docs/reference/rst/Orange.evaluation.testing.rst

r10192:

```diff
 Different evaluation techniques are implemented as instance methods of
-:obj:`Evaluation` class. For ease of use, an instance of this class in
+:obj:`Evaluation` class. For ease of use, an instance of this class is
 created at module loading time and instance methods are exposed as functions
 with the same name in Orange.evaluation.testing namespace.
```
## docs/reference/rst/Orange.multitarget.rst

r10332:

```diff
 Multi-target prediction tries to achieve better prediction accuracy or speed
-through prediction of multiple dependent variable at once. It works on
+through prediction of multiple dependent variables at once. It works on
 :ref:`multi-target data `, which is also supported by Orange's tab
 file format using :ref:`multiclass directive `.
 
     Orange.regression.earth
 
+For evaluation of multi-target methods, see the corresponding section in
+:ref:`Orange.evaluation.scoring `.
+
 .. automodule:: Orange.multitarget
```
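A common baseline for predicting multiple dependent variables at once is to wrap a single-target learner and fit one independent model per target. This is a simplified illustration of that idea, not Orange's implementation; the mean-predicting learner is hypothetical:

```python
class MeanLearner:
    # Hypothetical single-target learner: the fitted model always
    # predicts the mean of the training values.
    def __call__(self, ys):
        mean = sum(ys) / len(ys)
        return lambda x=None: mean

class MultiTargetLearner:
    # Wraps any single-target learner and fits one model per target
    # column; the resulting model returns one prediction per target.
    def __init__(self, learner):
        self.learner = learner

    def __call__(self, target_columns):
        models = [self.learner(col) for col in target_columns]
        return lambda x=None: [m(x) for m in models]

learner = MultiTargetLearner(MeanLearner())
model = learner([[1.0, 3.0], [10.0, 30.0]])  # two targets
print(model())  # → [2.0, 20.0], one prediction per target
```

Dedicated multi-target methods improve on this baseline by modeling dependencies between the targets instead of treating each in isolation.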
## install-scripts/mac/dailyrun-bundleonly-hg.sh

r10270:

```sh
# Should be run as: sudo ./dailyrun-bundleonly-hg.sh
#

export PATH=$HOME/bin:$PATH

MAC_VERSION=`sw_vers -productVersion | cut -d '.' -f 2`
```
## install-scripts/mac/dailyrun.sh

r10337:

```sh
test -r /sw/bin/init.sh && . /sw/bin/init.sh

export PATH=$HOME/bin:$PATH

STABLE_REVISION_1=`svn info --non-interactive http://orange.biolab.si/svn/orange/branches/ver1.0/ | grep 'Last Changed Rev:' | cut -d ' ' -f 4`
```
## install-scripts/mac/update-all-scripts.sh

r10274:

```sh
curl --silent --output bundle-daily-build-hg.sh https://bitbucket.org/biolab/orange/raw/tip/install-scripts/mac/bundle-daily-build-hg.sh
curl --silent --output bundle-inject-hg.sh https://bitbucket.org/biolab/orange/raw/tip/install-scripts/mac/bundle-inject-hg.sh
curl --silent --output bundle-inject-pypi.sh https://bitbucket.org/biolab/orange/raw/tip/install-scripts/mac/bundle-inject-pypi.sh
curl --silent --output dailyrun-bundleonly-hg.sh https://bitbucket.org/biolab/orange/raw/tip/install-scripts/mac/dailyrun-bundleonly-hg.sh
```