Changeset 4061:647b41b6e290 in orange


Timestamp: 08/07/07 14:17:34
Author: blaz <blaz.zupan@…>
Branch: default
Convert: 242404c6ba1105c2dbb00e921e2d228ab9d644dc
Message: added one more example with orngTest and orngStat (a more compact representation of the scores we want to compute; results in a simple script)

File: 1 edited
Legend: Unmodified, Added, Removed
  • orange/doc/ofb/c_performance.htm

--- orange/doc/ofb/c_performance.htm (r2826)
+++ orange/doc/ofb/c_performance.htm (r4061)
@@ -70,6 +70,6 @@
 <p>The output of this script is:</p>
 <xmp class="code">Learner  CA     IS     Brier    AUC
-bayes    0.903  0.759  0.175  0.974
-tree     0.961  0.850  0.070  0.960
+bayes    0.901  0.758  0.176  0.976
+tree     0.961  0.845  0.075  0.956
 </xmp>
 
@@ -84,8 +84,44 @@
 <code>IS</code> and <code>AUC</code>).</p>
 
-<p>Apart from statistics that we have mentioned above,
-<a href="../modules/orngStat.htm">orngStat</a> has build-in functions that can compute other performance
-metrics, and <a href="../modules/orngTest.htm">orngTest</a> includes other testing schemas and includes caching of results.
-If you need to test your learners with standard statistics, these are probably all you need.</p>
+<p>Apart from the statistics mentioned above, <a
+href="../modules/orngStat.htm">orngStat</a> has built-in functions
+that can compute other performance metrics, and <a
+href="../modules/orngTest.htm">orngTest</a> includes other testing
+schemes. If you need to test your learners with standard statistics,
+these are probably all you need. Below we show the use of some other
+statistics, with somewhat more modular code than in the script
+above.</p>
+
+<p class="header">part of <a href="accuracy8.py">accuracy8.py</a> (uses <a
+href="voting.tab">voting.tab</a>)</p>
+<xmp class="code">data = orange.ExampleTable("voting")
+res = orngTest.crossValidation(learners, data, folds=10)
+cm = orngStat.computeConfusionMatrices(res,
+        classIndex=data.domain.classVar.values.index('democrat'))
+
+stat = (('CA', 'CA(res)'),
+        ('Sens', 'sens(cm)'),
+        ('Spec', 'spec(cm)'),
+        ('AUC', 'AUC(res)'),
+        ('IS', 'IS(res)'),
+        ('Brier', 'BrierScore(res)'),
+        ('F1', 'F1(cm)'),
+        ('F2', 'Falpha(cm, alpha=2.0)'))
+
+scores = [eval("orngStat."+s[1]) for s in stat]
+print "Learner  " + "".join(["%-7s" % s[0] for s in stat])
+for (i, l) in enumerate(learners):
+    print "%-8s " % l.name + "".join(["%5.3f  " % s[i] for s in scores])
+</xmp>
+
+<p>Notice that for a number of scoring measures we needed to compute
+the confusion matrix, for which we also needed to specify the target
+class (democrats, in our case). The output of this script is similar
+to that of the previous one:</p>
+
+<xmp class="code">Learner  CA     Sens   Spec   AUC    IS     Brier  F1     F2
+bayes    0.901  0.891  0.917  0.976  0.758  0.176  0.917  0.908
+tree     0.961  0.974  0.940  0.956  0.845  0.075  0.968  0.970
+</xmp>
 
 
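The script in this changeset gets its estimates from `orngTest.crossValidation(learners, data, folds=10)`. As a rough, Orange-free illustration of what a k-fold scheme does (a hypothetical helper written in present-day Python, not orngTest's code): each example is assigned to exactly one test fold, and every round trains on the remaining folds.

```python
import random

def kfold_indices(n, folds=10, seed=0):
    # randomly partition example indices 0..n-1 into `folds` test folds;
    # each example is held out exactly once across the folds
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::folds] for i in range(folds)]

# each round would train on everything outside the fold and test on the fold
for test_idx in kfold_indices(20, folds=5):
    train_idx = [i for i in range(20) if i not in set(test_idx)]
    assert len(train_idx) + len(test_idx) == 20
```

In Orange, the resulting per-example predictions are what `orngStat`'s scoring functions consume; the sketch above only shows the fold assignment, not the training or scoring.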
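For readers who want to check the confusion-matrix scores used in the added example (CA, sens, spec, F1, Falpha) without Orange installed, they follow standard formulas. Below is a minimal plain-Python sketch, written for present-day Python: the function names and the confusion-matrix counts are hypothetical, not orngStat's, and we assume orngStat's `alpha` plays the role of beta in the usual F-beta measure. The score table is built from callables rather than the script's `eval()` strings, which is a safer variant of the same compact pattern.

```python
# Plain-Python sketch of the scores used above; NOT the orngStat
# implementation, just the standard formulas on hypothetical counts.

def ca(tp, fp, tn, fn):
    # classification accuracy: correct predictions over all predictions
    return (tp + tn) / float(tp + fp + tn + fn)

def sens(tp, fp, tn, fn):
    # sensitivity (recall): fraction of actual positives that were found
    return tp / float(tp + fn)

def spec(tp, fp, tn, fn):
    # specificity: fraction of actual negatives that were rejected
    return tn / float(tn + fp)

def f_alpha(tp, fp, tn, fn, alpha=1.0):
    # standard F-beta-style measure; alpha > 1 weighs recall over precision
    # (assumption: orngStat's alpha corresponds to beta in F-beta)
    p = tp / float(tp + fp)
    r = tp / float(tp + fn)
    return (1 + alpha ** 2) * p * r / (alpha ** 2 * p + r)

# hypothetical counts for the target class ('democrat' in the script)
counts = (40, 5, 45, 10)  # tp, fp, tn, fn

# score table as (label, callable) pairs -- same idea as the script's
# stat tuple, but dispatching through callables instead of eval()
stat = (('CA', ca),
        ('Sens', sens),
        ('Spec', spec),
        ('F1', lambda *c: f_alpha(*c, alpha=1.0)),
        ('F2', lambda *c: f_alpha(*c, alpha=2.0)))

print("".join("%-7s" % label for label, _ in stat))
print("".join("%5.3f  " % f(*counts) for _, f in stat))
# -> CA 0.850, Sens 0.800, Spec 0.900, F1 0.842, F2 0.816
```

Keeping the table as (label, callable) pairs preserves the changeset's compact "pick which scores to compute" style while avoiding string evaluation.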