
AUC error bars and pvalues

Postby John » Fri Apr 29, 2005 0:11

Hello,

I'm interested in computing (1) error bars, and (2) significance levels, of the AUC from a cross-validated experiment.

For (1), the orngStat documentation says that the function AUCWilcoxon(results) will return the AUC and its standard error, but only for a single-iteration experiment. Indeed, when I give it the results from a call to orngTest.crossValidation, an error is generated. Is there any way to get the standard errors of the AUC of a cross-validated experiment?

For (2), I think what I want is to wrap up the entire experiment inside a permutation test, which would compute a p-value representing the probability of finding an AUC at least as extreme as the one computed without permuting the class labels. Is there a permutation test capability already built into Orange, or would I have to program this myself?

Thanks,

John

Postby Janez » Thu May 05, 2005 18:12

If the experimental results are computed with cross-validation, I'd say that the returned probabilities are not comparable since they are essentially returned by different models. In 10-fold cross-validation of classification trees, you shouldn't (I guess) compare the probabilities returned by ten different trees. This is why AUCWilcoxon checks that the data was not from multiple samples/learners. You can cheat, though:
Code:
import orange, orngTest, orngStat

res = orngTest.crossValidation([orange.BayesLearner()], data)

# Pretend that all results came from a single iteration
res.numberOfIterations = 1
for ex in res.results:
    ex.iterationNumber = 0

print orngStat.AUCWilcoxon(res)

But I'm not sure that this is statistically correct.

I like your idea for a permutation test very much - it seems more plausible than the other tests for AUC I've heard about. This is not yet in Orange, but it shouldn't take more than a few lines of Python.
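
Something along these lines should do - an untested sketch, assuming the usual orange/orngTest/orngStat calls (permutationPValue and nPermutations are just names I made up here):

Code:
import random
import orange, orngTest, orngStat

def permutationPValue(learner, data, nPermutations=1000):
    # AUC of the actual cross-validated experiment
    res = orngTest.crossValidation([learner], data)
    observedAUC = orngStat.AUC(res)[0]

    exceeded = 0
    classes = [ex.getclass() for ex in data]
    for i in range(nPermutations):
        # Shuffle the class labels and rebuild the data set
        random.shuffle(classes)
        permuted = orange.ExampleTable(data.domain)
        for ex, c in zip(data, classes):
            newEx = orange.Example(ex)
            newEx.setclass(c)
            permuted.append(newEx)
        permRes = orngTest.crossValidation([learner], permuted)
        if orngStat.AUC(permRes)[0] >= observedAUC:
            exceeded += 1
    # Add-one correction so the p-value can never be exactly zero
    return (exceeded + 1.0) / (nPermutations + 1)

With 1000 permutations the smallest p-value you can get is about 0.001, so increase nPermutations if you need finer resolution.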

