## Applying cost in the evaluation of classifiers

1 post • Page **1** of **1**

Hello,

As I said earlier, I am applying Orange to an interesting problem in Oil and Gas exploration. Without going into details, I want to address the following challenges:

1. How can we easily classify rock facies using a binary classifier (related to a particularly costly risk)?

2. Can we predict this risk reliably?

3. Which classifiers will be best, given the cost impact of classification errors?

4. Above what probability can we consider a rock formation prone to this risk?

5. For a given classifier, what will be the average cost of classification errors?
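To make question 5 concrete, here is a small back-of-the-envelope sketch of what I mean by "average cost of error" (my own illustration with assumed costs, not something Orange produces):

```python
import numpy as np

# Hypothetical cost structure for the binary facies problem: correct
# predictions cost nothing, and missing a risky formation (false negative)
# is assumed far costlier than a false alarm (false positive).
COST_FP = 1.0    # assumed cost of flagging a safe formation as risky
COST_FN = 10.0   # assumed cost of missing a risky formation

def average_cost(y_true, y_pred, cost_fp=COST_FP, cost_fn=COST_FN):
    """Average misclassification cost per instance (1 = risky class)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false alarms
    fn = np.sum((y_true == 1) & (y_pred == 0))  # missed risks
    return (cost_fp * fp + cost_fn * fn) / len(y_true)

# One false negative and one false positive over five instances:
print(average_cost([1, 1, 0, 0, 0], [1, 0, 0, 1, 0]))  # (10 + 1) / 5 = 2.2
```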

Basically, I want to use the iso-performance lines and the cost definitions in the ROC and Lift analyses.
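For reference, the standard textbook relationships I am relying on can be sketched as follows (a minimal sketch under the assumption of zero cost for correct predictions, not Orange's own computation):

```python
def cost_optimal_threshold(cost_fp, cost_fn):
    """Probability threshold minimizing expected cost: classify a
    formation as risky when predicted P(risk) exceeds this value."""
    return cost_fp / (cost_fp + cost_fn)

def iso_performance_slope(cost_fp, cost_fn, p_pos, p_neg):
    """Slope of the equal-cost (iso-performance) lines in ROC space
    (FPR on x, TPR on y); folds in the class priors p_pos, p_neg."""
    return (cost_fp * p_neg) / (cost_fn * p_pos)

# Assumed costs (FN ten times FP) and a rare risk (10% prevalence):
print(cost_optimal_threshold(1.0, 10.0))            # ~0.0909
print(iso_performance_slope(1.0, 10.0, 0.1, 0.9))   # 0.9
```

With a false negative ten times as costly as a false positive, the cost-optimal threshold drops to about 0.09, which is exactly the kind of answer question 4 is after.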

The results so far are quite encouraging. The cost curve in the Lift graph gives a very interesting result.

However, I am getting a rather strange Lift graph: all the curves sit at very low P, on the extreme left side of the graph. Actually, the diagonal, which I understand corresponds to a random classifier, reaches a maximum of P = 0.1 for a true positive rate of 1. That seems strange and counter-intuitive. Is it normal, does it depend on my data, or is it a bug?
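As a sanity check on my understanding (this is a generic cumulative-gains computation I wrote myself, not the widget's code), a random classifier's curve should hug the diagonal and only reach a true positive rate of 1 at the far right of the plot, regardless of how rare the positive class is:

```python
import numpy as np

def cumulative_gains(y_true, scores):
    """Rank instances by decreasing score and return the pairs
    (fraction of instances predicted positive, true positive rate)."""
    order = np.argsort(-np.asarray(scores))
    hits = np.cumsum(np.asarray(y_true)[order])
    frac = np.arange(1, len(order) + 1) / len(order)
    tpr = hits / hits[-1]
    return frac, tpr

# Random scores on a rare class (~10% positives, like a rare risk):
rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.1).astype(int)
frac, tpr = cumulative_gains(y, rng.random(1000))
print(frac[-1], tpr[-1])  # 1.0 1.0 -- the diagonal ends at the top right
```

If Orange's diagonal tops out at P = 0.1 instead, my guess is the axis is showing something prevalence-dependent, which is what I would like confirmed.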

