
Receiver operating characteristic



In signal detection theory, a receiver operating characteristic (ROC), or simply ROC curve, is a graphical plot of the sensitivity vs. (1 − specificity) for a binary classifier system as its discrimination threshold is varied. Equivalently, the ROC can be represented by plotting the fraction of true positives (TPR = true positive rate) vs. the fraction of false positives (FPR = false positive rate). It is also known as a relative operating characteristic curve, because it compares two operating characteristics (TPR and FPR) as the criterion changes (Swets, 1996).

ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones, independently of (and prior to specifying) the cost context or the class distribution. ROC analysis is related in a direct and natural way to cost/benefit analysis of diagnostic decision making. Widely used in medicine, radiology, psychology and other fields for many decades, it has been introduced relatively recently in areas such as machine learning and data mining.


Basic concept

Terminology and derivations from a confusion matrix

true positive (TP): eqv. with hit
true negative (TN): eqv. with correct rejection
false positive (FP): eqv. with false alarm, Type I error
false negative (FN): eqv. with miss, Type II error
true positive rate (TPR): eqv. with hit rate, recall, sensitivity
    TPR = TP / P = TP / (TP + FN)
false positive rate (FPR): eqv. with false alarm rate, fall-out
    FPR = FP / N = FP / (FP + TN)
accuracy (ACC):
    ACC = (TP + TN) / (P + N)
specificity (SPC):
    SPC = TN / (FP + TN) = 1 − FPR
positive predictive value (PPV): eqv. with precision
    PPV = TP / (TP + FP)
negative predictive value (NPV):
    NPV = TN / (TN + FN)
false discovery rate (FDR):
    FDR = FP / (FP + TP)

Source: Fawcett (2004).

See also: Type I and type II errors

A classification model (classifier or diagnosis) is a mapping of instances to classes/groups. The classifier or diagnosis result can be a real value (continuous output), in which case the boundary between classes must be determined by a threshold value, for instance to determine whether a person has hypertension based on a blood pressure measurement; or it can be a discrete class label indicating one of the classes.

Let us consider a two-class prediction problem (binary classification), in which the outcomes are labeled either as positive (p) or negative (n). There are four possible outcomes from a binary classifier. If the outcome from a prediction is p and the actual value is also p, then it is called a true positive (TP); however, if the actual value is n, it is called a false positive (FP). Conversely, a true negative (TN) occurs when both the prediction outcome and the actual value are n, and a false negative (FN) occurs when the prediction outcome is n while the actual value is p.

For a real-world example, consider a diagnostic test that seeks to determine whether a person has a certain disease. A false positive occurs when the person tests positive but does not actually have the disease. A false negative occurs when the person tests negative, suggesting they are healthy, when they actually do have the disease.

Let us define an experiment with P positive instances and N negative instances. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows:

                            actual value
                         p                 n                total
  prediction    p'       True Positive     False Positive     P'
  outcome       n'       False Negative    True Negative      N'
                total    P                 N
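
To make the definitions concrete, here is a minimal Python sketch (illustrative only, not from the original article) that derives the infobox metrics from the four cell counts of such a confusion matrix:

```python
def metrics(tp, fp, fn, tn):
    """Derive the evaluation metrics listed in the infobox
    from the four cells of a 2x2 confusion matrix."""
    p = tp + fn   # total actual positives (P)
    n = fp + tn   # total actual negatives (N)
    return {
        "TPR": tp / p,               # sensitivity, hit rate, recall
        "FPR": fp / n,               # fall-out, false alarm rate
        "ACC": (tp + tn) / (p + n),  # accuracy
        "SPC": tn / (fp + tn),       # specificity = 1 - FPR
        "PPV": tp / (tp + fp),       # precision
        "NPV": tn / (tn + fn),
        "FDR": fp / (fp + tp),
    }

# Example: 63 true positives, 28 false positives, 37 false negatives,
# 72 true negatives (method A in the next section).
print(metrics(63, 28, 37, 72))
```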

ROC space

Several evaluation metrics can be derived from the contingency table (see infobox). To draw an ROC curve, only the true positive rate (TPR) and false positive rate (FPR) are needed. The TPR measures how well a classifier or diagnostic test identifies positive instances among all the positive samples available during the test. The FPR measures how many incorrect positive results occur among all the negative samples available during the test.

An ROC space is defined by FPR and TPR as the x and y axes respectively, and depicts the relative trade-off between true positives (benefits) and false positives (costs). Since TPR is equivalent to sensitivity and FPR is equal to 1 − specificity, the ROC graph is sometimes called the sensitivity vs. (1 − specificity) plot. Each prediction result, i.e. each instance of a confusion matrix, represents one point in the ROC space.

The best possible prediction method would yield a point in the upper left corner, at coordinate (0, 1) of the ROC space, representing 100% sensitivity (all true positives are found) and 100% specificity (no false positives are produced). The point (0, 1) is also called a perfect classification. A completely random guess would give a point along the diagonal line (the so-called line of no discrimination) from the bottom left to the top right corner. An intuitive example of random guessing is deciding by flipping a coin (heads or tails).

The diagonal line divides the ROC space into areas of good and bad classification/diagnosis. Points above the diagonal indicate good classification results, while points below the line indicate wrong results (although the prediction method can simply be inverted to obtain points above the line). Let us look at four prediction results from 100 positive and 100 negative instances:

        A                      B                      C                      C'
TP=63   FP=28 |  91     TP=77   FP=77 | 154     TP=24   FP=88 | 112     TP=76   FP=12 |  88
FN=37   TN=72 | 109     FN=23   TN=23 |  46     FN=76   TN=12 |  88     FN=24   TN=88 | 112
  100     100 | 200       100     100 | 200       100     100 | 200       100     100 | 200
TPR = 0.63              TPR = 0.77              TPR = 0.24              TPR = 0.76
FPR = 0.28              FPR = 0.77              FPR = 0.88              FPR = 0.12
ACC = 0.68              ACC = 0.50              ACC = 0.18              ACC = 0.82

Plotted in the ROC space, result A is clearly the best of A, B, and C. Result B lies on the random-guess line (the diagonal), and the table shows that its accuracy is 50%. However, when C is mirrored through the center point (0.5, 0.5), the resulting method C' is even better than A.

The mirrored method C' simply reverses the predictions of whatever method or test produced the C contingency table: whenever C predicts p, C' predicts n, and vice versa. The C method therefore has real predictive power that can be recovered by reversing all of its decisions, and of the four methods, the C' test performs best. While the closer a point is to the upper left corner the better the prediction, the distance from the random-guess line in either direction is the best indicator of how much predictive power a method has; if the point lies below the line, all of the method's predictions must be reversed in order to utilize its power.
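
A short Python sketch of this reversal argument (illustrative only; the helper names are ours, not from the source). Swapping every decision exchanges TP with FN and FP with TN, which maps the ROC point (FPR, TPR) to (1 − FPR, 1 − TPR):

```python
def roc_point(tp, fp, fn, tn):
    """(FPR, TPR) coordinates of a 2x2 confusion matrix in ROC space."""
    return fp / (fp + tn), tp / (tp + fn)

def reverse_decisions(tp, fp, fn, tn):
    """Confusion matrix of the decision-reversed classifier: every
    instance predicted p is now predicted n and vice versa, so the
    new TP is the old FN, the new FP is the old TN, and so on."""
    return fn, tn, tp, fp

c = (24, 88, 76, 12)                      # method C from the table above
print(roc_point(*c))                      # (0.88, 0.24): below the diagonal
print(roc_point(*reverse_decisions(*c)))  # (0.12, 0.76): mirrored through (0.5, 0.5)
```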

Curves in ROC space

Discrete classifiers, such as decision trees or rule sets, yield only a class label. When a test set is given to such a classifier, the result is a single point in the ROC space. Other classifiers, such as naive Bayes and neural networks, produce a probability value representing the degree to which an instance belongs to a class. For these methods, setting a threshold value determines a point in the ROC space. For instance, if instances with probability values at or above a threshold of 0.8 are assigned to the positive class, and the remaining instances to the negative class, then a confusion matrix can be calculated. Plotting the ROC point for each possible threshold value results in a curve.
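
A minimal, illustrative Python sketch of this procedure (the scores and labels are hypothetical; scores at or above the threshold are assigned to the positive class):

```python
def roc_curve(scores, labels):
    """Sweep the decision threshold over every observed score and
    collect one (FPR, TPR) point per threshold."""
    p = sum(labels)          # number of positive instances
    n = len(labels) - p     # number of negative instances
    points = [(0.0, 0.0)]   # the all-negative classifier
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / n, tp / p))
    return points

scores = [0.9, 0.8, 0.7, 0.55, 0.4, 0.3]  # hypothetical classifier outputs
labels = [1,   1,   0,   1,    0,   0]    # 1 = positive, 0 = negative
print(roc_curve(scores, labels))          # ends at (1.0, 1.0)
```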

Further interpretations

Sometimes, the ROC is used to generate a summary statistic. Common versions include:

  • the intercept of the ROC curve with the line at 90 degrees to the no-discrimination line
  • the area between the ROC curve and the no-discrimination line
  • the area under the ROC curve, often called AUC
  • d' (pronounced "d-prime"), the distance between the mean of the distribution of activity in the system under noise-alone conditions and the mean of its distribution under signal-plus-noise conditions, divided by their common standard deviation, under the assumption that both of these distributions are normal with the same standard deviation; under these assumptions, it can be proved that the shape of the ROC depends only on d' (restated in symbols below).
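
Restating the last bullet in symbols (a standard formulation, not taken verbatim from the source): with $\mu_{SN}$ and $\mu_N$ the means of the signal-plus-noise and noise-alone distributions, and $\sigma$ their common standard deviation,

$$ d' = \frac{\mu_{SN} - \mu_N}{\sigma} $$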

It can be shown that the area under the ROC curve is equivalent to the Mann-Whitney U statistic, which tests whether the scores obtained in one of the two groups considered tend to be larger than those in the other when the data are continuous.

However, any attempt to summarize the ROC curve into a single number loses information about the pattern of tradeoffs of the particular discriminator algorithm.

The machine learning community most often uses the ROC AUC statistic. This measure can be interpreted as the probability that when we randomly pick one positive and one negative example, the classifier will assign a higher score to the positive example than to the negative. In engineering, the area between the ROC curve and the no-discrimination line is often preferred, because of its useful mathematical properties as a non-parametric statistic. This area is often simply known as the discrimination. In psychophysics, d' is the most commonly used measure.
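
As an illustrative check (hypothetical scores; the function name is ours), the pairwise interpretation of the AUC can be computed directly. Dividing the Mann-Whitney U statistic by the number of positive-negative pairs gives the same value:

```python
from itertools import product

def auc_pairwise(pos_scores, neg_scores):
    """Probability that a randomly chosen positive instance scores higher
    than a randomly chosen negative one; ties count as 1/2, matching the
    Mann-Whitney U convention."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos_scores, neg_scores))
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.55]         # hypothetical scores of positive instances
neg = [0.7, 0.4, 0.3]          # hypothetical scores of negative instances
print(auc_pairwise(pos, neg))  # 8/9, about 0.889
```

With these scores, the trapezoidal area under the curve produced by the roc_curve sketch above is also 8/9, illustrating the AUC = U / (P·N) relationship.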

ROC graphs can also be used to compare the quality of different epitope-predicting algorithms: if you wish to discover at least 60% of the epitopes in a virus protein, you can read off the graph that, for the algorithms compared in the original figure, about 1/3 of the output would be falsely marked as an epitope. What such a graph does not show is which threshold setting produces a given point on the ROC curve.

History

The ROC curve was first used during World War II for the analysis of radar signals, before it was employed in signal detection theory.[1] Following the attack on Pearl Harbor in 1941, the United States Army began new research to improve the detection of Japanese aircraft from their radar signals.

In the 1950s, ROC curves were employed in psychophysics to assess human (and occasionally non-human animal) detection of weak signals.[1] In medicine, ROC analysis has been used extensively in diagnostic testing to evaluate the effectiveness of a new drug or diagnostic method against an already established one.[2][3] ROC curves are also used extensively in epidemiology and medical research and are frequently mentioned in conjunction with evidence-based medicine. In radiology, ROC analysis is a common technique for evaluating new radiology techniques.[4] In the social sciences, ROC analysis is often called the ROC Accuracy Ratio, a common technique for judging the accuracy of default probability models.

ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman, who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms.[5]

References

  1. ^ a b D.M. Green and J.A. Swets (1966). Signal Detection Theory and Psychophysics. New York: John Wiley and Sons. ISBN 0-471-32420-5.
  2. ^ M.H. Zweig and G. Campbell (1993). "Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine". Clinical chemistry 39 (8): 561–577. PMID 8472349.
  3. ^ M.S. Pepe (2003). The statistical evaluation of medical tests for classification and prediction. New York: Oxford. 
  4. ^ N.A. Obuchowski (2003). "Receiver operating characteristic curves and their use in radiology". Radiology 229 (1): 3–8. PMID 14519861.
  5. ^ Spackman, K. A. (1989). "Signal detection theory: Valuable tools for evaluating inductive learning". Proceedings of the Sixth International Workshop on Machine Learning: 160–163. San Mateo, CA: Morgan Kaufmann.

General references

  • T. Fawcett (2004). "ROC Graphs: Notes and Practical Considerations for Researchers". Technical report, Palo Alto, USA: HP Laboratories. 

Further reading

  • Balakrishnan, N., Handbook of the Logistic Distribution, Marcel Dekker, Inc., 1991, ISBN-13: 978-0824785871.
  • Greene, William H., Econometric Analysis, fifth edition, Prentice Hall, 2003, ISBN 0-13-066189-9.
  • Hosmer, David W. and Stanley Lemeshow, Applied Logistic Regression, 2nd ed., New York; Chichester, Wiley, 2000, ISBN 0-471-35632-8.
  • Lasko, T. A., J.G. Bhagwat, K.H. Zou and L. Ohno-Machado (Oct. 2005). The use of receiver operating characteristic curves in biomedical informatics. Journal of Biomedical Informatics 38(5):404-415. PMID 16198999
  • Mason, S. J. and N.E. Graham, Areas beneath the relative operating characteristics (ROC) and relative operating levels (ROL) curves: Statistical significance and interpretation. Q.J.R. Meteorol. Soc. (2002), 128, pp. 2145–2166.
  • Pepe, M. S. (2003). The statistical evaluation of medical tests for classification and prediction. Oxford. ISBN 0198565828
  • Stephan, Carsten, Sebastian Wesseling, Tania Schink, and Klaus Jung. Comparison of Eight Computer Programs for Receiver-Operating Characteristic Analysis. Clin. Chem., Mar 2003; 49: 433–439.
  • Swets, J.A. (1996). Signal detection theory and ROC analysis in psychology and diagnostics: Collected papers. Lawrence Erlbaum Associates.

Open source software

  • ROCR, a comprehensive R package for evaluating scoring classifiers (Introductory article)
  • List of ROC analysis software
  • ROC package for R (part of the BioConductor suite)
  • Standalone PERF program used by the KDD Cup competition
  • Web-based calculator of ROC curves from user-supplied data
  • ROC curve visualiser
 