Binary classification insight

Receiver Operating Characteristic explorer

The ROC curve shows how well a binary classifier can separate signal from noise. Adjust the probability threshold, inspect the confusion matrix, and compare scenarios to see how separability changes the curve.

Choose score separation

Each option represents score distributions for positives and negatives. Pick one to explore its ROC curve and confusion matrix.

100.0% AUC · 99% TPR · 1% FPR

Negative mean: 0.22
Positive mean: 0.78
Threshold: 0.50

Adjust the levers from anywhere on the page to explore custom separations and thresholds.

ROC curve

The curve plots true positive rate against false positive rate across thresholds.
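The sweep described above can be sketched in a few lines. This is a minimal illustration, not the page's own code: it counts true and false positives at every distinct score threshold and emits the resulting (FPR, TPR) points.

```python
# Sketch: trace an ROC curve by sweeping a decision threshold over
# classifier scores. The example scores and labels are made up.

def roc_points(scores, labels):
    """Return (fpr, tpr) pairs, one per distinct threshold, plus (0, 0)."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]
    # Sweep thresholds from the highest score downward so the curve
    # runs from (0, 0) toward (1, 1).
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

scores = [0.1, 0.3, 0.35, 0.6, 0.8, 0.9]
labels = [0,   0,   1,    0,   1,   1]
print(roc_points(scores, labels))
```

Each point is one operating point of the classifier; lowering the threshold moves you up and to the right along the curve.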

Scenario: Clear separation

AUC 100%

[Chart: ROC curves for the selected scenarios. True positive rate vs. false positive rate, both axes from 0.00 to 1.00.]

Area under the ROC curve by scenario:

  • Clear separation: 100.0%
  • Subtle difference: 96.1%
  • Barely better than chance: 78.0%
  • Custom tuning: 89.2%

Score distributions

Visualize how the positive and negative classes are distributed across the probability axis. Adjust the levers to reposition their means.

Negative μ: 0.22
Positive μ: 0.78

[Chart: probability density for positive and negative scores across the 0.00–1.00 probability axis, with markers for the threshold, the negative mean, and the positive mean.]

Use the floating controls pinned to the top of the page to reposition the class means and see the distributions respond in real time.

Adjusting a lever switches to the custom scenario so you can explore bespoke separations tailored to your inputs.

The levers stay within a 0.20–0.70 probability band so the combined span remains 0.5, and the negative class never overtakes the positive class.
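A setup like the one above can be simulated with two synthetic score distributions. The Gaussian shape, the 0.12 spread, and the sample sizes below are assumptions for illustration; the page does not specify its exact generating process, only the class means and the 35% prevalence.

```python
import random

# Illustrative sketch: draw classifier scores for each class from a
# normal distribution around the lever means (0.22 and 0.78).
# Gaussian shape and spread=0.12 are assumed, not taken from the page.

def sample_scores(n, mean, spread=0.12, seed=0):
    rng = random.Random(seed)
    # Clip to [0, 1] so every score stays a valid probability.
    return [min(1.0, max(0.0, rng.gauss(mean, spread))) for _ in range(n)]

negatives = sample_scores(650, mean=0.22, seed=1)  # 65% of 1,000 cases
positives = sample_scores(350, mean=0.78, seed=2)  # 35% prevalence

print(sum(negatives) / len(negatives))  # close to 0.22
print(sum(positives) / len(positives))  # close to 0.78
```

Pushing the two means apart widens the gap between the densities, which is exactly what lifts the ROC curve toward the top-left corner.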

Performance metrics

Probability cutoff: 0.50

Use the threshold lever anchored at the top of the page to watch these operating metrics shift in real time.

True positive rate: 99%
False positive rate: 1%
Specificity: 99%
Sensitivity: 99%
Precision: 98.3%
Accuracy: 99.1%
F1 score: 98.7%
Balanced accuracy: 99%
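Every metric listed above is a ratio of confusion-matrix counts. A quick sketch, using the counts from the matrix below (TP = 347, FN = 3, FP = 6, TN = 644), shows each definition; the page's displayed values are rounded slightly more aggressively (e.g. 99.1% shown as 99%).

```python
# Recompute the operating metrics from the confusion-matrix counts.
tp, fn, fp, tn = 347, 3, 6, 644

tpr = tp / (tp + fn)                        # sensitivity / recall
fpr = fp / (fp + tn)
specificity = tn / (tn + fp)                # = 1 - FPR
precision = tp / (tp + fp)
accuracy = (tp + tn) / (tp + fn + fp + tn)
f1 = 2 * precision * tpr / (precision + tpr)
balanced_accuracy = (tpr + specificity) / 2

print(f"TPR {tpr:.1%}  FPR {fpr:.1%}  precision {precision:.1%}")
print(f"accuracy {accuracy:.1%}  F1 {f1:.1%}  balanced {balanced_accuracy:.1%}")
```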

Confusion matrix

1,000 examples · 35% prevalence

                  Actual positive          Actual negative
Predicted (+)     True positive: 347       False positive: 6
                  (34.7% of all cases)     (0.6% of all cases)
Predicted (−)     False negative: 3        True negative: 644
                  (0.3% of all cases)      (64.4% of all cases)

What the curve tells us

  • A higher curve hugs the top-left corner, meaning more true positives for the same false positive rate.
  • The diagonal reference line represents random guessing. Curves below that line indicate performance worse than chance.
  • The area under the curve (AUC) summarizes the trade-off: 100.0% for the current scenario.
  • Slide the threshold to watch the operating point move along the curve and see the confusion matrix update in real time.
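The AUC figures quoted throughout the page can be computed from the ROC points themselves with the trapezoidal rule, i.e. the area between the curve and the FPR axis. A minimal sketch:

```python
# Sketch: AUC via the trapezoidal rule over (fpr, tpr) points.

def auc(points):
    """points: (fpr, tpr) pairs sorted by increasing fpr."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2  # one trapezoid slice
    return area

# A perfect classifier goes straight up, then across: AUC = 1.0.
print(auc([(0, 0), (0, 1), (1, 1)]))  # -> 1.0
# The chance diagonal: AUC = 0.5.
print(auc([(0, 0), (1, 1)]))          # -> 0.5
```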

When to use ROC curves

ROC analysis shines when the balance between sensitivity and specificity matters more than raw accuracy. It is especially useful when classes are imbalanced or when different stakeholders need different operating points.

  • Compare classifiers independent of threshold.
  • Communicate trade-offs between catching positives and avoiding false alarms.
  • Identify the threshold that maximizes a preferred statistic (Youden, F1, cost-based, etc.).
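As one example of the last point, the Youden statistic J = TPR − FPR picks the threshold farthest above the chance diagonal. A minimal sketch with made-up scores:

```python
# Sketch: choose the threshold maximizing Youden's J = TPR - FPR
# (equivalently sensitivity + specificity - 1).

def best_threshold_youden(scores, labels):
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, float("-inf")
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / pos - fp / neg
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

scores = [0.1, 0.3, 0.35, 0.6, 0.8, 0.9]
labels = [0,   0,   1,    0,   1,   1]
print(best_threshold_youden(scores, labels))
```

An F1- or cost-based criterion plugs into the same loop; only the score being maximized changes.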

Reading the matrix

The confusion matrix grounds the abstract ROC curve in concrete outcomes. As you sweep the threshold, watch how cases migrate between the four quadrants and how metrics such as precision and F1 respond.

In high-risk environments you might prioritize sensitivity and accept more false positives. In resource-constrained workflows you may push for specificity. The ROC curve visualizes every possible operating point.