
Calibration Curve

A calibration curve plots predicted probabilities against observed frequencies across probability buckets. It shows whether probabilities mean what they claim: events you call 70% likely should happen about 70% of the time.

Definition

A calibration curve visualizes calibration by comparing predicted probabilities to realized outcome rates across probability buckets.

How it is built

• Group forecasts into buckets (example: 0.50 to 0.60, 0.60 to 0.70).

• For each bucket, compute average predicted probability.

• For each bucket, compute realized frequency (average outcome).

• Plot realized frequency against average predicted probability.
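The steps above can be sketched in a few lines of NumPy. This is a minimal implementation with equal-width buckets; the function name and bucket scheme are illustrative, not a standard API.

```python
import numpy as np

def calibration_curve(probs, outcomes, n_bins=10):
    """Group forecasts into equal-width probability buckets and compare
    average predicted probability to realized frequency in each bucket.

    probs: predicted probabilities in [0, 1]
    outcomes: binary outcomes (0 or 1)
    Returns (avg_predicted, realized_frequency) for each non-empty bucket.
    """
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    # Assign each forecast to a bucket: [0, 0.1), [0.1, 0.2), ..., [0.9, 1.0]
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    avg_pred, freq = [], []
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            avg_pred.append(probs[mask].mean())  # average predicted probability
            freq.append(outcomes[mask].mean())   # realized frequency
    return np.array(avg_pred), np.array(freq)
```

Plotting `freq` against `avg_pred` (with the diagonal y = x for reference) gives the calibration curve.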

How to read it

• Points near the diagonal (y = x) mean good calibration: predicted probabilities match realized frequencies.

• If points fall closer to 50% than your predictions (you say 80%, it happens 65% of the time), you are overconfident.

• If points are more extreme than your predictions (you say 60%, it happens 75% of the time), you are underconfident.

Why it matters

Calibration curves explain why your Brier score looks the way it does, and they show which probability ranges need adjustment rather than giving a single summary number.
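A small worked example, with invented numbers, shows how miscalibration feeds into the Brier score (the mean squared difference between predictions and outcomes):

```python
import numpy as np

# An overconfident forecaster: says 90% but the event happens half the time.
probs = np.array([0.9, 0.9, 0.9, 0.9])
outcomes = np.array([1, 1, 0, 0])

brier = np.mean((probs - outcomes) ** 2)          # 0.41

# Recalibrating to the realized frequency (0.5) improves the score.
recalibrated = np.mean((0.5 - outcomes) ** 2)     # 0.25
assert recalibrated < brier
```

The gap between the two scores is exactly the penalty the calibration curve makes visible: the 90% bucket sits far from the diagonal.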