Log Loss
Log loss (cross entropy) measures probabilistic forecast error by penalizing confident wrong predictions very strongly. Lower is better.
Definition
Log loss (also called logarithmic loss or cross entropy) is a scoring rule for probabilistic forecasts. As with the Brier score, lower is better, but log loss punishes extreme wrong forecasts much more sharply.
Formula
For a binary outcome o in {0,1} and predicted probability p of outcome 1:
LL = - ( o * ln(p) + (1 - o) * ln(1 - p) )
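A minimal Python sketch of this formula (the function name is illustrative, not taken from any particular library; ln is the natural logarithm, math.log in Python):

    import math

    def log_loss(o, p):
        """Binary log loss for a single forecast.

        o: observed outcome, 0 or 1
        p: predicted probability that the outcome is 1 (must satisfy 0 < p < 1)
        """
        return -(o * math.log(p) + (1 - o) * math.log(1 - p))

Handling of p exactly 0 or 1 is discussed under Practical considerations below.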
Interpretation
• If you predict p close to 1 and the event happens, loss is near 0.
• If you predict p close to 1 and the event does not happen, loss becomes very large.
This makes log loss particularly sensitive to overconfidence.
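A few illustrative values, computed directly from the formula above:

    import math

    # Confident and right: -ln(0.99) is tiny
    print(-math.log(0.99))      # ~0.01
    # Confident and wrong: -ln(1 - 0.99) blows up
    print(-math.log(1 - 0.99))  # ~4.61
    # A hedged 0.6 forecast that misses costs far less
    print(-math.log(1 - 0.6))   # ~0.92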
Why use it
Log loss is widely used in machine learning and forecasting competitions because it is a proper scoring rule and strongly discourages “sure thing” calls unless you are truly sure.
Practical considerations
Avoid p = 0 or p = 1: Because ln(0) is undefined, systems typically clamp probabilities into a range such as [0.0001, 0.9999] before scoring.
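A minimal sketch of that clamping step, assuming an illustrative epsilon of 0.0001 (the helper names here are hypothetical; real systems choose their own epsilon):

    import math

    EPS = 1e-4  # illustrative epsilon, not a standard value

    def clamp(p, eps=EPS):
        """Keep p inside [eps, 1 - eps] so ln() is always defined."""
        return min(max(p, eps), 1 - eps)

    def safe_log_loss(o, p):
        p = clamp(p)
        return -(o * math.log(p) + (1 - o) * math.log(1 - p))

    print(safe_log_loss(0, 1.0))  # ~9.21 instead of an undefined (infinite) loss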
Related
Compare with Brier score. For assessing probability quality visually, see calibration.