Forecasting Glossary

Key terms and concepts used in probabilistic prediction and forecasting. Understanding these helps you interpret our predictions and methodology.

1. Prediction

A falsifiable statement about a future event, phrased as a binary YES/NO question with a defined target date and clear resolution criteria.

Example: "Will the Federal Reserve cut interest rates before July 2026?"

2. Probability

A number between 0% and 100% representing our confidence that a prediction will resolve YES. We use a 1-99% range, never claiming certainty.

Example: 65% probability means we expect a YES outcome in roughly 65 out of 100 similar situations.

3. Base Rate

The initial probability assigned to a prediction before any news signals are applied. Determined by historical reference classes, expert consensus, and structural analysis.

Example: If historically 30% of similar trade agreements pass, the base rate would be 30%.

4. Signal

A piece of news or information that affects the probability of a prediction outcome. Signals have direction (+/-/0) and strength (0-100).

Example: A positive signal (+) like "Trade talks resume" increases the probability; a negative signal (-) like "Negotiations break down" decreases it.

5. Signal Direction

Whether a signal increases (+), decreases (-), or has a neutral (0) effect on the prediction probability. Silence (∅) means no relevant news appeared.

Related: Signal, Digest

6. Signal Strength

A number from 0-100 indicating how much a signal should affect the probability. Higher strength = larger potential impact.

Example: A major policy announcement might have strength 80, while routine procedural news might be 20.
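
Direction and strength can be folded into a single signed impact value. A minimal sketch, assuming a simple record shape (the Signal class and impact helper here are illustrative, not part of any specified system):

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """Hypothetical signal record: direction in {+1, -1, 0}, strength 0-100."""
    headline: str
    direction: int  # +1 positive, -1 negative, 0 neutral
    strength: int   # 0-100; higher = larger potential impact

    def impact(self) -> int:
        # Signed strength: one simple way to combine direction and strength.
        return self.direction * self.strength

major = Signal("Major policy announcement", +1, 80)
routine = Signal("Routine procedural news", -1, 20)
print(major.impact(), routine.impact())  # 80 -20
```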

7. Digest

A daily summary of all signals for a prediction, including the dominant direction, overall signal strength, and narrative summary of relevant news.

8. Probability Update

The daily adjustment to a prediction's probability based on accumulated signals. Updates are capped to prevent overreaction: at most a +15% increase or a -20% decrease per day.

9. Daily Cap

Maximum probability change allowed per day. Prevents overreaction to single news events. Asymmetric: +15% max increase, -20% max decrease.

Example: Even with very strong positive signals, probability can only increase 15% in one day.
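
The asymmetric cap can be sketched as a clamp on the day's proposed change, followed by clipping into the 1-99% band described under Probability. Function and parameter names here are illustrative:

```python
def apply_daily_cap(prob: float, proposed_change: float,
                    max_up: float = 0.15, max_down: float = 0.20) -> float:
    """Clamp one day's probability change to the asymmetric caps."""
    if proposed_change >= 0:
        change = min(proposed_change, max_up)
    else:
        change = max(proposed_change, -max_down)
    # Keep the result inside the 1-99% band: never claim certainty.
    return min(0.99, max(0.01, prob + change))

print(apply_daily_cap(0.50, 0.30))   # strong positive day, capped at +0.15
print(apply_daily_cap(0.50, -0.30))  # strong negative day, capped at -0.20
```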

10. Decay

The gradual reduction of a signal's influence over time. Old news becomes less relevant, and probabilities naturally drift toward base rates without fresh signals.
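
One common way to model both effects is exponential decay of signal influence plus a small daily drift toward the base rate. A sketch under assumed parameters (the 3-day half-life and 5% drift rate are illustrative, not specified values):

```python
def decayed_strength(strength: float, age_days: float,
                     half_life_days: float = 3.0) -> float:
    """A signal loses half its influence every half-life (assumed 3 days)."""
    return strength * 0.5 ** (age_days / half_life_days)

def drift_toward_base(prob: float, base_rate: float, rate: float = 0.05) -> float:
    """Without fresh signals, close 5% of the remaining gap to the base rate each day."""
    return prob + rate * (base_rate - prob)

print(decayed_strength(80, age_days=3))         # one half-life: 40.0
print(drift_toward_base(0.70, base_rate=0.30))  # drifts slightly toward 0.30
```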

11. Calibration

How well predicted probabilities match actual outcomes. A well-calibrated forecaster has events predicted at 70% happen about 70% of the time.

Example: If you predict 10 events at 80% and 8 happen, you're well-calibrated at that level.
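
A calibration check can be sketched by bucketing forecasts and comparing the mean predicted probability with the observed frequency in each bucket (the 10%-wide buckets are an illustrative choice):

```python
from collections import defaultdict

def calibration_table(forecasts):
    """forecasts: list of (predicted_probability, outcome), outcome 1=YES, 0=NO.
    Returns {bucket_index: (mean_prediction, observed_frequency)}."""
    buckets = defaultdict(list)
    for p, outcome in forecasts:
        buckets[min(int(p * 10), 9)].append((p, outcome))
    return {
        b: (sum(p for p, _ in items) / len(items),
            sum(o for _, o in items) / len(items))
        for b, items in sorted(buckets.items())
    }

# The glossary's example: 10 events predicted at 80%, 8 of which happened.
history = [(0.8, 1)] * 8 + [(0.8, 0)] * 2
print(calibration_table(history))  # bucket 8: mean prediction 0.8, frequency 0.8
```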

12. Brier Score

A scoring metric for probability predictions. Ranges from 0 (perfect) to 1 (worst). Calculated as the mean squared difference between predictions and outcomes.

Example: Predicting 90% for an event that happens scores 0.01; predicting 90% for an event that doesn't happen scores 0.81.
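
The definition above translates directly into code (outcomes coded 1 for YES, 0 for NO):

```python
def brier_score(forecasts):
    """Mean squared difference between predictions and outcomes; 0 is perfect, 1 is worst."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

print(brier_score([(0.9, 1)]))  # ~0.01: high confidence, event happened
print(brier_score([(0.9, 0)]))  # ~0.81: high confidence, event did not happen
```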

13. Resolution

The process of determining whether a prediction resolved YES or NO. Based on predefined resolution criteria and authoritative sources specified when the prediction was created.

14. Resolution Criteria

The specific, unambiguous conditions that determine the YES vs. NO outcome. Written when the prediction is created and never modified afterward.

Example: "YES if official unemployment rate falls below 4.0% as reported by Bureau of Labor Statistics."

15. Authority Source

The trusted source(s) used to determine resolution: government agencies, official announcements, reputable news organizations, and the like.

Example: For election predictions: official election commission results. For company news: SEC filings or official press releases.

16. Post-Mortem

Analysis conducted after a prediction resolves. Examines what signals were meaningful, what was missed, and lessons for future predictions.

17. Frozen

The state of a prediction after its question and resolution criteria are finalized. A frozen prediction's wording cannot be changed; only its probability can be updated.

18. Reference Class

A set of similar historical situations used to establish base rates. Choosing the right reference class is crucial for good initial probability estimates.

Example: For "Will this startup succeed?", reference class might be "tech startups at Series A stage in this sector."