
Track Record and Learning Loop

EdgeVisor keeps a prediction record, resolves outcomes when markets close, and uses outcome feedback to adapt internal weighting. This page explains what those metrics mean, what they do not mean, and how a user should read them.


How to use this page

Read the extractable overview first, then the application and limits sections, and only then decide whether the thesis is strong enough for action or only for context.

Extractable overview

  • Feedback loop: prediction record -> market resolution -> Brier score and correctness -> weight updates.
  • Learning method: multiplicative weight updates can reward or punish analysts based on resolved outcomes.
  • Drift control: the learning layer includes safeguards that can pull a signal back toward neutral when its behavior changes.

What gets recorded

When EdgeVisor publishes a prediction, it can store the market id, category, estimate, market price, side, confidence, analyst confidences, signal metadata, and explanation payload. That creates a record that can later be reconciled with the actual market resolution.

This matters because learning without recorded state is theater. The system needs a stable memory of what it believed at prediction time.

  • Estimate and market price: define the exact disagreement the model acted on.
  • Category and signal metadata: allow later analysis of which market conditions helped or hurt the system.
  • Explanation payload: preserves what the product actually showed to the user, not just a hidden score.
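The recorded fields above can be sketched as a simple data structure. This is a minimal illustration, not EdgeVisor's actual schema; the field names simply mirror the list in this section:

```python
from dataclasses import dataclass, field

@dataclass
class PredictionRecord:
    """Hypothetical snapshot of what the system believed at prediction time."""
    market_id: str
    category: str
    estimate: float                 # model's probability for the chosen side
    market_price: float             # market-implied probability at the time
    side: str                       # e.g. "yes" or "no"
    confidence: float
    analyst_confidences: dict = field(default_factory=dict)
    signal_metadata: dict = field(default_factory=dict)
    explanation: str = ""           # what the product actually showed the user

# The stored estimate and market price pin down the exact disagreement
# the model acted on, independent of how the market moves later:
rec = PredictionRecord(
    market_id="mkt-001", category="macro",
    estimate=0.80, market_price=0.62, side="yes", confidence=0.7,
)
edge = rec.estimate - rec.market_price
```

Freezing these values at prediction time is what makes later reconciliation honest: the record reflects the belief as it was, not as it looks in hindsight.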

How feedback updates weights

When a market resolves, the outcome tracker computes whether the thesis was correct and what the probability error looked like. Those signals can feed back into internal weighting. Stronger signals gain more influence when they are helpful; weaker ones lose influence when they mislead.

The Brier score is the squared difference between the predicted probability and the 0/1 outcome, so lower is better:

  • Strong but wrong call: predicted 0.80, actual outcome 0, Brier score 0.64.
  • Measured correct call: predicted 0.65, actual outcome 1, Brier score 0.12.
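Both rows above follow from the standard Brier formula. A one-line sketch (not EdgeVisor's internal code):

```python
def brier_score(predicted: float, outcome: int) -> float:
    """Squared error between a probability forecast and the 0/1 outcome."""
    return (predicted - outcome) ** 2

print(round(brier_score(0.80, 0), 2))  # strong but wrong call -> 0.64
print(round(brier_score(0.65, 1), 2))  # measured correct call -> 0.12
```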

EdgeVisor also keeps regime summaries and rolling metrics so the learning loop is not just a single global score. In practice this means the system is trying to learn which mix of evidence is more useful under which market conditions instead of blindly trusting one internal pattern forever.
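The multiplicative update and the pull toward neutral described above can be sketched as follows. The learning rate, drift fraction, and per-analyst losses are illustrative assumptions, not EdgeVisor's actual parameters:

```python
import math

def update_weights(weights, losses, eta=0.5, drift=0.05):
    """Multiplicative-weights update with a mild pull toward neutral.

    weights: analyst -> normalized weight; losses: analyst -> Brier-style
    loss in [0, 1], lower is better. Illustrative sketch only.
    """
    # Reward low-loss analysts, punish high-loss ones, multiplicatively.
    updated = {a: w * math.exp(-eta * losses[a]) for a, w in weights.items()}
    total = sum(updated.values())
    updated = {a: w / total for a, w in updated.items()}
    # Drift control: blend each weight back toward the uniform prior so a
    # signal whose behavior changes cannot stay dominant forever.
    neutral = 1.0 / len(updated)
    return {a: (1 - drift) * w + drift * neutral for a, w in updated.items()}

w = {"analyst_a": 0.5, "analyst_b": 0.5}
w = update_weights(w, {"analyst_a": 0.64, "analyst_b": 0.12})
# analyst_b (lower loss) gains influence; the weights still sum to 1.
```

Running the same update per category or per regime, rather than globally, is what lets a system learn which mix of evidence works under which market conditions.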

What the metrics do not mean

Brier score is not the same as profit. Win rate is not the same as calibration. And a good short-term stretch does not guarantee that the same analyst mix will stay useful after the environment changes.

  • High win rate can still be shallow: it may come from taking easy favorites rather than from well-calibrated probabilities.
  • Good calibration can still lose money: execution timing and market microstructure still matter.
  • Learning reduces blindness, not risk: it improves accountability, but it cannot remove liquidity risk, timing risk, or category mismatch.
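A toy comparison makes the first point concrete: two forecasters with identical win rates can differ in calibration, and only the Brier score sees it. The numbers below are hypothetical:

```python
def win_rate(preds, outcomes):
    # A call "wins" when the implied side (p > 0.5) matches the outcome.
    return sum((p > 0.5) == bool(o) for p, o in zip(preds, outcomes)) / len(preds)

def mean_brier(preds, outcomes):
    return sum((p - o) ** 2 for p, o in zip(preds, outcomes)) / len(preds)

outcomes = [1] * 9 + [0]        # nine resolved yes, one resolved no
overconfident = [0.99] * 10     # always near-certain
measured = [0.90] * 10          # same side on every market, honest uncertainty

# Both win 9 of 10, but the measured forecaster is better calibrated:
print(win_rate(overconfident, outcomes), win_rate(measured, outcomes))
print(mean_brier(overconfident, outcomes) > mean_brier(measured, outcomes))
```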

Frequently asked questions

Does EdgeVisor learn only from executed trades?

No. It can also use weaker feedback from resolved predictions, not just live execution outcomes.

Does a good win rate automatically mean good calibration?

No. Win rate and Brier score answer different questions. Calibration checks probability quality, not just directional correctness.