10 Comments
RobertJ

This is an excellent distillation of rationalist concepts. Here are a few more data points that come up often in finance:

Robert Rubin teaches how to be a cabinet secretary (using probabilities to govern). Annie Duke teaches how to be a poker player (using probabilities to act). Philip Tetlock teaches how to be a forecaster (using probabilities to calibrate). This essay teaches how to be an empiricist (using probabilities to see).

Rubin and Duke focus on decision-making under uncertainty: how to act when certainty is impossible. Tetlock focuses on measurement and calibration: tracking accuracy over time. But this essay focuses on something more foundational: using predictions as a lens to distinguish beliefs that carve reality at its joints from those that don’t. It’s not merely about forecasting well or deciding wisely. It’s about ensuring beliefs actually constrain expectations about the world.

The section reframing prediction from a temporal concept to an epistemic one was particularly clarifying. Showing that we make predictions about unknown information regardless of whether that information concerns the past, present, or future makes the framework far more general and powerful. Excellent work.

Grant Mulligan

How do I get better at thinking in predictions? I don’t naturally think this way, so I need a way to train and build a new mental model that more explicitly uses predictions.

Julius

I do have an upcoming post on this (we'll see when I actually get it out...). Some of it is deliberate practice, but some of it is just an awareness that beliefs should lead to predictions. For example, if you believe policy X is a really good idea, tell me what changes you predict it will lead to. Sometimes I use prediction markets for this, like https://manifold.markets. I'll make a market like, "Given housing bill 123 passes, in a year, the homeless rate will be lower than it is today" or something like that.

Grant Mulligan

I look forward to reading it!

Michael Keenan

Try the Quantified Intuitions minigames! They have a new Estimation Game every month.

https://www.quantifiedintuitions.org/

Ceba

I've started writing predictions down, with reasons, and reviewing them later. Having the dated physical document prevents dishonesty ("actually, that was certain to happen in retrospect, I could have predicted that"), and lets me see why I went wrong.

My predictions can be about anything. I'm trying to cover lots of different parts of my experience and of the world.
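The logging-and-review practice described above can be scored numerically. A minimal sketch, assuming each entry is a (description, stated probability, eventual outcome) triple: the Brier score averages the squared gap between what you said and what happened, so lower is better, and always guessing 50/50 earns 0.25. The example entries below are hypothetical.

```python
def brier_score(predictions):
    """Mean squared gap between stated probability and outcome (1 = happened, 0 = didn't)."""
    return sum((p - outcome) ** 2 for _, p, outcome in predictions) / len(predictions)

# Hypothetical logged predictions, reviewed after the fact:
log = [
    ("Housing bill 123 passes this session", 0.7, 1),
    ("Homeless rate lower in a year",        0.6, 0),
    ("Eagles win the Super Bowl",            0.3, 1),
]

print(round(brier_score(log), 3))  # prints 0.313
```

Reviewing the per-prediction terms, not just the average, shows where the dated document and your hindsight disagree.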

When, how, and how often do you think predictions should happen in your thought process?

Michael Magoon

I am not sure that I agree that the ability to make successful predictions is the best measure of a theory or an opinion. Scientists derive hypotheses from theories in order to validate their theory, but the test is usually in a controlled environment.

Opinions about human societies can rarely be tested in such a controlled environment, so it is unclear what a successful or unsuccessful prediction really means. If someone can make dozens of non-trivial predictions in a row, then I will seriously pay attention, but one successful prediction may not tell us very much.

For example, I may successfully predict that the Philadelphia Eagles will win the Super Bowl, but being correct does not prove that I know anything about football. Now if I can successfully predict the winner ten years in a row, that proves something.

But bets on the future rarely work that way.

Typically, social scientists make hypotheses about the past, where we have substantial enough data to rule out a lucky guess. To me, that is far more convincing than a willingness to take a bet and win it.

I think a willingness to take a bet is more a sign of overconfidence than of modesty. As Philip Tetlock shows, confidence and correctness are inversely related.

https://techratchet.com/2021/05/26/book-summary-expert-political-judgement-how-good-is-it-by-philip-tetlock/

Julius

I certainly agree that natural experiments on human societies are incredibly messy, and it's hard to extract clear lessons from them. I also agree that we don't learn much from a single correct prediction, but I think we can learn a lot by testing lots of predictions.

This post was only the first in what (I hope) will be a series of 3 or 4 on predictive thinking. The next will talk about Popper and why he was opposed to probabilities the way I'm using them, and will go into this in more detail.

I haven't read that book by Tetlock, but I have read Superforecasting, and in it he talks about how some people really can become much better at making predictions. These aren't the "experts" we put on TV to talk about political events, so maybe it depends on who is considered an expert. There were definitely some people (the foxes) who could learn to predict well over a large number of samples.

Julius

What would you say the downsides are?