My favorite model is wrong
(and it's sort of the point)

I am preparing a short outreach presentation on mathematical models, why they are difficult to build, and how they can fail. One of the points I think is not made explicitly enough is that sometimes, the model being wrong is the point. To illustrate this, I would like to discuss one of the models I rely on fairly often, and to which I pay close attention despite the fact that it has never been right even once.
To transport our vast offspring, we bought a vintage Dodge Caravan, which comes with an indicator called DTE. DTE stands for Distance To Empty (or something to that effect, I neither know nor care about cars), and it is a number that tells you how many kilometers are left before you run out of gas.
It’s a model, and it’s wrong.
Calculating DTE is a simple process: it requires knowing the volume of gas left in the tank, and the fuel efficiency (which, incidentally, I discussed in a recent entry about units). The formula for DTE is
$$\text{DTE} = \frac{\text{fuel left}}{\left(\frac{\text{fuel consumed}}{\text{distance driven}}\right)}$$
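As a quick illustration, here is what this calculation looks like in code; the numbers and the function name are mine, not anything the car actually computes, and I am assuming litres and kilometres throughout.

```python
# A minimal sketch of the DTE calculation, with made-up numbers for illustration.
# Assumed units: litres for fuel, kilometres for distance.

def distance_to_empty(fuel_left: float, fuel_consumed: float, distance_driven: float) -> float:
    """Estimate the remaining range from the fuel left and the observed fuel efficiency."""
    efficiency = fuel_consumed / distance_driven  # litres per km, measured over past driving
    return fuel_left / efficiency

# Example: 3.5 L left, after burning 40 L over 400 km (0.1 L/km), gives 35 km of DTE.
print(distance_to_empty(fuel_left=3.5, fuel_consumed=40.0, distance_driven=400.0))  # 35.0
```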
This could be an interesting opportunity to discuss why this model is going to be biased (all of these quantities have to be measured, errors propagate, and so on and so forth), but this is not what is interesting about this model. What is interesting is that on the rare occasions when I drive and look at how far I can still go, I will sometimes see a low number. Let's say, 35 km.
I can guarantee you that 35 km later, I will not be on the side of the road, out of gas. This is because the model makes a prediction that is only valid as long as the circumstances external to the model do not change - namely, as long as I don't stop to put more gas in the tank. Which I do.
This model is my favorite because it describes a situation that is relatively easy to relate to, in which the model makes a prediction that, precisely because it is communicated to us, will never be realized. The model is right, technically speaking, but because we adopt a mostly data-driven behavior in response to it, we never let the forecast play out.
The model is correct, but comparing empirical data to its predictions is never going to result in agreement, because in response to the model output we act on the circumstances that are external to the model. By now, I'm sure you can guess what family of models I will be discussing next month.
Models are not prophecies. There are parameters in them on which we can act (I can drive more carefully and change my fuel efficiency), and external actions we can take to change the state variables (I can stop for gas, and then the DTE changes). Models are not prophecies, as long as we remember that they exist within a feedback loop: the model informs us about a situation, and based on this information we can respond by modifying the environment in which the model is expressed.
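To make this feedback loop concrete, here is a small toy simulation; every number in it (tank size, efficiency, trip length, refuelling threshold) is invented, and it is only meant to show that when the prediction triggers an action, the predicted outcome never materializes.

```python
# A toy simulation of the feedback loop described above: the model keeps predicting,
# the driver keeps acting on the prediction, and the predicted outcome (an empty tank)
# never happens. All numbers are invented for illustration.

TANK = 55.0         # litres
EFFICIENCY = 0.1    # litres per km
TRIP = 30.0         # km driven between two looks at the dashboard
THRESHOLD = 50.0    # refuel as soon as the model says fewer than 50 km remain

fuel = TANK
ran_out = False
for _ in range(1000):               # a thousand trips
    fuel -= TRIP * EFFICIENCY       # drive
    if fuel <= 0:                   # the forecast came true
        ran_out = True
        break
    dte = fuel / EFFICIENCY         # the model's prediction...
    if dte < THRESHOLD:             # ...which we immediately act upon
        fuel = TANK                 # by filling the tank back up

print("ran out of gas:", ran_out)   # False: the prediction was acted on, never realized
```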
In a sense, we don’t model things to see what will happen - we model things to see how we can prevent what the model predicts from happening. This is especially true when dealing with issues like biodiversity loss, climate change, or this other timely thing I forgot about (something about a virus?), because the whole point of the models is to get a sense for how bad it can be, so that we may act to make it less bad.
A model being wrong, in that a posteriori it disagrees with empirical data, is not an issue. Sometimes it is a feature of the model. Sometimes it is what we should be hoping for.