The relevance of models to *actual* biology or ecology is not always obvious. In
part, this is because models are by design a gross oversimplification of actual
systems. They gloss over mechanisms. They reduce some complex behaviours to a
single parameter. They abstract a lot of processes. And in the end, they let us
say things that are general, to the point of being wrong. And this is precisely
what makes them very useful.

For a predator to consume a prey is the outcome of a very long series of very complex events. The predator needs to balance its own metabolic needs with the cost of hunting, needs to detect the prey, needs to figure out which prey are suitable, and then it needs to catch, kill, consume, and digest it. How should we model this? If there are $X$ prey and $Y$ predators, I would be absolutely happy with $\alpha X$: $\alpha$ is the rate of consumption of one unit of prey for a unit of predator.

It is of course preposterous to assume that this will be enough to describe all
the mechanisms involved. I may be willing to compromise, and make my answer a
little bit more complex: $\alpha X/(1+\alpha h X)$. In this formulation, $h$ is
the average time spent by a predator to *process* a unit of prey. Is this more
realistic? In a way, yes, because any predator is either hunting ($\alpha$) or
consuming ($h$) at any given time.
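To make the two candidate models concrete, here is a minimal sketch. The function names and parameter values are illustrative assumptions, not part of the text; only the two formulas come from above.

```python
# Sketch of the two functional responses discussed above. Symbols follow
# the text: X is prey density, alpha the attack rate, h the handling time.
# The parameter values below are illustrative, not empirical.

def type_i(X, alpha):
    """First model: linear consumption, alpha * X, which grows without bound."""
    return alpha * X

def type_ii(X, alpha, h):
    """Second model: consumption saturates at 1/h as prey become abundant."""
    return alpha * X / (1 + alpha * h * X)

alpha, h = 0.5, 0.2
for X in (1, 10, 100, 1000):
    print(X, type_i(X, alpha), round(type_ii(X, alpha, h), 2))
```

The saturation at $1/h$ is the key behavioural difference: no matter how many prey are available, a predator busy handling one cannot consume faster than that.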

**But is this *better***? Well, it depends.

From a purely statistical point of view, the second model has more *parameters*,
so we may define “better” as “giving enough predictive power to justify spending
the extra parameter”.

From a biological point of view, the second model has more *mechanisms*, so it
should be better. Even though we represent these mechanisms through their
phenomena ($h$ represents the fact that consuming a prey takes time, and not
the series of actions needed to consume the prey), there seems to be more
realism in this model.

From a mathematical analysis point of view, the first model is linear, so its analysis is trivial. It is better because it is easier to handle.

From a simulation point of view, the second model is non-linear, which should reduce the risk of instability. Type II responses (to call the second model by its proper name) are usually stabilizing, so this version should be better.
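As a rough illustration of the simulation point, here is a sketch that couples the Type II response to prey growth and predator mortality. The logistic prey growth, conversion efficiency $e$, mortality $m$, and every numerical value are assumptions made purely for illustration; they are not part of the text.

```python
# Sketch: Euler integration of a predator-prey model built around the
# Type II response from the text. The logistic prey term, the conversion
# efficiency e, the mortality m, and all numbers are illustrative.

def simulate(alpha, h, r=1.0, K=10.0, e=0.5, m=0.4,
             X=1.0, Y=1.0, dt=0.01, steps=5000):
    for _ in range(steps):
        f = alpha * X / (1 + alpha * h * X)      # Type II functional response
        X += dt * (r * X * (1 - X / K) - f * Y)  # prey: logistic growth - predation
        Y += dt * (e * f * Y - m * Y)            # predators: conversion - mortality
        X, Y = max(X, 0.0), max(Y, 0.0)          # densities cannot go negative
    return X, Y

X_end, Y_end = simulate(alpha=0.5, h=0.2)
print(X_end, Y_end)
```

Swapping `f` for the linear `alpha * X` gives the first model's dynamics, so the same scaffold lets us compare the two formulations under identical conditions.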

**So why bother?**

We could keep on dumping parameters into our model until we have reproduced something which is very close to the real thing. But this would not be very useful. Because a model with tons of parameters, precise though it may be, is also unwieldy.

The nice thing about a model is that we can *manipulate* it. In the second
model, we can ask questions like “What happens if the predator handles the prey
very rapidly?”. This means that $h \to 0$, so $1+\alpha h X \to 1$, and the
second model essentially behaves like the first one!
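This limit is easy to check numerically. A tiny sketch (the values of $\alpha$ and $X$ are illustrative, not from the text):

```python
# As h -> 0, the Type II response alpha*X / (1 + alpha*h*X) approaches
# the linear first model, alpha*X. Values of alpha and X are illustrative.
alpha, X = 0.5, 10.0
linear = alpha * X  # the first model's prediction
for h in (1.0, 0.1, 0.01, 0.001):
    type_ii = alpha * X / (1 + alpha * h * X)
    print(h, type_ii)  # climbs toward the linear value as h shrinks
```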

Models are *extremely good* at letting you play with the parameters. We can
arbitrarily shift entire mechanisms on and off, and observe the consequences.
This will almost never be a prediction of what happens in nature. Not even
close. But this can inform us about the relative importance of mechanisms, or
the relative interactions between them.

What allows us to use models is our knowledge of the underlying biology. We can
evaluate the mistakes that we are making when writing, for example, *growth rate
is constant unless it’s not* (also known as $rN$), and we can also evaluate what
can be said about the *output* of the models.
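The kind of mistake involved in $rN$ can be made explicit in a few lines; the values of $r$ and $N_0$ below are illustrative assumptions:

```python
# The "growth rate is constant" model: dN/dt = r*N, whose solution is
# N(t) = N0 * exp(r*t). The mistake we knowingly accept is that nothing
# ever saturates. r and N0 are illustrative values.
import math

r, N0 = 0.5, 10.0
for t in (0, 5, 10, 20):
    print(t, round(N0 * math.exp(r * t), 1))  # grows without bound
```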

And when observations and models don’t match? Then models become *even more*
useful. If we put our best knowledge of the mechanisms in them, but they fail to
reproduce what we see, models become a map of our ignorance. In fact, **models
that don’t match empirical observations are the most useful of all**.

So *this* is why we bother modelling ecological processes. Because models offer
the flexibility to manipulate them, and because when they don’t work, we can
start to understand what gap in our understanding is to blame.