There is a whole sub-genre of the ecological network literature working on elucidating “the structure” of bipartite networks (parasite/host, pollinator/plant, …). I am, of course, guilty of contributing a few papers to this genre. The premise is that, by putting together enough data from different places, we may be able to infer some of the general mechanisms that shape different aspects of the structure.

A little while ago, I gave a talk about the promises and challenges of high performance computing for biodiversity sciences. Because I wanted to go beyond “having more cores means we can run more model replicates”, I started by discussing the availability of data on Canada’s biodiversity, and how we can do data-driven research. Long story short, unless we like birds, we can’t.

Every time I hear about Big Data in ecology, I cringe a little bit. Some of us may be lucky enough to have genuinely big data, but I believe this is the exception rather than the norm. And this is a good thing, because tiny data are extremely exciting: in short, they offer the challenge of isolating a little bit of signal in a lot of noise, and this is a perfect excuse to apply some really fun tools. One of my favorite approaches for really small data is Approximate Bayesian Computation (ABC). Let’s dig in!
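To fix ideas before digging in, here is a minimal sketch of the simplest flavor of ABC, rejection sampling, on a toy problem of my own invention (the binomial example and all names are hypothetical, not from this post): draw a parameter from the prior, simulate data of the same size as the observations, and keep only the draws whose simulated summary statistic matches the observed one.

```python
import random

# Toy rejection-ABC sketch (hypothetical example): estimate the
# probability p behind a tiny dataset of 10 binary observations,
# of which 7 are successes.
random.seed(42)

observed = 7      # successes in the observed data
n = 10            # tiny sample size
tolerance = 0     # accept only exact matches to the summary statistic

accepted = []
for _ in range(50_000):
    p = random.random()  # draw p from a Uniform(0, 1) prior
    # simulate n Bernoulli(p) trials and count the successes
    sim = sum(random.random() < p for _ in range(n))
    if abs(sim - observed) <= tolerance:
        accepted.append(p)  # keep p when the simulation matches the data

# The accepted draws approximate the posterior distribution of p.
posterior_mean = sum(accepted) / len(accepted)
```

With an exact-match tolerance this is equivalent to true Bayesian inference on the summary statistic; the interesting (and harder) cases, which the rest of this post is about, arise when the data are richer and a tolerance greater than zero is needed.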