Let’s say we’ve got some observation based on real data. In our case, we’ll say it’s a measurement of niche overlap between ENMs built from real occurrence points for a pair of species (figure partially adapted (okay, stolen) from a figure by Rich Glor). We have ENMs for two species, and going grid cell by grid cell, we sum up the differences between those ENMs to calculate a summary statistic measuring overlap, in this case D.
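The overlap statistic in question is Schoener's D, which is computed from the two ENMs' suitability surfaces. A minimal sketch of that cell-by-cell calculation (the function name and toy suitability values are mine, for illustration):

```python
# Schoener's D overlap statistic, sketched from the description above:
# normalize each ENM's suitability scores to sum to 1 across grid
# cells, then take 1 minus half the summed absolute differences.
# D ranges from 0 (no overlap) to 1 (identical surfaces).

def schoener_d(suit_x, suit_y):
    """Compute Schoener's D from two equal-length lists of
    per-grid-cell suitability scores."""
    total_x, total_y = sum(suit_x), sum(suit_y)
    px = [s / total_x for s in suit_x]
    py = [s / total_y for s in suit_y]
    return 1 - 0.5 * sum(abs(a - b) for a, b in zip(px, py))

# Identical surfaces give D = 1; completely disjoint ones give D = 0.
print(schoener_d([0.2, 0.8], [0.2, 0.8]))  # -> 1.0
print(schoener_d([1.0, 0.0], [0.0, 1.0]))  # -> 0.0
```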

In the case of good old parametric statistics, we would do that by comparing our empirical measurement to a parametric estimate of the overlap expected between two species (i.e., we would say "if the null hypothesis is true, we would expect an overlap of 0.5 with a standard deviation of 0.05", or something like that). That would be fine if we could accurately make a parametric estimate of the expected distribution of overlaps under that null hypothesis, i.e., if we could specify a mean and variance for expected overlap under that null. How do we do that? Well, unfortunately, in our case we can't. For one thing, we simply can't state that null in a manner that makes it possible for us to put numbers on those expectations. For another, standard parametric statistics mostly require the assumption that the distribution of expected measurements under the null hypothesis meets some criteria, the most frequent being that the distribution is normal. In many cases we don't know whether or not that's true, but in the case of ENM overlaps we know it's probably *not* true most of the time. Overlap metrics are bounded between 0 and 1, and if the null hypothesis generates expectations near either of those extremes, the distribution of expected overlaps is highly unlikely to be even approximately normal. There can also be (and this is based on experience) multiple peaks in those null distributions, and a whole lot of skew and kurtosis as well. So a specification of our null based on a normal distribution would be a poor description of our actual expectations under the null hypothesis, and as a result any statistical test based on parametric stats would be untrustworthy. I have occasionally been asked whether it's okay to do t-tests or other parametric tests on niche overlap statistics, and, for the reasons I've just listed, I feel that the answer has to be a resounding "no".

So what's the alternative? Luckily, it's actually quite easy. It's just a little less familiar to most people than parametric stats are, and requires us to think very precisely about the ideas we're trying to test. In our case, what we need to do is to find some way to estimate the distribution of overlaps expected between a pair of species using this landscape and these sample sizes *if they were effectively drawn from the same distribution of environments*. What would that imply? Well, if each of these sets of points were drawn from the same distribution, we should be able to generate overlap values similar to our empirical measurement by repeating that process. So that's exactly what we do!

We take all of the points for these two species and we throw them in a big pool. Then we randomly pull out points for two species from that pool, keeping the sample sizes consistent with our empirical data. Then we build ENMs for those sets of points and measure the overlap between them. That gives us a single estimate of expected overlaps under the null hypothesis. So now we've got our empirical estimate (red) and one realization of the null hypothesis (blue).
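The pooling-and-redrawing step might look something like this. This is a sketch under my own naming, with made-up (lon, lat) points; in a real analysis each pseudo-species would then get its own ENM and the overlap between them would be measured just as for the empirical data:

```python
import random

# One replicate of the null described above: pool the occurrence
# points of both species, shuffle, and redraw two pseudo-species
# with the original sample sizes preserved.

def one_null_replicate(points_a, points_b, rng=random):
    pooled = list(points_a) + list(points_b)
    rng.shuffle(pooled)
    pseudo_a = pooled[:len(points_a)]   # same n as species A
    pseudo_b = pooled[len(points_a):]   # same n as species B
    return pseudo_a, pseudo_b

# Illustrative occurrence points (lon, lat); not real data.
points_a = [(-97.7, 30.3), (-97.9, 30.5), (-98.1, 30.1)]
points_b = [(-96.5, 29.8), (-96.9, 29.9)]

pa, pb = one_null_replicate(points_a, points_b)
print(len(pa), len(pb))  # -> 3 2
```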

So there you go! We now have a nonparametric test of our hypothesis. All we had to do was (1) figure out precisely what our null hypothesis was, (2) devise a way to generate the expected statistics if that hypothesis were true, (3) generate a bunch of replicate realizations of that null hypothesis to get an expected distribution under that null, and (4) compare our empirical observations to that distribution. Although this approach is certainly less easy than simply plugging your data into Excel and doing a t-test or whatnot, there are many strengths to the Monte Carlo approach. For instance, we can use this approach to test pretty much any hypothesis that we can simulate – as long as we can produce summary statistics from a simulation that are comparable to our empirical data, we can test the probability of observing our empirical data under the set of assumptions that went into that simulated data. It also means we don’t have to make assumptions about the distributions that we’re trying to test – by generating those distributions directly and comparing our empirical results to those distributions, we manage to step around many of the assumptions that can be problematic for parametric statistics.

The chief difficulty in applying this method is in steps 2 and 3 above – we have to be able to explicitly state our null hypothesis, and we have to be able to generate the distribution of expected measurements under that null. Honestly, though, I think this is actually one of the greatest strengths of Monte Carlo methods: while this process may be more intensive than sticking our data into some plug-and-chug stats package, it requires us to think very carefully about what precisely our null hypothesis means, and what it means to reject it. It requires more work, but more importantly it requires a more thorough understanding of our own data and hypotheses.

## Author

Dan Warren is a postdoctoral researcher in the Parmesan lab at UT Austin.