As part of various research projects, I have occasionally developed methods for testing hypotheses about ecological and evolutionary phenomena.  A point of confusion sometimes arises for people using these tests when they have to compare their empirical observations to a null distribution: it’s not something they’ve done so explicitly before, and they’re not quite sure how to do it.  In this post I’m going to try to explain in the simplest possible terms how hypothesis testing, and in particular nonparametric tests based on Monte Carlo methods, work.

Let’s say we’ve got some observation based on real data.  In our case, we’ll say it’s a measurement of niche overlap between ENMs built from real occurrence points for a pair of species (figure partially adapted (okay, stolen) from a figure by Rich Glor).  We have ENMs for two species, and going grid cell by grid cell, we sum up the differences in suitability between those ENMs to calculate a summary statistic measuring overlap, in this case D.
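To make that concrete, here’s a minimal sketch in Python of how D is typically calculated from two suitability grids.  This code isn’t from the original post; the function name and the assumption that both ENMs are already available as equal-shaped numpy arrays are mine:

```python
import numpy as np

def schoener_d(suit_1, suit_2):
    """Schoener's D between two equal-shaped suitability grids.

    Each grid is normalized so its cells sum to 1, and half of the summed
    absolute cell-by-cell difference is subtracted from 1, so D ranges from
    0 (no overlap) to 1 (identical suitability surfaces).
    """
    p1 = np.asarray(suit_1, dtype=float).ravel()
    p2 = np.asarray(suit_2, dtype=float).ravel()
    p1 = p1 / p1.sum()
    p2 = p2 / p2.sum()
    return 1.0 - 0.5 * np.abs(p1 - p2).sum()
```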
Due to some evolutionary or ecological question we’re trying to answer, we’d like to know whether this overlap is what we’d expect under some null hypothesis.  For the sake of example, we’ll talk about the “niche identity” test of Warren et al. 2008.  In this case, we are asking whether the occurrence points from two species are effectively drawn from the same distribution of environmental variables.  If that is the case, then whatever overlap we see between our real species should be statistically indistinguishable from the overlap we would see under that null hypothesis.  But how do we test that idea quantitatively?

In the case of good old parametric statistics, we would do that by comparing our empirical measurement to a parametric estimate of the overlap expected between two species (i.e., we would say "if the null hypothesis is true, we would expect an overlap of 0.5 with a standard deviation of 0.05", or something like that).  That would be fine if we could accurately specify the expected distribution of overlaps under that null hypothesis, i.e., if we could put a mean and variance on the expected overlap.  How do we do that?  Well, unfortunately, in our case we can’t.  For one thing, we simply can’t state that null in a manner that makes it possible for us to put numbers on those expectations.  For another, standard parametric statistics mostly require the assumption that the distribution of expected measurements under the null hypothesis meets some criteria, the most frequent being that the distribution is normal.  In many cases we don’t know whether or not that’s true, but in the case of ENM overlaps we know it’s probably not true most of the time.  Overlap metrics are bounded between 0 and 1, and if the null hypothesis generates expectations near one of those extremes, the distribution of expected overlaps is highly unlikely to be even approximately normal.  There can also be (and this is based on experience) multiple peaks in those null distributions, and a whole lot of skew and kurtosis as well.  So a specification of our null based on a normal distribution would be a poor description of our actual expectations under the null hypothesis, and as a result any statistical test based on parametric stats would be untrustworthy.  I have occasionally been asked whether it’s okay to do t-tests or other parametric tests on niche overlap statistics, and, for the reasons I’ve just listed, I feel that the answer has to be a resounding “no”.

So what’s the alternative?  Luckily, it’s actually quite easy.  It’s just a little less familiar to most people than parametric stats are, and requires us to think very precisely about the ideas we’re trying to test.  In our case, what we need to do is find some way to estimate the distribution of overlaps expected between a pair of species on this landscape, with these sample sizes, if their occurrence points were effectively drawn from the same distribution of environments.  What would that imply?  Well, if both sets of points really were drawn from the same distribution, then drawing new sets of points from that shared distribution and measuring their overlap, over and over, should generate overlap values similar to our empirical measurement.  So that’s exactly what we do!

We take all of the points for these two species and we throw them into a big pool.  Then we randomly pull out points for two species from that pool, keeping the sample sizes consistent with our empirical data.  Then we build ENMs for those sets of points and measure the overlap between them.  That gives us a single estimate of expected overlap under the null hypothesis.  So now we’ve got our empirical estimate (red) and one realization of the null hypothesis (blue).
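In code, one pooled-and-resampled draw from that null might look something like the sketch below.  Treat it as an outline rather than a recipe: `build_enm` is a hypothetical stand-in for whatever model-fitting step you actually use (Maxent, a GLM, etc.), assumed to take a set of occurrence points and return a suitability grid that can be fed to the `schoener_d` sketch above.

```python
import numpy as np

def identity_null_replicate(occ_1, occ_2, build_enm, rng=None):
    """One draw from the niche-identity null distribution.

    Pools both species' occurrence points, randomly reassigns them to two
    pseudo-samples while keeping the original sample sizes, rebuilds the two
    ENMs, and returns the overlap between them.
    """
    rng = np.random.default_rng() if rng is None else rng
    pooled = np.concatenate([occ_1, occ_2])      # all points in one big pool
    shuffled = rng.permutation(pooled)           # shuffle the pooled points
    pseudo_1 = shuffled[:len(occ_1)]             # same n as species 1
    pseudo_2 = shuffled[len(occ_1):]             # same n as species 2
    return schoener_d(build_enm(pseudo_1), build_enm(pseudo_2))
```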

All right, so based on that one draw from the null distribution, it looks like our empirical overlap is a lot lower than you’d expect.  But how much confidence can we have in that conclusion based on one single draw from the null distribution?  Not very much.  Let’s do it a bunch more times and make a histogram:
All right, now we see that, in 100 draws from that null distribution, we never once drew an overlap value that was as low as the actual value we get from our empirical data.  This is pretty strong evidence that, whatever process generated our empirical data, it doesn’t look much like the process that generated that null distribution, and based on this evidence we can statistically reject that null hypothesis.  But how do we put a number on that?  Easy!  All we need to do is figure out what percentile of that distribution corresponds to our empirical measurement.  In this case our empirical value is lower than the lowest number in our null distribution.  That being the case, we can’t specify exactly what the probability of getting our empirical result under the null is, only that it’s smaller than the smallest probability our simulation can resolve.  Since we did 100 iterations of that null hypothesis, the resolution of our null distribution is 1/100 = .01.  Given our resolution, that means p is between 0 and .01 or, as we normally phrase it, p < .01.  If we’d done 500 simulation runs and our empirical value was still lower than our lowest value, it would be p < 1/500, or p < .002.  If we’d done 500 runs and found that our empirical value was between the lowest and second lowest values, we would know that .002 < p < .004, although typically we just report these things as p < .004.  Basically the placement of our empirical value in the distribution of expected values from our null hypothesis is an estimate of the probability of getting that value if that hypothesis were true.  This is exactly how hypothesis testing works in parametric statistics, the only difference being that in our case we generated the null distribution from simulations rather than specifying it mathematically.
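If you want to put that logic in code, one common convention (my choice here, not something spelled out above) is to count the null draws at least as extreme as the empirical value and add one to both the numerator and the denominator, which keeps p above zero and matches the “resolution is 1/N” reasoning:

```python
import numpy as np

def lower_tail_p(empirical, null_overlaps):
    """Monte Carlo p-value for a lower-tail test.

    Counts the null draws that are as low as or lower than the empirical
    overlap; the +1 in numerator and denominator keeps p above zero and
    reflects the fact that N replicates can only resolve p down to ~1/N.
    """
    null_overlaps = np.asarray(null_overlaps, dtype=float)
    return (np.sum(null_overlaps <= empirical) + 1) / (null_overlaps.size + 1)
```

With 100 null draws and an empirical value below all of them, this gives 1/101 ≈ 0.0099, consistent with reporting p < .01.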

So there you go!  We now have a nonparametric test of our hypothesis.  All we had to do was (1) figure out precisely what our null hypothesis was, (2) devise a way to generate the expected statistics if that hypothesis were true, (3) generate a bunch of replicate realizations of that null hypothesis to get an expected distribution under that null, and (4) compare our empirical observations to that distribution.  Although this approach is certainly less convenient than simply plugging your data into Excel and doing a t-test or whatnot, the Monte Carlo approach has many strengths.  For instance, we can use it to test pretty much any hypothesis that we can simulate – as long as we can produce summary statistics from a simulation that are comparable to our empirical data, we can estimate the probability of observing our empirical data under the set of assumptions that went into that simulation.  It also means we don’t have to make assumptions about the shape of the distributions we’re comparing against – by generating those distributions directly and comparing our empirical results to them, we step around many of the assumptions that can be problematic for parametric statistics.
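Those four steps translate almost directly into a generic test harness.  The sketch below (again, my own illustration, with hypothetical names) takes any user-supplied function that simulates one realization of the null and returns the corresponding summary statistic:

```python
import numpy as np

def monte_carlo_test(empirical, simulate_one, n_reps=100, rng=None):
    """Generic wrapper around steps 2-4: build a null distribution by
    repeated simulation, then locate the empirical value in it.

    simulate_one(rng) must return one summary statistic generated under
    the null hypothesis.  Returns the null distribution and a lower-tail
    p-value.
    """
    rng = np.random.default_rng() if rng is None else rng
    null = np.array([simulate_one(rng) for _ in range(n_reps)])
    p = (np.sum(null <= empirical) + 1) / (n_reps + 1)
    return null, p

# For the identity test sketched earlier (all names hypothetical):
# null, p = monte_carlo_test(
#     empirical_d,
#     lambda rng: identity_null_replicate(occ_1, occ_2, build_enm, rng),
#     n_reps=100,
# )
```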

The chief difficulty in applying this method is in steps 2 and 3 above – we have to be able to explicitly state our null hypothesis, and we have to be able to generate the distribution of expected measurements under that null.  Honestly, though, I think this is actually one of the greatest strengths of Monte Carlo methods: while this process may be more intensive than sticking our data into some plug-and-chug stats package, it requires us to think very carefully about what precisely our null hypothesis means, and what it means to reject it.  It requires more work, but more importantly it requires a more thorough understanding of our own data and hypotheses.

Author

Dan Warren is a postdoctoral researcher in the Parmesan lab at UT Austin.

 

