Did they, or didn’t they?

Earlier this year, Peter Edmonds showed me a press release that the Chandra folks were, at the time, considering putting out, describing the possible identification of a Type Ia supernova progenitor. What appeared to be an accreting white dwarf binary system could be discerned in 4-year-old observations, coincident with the location of a supernova that went off in November 2007 (SN2007on). An amazing discovery, but there is a hitch.

And it is a statistical hitch, one that involves two otherwise highly reliable and oft-used methods giving contradictory answers at nearly the same significance level! Does this mean that the chances are actually 50-50? Really, we need a bona fide statistician to take a look and point out the errors of our ways.

The first time around, Voss & Nelemans (arXiv:0802.2082) looked at how many X-ray sources there were around the candidate progenitor of SN2007on (they also looked at 4 more galaxies that hosted Type Ia SNe and had X-ray data taken prior to the event, but didn’t find any other candidates), and estimated the probability of a chance coincidence with the optical position. When you expect 2.2 X-ray sources/arcmin² near the optical source, the probability of finding one within 1.3 arcsec by chance is tiny, and in fact is around 0.3%. This result has since been reported in Nature.
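That 0.3% falls out of a one-line Poisson calculation. Here is a back-of-the-envelope sketch using only the numbers quoted above (the paper’s actual estimate may account for more detail, such as a varying source density):

```python
import numpy as np

# Back-of-the-envelope chance-coincidence estimate: given a surface density
# of unrelated X-ray sources, how likely is it that at least one lands
# within r of the optical position purely by chance?
density = 2.2              # X-ray sources per square arcminute (quoted above)
r = 1.3 / 60.0             # matching radius: 1.3 arcsec, in arcminutes

expected = density * np.pi * r**2      # expected number of sources in the circle
p_chance = 1.0 - np.exp(-expected)     # Poisson probability of at least one

print(f"P(chance coincidence within 1.3 arcsec) ~ {p_chance:.2%}")  # ~0.32%
```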

However, Roelofs et al. (arXiv:0802.2097) went about getting better optical positions and doing more careful bore-sighting, and as a result they measured the X-ray position accurately and also carried out Monte Carlo simulations to estimate the error on the measured location. They concluded that the actual separation, 1.18±0.27 arcsec, is too large, given the measurement error, for the X-ray source to coincide with the optical position. The probability of finding offsets in the observed range, if the two locations were truly the same, is ~1% [see Tom's clarifying comment below].
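To see how a number like 1% can arise from positional errors alone, here is a minimal Monte Carlo sketch of the “same true position” test. The per-axis error below is an assumed, illustrative value, not the one Roelofs et al. derived from their simulations, and the sketch computes a simple tail probability (offsets at least as large as observed); as the comments below discuss, the paper’s 1% may be defined over a narrower range of offsets:

```python
import numpy as np

# Monte Carlo sketch: if the X-ray source and the optical position truly
# coincided, how often would measurement error alone produce an offset as
# large as the one observed?
rng = np.random.default_rng(42)

sigma = 0.39   # ASSUMED combined 1-sigma positional error per axis, arcsec
d_obs = 1.18   # observed X-ray/optical offset, arcsec

# Draw measured offsets under the null hypothesis of a common true position
dx, dy = rng.normal(0.0, sigma, size=(2, 1_000_000))
offsets = np.hypot(dx, dy)

p_tail = np.mean(offsets >= d_obs)
print(f"P(offset >= {d_obs} arcsec | same position) ~ {p_tail:.2%}")  # ~1%
```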

Well now, ain’t that a nice pickle?

To recap: there are so few X-ray sources in the vicinity of the supernova that anything that close to its optical position is very unlikely to be a chance interloper, BUT, given the measured error in its position, the X-ray source is not copacetic with the optical position. So the question for statisticians now: which argument do you believe? Or is there a way to reconcile these two calculations?

Oh, and just to complicate matters, the X-ray source that was present 4 years ago had disappeared when it was looked for again in December, as one would expect if it were indeed the progenitor. On the other hand, a lot of things can happen in 4 years, even with astronomical sources, so the disappearance doesn’t really confirm a physical link.

5 Comments
  1. hlee:

    Not a bona fide and knowledgeable statistician, but one thing I noticed is that two different hypotheses are being used to test the same astrophysical discovery. One is “the probability of finding one within 1.3 arcsec is tiny, and in fact is around 0.3%” and the other is “the probability that the two locations are the same is ~1%”. The first one focuses on a radius around the given location and the latter on the coincidence of two events. A brief glance at the post reminds me of Buffon’s needle (some paradox from stochastic geometry), although I’m not sure if there’s an analogy between the discovery of the supernova and the coverage probability.

    05-20-2008, 5:25 pm
  2. TomLoredo:

    Hyunsook is right that the two probabilities, as you quoted them, are probabilities of different things, and thus the answers to different questions. But I wonder if the quote is accurate (i.e., is it the authors’ inaccurate description of what they calculated, or perhaps a misquote here)? The last one—”the probability that the two locations are the same is ~1%”—is a Bayesian statement. Did they really do a Bayesian calculation? Or did they calculate a significance level of some kind and just incorrectly describe it with Bayesian language? Well, I’ll have to go look at the papers.

    I couldn’t resist commenting on it, however, because I came up with a Bayesian approach for assessing directional (and more generally, spatio-temporal) coincidences quite a few years ago (inspired by a GRB problem), and I’ll be using it as a pedagogical example at the CASt summer school in just a few weeks. The exercise compares the behavior of these two quantities (a p-value for a hypothesis test, and the posterior odds for coincidence vs. no coincidence). I’m also waiting (on pins and needles—news should be imminent) to see if an NSF proposal that, in part, seeks to develop MCMC-flavored algorithms for implementing Bayesian coincidence assessment with large data sets will get funded. We’ll see….

    Anyway, one of the lessons of the toy computation for CASt is that a p-value can reject the null hypothesis of no true association (i.e., conclude there is an association) where the Bayesian calculation favors the null. The reason is that some data may be rather improbable under the null (thus leading to rejection in a significance test), yet similarly improbable under the alternative (here there is a definite alternative: association); a Bayes factor can thus say that data with a small p-value nevertheless do not significantly favor the alternative. An explicit (and often messy) power calculation might spare the significance-test fans embarrassment, but no one does them. The Bayes factor nicely puts all you need into a single quantity, with the usual “Occam factor” machinery coming into play to help things out.
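    To make the lesson concrete, here is a minimal numerical sketch of such a comparison for a 2-D positional coincidence. Every number below is an illustrative stand-in (the source density, offset, and per-axis error only loosely echo the post), and the likelihoods are deliberately simplified:

    ```python
    import numpy as np

    # Toy 2-D coincidence assessment: a p-value can reject "no association"
    # while the Bayes factor still leans toward it. Illustrative numbers only.
    density = 2.2 / 3600.0   # unrelated sources per square arcsec (assumed)
    d = 1.18                 # observed offset, arcsec (assumed)
    sigma = 0.27             # ASSUMED 1-sigma positional error per axis, arcsec

    # Significance test of the null "no true association": chance that an
    # unrelated source lands within d of the optical position.
    p_value = 1.0 - np.exp(-density * np.pi * d**2)

    # Bayes factor, association vs. no association: likelihood of the observed
    # offset under a common true position (2-D Gaussian measurement error)
    # relative to its likelihood under a uniform field of unrelated sources.
    like_assoc = np.exp(-d**2 / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    like_unrel = density
    bayes_factor = like_assoc / like_unrel

    print(f"p-value under 'no association' ~ {p_value:.4f}")              # ~0.003
    print(f"Bayes factor (association : unrelated) ~ {bayes_factor:.2f}")  # ~0.25
    ```

    With these made-up inputs, the p-value rejects “no association” at the ~0.3% level, yet the Bayes factor comes out below 1: the observed offset is improbable under both hypotheses, but even more so under association.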

    05-21-2008, 11:36 pm
  3. vlk:

    Tom, yes, the interesting part is indeed that the two probabilities are different things — different numbers, differently arrived at, answering different questions. But they are the products of commonly used techniques, and use essentially the same data. It seems to me that the first one is a significance test (the probability that an unrelated X-ray source can show up nearby by chance), and the second one is a power calculation (the probability that a true association will be flagged as false for the observed separation). I am unsure how to interpret the combination though. As you say, perhaps a full Bayesian calculation is necessary to make sense of it.

    The quote above was a paraphrase, btw. The exact quote is “Extensive simulations of the Chandra data show that the probability of finding an offset of this magnitude is ~1%, equal to the (trial-corrected) probability of a chance alignment with any X-ray source in the field.”

    05-22-2008, 1:49 am
  4. TomLoredo:

    Vinay, thanks for the exact quote. I think what they meant to say is, “the probability of finding an offset of this magnitude or larger…” It’s a shame that got by the referees (this kind of detail often does). It does look like a power calculation, in the sense of being based on the alternative of genuine association. Though I know you know it, for visitors I think it’s worth emphasizing that this is not at all the same thing as saying, “the probability that the two locations are the same is ~1%.” The latter is a Bayesian statement (i.e., it reports a probability for a hypothesis about the true locations, instead of reporting the fraction of time with which a procedure would reject a hypothesis). In general there is no simple relationship between such probabilities, though in special cases they may be related.

    05-22-2008, 1:07 pm
  5. vlk:

    Thanks, Tom. I’ve edited the main post a bit. The language of probability theory is very intricate, and one “summarizes” at their peril! I’m glad you are keeping us honest.

    I have to check the paper carefully, but I think that the 1% number in Roelofs et al. does not refer to offsets of “this magnitude or larger”, but rather to a small range of offsets defined by the error on the measured offset (see their Fig. 3).

    05-22-2008, 7:24 pm