Archive for the ‘Quotes’ Category.

[Quote] Abstract – There are none.

From Guaranteed Margins for LQG Regulators, J.C. Doyle (1978), IEEE Transactions on Automatic Control, 23(4), pp. 756-757.

The abstract consists of a single sentence, “There are none,” and the first paragraph of this short paper explains the uniqueness of the abstract: Continue reading ‘[Quote] Abstract – There are none.’ »

[Quote] The “Bible”

Although it is a great read, Numerical Recipes[1] is no more suitable as a statistical bible than Ptolemy is for astronomy.

Continue reading ‘[Quote] The “Bible”’ »

  1. W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery, Numerical Recipes, 2nd ed., 1992

[Quote] Bootstrap and MCMC

The Bootstrap and Modern Statistics, Brad Efron (2000), JASA, Vol. 95(452), pp. 1293-1296.

If the bootstrap is an automatic processor for frequentist inference, then MCMC is its Bayesian counterpart.

Continue reading ‘[Quote] Bootstrap and MCMC’ »

Provocative Corollary to Andrew Gelman’s Folk Theorem

This is a long comment on the October 3, 2007 Quote of the Week by Andrew Gelman. His “folk theorem” ascribes computational difficulties to problems with one’s model.

My thoughts:
The word “model” has two meanings here. A physicist or astronomer would automatically read it as pertaining to a model of the source, the physics, or the sky. It has taken me a long time to be able to see it a little more from a statistics perspective, where it pertains to the full statistical model.

For example, in low-count high-energy physics, there had been a great deal of heated discussion over how to handle “negative confidence intervals”. (See for example PhyStat2003). That is, when using the statistical tools traditional to that community, one had such a large number of trials and such a low expected count rate that a significant number of “confidence intervals” for source intensity were wholly below zero. Further, there were more of these than expected (based on the assumptions in those traditional statistical tools). Statisticians such as David van Dyk pointed out that this was a sign of “model mis-match”. But (in my view) this was not understood at first — it was taken as a description of physics model mismatch. Of course what he (and others) meant was statistical model mismatch. That is, somewhere along the data-processing path, some Gauss-Normal assumptions had been made that were inaccurate for (essentially) low-count Poisson. If one took that into account, the whole “negative confidence interval” problem went away. In recent history, there has been a great deal of coordinated work to correct this and do all intervals properly.
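To make the statistical-model mismatch concrete, here is a minimal sketch (my own toy numbers, not from the PhyStat discussions) contrasting the Gauss-Normal shortcut, which cheerfully reports source intensities below zero at low counts, with an interval built on the Poisson likelihood itself:

import numpy as np
from scipy import stats

# Hypothetical low-count observation: n counts with known background b.
n, b = 3, 4.2

# Gauss-Normal shortcut: estimate the source intensity as n - b with
# sigma ~ sqrt(n); at low counts the 95% interval dips below zero.
s_hat = n - b
sigma = np.sqrt(max(n, 1))
print(f"Gaussian 95% interval: [{s_hat - 1.96*sigma:.2f}, {s_hat + 1.96*sigma:.2f}]")

# Poisson-based alternative: a credible interval from the likelihood
# p(n | s + b) with a flat prior on the non-negative intensity s.
s_grid = np.linspace(0, 20, 2001)
ds = s_grid[1] - s_grid[0]
post = stats.poisson.pmf(n, s_grid + b)
post /= post.sum() * ds                  # normalize on the grid
cdf = np.cumsum(post) * ds
lo = s_grid[np.searchsorted(cdf, 0.025)]
hi = s_grid[np.searchsorted(cdf, 0.975)]
print(f"Poisson-based 95% interval: [{lo:.2f}, {hi:.2f}]")

The first interval extends well below zero; the second cannot, because the mismatched Gaussian assumption never enters.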

This brings me to my second point. I want to raise a provocative corollary to Gelman’s folk theorem:

When the “error bars” or “uncertainties” are very hard to calculate, it is usually because of a problem with the model, statistical or otherwise.

One can see this (I claim) in any method that yields a nice “best estimate” or a nice “visualization”, but for which there is no clear procedure (or only an UNUSUALLY long one, based on some kind of semi-parametric bootstrapping) for uncertainty estimates. This can be (though not always!) a particular pitfall of “ad hoc” methods, which may at first appear very speedy and/or visually compelling, but then may lack a statistics/probability structure through which to synthesize the significance of the results in an efficient way.
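This is, in fact, what the generic fallback looks like when an estimator carries no probability structure of its own: a nonparametric bootstrap. A toy sketch (the trimmed-mean “estimator” and the data are made up purely for illustration) that does produce error bars, but only at the price of thousands of refits:

import numpy as np

rng = np.random.default_rng(42)

def ad_hoc_estimate(x):
    # Stand-in for any "best estimate" whose sampling distribution
    # has no closed form (here, a 10%-trimmed mean).
    x = np.sort(x)
    k = len(x) // 10
    return x[k:len(x) - k].mean()

data = rng.standard_cauchy(200)   # heavy-tailed toy data

# Nonparametric bootstrap: refit on resampled data, many times over.
boot = np.array([ad_hoc_estimate(rng.choice(data, size=len(data), replace=True))
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimate = {ad_hoc_estimate(data):.3f}, "
      f"95% bootstrap interval = [{lo:.3f}, {hi:.3f}]")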

Quote of the Week, October 3, 2007

From the ever-quotable Andrew Gelman comes this gem, which he calls a Folk Theorem:

When things are hard to compute, often the model doesn’t fit the data. Difficulties in computation are therefore often model problems… [When the computation isn't working] we have the duty and freedom to think about models.

Continue reading ‘Quote of the Week, October 3, 2007’ »

ab posteriori ad priori

A great advantage of Bayesian analysis, they say, is the ability to propagate the posterior. That is, if we derive a posterior probability distribution function for a parameter using one dataset, we can apply that as the prior when a new dataset comes along, and thereby improve our estimates of the parameter and shrink the error bars.

But how exactly does it work? I asked this of Tom Loredo in the context of some strange behavior of sequential applications of BEHR that Ian Evans had noticed: specifically, sequential applications of BEHR, using as the prior the posterior from the preceding dataset, seemed to depend on the order in which the datasets were considered. (That order dependence, as it happens, arose from approximating the posterior distribution before passing it on as the prior distribution to the next stage, a feature that has now been corrected.) This is what he said:

Yes, this is a simple theorem. Suppose you have two data sets, D1 and D2, hypotheses H, and background info (model, etc.) I. Considering D2 to be the new piece of info, Bayes’s theorem is:

[1]

p(H|D1,D2) = p(H|D1) p(D2|H, D1)            ||  I
             -------------------
                    p(D2|D1)

where the “|| I” on the right is the “Skilling conditional” indicating that all the probabilities share an “I” on the right of the conditioning solidus (in fact, they also share a D1).

We can instead consider D1 to be the new piece of info; BT then reads:

[2]

p(H|D1,D2) = p(H|D2) p(D1|H, D2)            ||  I
             -------------------
                    p(D1|D2)

Now go back to [1], and use BT on the p(H|D1) factor:

p(H|D1,D2) = p(H) p(D1|H) p(D2|H, D1)            ||  I
             ------------------------
                    p(D1) p(D2|D1)

           = p(H, D1, D2)
             ------------      (by the product rule)
                p(D1,D2)

Do the same to [2]: use BT on the p(H|D2) factor:

p(H|D1,D2) = p(H) p(D2|H) p(D1|H, D2)            ||  I
             ------------------------
                    p(D2) p(D1|D2)

           = p(H, D1, D2)
             ------------      (by the product rule)
                p(D1,D2)

So the results from the two orderings are the same. In fact, in the Cox-Jaynes approach, the “axioms” of probability aren’t axioms, but get derived from desiderata that guarantee this kind of internal consistency of one’s calculations. So this is a very fundamental symmetry.

Note that you have to worry about possible dependence between the data (i.e., p(D2|H, D1) appears in [1], not just p(D2|H)). In practice, separate data are often independent (conditional on H), so p(D2|H, D1) = p(D2|H) (i.e., if you consider H as specified, then D1 tells you nothing about D2 that you don’t already know from H). This is the case, e.g., for basic iid normal data, or Poisson counts. But even in these cases dependences might arise, e.g., if there are nuisance parameters that are common for the two data sets (if you try to combine the info by multiplying *marginalized* posteriors, you may get into trouble; you may need to marginalize *after* multiplying if nuisance parameters are shared, or account for dependence some other way).

What if you had 3, 4, …, N observations? Does the order in which you apply BT affect the results?

No, as long as you use BT correctly and don’t ignore any dependences that might arise.

If not, is there a prescription for what is the Right Thing [TM] to do?

Always obey the laws of probability theory! 9-)
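Loredo’s symmetry is also easy to check numerically. Here is a minimal sketch (my own illustration, not BEHR itself) using a conjugate Gamma prior with Poisson counts, so that the exact posterior can be carried forward between stages; the counts and prior parameters are hypothetical:

# Hypothetical counts from two observations of the same steady source.
D1, D2 = 7, 12
alpha0, beta0 = 1.0, 0.1    # Gamma(alpha, beta) prior on the Poisson rate

def update(alpha, beta, n, exposure=1.0):
    # Exact conjugate update: Gamma prior x Poisson counts -> Gamma posterior.
    return alpha + n, beta + exposure

a12, b12 = update(*update(alpha0, beta0, D1), D2)   # D1 first, then D2
a21, b21 = update(*update(alpha0, beta0, D2), D1)   # D2 first, then D1

print((a12, b12) == (a21, b21))             # True: same posterior either way
print(f"posterior mean rate = {a12 / b12:.3f}")

Had the intermediate posterior been approximated (say, moment-matched to some convenient family) before being handed on as the prior, the two orderings would in general disagree, which is exactly the BEHR behavior Ian Evans noticed.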

Betraying your heritage

[arXiv:0709.3093v1] Short Timescale Coronal Variability in Capella (Kashyap & Posson-Brown)

We recently submitted that paper to AJ, and rather ironically, I did the analysis during the same time frame in which this discussion, about how astronomers cannot rely on repeating observations, was going on. Ironic, because the result reported there hinges on the existence of a small but persistent signal that is found in repeated observations of the same source. Doubly ironic, in fact, in that just as we were going back and forth about cultural differences, I seem to have gone and done something completely contrary to my heritage! Continue reading ‘Betraying your heritage’ »

PHYSTAT-LHC 2007

The idea came up that the PHYSTAT conferences might hold some useful materials related to the Chandra calibration problem, to which CHASC is devoting an effort. Thanks to the advanced technologies recently adopted by physicists (I have not seen any statistics conference offer what I obtained from PHYSTAT-LHC 2007), I had a chance to go through some video files from PHYSTAT-LHC 2007: recorded lectures and lecture notes, available from the PHYSTAT-LHC 2007 Program.
Continue reading ‘PHYSTAT-LHC 2007’ »

Quote of the Week, Aug 31, 2007

Once again, from the middle of a recent (Aug 30-31, 2007) argument within CHASC, on why physicists and astronomers view “3 sigma” results with suspicion and expect (roughly) >5 sigma, while statisticians and biologists typically assume 95% is OK:

David van Dyk (representing statistics culture):

Can’t you look at it again? Collect more data?

Vinay Kashyap (representing astronomy and physics culture):

…I can confidently answer this question: no, alas, we usually cannot look at it again!!

Ah. Hmm. To rephrase [the question]: if you have a “7.5 sigma” feature, with a day-long [imaging Markov Chain Monte Carlo] run you can only show that it is “>3 sigma”, but is it possible, even with that day-long run, to tell that the feature is really at 7.5 sigma — is that the question? Well, that would be nice, but I don’t understand how observing again will help?

David van Dyk :

No one believes any realistic test is properly calibrated that far into the tail. Using 5-sigma is really just a high bar, but the precise calibration will never be done. (This is a reason not to sweat the computation TOO much.)

Most other scientific areas set the bar lower (2 or 3 sigma) BUT don’t really believe the results unless they are replicated.

My assertion is that I find replicated results more convincing than extreme p-values. And the controversial part: Astronomers should aim for replication rather than worry about 5-sigma.
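For scale, here is what those thresholds mean as one-sided Gaussian tail probabilities (standard survival-function values, nothing specific to any one experiment):

from scipy import stats

# One-sided Gaussian tail probability for common detection thresholds.
for n_sigma in (2, 3, 5):
    p = stats.norm.sf(n_sigma)    # P(Z > n_sigma)
    print(f"{n_sigma} sigma: p = {p:.2e} (about 1 in {1 / p:,.0f})")

Calibrating a test at the 5-sigma level, about 1 in 3.5 million, is precisely what van Dyk doubts will ever be done; the roughly 1-in-44 level of 2 sigma is easy to calibrate but, as he says, not believed without replication.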

[ArXiv] Numerical CMD analysis, Aug. 28th, 2007

From arxiv/astro-ph:0708.3758v1
Numerical Color-Magnitude Diagram Analysis of SDSS Data and Application to the New Milky Way Satellites by J. T. A. de Jong et al.

The authors applied MATCH (Dolphin, 2002[1]; note that the year is corrected) to M13, M15, M92, NGC2419, NGC6229, and Pal14 (well-known globular clusters), and to BooI, BooII, CVnI, CVnII, Com, Her, LeoIV, LeoT, Segu1, UMaI, UMaII, and Wil1 (newly discovered Milky Way satellites) from the Sloan Digital Sky Survey (SDSS), fitting the color-magnitude diagrams (CMDs) of these stellar systems to find the properties of these satellites.
Continue reading ‘[ArXiv] Numerical CMD analysis, Aug. 28th, 2007’ »

  1. Numerical methods of star formation history measurement and applications to seven dwarf spheroidals, Dolphin (2002), MNRAS, 332, p. 91

Quote of the Week, Aug 23, 2007

These are from two lively CHASC discussions on classification, or cluster analysis. The first was on Feb 7, 2006; the continuation on Dec 12, 2006, at the Harvard Statistics Department, as part of Stat 310.

David van Dyk:

Don’t demand too much of the classes. You’re not going to say that all events can be well-classified…. It’s more descriptive. It gives you places to look. Then you look at your classes.

Xiao Li Meng:

Then you’re saying the cluster analysis is more like -

David van Dyk:

It’s really like you have a proposal for classes. You then investigate the physical processes more thoroughly. You may have classes that divide it [up]

……

David van Dyk:

But it can make a difference, where you see the clusters, depending on your [parameter] transformation. You can squish the white spaces, and stretch out the crowded spaces; so it can change where you think the clusters are.

Aneta Siemiginowska:

But that is interesting.

Andreas Zezas:

Yes, that is very interesting.

These are particularly in honor of Hyunsook Lee’s recent posting of Chattopadhyay et al.’s new work about possible intrinsic classes of gamma-ray bursts. Are they really physical classes, or do they only appear to be distinct clusters because we view them through the “squished” lens (parameter spaces) of our imperfect instruments?
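Van Dyk’s squish-and-stretch effect is easy to reproduce: cluster the same points before and after a monotone rescaling of one axis, and watch the assignments move. A minimal sketch with made-up “fluxes” and ordinary k-means (any clustering method shows the same sensitivity):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical fluxes spanning decades: crowded near zero, sparse above.
x = rng.lognormal(mean=0.0, sigma=1.5, size=300)
y = rng.normal(size=300)
X_linear = np.column_stack([x, y])
X_log = np.column_stack([np.log10(x), y])   # stretch the crowded low end

labels_linear = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_linear)
labels_log = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_log)

# Fraction of points whose cluster changes with the transformation
# (taking the better of the two possible label matchings):
agree = np.mean(labels_linear == labels_log)
print(f"assignments agree for {max(agree, 1 - agree):.0%} of the points")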

Mmm.. donuts

Mmm.. chi-square!

The withering criticisms Hyunsook has been directing towards the faulty use of chi-square by astronomers bring to mind this classic comment by [astronomer] Jeremy Drake during the 2005 Chandra Calibration Workshop: Continue reading ‘Mmm.. donuts’ »

[Quote] Changing my mind (again)

From IMS Bulletin Vol. 36(7) p.10, Terence’s Stuff: Changing my mind (again)
Continue reading ‘[Quote] Changing my mind (again)’ »

[Quote] Model Skeptics

From IMS Bulletin Vol. 36(3), p.11, Terence’s Stuff: Model skeptics

[Once I quoted an article by Prof. Terry Speed in IMS Bulletin: Data-Doctors. Reading his columns in the IMS Bulletin provides me an opportunity to reflect who I am as a statistician and some guidance for treating data. Although his ideas were not from astronomy or astronomical data analysis, I often find his thoughts and words can be shared with astronomers.]
Continue reading ‘[Quote] Model Skeptics’ »

Quote of the Week, August 2, 2007

Some of the lively discussion at the end of the first “Statistical Challenges in Modern Astronomy” conference, at Penn State in 1991, was captured in the proceedings (“General Discussion: Working on the Interface Between Statistics and Astronomy”, Terry Speed (Moderator), in SCMA I, editors Eric D. Feigelson and G. Jogesh Babu, 1992, Springer-Verlag, New York, p. 505).
Joseph Horowitz (Statistician):

…there should be serious collaboration between astronomers and statisticians. Statisticians should be involved from the beginning as real collaborators, not mere number crunchers. When I collaborate with anybody, astronomer or otherwise, I expect to be a full scientific equal and to get something out of it of value to statistics or mathematics, in addition to making a contribution to the collaborator’s field…

Jasper Wall (Astrophysicist):

…I feel strongly that the knowledge of statistics needs to come very early in the process. It is no good downstream when the paper is written. It is not even much good when you have built the instrument, because we should disabuse statisticians of any impression that the data coming from astronomical instruments are nice, pure, and clean. Each instrument has its very own particular filter, each person using that instrument puts another filter on it and each method of data acquisition does something else yet again. I get more and more concerned particularly at the present time [1991] of data explosion (the observatory I work with is getting 700 MBy per night!). There is discussion of data compression, cleaning on-line, and other treatments even before the observing astronomer gets the data. The knowledge of statistics and the knowledge of what happens to the data need to come extremely early in the process.