The AstroStat Slog » regression
http://hea-www.harvard.edu/AstroStat/slog
Weaving together Astronomy+Statistics+Computer Science+Engineering+Instrumentation, far beyond the growing borders

Scatter plots and ANCOVA (hlee, Thu, 15 Oct 2009)
http://hea-www.harvard.edu/AstroStat/slog/2009/scatter-plots-and-ancova/

Astronomers rely on scatter plots to illustrate correlations and trends among pairs of variables more than scientists in any other field[1]. Pages of scatter plots with regression lines are common, in which the slope of the regression line and its error bars serve as indicators of the degree of correlation. Sometimes, so many of these scatter plots make me think that, overall, the resources spent on drawing nice scatter plots, and the pages of the papers in which they are printed, are wasted. Why not just compute correlation coefficients and their errors, and publish the processed data used to compute those correlations rather than the full data, so that others can verify the results for the sake of validation? A couple of scatter plots are fine, but when I see dozens of them, I lose my focus. This is another cultural difference.

When many pairs of variables demand numerous scatter plots, one possibility is to use parallel coordinates together with a matrix of correlation coefficients. If a Gaussian distribution is assumed, as seems to be the case almost always, particularly when parametrizing measurement errors or fitting physical models, then the error bars of these coefficients can also be reported in matrix form. If one considers more complex relationships among multiple tiers of data sets, then one might want to look into ANCOVA (ANalysis of COVAriance) to see how statisticians structure observations and their uncertainties into a model to extract useful information.
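As a rough illustration of the alternative suggested above, here is a minimal sketch (mine, not from the post) that replaces pages of pairwise scatter plots with a single correlation matrix and a parallel coordinates display; the column names and data are invented.

```python
# A minimal sketch (invented data and column names): summarize many variable
# pairs with one correlation matrix and one parallel coordinates plot,
# instead of dozens of scatter plots.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "luminosity": rng.normal(size=n),
    "color": rng.normal(size=n),
    "velocity": rng.normal(size=n),
})
df["mass"] = 0.8 * df["luminosity"] + 0.2 * rng.normal(size=n)

# One table of correlation coefficients instead of pages of scatter plots.
print(df.corr())

# Parallel coordinates need a grouping column; bin one variable as a stand-in.
df["group"] = pd.qcut(df["mass"], 3, labels=["low", "mid", "high"])
parallel_coordinates(df, class_column="group", alpha=0.3)
plt.show()
```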

I’m not saying that the simple examples from Wikipedia, Wikiversity, or publicly available tutorials on ANCOVA are directly applicable to statistical modeling of astronomical data. Most likely they are not. Astrophysics generally handles complicated nonlinear physical models. However, identifying dependent variables, independent variables, latent variables, covariates, response variables, and predictors, to name some of the jargon of statistical modeling, and defining their relationships in the rather comprehensive way used in ANCOVA, instead of pairing variables for scatter plots, would help to quantify relationships appropriately and to remove artificial correlations. Such spurious correlations appear frequently because of data projection. For example, points on a circle in the XY plane of 3D space, centered at the origin, look as if they form a bar rather than a circle when seen nearly edge-on, producing an apparently perfect correlation.
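A small numpy sketch of this projection effect (my own construction, not from the post): the coordinates of points on a circle are essentially uncorrelated, but projecting the same points onto a nearly edge-on image plane and rotating the image axes squeezes them into a bar with a near-perfect correlation.

```python
# A small sketch (assumed setup): spurious correlation from projecting 3D data.
# Points on a circle in the XY plane have ~zero correlation between x and y,
# but a nearly edge-on view plus a rotation of the image axes turns the circle
# into a bar with correlation close to 1.
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, size=1000)
x, y = np.cos(t), np.sin(t)                      # circle in the XY plane (z = 0)
print("corr(x, y) on the circle:", np.corrcoef(x, y)[0, 1])

theta = np.deg2rad(85.0)                         # view the plane almost edge-on
u, v = x, np.cos(theta) * y                      # y is squeezed toward zero
w = (u + v) / np.sqrt(2.0)                       # rotate the image axes by 45 deg
q = (u - v) / np.sqrt(2.0)
print("corr(w, q) after projection:", np.corrcoef(w, q)[0, 1])   # ~0.98
```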

As a matter of fact, astronomers are aware of removing these unnecessary correlations via corrections, for example, fitting a straight line or a second-order polynomial for extinction correction. However, I am rarely satisfied with such linear shifts of data with uncertainty, because they change the uncertainty structure. Consider what happens when subtracting a background leads to negative values, an unrealistic consequence. Unless probabilistically guaranteed, linear operations require a lot of care. We cannot be sure that the residuals y−E[Y|X=x] are perfectly normal unless μ and the σ's in the Gaussian density function can be operated on linearly (about the Gaussian distribution, please see the post why Gaussianity? and the references therein). An alternative to subtraction is linear approximation or nonparametric model fitting, as we saw through applications of principal component analysis (PCA); PCA is used for whitening and for approximating nonlinear functional data (curves and images). Treating the sources of uncertainty and their hierarchical structure properly is not an easy problem, both astronomically and statistically. Nevertheless, identifying the properties of the observations from both physics and statistics and putting them into a comprehensive, structured model could help to find the causality[2] and the significance of a correlation better than presenting numerous scatter plots with lines from simple regression analysis.
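Since PCA whitening comes up above, here is a minimal scikit-learn sketch (mine, with synthetic data, not from the post) of what whitening does to correlated measurements: it decorrelates them and scales each component to unit variance.

```python
# A minimal sketch (synthetic data): PCA whitening removes the correlation
# between two measured quantities and rescales each component to unit variance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
data = np.column_stack([x, 0.7 * x + 0.3 * rng.normal(size=n)])

pca = PCA(whiten=True)
whitened = pca.fit_transform(data)

print("covariance before:\n", np.cov(data, rowvar=False))
print("covariance after whitening:\n", np.cov(whitened, rowvar=False))  # ~identity
```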

In order to understand why statisticians studied ANCOVA or, more generally, ANOVA (ANalysis Of VAriance), in addition to the material in wiki:ANCOVA you might want to check this page[3] and use your search engine with keywords of interest on top of ANCOVA to narrow down the results.

From the linear-model perspective, if a response is considered to be a function of redshift (z), then z becomes a covariate. The significance of this covariate, in addition to the other factors in the model, can be tested later when one fully fits and analyzes the statistical model. Suppose one wants to design a model with, say, rotation speed (an indicator of dark matter occupation) as a function of redshift, the degree of spirality, and the number of companions; this is a very hypothetical proposal owing to my lack of knowledge in observational cosmology. I only want to point out that the model fitting problem can be seen through statistical modeling such as ANCOVA by identifying covariates and their relationships: the covariate z is continuous, the degree of spirality is a fixed effect (0 to 7, i.e., eight levels), and the number of companions is a random effect (the cluster size is random), so the comprehensive model could be described by ANCOVA. To my knowledge, scatter plots and simple linear regression marginalize over all the additional contributing factors and information, which can be the main contributors to the correlations, even though Y and X may appear highly correlated in the scatter plot. At some point we must marginalize over unknowns. Nonetheless, there remain nuisance parameters and latent variables that can be factored into the model, which is different from ignoring them, to obtain deeper insights and knowledge from observations in many measures and dimensions.
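To make the hypothetical model above concrete, here is a minimal sketch (mine, with invented variable names and synthetic data, not a real analysis) of an ANCOVA-like model fit with statsmodels: a continuous covariate z, a categorical fixed effect for spirality, and a random intercept grouped by number of companions.

```python
# A hypothetical sketch following the text: rotation speed ~ continuous
# covariate z + fixed effect spirality (8 levels), with a random intercept
# for the number-of-companions group. All data are synthetic stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 400
df = pd.DataFrame({
    "z": rng.uniform(0.0, 2.0, size=n),
    "spirality": rng.integers(0, 8, size=n),    # fixed effect, levels 0..7
    "companions": rng.integers(0, 5, size=n),   # random-effect grouping factor
})
group_shift = rng.normal(scale=8.0, size=5)     # hidden per-group offsets
df["rotation_speed"] = (
    200.0 + 30.0 * df["z"] + 5.0 * df["spirality"]
    + group_shift[df["companions"].to_numpy()]
    + rng.normal(scale=10.0, size=n)
)

# ANCOVA-like mixed model: covariate + categorical fixed effect + random intercept.
model = smf.mixedlm("rotation_speed ~ z + C(spirality)", df,
                    groups=df["companions"])
print(model.fit().summary())
```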

Something that, I think, could be done with a small, ergonomic chart or table via hypothesis testing, multivariate regression, model selection, variable selection, dimension reduction, projection pursuit, or other state-of-the-art statistical methods is instead done in astronomy with numerous scatter plots, with colors, symbols, and lines meant to account for all possible relationships within pairs whose correlations can be artificial. I also feel that trees, electricity, and effort could be saved by not producing so many nice-looking scatter plots. Fitting and analyzing more comprehensive models cast in a statistical fashion helps to identify the independent variables or covariates driving a strong correlation, to find collinear variables, and to drop redundant or uncorrelated predictors. Bayes factors or p-values can then be used to compare models, test the significance of their variables, and compute error bars appropriately, not in the way the null hypothesis probability is usually interpreted. A small sketch of two such checks follows.
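The sketch below (mine, with synthetic data and invented column names) illustrates two of the checks mentioned above: flagging collinear predictors with variance inflation factors and dropping a redundant predictor by comparing AIC.

```python
# A minimal sketch (synthetic data): flag collinear predictors with variance
# inflation factors and compare nested models with AIC, instead of reading
# correlations off many scatter plots.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(3)
n = 300
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=n)      # nearly collinear with x1
x3 = rng.normal(size=n)
y = 2.0 * x1 + 0.5 * x3 + rng.normal(size=n)

X = pd.DataFrame({"x1": x1, "x2": x2, "x3": x3})
Xc = sm.add_constant(X)
vifs = [variance_inflation_factor(Xc.values, i) for i in range(1, Xc.shape[1])]
print(dict(zip(X.columns, vifs)))               # large VIF -> collinear pair

full = sm.OLS(y, Xc).fit()
reduced = sm.OLS(y, sm.add_constant(X[["x1", "x3"]])).fit()
print("AIC full vs reduced:", full.aic, reduced.aic)   # reduced usually wins
```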

Lastly, ANCOVA is a complete [MADS]. :)

  1. This is not an absolute assertion but a personal impression formed after reading articles from various fields in addition to astronomy. My readings in other fields suggest that many rely on correlation statistics rather than on scatter plots with straight lines drawn through the data to impose relationships within variable pairs.
  2. The way chi-square fitting is done and the goodness-of-fit test is carried out is understood through the notion that X causes Y and through the practice that the objective function, the sum of (Y-E[Y|X])^2/σ^2, is minimized.
  3. It’s the website of Vassar College, which had the first female faculty member in astronomy, Maria Mitchell. It is said that the first building constructed there was the Vassar College Observatory, now a national historic landmark. This historical factor is the only reason for pointing to this website, to draw some astronomers' attention beyond statistics.
my first AAS. I. Regression (hlee, Mon, 09 Jun 2008)
http://hea-www.harvard.edu/AstroStat/slog/2008/firstaas-regression/

My first impression from the 212th AAS meeting is that it was planned in preparation for IYA 2009, and that many talks were about current and future project reviews and strategies to reach the public (people kept telling me that the winter meetings are grander, with expanded topics). I cannot say I understood everything (if someone tells me that no astronomer understands everything, I'll be relieved), but thanks to the theme of the meeting, I was intelligently entertained in many respects. The downside of this intellectual stimulus is growing doubts. One of those doubts concerned regression analysis in astronomy.

I’m not going to name the session, the speaker, or the topic, only the part of the story relevant to regression analysis.

In one of the sessions, a speaker showed a slide with the headline “… test Ho.” My expectation was that Ho indicated a null hypothesis related to the expansion of the universe, and that he was going to perform a hypothesis test. I was wrong. This Ho was the Hubble constant, and his plan was to estimate it with his carefully executed astrometry.

A few slides later, I saw a straight line overplotted on scattered points. Dissecting the plotted region into a 4×4 grid, most of the points occupied the lower left section, while only one point sat in the upper right corner section. That single point had the most leverage and effectively determined the slope of the line. Without some verification, such as checking Cook's distance, I wondered what would happen to the estimated slope. Even with that high-leverage point, I wondered whether he could still claim, with real statistics, that his slope (Ho) estimate prefers the model by Freedman over the model by Sandage. To my naive eyes, the differences between the slope estimated from the data and the two proposed slopes were hardly distinguishable.
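For readers unfamiliar with the diagnostic mentioned above, here is a minimal sketch (mine, with synthetic stand-in data, not the speaker's) of checking a high-leverage point with Cook's distance and seeing how much the slope moves when it is removed.

```python
# A minimal sketch (synthetic data): one far-out point carries most of the
# leverage; Cook's distance flags it, and refitting without it shifts the slope.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = np.concatenate([rng.uniform(0.0, 1.0, size=30), [10.0]])   # one far-out point
y = 70.0 * x + rng.normal(scale=10.0, size=x.size)
y[-1] = 500.0           # the lone high-leverage point implies a shallower slope

fit = sm.OLS(y, sm.add_constant(x)).fit()
cooks_d, _ = fit.get_influence().cooks_distance
print("slope with the high-leverage point:", fit.params[1])
print("largest Cook's distance:", cooks_d.max())               # the far-out point

# Refit without the most influential point to see how much the slope moves.
keep = cooks_d < cooks_d.max()
refit = sm.OLS(y[keep], sm.add_constant(x[keep])).fit()
print("slope without it:", refit.params[1])
```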

I have seen papers in astronomy/astrophysics that carefully explain the caveats of regression analysis on their target data and describe statistical tests to show the differences and similarities. Probably the speaker didn't want to disturb the audience with boring statistics. Yet this was one of the occasions that fed my doubts about astronomers who practice statistics in their own ways without sufficiently consulting scholarly works in statistics. The other possibility is that I myself am biased. I bet I was the only one who expected that “… test Ho” would be followed by a null hypothesis and hypothesis tests, instead of an estimate of the Hubble constant.

[ArXiv] Kernel Regression, June 20, 2007 (hlee, Mon, 25 Jun 2007)
http://hea-www.harvard.edu/AstroStat/slog/2007/arxiv-kernel-regression-june-20-2007/

One of the papers from arXiv/astro-ph discusses kernel regression and model selection for determining photometric redshifts (astro-ph/0706.2704). The paper presents the authors' studies on choosing the kernel bandwidth via 10-fold cross-validation, on choosing appropriate models from various combinations of input parameters by estimating the root mean square error and the AIC, and on evaluating their kernel regression against other regression and classification methods using root mean square errors from a literature survey. They conclude that kernel regression is flexible, particularly for data at high z.
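As an illustration of the bandwidth selection described above, here is a minimal sketch (mine, not the paper's code or data) of Nadaraya-Watson kernel regression with the Gaussian-kernel bandwidth chosen by 10-fold cross-validated root mean square error.

```python
# A minimal sketch (synthetic data): Nadaraya-Watson kernel regression with
# the Gaussian-kernel bandwidth chosen by 10-fold cross-validation on RMSE.
import numpy as np
from sklearn.model_selection import KFold

def nw_predict(x_train, y_train, x_eval, h):
    """Nadaraya-Watson estimate with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(11)
x = rng.uniform(0.0, 3.0, size=400)                 # stand-in for photometry
y = np.sin(2.0 * x) + 0.3 * rng.normal(size=x.size) # stand-in for redshift

bandwidths = [0.05, 0.1, 0.2, 0.4, 0.8]
cv = KFold(n_splits=10, shuffle=True, random_state=0)
for h in bandwidths:
    rmse = []
    for train, test in cv.split(x):
        pred = nw_predict(x[train], y[train], x[test], h)
        rmse.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
    print(f"h = {h}: CV RMSE = {np.mean(rmse):.3f}")
```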

Off topic, but worth noting:
1. They used AIC for model comparison. In spite of the many advocates for BIC, choosing AIC may do a better job for analyzing catalog data (399,929 galaxies), since the BIC penalty grows with sample size: with n = 399,929 the penalty per parameter is ln(n) ≈ 12.9 versus 2 for AIC, so BIC with such a huge sample will push the selection toward the most parsimonious model.

2. Although a more detailed discussion hasn't been posted, I'd like to point out that photometric redshift studies are more or less regression problems. Whether they use sophisticated, up-to-date classification schemes such as support vector machines (SVM) or artificial neural networks (ANN), or classical regression methods, the goal of a photometric redshift study is to find predictors for correct classification and to build the model from those predictors. I wish there were some studies using quantile regression, which has received a lot of attention recently in economics (a small quantile regression sketch follows this list).

3. Adaptive kernels were mentioned, and results from adaptive kernel regression are eagerly anticipated.

4. Comparing root mean square errors of various classification and regression models based on Sloan Digital Sky Survey (SDSS) data ranging from the EDR (Early Data Release) to DR5 (Data Release 5) might lead to a misleading choice of the best regression/classification method, because of the different sample sizes from the EDR to DR5. Further formulation, especially of the asymptotic properties of these root mean square errors, would be very useful for making a legitimate comparison among different regression/classification strategies.
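As promised in item 2 above, here is a minimal quantile regression sketch (mine, with invented column names and synthetic data), using statsmodels to estimate conditional quantiles rather than only the conditional mean.

```python
# A minimal sketch (synthetic data): quantile regression estimates conditional
# quantiles, which is informative when the scatter changes with the predictor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(13)
n = 500
df = pd.DataFrame({"mag": rng.uniform(18.0, 24.0, size=n)})    # invented predictor
# heteroscedastic noise: the scatter grows toward faint magnitudes
df["z"] = 0.1 * (df["mag"] - 18.0) + rng.normal(scale=0.02 * (df["mag"] - 17.0), size=n)

for q in (0.25, 0.5, 0.75):
    fit = smf.quantreg("z ~ mag", df).fit(q=q)
    print(f"quantile {q}: slope = {fit.params['mag']:.4f}")
```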
