Good luck!
Doug Applegate
The lack of an unsheared training set is probably the biggest obstacle in this challenge. In real life, we'll have high-resolution images of 'nearby' galaxies in addition to the very deep observations. Though still sheared, these high-resolution galaxies can serve as a 'training set' for the deep observations. We'll include a set of high-resolution simulated images, alongside the low-resolution ones, that can be used to extract shape information about galaxies.
The core of the challenge is to develop a way to describe the shapes of galaxies, and to understand how your system responds to shearing and to convolution with Gaussian-like kernels, so that you can make statements about the amount of shearing. Other image-processing problems, like measuring the point spread function (i.e., the convolution kernel), detecting sources, etc., are all removed.
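To make the shear-then-convolve picture concrete, here is a toy sketch in Python. This is not the GREAT08 simulation code: the exponential galaxy profile, the shear value, and the PSF width are all illustrative assumptions, and the shear is applied as a simple area-preserving stretch/squeeze along the axes.

```python
import numpy as np
from scipy.ndimage import affine_transform, gaussian_filter

def make_galaxy(n=64, r0=4.0):
    """Circular exponential-profile 'galaxy' on an n x n pixel grid."""
    y, x = np.indices((n, n)) - (n - 1) / 2.0
    return np.exp(-np.hypot(x, y) / r0)

def shear_image(img, g1):
    """Apply a small shear g1: stretch along x, squeeze along y,
    keeping the image center fixed."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    # affine_transform maps output coords -> input coords via m @ out + offset
    m = np.array([[1.0 + g1, 0.0],
                  [0.0, 1.0 - g1]])
    offset = np.array([c, c]) - m @ np.array([c, c])
    return affine_transform(img, m, offset=offset, order=3)

gal = make_galaxy()                         # round, unlensed galaxy
sheared = shear_image(gal, g1=0.05)         # weakly sheared galaxy
observed = gaussian_filter(sheared, sigma=2.0)  # Gaussian-like PSF convolution
```

The challenge, in essence, is to run this process in reverse: given only `observed`, infer how much shearing was applied despite the isotropic blurring diluting the shape signal.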
Hope that answers your question!
Cheers,
Doug Applegate
I began to understand that the photometry is very complicated and depends on the templates (PSFs, or profiles for an extended source). It seems that lensing becomes another layer of the convolution process in image analysis.
I should spend more time learning the details. Still, I have a naive question: why not include a simulated image with no lensing (no shear), or with partial lensing (where only a portion of the sky contains lensing material)? I suspect it has something to do with the design of the challenge, though.
To clear any confusion, let me briefly define some terms. 'Strong' lensing is when we see multiple distinct images of the same galaxy. These are the large arcs seen in Hubble images like http://hubblesite.org/hubble_discoveries/10th/photos/graphics/slide36high.jpg . We can use caustic theory and the locations of the arcs to reconstruct the mass distribution of the lens.
In GREAT08, we are interested in 'cosmological', or 'weak', lensing. In this regime, the shapes are systematically distorted (sheared), but only at a very small level. Magnitude information is mostly ignored. The usual game is to define a shape measurement that averages to 0 over an ensemble of unlensed galaxies. A lensing signal would be a shift in the average away from 0. Each galaxy becomes an extremely noisy point estimate of the shear field caused by foreground material.
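The averaging-to-0 idea can be sketched in a few lines of Python. This is a toy model, not a real shear pipeline: the intrinsic ellipticity distribution, its scatter, and the shear value are illustrative assumptions, and I use the first-order weak-lensing approximation that the observed complex ellipticity is roughly the intrinsic ellipticity plus the shear.

```python
import numpy as np

rng = np.random.default_rng(0)

g_true = 0.03 + 0.01j    # assumed (small) shear, as a complex number
n = 100_000              # number of galaxies in the ensemble

# Intrinsic ellipticities: isotropic, so the ensemble average is ~0.
# A Gaussian with per-component scatter 0.2 is a common toy choice.
e_int = 0.2 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Weak-lensing approximation: observed ellipticity ~ intrinsic + shear.
e_obs = e_int + g_true

# Each e_obs is an extremely noisy estimate of g_true; the ensemble
# average recovers it because the intrinsic part averages away.
g_hat = e_obs.mean()
```

Note how noisy each individual galaxy is (scatter ~0.2 versus a signal of ~0.03): only by averaging large ensembles does the shear emerge.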
The challenge is to develop an inference technique that accurately estimates the shear for an ensemble of galaxies. Some methods rely only on the averaging-to-0 property, while some new methods are exploring hierarchical Bayesian approaches to infer the prior distribution of shapes from the data. None yet recovers the shear at a high enough accuracy. The main obstacle is removing the optical effects of the telescope (deconvolving the image), which mimic the shearing signal.
As you pointed out, we do need to determine whether a galaxy is lensed. When discussing lensing by a large compact mass, such as a galaxy cluster, we are only interested in galaxies behind the lens. We use other information, such as galaxy redshifts, to make this determination. In cosmological lensing, the filaments of mass throughout the universe lens all galaxies, regardless of distance. However, the size of the effect is considerably smaller than with a large compact mass. In the GREAT08 simulations, all galaxies in an image are lensed.
I’m happy to answer any lensing questions. Also, http://www.cosmocoffee.info has relevant discussion boards where GREAT08, lensing, and cosmological questions can be addressed.
Cheers,
Doug Applegate
We welcome any discussion or sharing of information in both statistics and astronomy (perhaps a list of contestants for this challenge and their approaches). Unless I go astray, I will watch for tutorials or relevant reviews on the topic for the slog postings.
I’m not sure I follow what you mean by a multiple testing problem. Could you elaborate?
Cheers,
Doug Applegate