Comments on: Cross-validation for model selection
http://hea-www.harvard.edu/AstroStat/slog/2007/cross-validation-for-model-selection/
Weaving together Astronomy+Statistics+Computer Science+Engineering+Instrumentation, far beyond the growing borders

By: hlee, Wed, 22 Aug 2007 16:02:22 +0000
http://hea-www.harvard.edu/AstroStat/slog/2007/cross-validation-for-model-selection/comment-page-1/#comment-73
In addition, the maximum likelihood is not the only statistic for model selection. Nonetheless, its popularity seems to originate from the fact that Boltzmann's maximum entropy, Shannon's information theory, and Fisher's maximum likelihood principle are equivalent.

By: hlee, Wed, 22 Aug 2007 15:57:53 +0000
http://hea-www.harvard.edu/AstroStat/slog/2007/cross-validation-for-model-selection/comment-page-1/#comment-72
Inference on parameters is not the main subject of model selection. Once a model is chosen, we can move on to estimating parameters or hypothesis testing. However, to my understanding, there are not many works that combine model selection and inference for general application. Bayesians may think differently because they adopt models when choosing priors and likelihoods to build chains. My aim in model selection is deciding whether I should choose a thermal or a non-thermal model (I hope the choice of words is correct).

To answer your questions, leaving one datum out and having a series of MLEs is correct. One computes the likelihood of each datum with the MLE obtained without that datum; the sum of all these single-datum log-likelihoods is the likelihood computed via cross-validation (CV). Next, one chooses a model among the candidates based on the likelihoods of the different models (if CV is chosen), or on information criteria (many of which are maximum likelihoods plus a penalty). Once the model is chosen, we can move on to the inference step; however, this needs some care.
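A minimal sketch of that procedure in Python (the two candidate densities, the scipy-based fitting helpers, and the simulated data below are illustrative assumptions of mine, not part of the original comment):

    import numpy as np
    from scipy import stats

    def loo_cv_loglike(data, fit, loglike):
        """Sum of log-likelihoods of each left-out datum, with the MLE
        recomputed on the remaining data each time."""
        total = 0.0
        for i in range(len(data)):
            rest = np.delete(data, i)
            theta = fit(rest)                  # MLE without the i-th datum
            total += loglike(data[i], theta)   # score the left-out datum
        return total

    # Hypothetical example: choose between an exponential and a gamma model.
    rng = np.random.default_rng(1)
    x = rng.gamma(shape=2.0, scale=1.5, size=200)

    models = {
        "exponential": (lambda d: (np.mean(d),),
                        lambda xi, th: stats.expon.logpdf(xi, scale=th[0])),
        "gamma":       (lambda d: stats.gamma.fit(d, floc=0)[::2],   # (shape, scale)
                        lambda xi, th: stats.gamma.logpdf(xi, th[0], loc=0, scale=th[1])),
    }

    for name, (fit, loglike) in models.items():
        print(name, loo_cv_loglike(x, fit, loglike))
    # The candidate with the larger CV log-likelihood is preferred.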

To prevent a little confusion: estimating parameters (getting MLEs) is a by-product of obtaining the maximum likelihoods when model selection is the goal of the study. Andrew Liddle and his colleagues have been writing papers on model selection applied to cosmology. Their papers may help in understanding how statistical model selection is applied in astronomy, although their model selection methods are limited to BIC and DIC. I had the feeling that Protassov et al. (2001) only scratched the surface of model selection and didn't let people taste the fruit. Yet it's a good reference, because of its appendix at least.

By: vlk, Tue, 21 Aug 2007 21:01:45 +0000
http://hea-www.harvard.edu/AstroStat/slog/2007/cross-validation-for-model-selection/comment-page-1/#comment-71
When you Leave One Out, the thing you are leaving out is a datum, correct? So then you have a series of MLEs of the parameter, one for each datum left out. What next? How do they get combined, and how do you then go from parameter estimation to model selection?

By: hlee, Mon, 20 Aug 2007 07:44:21 +0000
http://hea-www.harvard.edu/AstroStat/slog/2007/cross-validation-for-model-selection/comment-page-1/#comment-70
Did I say "apply cross-validation to AIC?" Hmm... To clarify: apply cross-validation to model selection!

LOO (Leave One Out) is an expression that Prof. Rao often used. For a maximum likelihood calculation, leave one observation out and compute the maximum likelihood (ML) and the ML estimator (MLE) with the rest. With the left-out observation and that MLE, the likelihood of that observation is obtained; repeat this process for all observations. Asymptotically, calculating the likelihood by LOO is equivalent to AIC.
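A small numerical sketch of this asymptotic equivalence (the Gaussian model and simulated data are my own illustration, not part of the comment): for a model with k free parameters, -2 times the LOO sum of log-likelihoods should approach AIC = -2 log L_max + 2k as n grows.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(loc=2.0, scale=3.0, size=500)

    # LOO: refit the Gaussian MLEs with one datum removed, score that datum.
    loo = 0.0
    for i in range(len(x)):
        rest = np.delete(x, i)
        mu, sigma = rest.mean(), rest.std()        # Gaussian MLEs (k = 2)
        loo += stats.norm.logpdf(x[i], mu, sigma)

    # AIC from the full-sample MLEs.
    mu, sigma = x.mean(), x.std()
    aic = -2 * stats.norm.logpdf(x, mu, sigma).sum() + 2 * 2

    print(-2 * loo)   # approximately equal to ...
    print(aic)        # ... the AIC, for large n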

Instead of "score function," I'd rather use the J function, but such a single letter gives more ambiguity. Here, the score function refers to the first derivative of the log-likelihood, whose expectation is evaluated at the true parameter. Fisher information involves the second-order derivative, and there are cases where the analytic forms of such derivatives are not available; there, cross-validation could replace AIC or TIC.
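For reference, the standard definitions behind this remark, in my own notation (added for clarity, not in the original comment):

    U(\theta) = \frac{\partial}{\partial\theta} \log L(\theta \mid x),
    \qquad
    I(\theta) = \mathrm{E}_{\theta}\big[U(\theta)^{2}\big]
              = -\,\mathrm{E}_{\theta}\!\left[\frac{\partial^{2}}{\partial\theta^{2}} \log L(\theta \mid x)\right],

    \text{with } \mathrm{E}_{\theta_{0}}\big[U(\theta_{0})\big] = 0 \text{ at the true parameter } \theta_{0}.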

One drawback would be the computation time: O(n) where AIC is O(1). For binned/clipped data, this increase could be negligible, but if we happened to keep all 1078 channels and adopted a complicated model for the MLEs, we had better not use resampling methods without smart optimization tools.

By: vlk, Mon, 20 Aug 2007 04:41:18 +0000
http://hea-www.harvard.edu/AstroStat/slog/2007/cross-validation-for-model-selection/comment-page-1/#comment-69
How exactly does one apply cross-validation to AIC? Also, what is a "score function" in this context?
