Ana-Maria Staicu

Skewed functional processes and their applications

We introduce a novel class of models for functional data exhibiting skewness or other shape characteristics that vary with spatial location. Such data are not accommodated by current approaches to modeling functional data, due to the lack of Gaussian-like features. Our methodology allows modeling of the pointwise quantiles, has interpretability advantages, and is computationally feasible. Our methods were motivated by, and are illustrated with, a state-of-the-art study of neuronal tracts in multiple sclerosis patients and healthy controls.

Thursday, September 15, 2011 - 3:30pm

MoonJung Cho

Bureau of Labor Statistics

Evaluation of Generalized Variance Function Estimators for the U.S. Current Employment Survey

In applied work with generalized variance function models for sample survey data, one generally seeks to develop and validate a model that is relatively parsimonious and that produces variance estimators that are approximately unbiased and relatively stable. This development and validation work often begins with regression of initial variance estimators (computed through standard design-based methods) on one or more candidate explanatory variables. Evaluation of initial modeling results is often complicated by correlation among the initial variance estimators.
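A minimal sketch of the kind of regression step described above, under assumptions not taken from the talk: a common generalized variance function form models the relative variance of an estimated total X as relvar(X) = a + b/X, which is linear in 1/X and can be fit by ordinary least squares to initial design-based variance estimates. All data here are simulated for illustration.

```python
import numpy as np

# Simulated illustration (hypothetical GVF form, not the speaker's model):
# relvar(X) = a + b / X, fit by OLS to noisy design-based relvariance estimates.
rng = np.random.default_rng(0)
totals = rng.uniform(1e3, 1e5, size=50)            # estimated totals X
a_true, b_true = 1e-4, 5.0
relvar = a_true + b_true / totals                   # true relative variances
relvar_hat = relvar * rng.lognormal(0, 0.2, 50)     # noisy initial estimates

# Regress the initial relvariance estimates on an intercept and 1/X
design = np.column_stack([np.ones_like(totals), 1.0 / totals])
coef, *_ = np.linalg.lstsq(design, relvar_hat, rcond=None)
a_hat, b_hat = coef
print(a_hat, b_hat)
```

The abstract's caveat applies directly here: because the initial variance estimators on the left-hand side are themselves noisy and may be correlated across cells, ordinary least squares diagnostics can be misleading, which is what makes the evaluation step nontrivial.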

Thursday, September 8, 2011 - 3:30pm

William A. Brenneman

The Procter & Gamble Company

Practicing Statistics in Corporate R&D

Modern industry is constantly seeking to efficiently produce new and improved products. Statisticians play a central role in helping the product team quickly identify areas for improvement and optimization. Many of the problems faced in industry can be solved with known statistical methods, while occasionally there are problems encountered that require original research. For a research statistician practicing in industry, these types of problems are a joy to encounter and an opportunity to contribute.

Thursday, August 25, 2011 - 3:30pm

Somnath Datta

Nonparametric Regression for Multistate Models

We will consider inference for various marginal temporal functions of a multistate system, such as the state occupation probabilities, the integrated transition hazards, and the state entry, exit, and sojourn time distributions. For the most part, we will not assume a Markov or semi-Markov system. Nonparametric estimators under right-censored, current status, and interval-censored data will be constructed. In this talk, we will consider construction of nonparametric regression estimators of the above quantities given a continuous covariate.
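To fix ideas, here is a toy sketch of one quantity named above, the state occupation probability, in the simplest uncensored case (the talk itself addresses the much harder censored settings): with complete observation, it is just the fraction of subjects occupying the state at time t. The function and data below are hypothetical.

```python
import numpy as np

# Hypothetical sketch, uncensored data only: estimate P(in state at time t)
# as the empirical fraction of subjects who have entered the state by t and
# not yet exited it. Each subject occupies the state during [entry, exit).
def occupation_prob(entry_times, exit_times, t):
    """Fraction of subjects in the state at time t."""
    entry_times = np.asarray(entry_times)
    exit_times = np.asarray(exit_times)
    return np.mean((entry_times <= t) & (t < exit_times))

# 4 subjects with sojourns [0,3), [0,1), [1,4), [2,5); at t = 1.5 two of
# the four are in the state.
print(occupation_prob([0, 0, 1, 2], [3, 1, 4, 5], t=1.5))  # -> 0.5
```

Under right censoring or interval censoring this simple empirical fraction is no longer available, which is where the nonparametric machinery of the talk comes in.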

Friday, July 15, 2011 - 3:30pm

Lixing Zhu

Consistent Model Selection and Estimation in a General Single-Index Model with “Large p and Small n”

For a general single-index model that does not assume an additive structure of the unknown regression function and error, with the dimension of the predictor vector larger than the sample size, the consistency of predictor selection and estimation has not yet been investigated in the literature. In this talk, we investigate this issue.

Wednesday, July 13, 2011 - 3:30pm

Dennis K.J. Lin

Clemson - UGA joint statistics seminar

Over the past decades, we have witnessed the revolution in information technology. Its impact on statistical research is enormous. This talk attempts to address recent developments and some potential research issues in Business, Industry and Government (BIG) Statistics, with special focus on computer experiments and information systems. An overall introduction and review will be given, followed by specific research potentials. For each subject, the problem will be introduced, some initial results will be presented, and future research problems will be suggested.

Friday, April 22, 2011 - 4:30pm

Peter McCullagh

Statistical models for distance matrices

This lecture is concerned with probability models for distance matrices, which are non-negative symmetric matrices of negative type. Several families of distributions are considered, including Wishart distance matrices and Mahalanobis distance matrices, all derived ultimately from Gaussian matrices by marginalization. The likelihood functions are obtained in a relatively straightforward manner without an explicit representation of the joint density.
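The defining property mentioned above can be checked numerically. In this hypothetical sketch (not the speaker's construction), a squared Euclidean distance matrix built from Gaussian rows is symmetric, non-negative, and of negative type: v'Dv <= 0 for every contrast vector v whose entries sum to zero.

```python
import numpy as np

# Hypothetical illustration of the negative-type property of distance
# matrices derived from a Gaussian matrix.
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3))                      # 6 Gaussian points in R^3
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared Euclidean distances

v = rng.standard_normal(6)
v -= v.mean()                                        # contrast: entries sum to 0
print(v @ D @ v)                                     # <= 0 up to rounding
```

The inequality is exact in theory: when the entries of v sum to zero, v'Dv = -2 * ||sum_i v_i x_i||^2, which is never positive.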

Friday, April 15, 2011 - 4:30pm

Laine Bradshaw

A Psychometric Model for Scaling Ability and Diagnosing Misconceptions Using Multiple Choice Tests

Commonly utilized in educational testing, models within the unidimensional item response theory (IRT) framework locate a student’s overall ability along a latent continuum by modeling the response probabilities to a set of test items as a function of a single continuous latent variable. Diagnostic classification models (DCMs) are an emerging class of models that, in contrast to IRT models, identify the separate components of what students know (distinct skills or abilities called attributes) by modeling response probabilities as a function of a set of categorical latent variables.
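As a concrete (and hypothetical, not necessarily the speaker's) instance of the IRT framing above, the two-parameter logistic (2PL) model writes the probability that a student with ability theta answers item j correctly as P(theta) = 1 / (1 + exp(-a_j (theta - b_j))), where a_j is the item's discrimination and b_j its difficulty on the same latent continuum.

```python
import math

# Hypothetical sketch of the 2PL item response function: a single continuous
# latent ability theta drives the probability of a correct response.
def irt_2pl(theta, a, b):
    """P(correct) for ability theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A student of average ability on an item of average difficulty: p = 0.5
print(irt_2pl(theta=0.0, a=1.0, b=0.0))  # -> 0.5
```

A DCM, by contrast, would replace the single continuous theta with a vector of binary attribute indicators, yielding a classification of what a student has or has not mastered rather than a location on a continuum.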

Thursday, March 24, 2011 - 3:30pm

Vijay Nair

Reliability Inference Based on Multistate and Degradation Models

Reliability or survival analysis is traditionally based on time-to-failure data. In high-reliability applications, there is usually a high degree of censoring, which makes it difficult to draw reasonable inferences. There are a number of alternatives for increasing the efficiency of reliability inference in such cases: accelerated testing, collection and use of extensive covariate information, and the use of multistate and degradation data when available. This talk will focus on the last topic. The first part of the talk deals with degradation data.

Thursday, April 28, 2011 - 3:30pm

Hemant Tiwari

Finding Genes for Complex Diseases/Traits: Are we there yet?

The last decade has seen rapid advances in genomic technologies. These technologies have provided researchers with tools to probe the genetic basis of complex diseases and traits. However, there is a wide gap between these genomic technologies and the development of methods to analyze the massive data they generate, as well as a lack of computing technology to facilitate the analyses. The analysis and interpretation of these data are exceptionally challenging due to their volume and sophistication. This presentation discusses the methods needed to understand this massive amount of data.

Thursday, April 14, 2011 - 3:30pm

