Although optimal from a theoretical point of view, maximum likelihood estimation for Cox and cluster point processes can be cumbersome in practice due to the complicated nature of the likelihood function and the associated score function. It is therefore of interest to consider alternative, more easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation and reduces to the likelihood score in the case of a Poisson process.
The Statistics Department hosts weekly colloquia on a variety of statistical subjects, bringing in speakers from around the world.
In the study of intrinsically stationary spatial processes, a new nonparametric variogram estimator is proposed through its spectral representation. The methodology is based on estimating the variogram's spectrum, here in the isotropic case; the estimation is formulated as a regularized inverse problem, which we solve by quadratic programming. The estimated variogram is guaranteed to be conditionally negative-definite, a key property of variograms.
In this talk, we will consider parametric reliability estimation for systems operating under a load-sharing rule. Consider a system of multiple components connected in parallel. In this system, as components fail one by one, the total load or traffic applied to the system is redistributed among the remaining working components. This is commonly referred to as load-sharing. We will review earlier work on load-sharing models and then discuss the problem of estimating load-sharing parameters.
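The redistribution mechanism described above can be illustrated with a small simulation. The sketch below assumes the simplest setting (not necessarily the models of the talk): an equal load-sharing rule with exponentially distributed component lifetimes, where each working component's failure rate is proportional to the load it currently carries; all names and parameter values are illustrative.

```python
import numpy as np

def system_lifetime(k, rate_per_load, total_load, rng):
    """Simulate the failure time of a k-component parallel system under
    equal load-sharing: when m components remain, each carries
    total_load / m and fails at rate rate_per_load * (total_load / m)."""
    t, m = 0.0, k
    while m > 0:
        per_component_rate = rate_per_load * total_load / m
        # Time to the next failure is the minimum of m exponential clocks.
        t += rng.exponential(1.0 / (m * per_component_rate))
        m -= 1
    return t

rng = np.random.default_rng(3)
lifetimes = np.array([system_lifetime(4, 0.1, 10.0, rng) for _ in range(1000)])
```

Note that under this equal-sharing rule the total failure rate m * per_component_rate stays constant, so the system lifetime is a sum of k i.i.d. exponentials; with k = 4, rate_per_load = 0.1, and total_load = 10, the mean lifetime is 4.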
In this talk, we focus on some new spatial point process models and their applications to meta-analysis of functional neuroimaging data. We propose a Bayesian spatial hierarchical model using a marked independent cluster process for functional neuroimaging meta-analysis. In contrast to current approaches, our hierarchical model accounts for intra-study variation in location (if any), inter-study variation, and idiosyncratic foci that do not cluster between studies.
Data depth and its induced center-outward ordering have given rise to many useful tools in nonparametric multivariate analysis. A DD-plot (depth-versus-depth plot) is the two-dimensional scatter plot of the depth values of the given sample points with respect to the two underlying distributions. It can be a useful tool for visualizing the difference between two distributions. We discuss some of the utilities of DD-plots in this presentation. In particular, we discuss approaches devised from DD-plots for classification (hence the name DD classifier) and for testing the difference between two samples.
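The DD-plot construction can be sketched in a few lines. This minimal example uses Mahalanobis depth, one common depth notion among several (the talk does not commit to a particular one); the two Gaussian samples and all names are illustrative. Each point in the pooled sample receives a pair of depth values, one per distribution, and those pairs are the DD-plot coordinates.

```python
import numpy as np

def mahalanobis_depth(points, sample):
    """Mahalanobis depth of each row of `points` with respect to the
    empirical distribution of `sample`: 1 / (1 + squared Mahalanobis
    distance to the sample mean)."""
    mu = sample.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(sample, rowvar=False))
    diff = points - mu
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return 1.0 / (1.0 + d2)

rng = np.random.default_rng(0)
f_sample = rng.normal(0.0, 1.0, size=(200, 2))   # sample from F
g_sample = rng.normal(0.5, 1.0, size=(200, 2))   # sample from G (shifted)
pooled = np.vstack([f_sample, g_sample])

# The DD-plot is the scatter of (depth_f, depth_g) pairs.
depth_f = mahalanobis_depth(pooled, f_sample)
depth_g = mahalanobis_depth(pooled, g_sample)
# If F = G, the points concentrate near the 45-degree line; systematic
# departures from that line suggest location or scale differences.
</antml>```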
In this talk, I will present the Correlation Pursuit (COP) method, a variable selection procedure developed under the sufficient dimension reduction framework. Unlike conventional stepwise procedures, COP does not impose a special form of relationship between the response variable and the predictor variables. The COP procedure selects variables that maximize the correlation between the transformed response and linear combinations of the predictors.
Statistical downscaling is a useful technique for localizing global or regional climate model projections to assess the potential impact of climate change. It requires quantifying a relationship between climate model output and local observations from the past, but the two sets of measurements are not necessarily taken simultaneously, so the usual regression techniques are not applicable. In the case of univariate downscaling, a simple quantile-matching approach with asynchronous measurements often works well, but challenges remain for downscaling bivariate data.
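The univariate quantile-matching idea can be sketched as follows. Because the two series need not be paired in time, one matches their marginal distributions rather than regressing one on the other: each model value is mapped through the model's empirical CDF and then through the inverse empirical CDF of the observations. This is a generic sketch of quantile matching, not the specific method of the talk; the gamma samples and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
model_out = rng.gamma(2.0, 2.0, size=500)  # climate-model output (one period)
obs = rng.gamma(2.0, 2.5, size=300)        # local observations (another period)

# Estimate matching quantiles of the two (asynchronous) distributions.
probs = np.linspace(0.01, 0.99, 99)
q_model = np.quantile(model_out, probs)
q_obs = np.quantile(obs, probs)

def downscale(x):
    # Piecewise-linear interpolation of the quantile map
    # F_obs^{-1}(F_model(x)), built from the matched quantiles.
    return np.interp(x, q_model, q_obs)

downscaled = downscale(model_out)
```

By construction, the downscaled model output inherits the distribution of the local observations over the matched quantile range, which is the point of the approach.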
In this talk, the limitations of the normal model for the analysis of covariance with positive, right-skewed variables are considered. Specifically, an inverse Gaussian variable is considered whose variance depends on its mean, thus violating the usual assumptions of the normal linear model. Instead of appealing to transformations, which make interpretation of the results awkward, we propose a method of direct statistical analysis from both maximum likelihood and Bayesian perspectives. Formulas for adjusting treatment effects are given and their properties are discussed.
We provide theoretical analysis of iterative algorithms for two problems in high-dimensional regression. In the first, a sparse linear model with a specific coefficient structure provides a framework for a problem in communication. We show that the algorithm achieves optimal performance when compared to information-theoretic limits. This yields communication systems with provable guarantees and low computational complexity, based on our statistical framework.
Consider a linear model Y = Xβ + z, z ~ N(0, I_n). Here, X = X_{n,p}, where both p and n are large but p > n. We model the rows of X as i.i.d. samples from N(0, (1/n)Ω), where Ω is a p×p correlation matrix, which is unknown to us but is presumably sparse. The vector β is also unknown but has relatively few nonzero coordinates, and we are interested in identifying these nonzeros. We propose Univariate Penalization Screening (UPS) for variable selection. This is a screen-and-clean method, where we screen with univariate thresholding and clean with the penalized MLE.
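The screen-and-clean idea can be illustrated with a simplified numerical sketch. This is not the exact UPS procedure of the talk: Ω is taken to be the identity, the thresholds are arbitrary illustrative values, and the clean step uses a plain least-squares refit with hard thresholding standing in for the penalized MLE. All names and numbers below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, s = 100, 300, 5
# Rows of X are i.i.d. N(0, (1/n) * Omega); here Omega = I for simplicity.
X = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, p))
beta = np.zeros(p)
beta[:s] = 8.0                      # a few strong nonzero coordinates
y = X @ beta + rng.normal(size=n)   # z ~ N(0, I_n)

# Screen: univariate thresholding on the marginal scores |X_j' y|.
scores = np.abs(X.T @ y)
t_screen = 4.0                      # illustrative threshold
survivors = np.flatnonzero(scores > t_screen)

# Clean: refit on the surviving coordinates and keep those that stay
# large (hard-thresholded least squares in place of the penalized MLE).
beta_hat, *_ = np.linalg.lstsq(X[:, survivors], y, rcond=None)
t_clean = 3.0
selected = survivors[np.abs(beta_hat) > t_clean]
```

The screening pass reduces the p = 300 candidates to a small survivor set on which the refit is well-posed, which is what makes the two-stage design computationally attractive.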