Data-based decision making has always been a fundamental part of banking and finance, and it has become even more central since the 2008 crisis and the heightened regulatory environment that followed. In this presentation, I will describe the role of statistics in risk modeling and management in large banks, covering model development and model assessment. The talk will give a glimpse into different types of data structures, computing/data platforms used for big data, types of models, and how they are developed and used.
The Statistics Department hosts weekly colloquia on a variety of statistical subjects, bringing in speakers from around the world.
In this talk, we consider two types of data from neuroscience: neuromorphology data and neuron activity data. First, we focus on data extracted from brain neuron cells of rodents and model each neuron as a data object with topological and geometric properties characterizing the branching structure, connectedness and orientation of a neuron. We define the notions of topological and geometric medians as well as quantiles based on newly-developed curve representations.
Monitoring the control and capability of process parameters is a continual and mammoth task for today's manufacturers. The importance of simple, efficient, and automated approaches cannot be overstated. Paramount in this endeavor is the determination of extreme quantiles. I will review approaches for determining these quantiles from the last 25 years of literature, as well as current usage at Eli Lilly and Company. A number of candidate approaches will be carried forward into a simulation to look at their performance against a variety of distributions.
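One widely studied candidate in that literature is the peaks-over-threshold approach, which fits a generalized Pareto distribution (GPD) to exceedances over a high threshold and inverts the fitted tail for the extreme quantile. The sketch below is purely illustrative (simulated lognormal data, a method-of-moments GPD fit, and an arbitrary 95th-percentile threshold are my choices, not necessarily the methods used at Eli Lilly):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.5, size=5000)  # simulated process measurements

q_emp = np.quantile(data, 0.999)      # raw empirical estimate, unstable in the far tail

# Peaks-over-threshold: model exceedances over a high threshold with a GPD,
# fitted here by method of moments.
u = np.quantile(data, 0.95)           # illustrative threshold choice
exceed = data[data > u] - u
m, v = exceed.mean(), exceed.var()
xi = 0.5 * (1.0 - m * m / v)          # GPD shape (method of moments)
sigma = m * (1.0 - xi)                # GPD scale

p_u = np.mean(data > u)               # probability of exceeding the threshold
target = 0.999
# Solve p_u * (1 - F_GPD(x - u)) = 1 - target for x:
p = 1.0 - (1.0 - target) / p_u        # GPD cdf level needed
q_pot = u + sigma / xi * ((1.0 - p) ** (-xi) - 1.0)

print(f"empirical 99.9th percentile: {q_emp:.3f}")
print(f"POT/GPD   99.9th percentile: {q_pot:.3f}")
```

The threshold level trades bias (too low: the GPD approximation fails) against variance (too high: few exceedances), which is one reason the literature contains so many competing recipes.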
In this talk, I will present statistical issues and challenges that I have encountered in my biomedical collaborative studies: item selection in disease screening, comparison and identification of biomarkers that are most informative for disease diagnosis, and estimation of the relative importance of exposure variables on health outcomes. After a discussion of the issues and challenges with real examples, I will review available statistical methods and present our newly developed methods.
We study a stylized multiple testing problem where the test statistics are independent and assumed to have the same distribution under their respective null hypotheses. We first show that, in the normal means model where the test statistics are normal Z-scores, the well-known method of Benjamini and Hochberg (1995) is optimal in some asymptotic sense. We then show that this is also the case for a recent distribution-free method proposed by Foygel-Barber and Candes (2015).
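For concreteness, the Benjamini-Hochberg step-up rule referenced above can be sketched as follows (this is the classical 1995 procedure; the optimality analysis in the talk is not reproduced here):

```python
import numpy as np

def benjamini_hochberg(pvalues, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level alpha."""
    p = np.asarray(pvalues, dtype=float)
    m = len(p)
    order = np.argsort(p)
    sorted_p = p[order]
    # Step-up rule: find the largest k with p_(k) <= (k / m) * alpha
    thresholds = alpha * np.arange(1, m + 1) / m
    below = sorted_p <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index meeting the bound
        reject[order[: k + 1]] = True      # reject all p-values up to p_(k)
    return reject

# Example: a few small p-values among larger ones
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.7]
print(benjamini_hochberg(pvals, alpha=0.05))
```

Under independence, this controls the false discovery rate at level alpha, which is the baseline guarantee the talk's asymptotic optimality results build on.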
This talk addresses two testing problems for regression functions with responses missing at random. The first is minimum distance model checking. The proposed lack-of-fit tests are based on a class of minimum integrated square distances between a kernel-type estimator of the regression function and the parametric regression function being fitted. These tests are shown to be consistent against a large class of fixed alternatives, and the corresponding test statistics are shown to have asymptotic normal distributions under the null hypothesis.
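The basic ingredient of such a lack-of-fit statistic can be illustrated in a fully observed setting (the missing-at-random machinery, which is the talk's actual contribution, is omitted here). The sketch below contrasts a Nadaraya-Watson kernel estimator with a fitted linear model via a grid approximation to the integrated squared distance; the Gaussian kernel, bandwidth, and grid are all illustrative choices:

```python
import numpy as np

def nw_estimate(x_grid, X, Y, h):
    """Nadaraya-Watson kernel regression estimate with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_grid[:, None] - X[None, :]) / h) ** 2)
    return (w @ Y) / w.sum(axis=1)

rng = np.random.default_rng(2)
n = 300
X = rng.uniform(0.0, 1.0, n)
Y = np.sin(2 * np.pi * X) + 0.2 * rng.standard_normal(n)  # true regression is nonlinear

beta = np.polyfit(X, Y, deg=1)   # parametric (linear) model being checked

# Grid approximation to the integrated squared distance between the
# kernel estimator and the fitted parametric regression.
grid = np.linspace(0.05, 0.95, 200)
diff = nw_estimate(grid, X, Y, h=0.05) - np.polyval(beta, grid)
dist = (diff ** 2).mean() * (grid[-1] - grid[0])
print(f"integrated squared distance: {dist:.4f}")
```

A large distance, suitably standardized, signals lack of fit; the talk's results supply the asymptotic null distribution needed to calibrate the test.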
In this paper, we develop and release measures of public ideology in 2010 for the 50 American states, 435 congressional districts, and state legislative districts. We do this using the geospatial statistical technique of kriging, which uses the locations of survey respondents, as well as population covariate values, to predict ideology for simulated citizens in districts across the country.
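To make the geospatial interpolation concrete, here is a minimal ordinary-kriging sketch in numpy. The exponential covariance, its parameters, and the toy coordinates are all hypothetical, and the sketch omits the population covariates the paper actually incorporates:

```python
import numpy as np

def ordinary_kriging(coords, values, targets, range_=1.0, sill=1.0, nugget=1e-10):
    """Ordinary kriging prediction with an exponential covariance
    C(h) = sill * exp(-h / range_). Weights sum to one by construction."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / range_)

    n = len(coords)
    # Ordinary-kriging system: [[C, 1], [1', 0]] [w; mu] = [c0; 1]
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = cov(coords, coords) + nugget * np.eye(n)
    K[:n, n] = 1.0
    K[n, :n] = 1.0
    preds = []
    for t in targets:
        rhs = np.append(cov(coords, t[None, :]).ravel(), 1.0)
        w = np.linalg.solve(K, rhs)[:n]
        preds.append(w @ values)
    return np.array(preds)

# Toy example: four "respondents" with ideology scores, predicted at the center
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
scores = np.array([-1.0, 0.5, 0.2, 1.0])
grid = np.array([[0.5, 0.5]])
print(ordinary_kriging(coords, scores, grid))
```

At the symmetric center point the kriging weights are equal, so the prediction reduces to the sample mean; at less symmetric target locations the weights favor nearby respondents, which is the mechanism behind district-level prediction.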
Unwanted variation from hidden variables often negatively impacts analysis of high-dimensional data, leading to high false discovery rates and/or low rates of true discovery. A number of procedures have been proposed to detect and estimate the hidden variables, including principal component analysis (PCA). However, empirical data analysis suggests that PCA is not efficient at identifying hidden variables that affect only a subset of features, even when their effects are relatively large.
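As background for the PCA baseline discussed above, the sketch below simulates a hidden variable (e.g., a batch effect) loading on a subset of features and estimates it from the leading principal component. The simulation settings are my own illustrative choices, and they deliberately make the signal strong enough for PCA to succeed; the abstract's point is that weaker subset-confined effects are where PCA breaks down:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_features = 200, 1000

# Hidden variable loading on only 5% of features, with large effects
hidden = rng.standard_normal(n_samples)
loadings = np.zeros(n_features)
loadings[:50] = 3.0
X = np.outer(hidden, loadings) + rng.standard_normal((n_samples, n_features))

# PCA via SVD of the column-centered data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = U[:, 0] * s[0]   # first principal component scores

# Sign of a principal component is arbitrary, so compare by absolute correlation
corr = abs(np.corrcoef(pc1, hidden)[0, 1])
print(f"|corr(PC1, hidden)| = {corr:.3f}")
```

Shrinking the loadings or the affected subset degrades this correlation, which is the regime motivating the alternatives presented in the talk.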
Statistical inference combined with the divide-and-combine strategy for Big Data analysis has received little study. As an effective inferential tool, the confidence distribution (CD) has attracted a surge of renewed attention. Constructing a confidence distribution hinges on the availability of suitable pivotal quantities, which are usually obtained from the (asymptotic) distribution of the maximum likelihood estimator. We propose to use the inference function, from which the parameter estimate is obtained, as the basis for constructing the pivot.
The talk has two components. In the first component, to study the relation between a univariate response and multiple functional covariates, we propose a functional single index model that is semiparametric. The parametric part of the model integrates the linear regression modeling for functional data and the sufficient dimension reduction structure. The nonparametric part of the model further allows the response-index dependence or the link function to be unspecified.