Hosts:

  • Department of Statistics and Biostatistics, Rutgers University
  • Center for Discrete Mathematics and Theoretical Computer Science (DIMACS)

Workshop Description

This two-and-a-half-day workshop is sponsored by the National Science Foundation (NSF) and Rutgers University. Its objective is to examine the role and foundations of statistical inference in the era of data science, along with applications in fusion learning. Its main foci are to:

  • Report new advances in, and re-examine the foundations of, statistical inference in the modern era of data science;
  • Develop links to bridge gaps among different statistical paradigms, including Bayesian, frequentist and fiducial (BFF) inferences, and explore the possibility of a unifying statistical theme for scientific learning and research;
  • Disseminate new ideas and foster new research approaches in fusion learning from multiple diverse data sources.

The data explosion has magnified the need for efficient methodology for analyzing data and drawing inferences, and it has heightened the importance of statistics in recent decades. Despite the tremendous progress it has made, statistics is still a young discipline with several different and competing paths in its approaches and foundations. Most notable are the differences among the Bayesian, frequentist and fiducial (BFF) approaches. While competing approaches are a natural progression of any scientific discipline, differences in the foundations of statistical inference can lead to different interpretations and possible misuses of inferences from the same data set. Such misuses, together with the lack of coherent bridges among inferential frameworks, often fuel mistrust of statistics. Statisticians have long been aware of this hidden danger to the field, and many have stressed the urgent need to build a modern framework of statistical inference that "matches contemporary attitudes" (Kass, 2011, Stat. Sci.) and allows "Bayesian, fiducial and frequentist (BFF) inferences to thrive under one roof as BFFs (Best Friends Forever)" (Meng, 2014, IMS Bulletin).

The differences among the BFF approaches, "unlike most philosophical disputes, have immediate practical consequences" (Efron, 2013, Science). Developments on this topic will therefore have a direct impact on applications. A case in point is the impact of BFF approaches on fusion learning and combining information, an application area of vital importance, especially in light of the trove of data now routinely collected from diverse sources in all domains and at all times. The workshop will provide an ideal platform for comparing and connecting methods, theory and barriers for fusing inferences from multiple sources, from the different but possibly shared BFF perspectives. The goal is to seek approaches that support decision making by exploiting combined inferences that are typically more efficient and potentially more accurate than those from any single source.

The workshop will bring together statisticians and data scientists from across these paradigms to address issues related to the foundations of statistical inference and their applications to combining information and fusion learning. Professors Jim Berger (Duke University) and Brad Efron (Stanford University) will deliver the keynote addresses, followed by many other talks and discussions. The workshop will help disseminate new developments in coherent BFF inference and new advances in statistical inference and their applications, both within statistics and across all fields that use statistics.