Goodness-of-Fit Statistics for Discrete Multivariate Data
Timothy R.C. Read, Noel A.C. Cressie

Book | Hardcover
212 pages
1988
Springer-Verlag New York Inc.
978-0-387-96682-3 (ISBN)
53.49 incl. VAT
The statistical analysis of discrete multivariate data has received a great deal of attention in the statistics literature over the past two decades. The development of appropriate models is the common theme of books such as Cox (1970), Haberman (1974, 1978, 1979), Bishop et al. (1975), Gokhale and Kullback (1978), Upton (1978), Fienberg (1980), Plackett (1981), Agresti (1984), Goodman (1984), and Freeman (1987).

The objective of our book differs from those listed above. Rather than concentrating on model building, our intention is to describe and assess the goodness-of-fit statistics used in the model verification part of the inference process. Those books that emphasize model development tend to assume that the model can be tested with one of the traditional goodness-of-fit tests (e.g., Pearson's X² or the loglikelihood ratio G²) using a chi-squared critical value. However, it is well known that this can give a poor approximation in many circumstances. This book provides the reader with a unified analysis of the traditional goodness-of-fit tests, describing their behavior and relative merits as well as introducing some new test statistics. The power-divergence family of statistics (Cressie and Read, 1984) is used to link the traditional test statistics through a single real-valued parameter, and provides a way to consolidate and extend the current fragmented literature. As a by-product of our analysis, a new statistic emerges "between" Pearson's X² and the loglikelihood ratio G² that has some valuable properties.
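As a rough illustration (not code from the book itself), the power-divergence family referred to above can be sketched in a few lines of Python. In the sketch below, lam = 1 recovers Pearson's X², the limit lam → 0 gives the loglikelihood ratio G², and lam = 2/3 is the statistic "between" them mentioned in the blurb; the cell counts in the usage example are hypothetical.

    import numpy as np

    def power_divergence(observed, expected, lam):
        # Power-divergence statistic 2nI^lambda (Cressie and Read, 1984):
        #   (2 / (lam * (lam + 1))) * sum_i X_i * [ (X_i / (n * pi_i))^lam - 1 ]
        # observed: observed cell counts X_i; expected: fitted counts n*pi_i.
        observed = np.asarray(observed, dtype=float)
        expected = np.asarray(expected, dtype=float)
        if np.isclose(lam, 0.0):      # limit lam -> 0: loglikelihood ratio G^2
            return 2.0 * np.sum(observed * np.log(observed / expected))
        if np.isclose(lam, -1.0):     # limit lam -> -1: modified G^2
            return 2.0 * np.sum(expected * np.log(expected / observed))
        return (2.0 / (lam * (lam + 1.0))) * np.sum(
            observed * ((observed / expected) ** lam - 1.0))

    # Hypothetical counts: 100 observations in 4 cells, uniform null model.
    obs = [35, 25, 20, 20]
    exp = [25, 25, 25, 25]
    print(power_divergence(obs, exp, 1.0))      # Pearson's X^2 (= 6.0 here)
    print(power_divergence(obs, exp, 0.0))      # loglikelihood ratio G^2
    print(power_divergence(obs, exp, 2.0 / 3))  # the lam = 2/3 statistic

For real analyses, SciPy ships an implementation of the same family as scipy.stats.power_divergence.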

1 Introduction to the Power-Divergence Statistic
1.1 A Unified Approach to Model Testing
1.2 The Power-Divergence Statistic
1.3 Outline of the Chapters
2 Defining and Testing Models: Concepts and Examples
2.1 Modeling Discrete Multivariate Data
2.2 Testing the Fit of a Model
2.3 An Example: Time Passage and Memory Recall
2.4 Applying the Power-Divergence Statistic
2.5 Power-Divergence Measures in Visual Perception
3 Modeling Cross-Classified Categorical Data
3.1 Association Models and Contingency Tables
3.2 Two-Dimensional Tables: Independence and Homogeneity
3.3 Loglinear Models for Two and Three Dimensions
3.4 Parameter Estimation Methods: Minimum Distance Estimation
3.5 Model Generation: A Characterization of the Loglinear, Linear, and Other Models through Minimum Distance Estimation
3.6 Model Selection and Testing Strategy for Loglinear Models
4 Testing the Models: Large-Sample Results
4.1 Significance Levels under the Classical (Fixed-Cells) Assumptions
4.2 Efficiency under the Classical (Fixed-Cells) Assumptions
4.3 Significance Levels and Efficiency under Sparseness Assumptions
4.4 A Summary Comparison of the Power-Divergence Family Members
4.5 Which Test Statistic?
5 Improving the Accuracy of Tests with Small Sample Size
5.1 Improved Accuracy through More Accurate Moments
5.2 A Second-Order Correction Term Applied Directly to the Asymptotic Distribution
5.3 Four Approximations to the Exact Significance Level: How Do They Compare?
5.4 Exact Power Comparisons
5.5 Which Test Statistic?
6 Comparing the Sensitivity of the Test Statistics
6.1 Relative Deviations between Observed and Expected Cell Frequencies
6.2 Minimum Magnitude of the Power-Divergence Test Statistic
6.3 Further Insights into the Accuracy of Large-Sample Approximations
6.4 Three Illustrations
6.5 Transforming for Closer Asymptotic Approximations in Contingency Tables with Some Small Expected Cell Frequencies
6.6 A Geometric Interpretation of the Power-Divergence Statistic
6.7 Which Test Statistic?
7 Links with Other Test Statistics and Measures of Divergence
7.1 Test Statistics Based on Quantiles and Spacings
7.2 A Continuous Analogue to the Discrete Test Statistic
7.3 Comparisons of Discrete and Continuous Test Statistics
7.4 Diversity and Divergence Measures from Information Theory
8 Future Directions
8.1 Hypothesis Testing and Parameter Estimation under Sparseness Assumptions
8.2 The Parameter λ as a Transformation
8.3 A Generalization of Akaike’s Information Criterion
8.4 The Power-Divergence Statistic as a Measure of Loss and a Criterion for General Parameter Estimation
8.5 Generalizing the Multinomial Distribution
Historical Perspective: Pearson’s X² and the Loglikelihood Ratio Statistic G²
1. Small-Sample Comparisons of X² and G² under the Classical (Fixed-Cells) Assumptions
2. Comparing X² and G² under Sparseness Assumptions
3. Efficiency Comparisons
4. Modified Assumptions and Their Impact
Appendix: Proofs of Important Results
A1. Some Results on Rao Second-Order Efficiency and Hodges-Lehmann Deficiency (Section 3.4)
A2. Characterization of the Generalized Minimum Power-Divergence Estimate (Section 3.5)
A3. Characterization of the Lancaster-Additive Model (Section 3.5)
A4. Proof of Results (i), (ii), and (iii) (Section 4.1)
A5. Statement of Birch’s Regularity Conditions and Proof that the Minimum Power-Divergence Estimator Is BAN (Section 4.1)
A6. Proof of Results (i*), (ii*), and (iii*) (Section 4.1)
A7. The Power-Divergence Generalization of the Chernoff-Lehmann Statistic: An Outline (Section 4.1)
A8. Derivation of the Asymptotic Noncentral Chi-Squared Distribution for the Power-Divergence Statistic under Local Alternative Models (Section 4.2)
A9. Derivation of the Mean and Variance of the Power-Divergence Statistic for λ > -1 under a Nonlocal Alternative Model (Section 4.2)
A10. Proof of the Asymptotic Normality of the Power-Divergence Statistic under Sparseness Assumptions (Section 4.3)
A12. Derivation of the Second-Order Terms for the Distribution Function of the Power-Divergence Statistic under the Classical (Fixed-Cells) Assumptions (Section 5.2)
A13. Derivation of the Minimum Asymptotic Value of the Power-Divergence Statistic (Section 6.2)
A14. Limiting Form of the Power-Divergence Statistic as the Parameter λ → ±∞ (Section 6.2)
Author Index

Publication date (per publisher): August 1, 1988
Series: Springer Series in Statistics
Additional information: XII, 212 p.
Place of publication: New York, NY
Language: English
Dimensions: 155 x 235 mm
Subject areas: Mathematics / Computer Science > Mathematics > Applied Mathematics
Mathematics / Computer Science > Mathematics > Statistics
Mathematics / Computer Science > Mathematics > Probability / Combinatorics
ISBN-10 0-387-96682-X / 038796682X
ISBN-13 978-0-387-96682-3 / 9780387966823
Condition: New