Applied Univariate, Bivariate, and Multivariate Statistics - Daniel J. Denis

Understanding Statistics for Social and Natural Scientists, With Applications in SPSS and R

(Author)

Book | Hardcover
576 pages
2021 | 2nd edition
John Wiley & Sons Inc (Publisher)
978-1-119-58304-2 (ISBN)
€132.63 incl. VAT

AN UPDATED GUIDE TO STATISTICAL MODELING TECHNIQUES USED IN THE SOCIAL AND NATURAL SCIENCES

This revised and updated second edition of Applied Univariate, Bivariate, and Multivariate Statistics: Understanding Statistics for Social and Natural Scientists, with Applications in SPSS and R contains an accessible introduction to statistical modeling techniques commonly used in the social and natural sciences. The text offers a blend of statistical theory and methodology and reviews both the technical and theoretical aspects of good data analysis.

Featuring applied resources at various levels, the book includes statistical techniques using software packages such as R and SPSS®. To promote a more in-depth interpretation of statistical techniques across the sciences, the book surveys some of the technical arguments underlying formulas and equations. The second edition has been designed to be more approachable by minimizing theoretical or technical jargon and maximizing conceptual understanding with easy-to-apply software examples. This important text:



Offers demonstrations of statistical techniques using software packages such as R and SPSS®
Contains examples of hypothetical and real data with statistical analyses
Provides historical and philosophical insights into many of the techniques used in modern science
Includes a companion website that features further instructional details, additional data sets, and solutions to selected exercises


Written for students of social and applied sciences, Applied Univariate, Bivariate, and Multivariate Statistics, Second Edition offers a thorough introduction to the world of statistical modeling techniques in the sciences.

DANIEL J. DENIS, PhD, is Professor of Quantitative Psychology at the University of Montana where he teaches courses in univariate and multivariate statistics. He has published a number of articles in peer-reviewed journals and has served as consultant to researchers and practitioners in a variety of fields.

Preface xviii

About the Companion Website xxi

1 Preliminary Considerations 1

1.1 The Philosophical Bases of Knowledge: Rationalistic Versus Empiricist Pursuits 1

1.2 What is a “Model”? 3

1.3 Social Sciences Versus Hard Sciences 5

1.4 Is Complexity a Good Depiction of Reality? Are Multivariate Methods Useful? 7

1.5 Causality 8

1.6 The Nature of Mathematics: Mathematics as a Representation of Concepts 8

1.7 As a Scientist How Much Mathematics Do You Need to Know? 10

1.8 Statistics and Relativity 11

1.9 Experimental Versus Statistical Control 12

1.10 Statistical Versus Physical Effects 12

1.11 Understanding What “Applied Statistics” Means 13

Review Exercises 14

Further Discussion and Activities 14

2 Introductory Statistics 16

2.1 Densities and Distributions 17

2.1.1 Plotting Normal Distributions 19

2.1.2 Binomial Distributions 21

2.1.3 Normal Approximation 23

2.1.4 Joint Probability Densities: Bivariate and Multivariate Distributions 24

2.2 Chi-Square Distributions and Goodness-of-Fit Test 27

2.2.1 Power for Chi-Square Test of Independence 30

2.3 Sensitivity and Specificity 31

2.4 Scales of Measurement: Nominal, Ordinal, Interval, Ratio 31

2.4.1 Nominal Scale 32

2.4.2 Ordinal Scale 32

2.4.3 Interval Scale 33

2.4.4 Ratio Scale 33

2.5 Mathematical Variables Versus Random Variables 34

2.6 Moments and Expectations 35

2.6.1 Sample and Population Mean Vectors 36

2.7 Estimation and Estimators 38

2.8 Variance 39

2.9 Degrees of Freedom 41

2.10 Skewness and Kurtosis 42

2.11 Sampling Distributions 44

2.11.1 Sampling Distribution of the Mean 44

2.12 Central Limit Theorem 47

2.13 Confidence Intervals 47

2.14 Maximum Likelihood 49

2.15 Akaike’s Information Criteria 50

2.16 Covariance and Correlation 50

2.17 Psychometric Validity and Reliability: A Common Use of Correlation Coefficients 54

2.18 Covariance and Correlation Matrices 57

2.19 Other Correlation Coefficients 58

2.20 Student’s t Distribution 61

2.20.1 t-Tests for One Sample 61

2.20.2 t-Tests for Two Samples 65

2.20.3 Two-Sample t-Tests in R 65

2.21 Statistical Power 67

2.21.1 Visualizing Power 69

2.22 Power Estimation Using R and G∗Power 69

2.22.1 Estimating Sample Size and Power for Independent Samples t-Test 71

2.23 Paired-Samples t-Test: Statistical Test for Matched-Pairs (Elementary Blocking) Designs 73

2.24 Blocking With Several Conditions 76

2.25 Composite Variables: Linear Combinations 76

2.26 Models in Matrix Form 77

2.27 Graphical Approaches 79

2.27.1 Box-and-Whisker Plots 79

2.28 What Makes a p-Value Small? A Critical Overview and Practical Demonstration of Null Hypothesis Significance Testing 82

2.28.1 Null Hypothesis Significance Testing (NHST): A Legacy of Criticism 82

2.28.2 The Make-Up of a p-Value: A Brief Recap and Summary 85

2.28.3 The Issue of Standardized Testing: Are Students in Your School Achieving More than the National Average? 85

2.28.4 Other Test Statistics 86

2.28.5 The Solution 87

2.28.6 Statistical Distance: Cohen’s d 87

2.28.7 What Does Cohen’s d Actually Tell Us? 88

2.28.8 Why and Where the Significance Test Still Makes Sense 89

2.29 Chapter Summary and Highlights 89

Review Exercises 92

Further Discussion and Activities 95

3 Analysis of Variance: Fixed Effects Models 97

3.1 What is Analysis of Variance? Fixed Versus Random Effects 98

3.1.1 Small Sample Example: Achievement as a Function of Teacher 99

3.1.2 Is Achievement a Function of Teacher? 100

3.2 How Analysis of Variance Works: A Big Picture Overview 101

3.2.1 Is the Observed Difference Likely? ANOVA as a Comparison (Ratio) of Variances 102

3.3 Logic and Theory of ANOVA: A Deeper Look 103

3.3.1 Independent-Samples t-Tests Versus Analysis of Variance 104

3.3.2 The ANOVA Model: Explaining Variation 105

3.3.3 Breaking Down a Deviation 106

3.3.4 Naming the Deviations 107

3.3.5 The Sums of Squares of ANOVA 108

3.4 From Sums of Squares to Unbiased Variance Estimators: Dividing by Degrees of Freedom 109

3.5 Expected Mean Squares for One-Way Fixed Effects Model: Deriving the F-ratio 110

3.6 The Null Hypothesis in ANOVA 112

3.7 Fixed Effects ANOVA: Model Assumptions 113

3.8 A Word on Experimental Design and Randomization 115

3.9 A Preview of the Concept of Nesting 116

3.10 Balanced Versus Unbalanced Data in ANOVA Models 116

3.11 Measures of Association and Effect Size in ANOVA: Measures of Variance Explained 117

3.11.1 η² (Eta-Squared) 117

3.11.2 Omega-Squared 118

3.12 The F-Test and the Independent Samples t-Test 118

3.13 Contrasts and Post-Hocs 119

3.13.1 Independence of Contrasts 122

3.13.2 Independent Samples t-Test as a Linear Contrast 123

3.14 Post-Hoc Tests 124

3.14.1 Newman–Keuls and Tukey HSD 126

3.14.2 Tukey HSD 127

3.14.3 Scheffé Test 128

3.14.4 Other Post-Hoc Tests 129

3.14.5 Contrast Versus Post-Hoc? Which Should I be Doing? 129

3.15 Sample Size and Power for ANOVA: Estimation With R and G∗Power 130

3.15.1 Power for ANOVA in R and G∗Power 130

3.15.2 Computing f 130

3.16 Fixed Effects One-Way Analysis of Variance in R: Mathematics Achievement as a Function of Teacher 133

3.16.1 Evaluating Assumptions 134

3.16.2 Post-Hoc Tests on Teacher 137

3.17 Analysis of Variance Via R’s lm 138

3.18 Kruskal-Wallis Test in R and the Motivation Behind Nonparametric Tests 138

3.19 ANOVA in SPSS: Achievement as a Function of Teacher 140

3.20 Chapter Summary and Highlights 142

Review Exercises 143

Further Discussion and Activities 145

4 Factorial Analysis of Variance: Modeling Interactions 146

4.1 What is Factorial Analysis of Variance? 146

4.2 Theory of Factorial ANOVA: A Deeper Look 148

4.2.1 Deriving the Model for Two-Way Factorial ANOVA 149

4.2.2 Cell Effects 150

4.2.3 Interaction Effects 151

4.2.4 Cell Effects Versus Interaction Effects 152

4.2.5 A Model for the Two-Way Fixed Effects ANOVA 152

4.3 Comparing One-Way ANOVA to Two-Way ANOVA: Cell Effects in Factorial ANOVA Versus Sample Effects in One-Way ANOVA 153

4.4 Partitioning the Sums of Squares for Factorial ANOVA: The Case of Two Factors 153

4.4.1 SS Total: A Measure of Total Variation 154

4.4.2 Model Assumptions: Two-Way Factorial Model 155

4.4.3 Expected Mean Squares for Factorial Design 156

4.4.4 Recap of Expected Mean Squares 159

4.5 Interpreting Main Effects in the Presence of Interactions 159

4.6 Effect Size Measures 160

4.7 Three-Way, Four-Way, and Higher Models 161

4.8 Simple Main Effects 161

4.9 Nested Designs 162

4.9.1 Varieties of Nesting: Nesting of Levels Versus Subjects 163

4.10 Achievement as a Function of Teacher and Textbook: Example of Factorial ANOVA in R 164

4.10.1 Comparing Models Through AIC 167

4.10.2 Visualizing Main Effects and Interaction Effects Simultaneously 169

4.10.3 Simple Main Effects for Achievement Data: Breaking Down Interaction Effects 170

4.11 Interaction Contrasts 171

4.12 Chapter Summary and Highlights 172

Review Exercises 173

5 Introduction to Random Effects and Mixed Models 175

5.1 What is Random Effects Analysis of Variance? 176

5.2 Theory of Random Effects Models 177

5.3 Estimation in Random Effects Models 178

5.3.1 Transitioning from Fixed Effects to Random Effects 178

5.3.2 Expected Mean Squares for MS Between and MS Within 179

5.4 Defining Null Hypotheses in Random Effects Models 180

5.4.1 F-Ratio for Testing H0 181

5.5 Comparing Null Hypotheses in Fixed Versus Random Effects Models: The Importance of Assumptions 182

5.6 Estimating Variance Components in Random Effects Models: ANOVA, ML, and REML Estimators 183

5.6.1 ANOVA Estimators of Variance Components 183

5.6.2 Maximum Likelihood and Restricted Maximum Likelihood 184

5.7 Is Achievement a Function of Teacher? One-Way Random Effects Model in R 185

5.7.1 Proportion of Variance Accounted for by Teacher 187

5.8 R Analysis Using REML 188

5.9 Analysis in SPSS: Obtaining Variance Components 188

5.10 Factorial Random Effects: A Two-Way Model 190

5.11 Fixed Effects Versus Random Effects: A Way of Conceptualizing Their Differences 191

5.12 Conceptualizing the Two-Way Random Effects Model: The Make-Up of a Randomly Chosen Observation 192

5.13 Sums of Squares and Expected Mean Squares for Random Effects: The Contaminating Influence of Interaction Effects 193

5.13.1 Testing Null Hypotheses 194

5.14 You Get What You Go In With: The Importance of Model Assumptions and Model Selection 195

5.15 Mixed Model Analysis of Variance: Incorporating Fixed and Random Effects 196

5.15.1 Mixed Model in R 199

5.16 Mixed Models in Matrices 199

5.17 Multilevel Modeling as a Special Case of the Mixed Model: Incorporating Nesting and Clustering 200

5.18 Chapter Summary and Highlights 201

Review Exercises 202

6 Randomized Blocks and Repeated Measures 204

6.1 What is a Randomized Block Design? 205

6.2 Randomized Block Designs: Subjects Nested Within Blocks 205

6.3 Theory of Randomized Block Designs 207

6.3.1 Nonadditive Randomized Block Design 208

6.3.2 Additive Randomized Block Design 209

6.4 Tukey Test for Nonadditivity 211

6.5 Assumptions for the Covariance Matrix 212

6.6 Intraclass Correlation 213

6.7 Repeated Measures Models: A Special Case of Randomized Block Designs 215

6.8 Independent Versus Paired-Samples t-Test 215

6.9 The Subject Factor: Fixed or Random Effect? 216

6.10 Model for One-Way Repeated Measures Design 217

6.10.1 Expected Mean Squares for Repeated Measures Models 217

6.11 Analysis Using R: One-Way Repeated Measures: Learning as a Function of Trial 218

6.12 Analysis Using SPSS: One-Way Repeated Measures: Learning as a Function of Trial 222

6.12.1 Which Results Should Be Interpreted? 224

6.13 SPSS Two-Way Repeated Measures Analysis of Variance Mixed Design: One Between Factor, One Within Factor 226

6.13.1 Another Look at the Between-Subjects Factor 229

6.14 Chapter Summary and Highlights 230

Review Exercises 231

7 Linear Regression 232

7.1 Brief History of Regression 233

7.2 Regression Analysis and Science: Experimental Versus Correlational Distinctions 235

7.3 A Motivating Example: Can Offspring Height Be Predicted? 236

7.4 Theory of Regression Analysis: A Deeper Look 238

7.5 Multilevel Yearnings 240

7.6 The Least-Squares Line 240

7.7 Making Predictions Without Regression 241

7.8 More About εᵢ 243

7.9 Model Assumptions for Linear Regression 243

7.9.1 Model Specification 245

7.9.2 Measurement Error 245

7.10 Estimation of Model Parameters in Regression 246

7.10.1 Ordinary Least-Squares (OLS) 247

7.11 Null Hypotheses for Regression 248

7.12 Significance Tests and Confidence Intervals for Model Parameters 250

7.13 Other Formulations of the Regression Model 251

7.14 The Regression Model in Matrices: Allowing for More Complex Multivariable Models 252

7.15 Ordinary Least-Squares in Matrices 255

7.16 Analysis of Variance for Regression 256

7.17 Measures of Model Fit for Regression: How Well Does the Linear Equation Fit? 259

7.18 Adjusted R² 260

7.19 What “Explained Variance” Means and More Importantly, What It Does Not Mean 260

7.20 Values Fit by Regression 261

7.21 Least-Squares Regression in R: Using Matrix Operations 262

7.22 Linear Regression Using R 265

7.23 Regression Diagnostics: A Check on Model Assumptions 267

7.23.1 Understanding How Outliers Influence a Regression Model 268

7.23.2 Examining Outliers and Residuals 269

7.23.3 Detecting Outliers 272

7.23.4 Normality of Residuals 274

7.24 Regression in SPSS: Predicting Quantitative from Verbal 275

7.25 Power Analysis for Linear Regression in R 279

7.26 Chapter Summary and Highlights 281

Review Exercises 283

Further Discussion and Activities 285

8 Multiple Linear Regression 286

8.1 Theory of Partial Correlation 287

8.2 Semipartial Correlations 288

8.3 Multiple Regression 289

8.4 Some Perspective on Regression Coefficients: “Experimental Coefficients”? 290

8.5 Multiple Regression Model in Matrices 291

8.6 Estimation of Parameters 292

8.7 Conceptualizing Multiple R 292

8.8 Interpreting Regression Coefficients: Correlated Versus Uncorrelated Predictors 293

8.9 Anderson’s Iris Data: Predicting Sepal Length From Petal Length and Petal Width 293

8.10 Fitting Other Functional Forms: A Brief Look at Polynomial Regression 297

8.11 Measures of Collinearity in Regression: Variance Inflation Factor and Tolerance 298

8.12 R-squared as a Function of Partial and Semipartial Correlations: The Stepping Stones to Forward and Stepwise Regression 300

8.13 Model-Building Strategies: Simultaneous, Hierarchical, Forward, Stepwise 301

8.13.1 Simultaneous, Hierarchical, and Forward 303

8.13.2 Stepwise Regression 305

8.13.3 Selection Procedures in R 306

8.13.4 Which Regression Procedure Should Be Used? Concluding Comments and Recommendations Regarding Model-Building 306

8.14 Power Analysis for Multiple Regression 307

8.15 Introduction to Statistical Mediation: Concepts and Controversy 307

8.15.1 Statistical Versus True Mediation: Some Philosophical Pitfalls in the Interpretation of Mediation Analysis 309

8.16 Brief Survey of Ridge and Lasso Regression: Penalized Regression Models and the Concept of Shrinkage 311

8.17 Chapter Summary and Highlights 313

Review Exercises 314

Further Discussion and Activities 315

9 Interactions in Multiple Linear Regression 316

9.1 The Additive Regression Model With Two Predictors 317

9.2 Why the Interaction is the Product Term xᵢzᵢ: Drawing an Analogy to Factorial ANOVA 318

9.3 A Motivating Example of Interaction in Regression: Crossing a Continuous Predictor With a Dichotomous Predictor 319

9.4 Analysis of Covariance 323

9.4.1 Is ANCOVA “Controlling” for Anything? 325

9.5 Continuous Moderators 326

9.6 Summing Up the Idea of Interactions in Regression 326

9.7 Do Moderators Really “Moderate” Anything? 326

9.7.1 Some Philosophical Considerations 326

9.8 Interpreting Model Coefficients in the Context of Moderators 327

9.9 Mean-Centering Predictors: Improving the Interpretability of Simple Slopes 328

9.10 Multilevel Regression: Another Special Case of the Mixed Model 330

9.11 Chapter Summary and Highlights 331

Review Exercises 331

10 Logistic Regression and the Generalized Linear Model 333

10.1 Nonlinear Models 335

10.2 Generalized Linear Models 336

10.2.1 The Logic of the Generalized Linear Model: How the Link Function Transforms Nonlinear Response Variables 337

10.3 Canonical Links 338

10.3.1 Canonical Link for Gaussian Variable 339

10.4 Distributions and Generalized Linear Models 339

10.4.1 Logistic Models 339

10.4.2 Poisson Models 340

10.5 Dispersion Parameters and Deviance 340

10.6 Logistic Regression 341

10.6.1 A Generalized Linear Model for Binary Responses 341

10.6.2 Model for Single Predictor 342

10.7 Exponential and Logarithmic Functions 343

10.7.1 Logarithms 345

10.7.2 The Natural Logarithm 346

10.8 Odds and the Logit 347

10.9 Putting It All Together: Logistic Regression 348

10.9.1 The Logistic Regression Model 348

10.9.2 Interpreting the Logit: A Survey of Logistic Regression Output 348

10.10 Logistic Regression in R 351

10.10.1 Challenger O-ring Data 351

10.11 Challenger Analysis in SPSS 354

10.11.1 Predictions of New Cases 356

10.12 Sample Size, Effect Size, and Power 358

10.13 Further Directions 358

10.14 Chapter Summary and Highlights 359

Review Exercises 360

11 Multivariate Analysis of Variance 361

11.1 A Motivating Example: Quantitative and Verbal Ability as a Variate 362

11.2 Constructing the Composite 363

11.3 Theory of MANOVA 364

11.4 Is the Linear Combination Meaningful? 365

11.4.1 Control Over Type I Error Rate 365

11.4.2 Covariance Among Dependent Variables 366

11.4.3 Rao’s Paradox 367

11.5 Multivariate Hypotheses 368

11.6 Assumptions of MANOVA 368

11.7 Hotelling’s T²: The Case of Generalizing From Univariate to Multivariate 369

11.8 The Covariance Matrix S 373

11.9 From Sums of Squares and Cross-Products to Variances and Covariances 375

11.10 Hypothesis and Error Matrices of MANOVA 376

11.11 Multivariate Test Statistics 376

11.11.1 Pillai’s Trace 378

11.11.2 Lawley–Hotelling’s Trace 379

11.12 Equality of Covariance Matrices 379

11.13 Multivariate Contrasts 381

11.14 MANOVA in R and SPSS 382

11.14.1 Univariate Analyses 386

11.15 MANOVA of Fisher’s Iris Data 387

11.16 Power Analysis and Sample Size for MANOVA 388

11.17 Multivariate Analysis of Covariance and Multivariate Models: A Bird’s Eye View of Linear Models 389

11.18 Chapter Summary and Highlights 389

Review Exercises 391

Further Discussion and Activities 393

12 Discriminant Analysis 394

12.1 What is Discriminant Analysis? The Big Picture on the Iris Data 395

12.2 Theory of Discriminant Analysis 396

12.2.1 Discriminant Analysis for Two Populations 397

12.2.2 Substituting the Maximizing Vector into Squared Standardized Difference 398

12.3 LDA in R and SPSS 399

12.4 Discriminant Analysis for Several Populations 405

12.4.1 Theory for Several Populations 405

12.5 Discriminating Species of Iris: Discriminant Analyses for Three Populations 408

12.6 A Note on Classification and Error Rates 410

12.6.1 Statistical Lives 412

12.7 Discriminant Analysis and Beyond 412

12.8 Canonical Correlation 413

12.9 Motivating Example for Canonical Correlation: Hotelling’s 1936 Data 414

12.10 Canonical Correlation as a General Linear Model 415

12.11 Theory of Canonical Correlation 416

12.12 Canonical Correlation of Hotelling’s Data 418

12.13 Canonical Correlation on the Iris Data: Extracting Canonical Correlation From Regression, MANOVA, LDA 419

12.14 Chapter Summary and Highlights 420

Review Exercises 421

Further Discussion and Activities 422

13 Principal Components Analysis 423

13.1 History of Principal Components Analysis 424

13.2 Hotelling 1933 426

13.3 Theory of Principal Components Analysis 428

13.3.1 The Theorem of Principal Components Analysis 428

13.4 Eigenvalues as Variance 429

13.5 Principal Components as Linear Combinations 429

13.6 Extracting the First Component 430

13.6.1 Sample Variance of a Linear Combination 430

13.7 Extracting the Second Component 431

13.8 Extracting Third and Remaining Components 432

13.9 The Eigenvalue as the Variance of a Linear Combination Relative to its Length 432

13.10 Demonstrating Principal Components Analysis: Pearson’s 1901 Illustration 433

13.11 Scree Plots 436

13.12 Principal Components Versus Least-Squares Regression Lines 439

13.13 Covariance Versus Correlation Matrices: Principal Components and Scaling 441

13.14 Principal Components Analysis Using SPSS 441

13.15 Chapter Summary and Highlights 445

Review Exercises 446

Further Discussion and Activities 448

14 Factor Analysis 449

14.1 History of Factor Analysis 450

14.2 Factor Analysis at a Glance 450

14.3 Exploratory Versus Confirmatory Factor Analysis 451

14.4 Theory of Factor Analysis: The Exploratory Factor-Analytic Model 451

14.5 The Common Factor-Analytic Model 452

14.6 Assumptions of the Factor-Analytic Model 454

14.7 Why Model Assumptions are Important 455

14.8 The Factor Model as an Implication for the Covariance Matrix Σ 456

14.9 Again, Why is Σ = ΛΛ′ + Ψ So Important a Result? 457

14.10 The Major Critique Against Factor Analysis: Indeterminacy and the Nonuniqueness of Solutions 457

14.11 Has Your Factor Analysis Been Successful? 459

14.12 Estimation of Parameters in Exploratory Factor Analysis 460

14.13 Principal Factor 460

14.14 Maximum Likelihood 461

14.15 The Concepts (and Criticisms) of Factor Rotation 462

14.16 Varimax and Quartimax Rotation 464

14.17 Should Factors Be Rotated? Is That Not Cheating? 465

14.18 Sample Size for Factor Analysis 466

14.19 Principal Components Analysis Versus Factor Analysis: Two Key Differences 466

14.19.1 Hypothesized Model and Underlying Theoretical Assumptions 466

14.19.2 Solutions are Not Invariant in Factor Analysis 467

14.20 Principal Factor in SPSS: Principal Axis Factoring 468

14.21 Bartlett Test of Sphericity and Kaiser-Meyer-Olkin Measure of Sampling Adequacy (MSA) 474

14.22 Factor Analysis in R: Holzinger and Swineford (1939) 476

14.23 Cluster Analysis 477

14.24 What is Cluster Analysis? The Big Picture 478

14.25 Measuring Proximity 480

14.26 Hierarchical Clustering Approaches 483

14.27 Nonhierarchical Clustering Approaches 485

14.28 K-Means Cluster Analysis in R 486

14.29 Guidelines and Warnings About Cluster Analysis 489

14.30 A Brief Look at Multidimensional Scaling 489

14.31 Chapter Summary and Highlights 492

Review Exercises 493

Further Discussion and Activities 496

15 Path Analysis and Structural Equation Modeling 497

15.1 Path Analysis: A Motivating Example—Predicting IQ Across Generations 498

15.2 Path Analysis and “Causal Modeling” 500

15.3 Early Post-Wright Path Analysis: Predicting Child’s IQ (Burks 1928) 502

15.4 Decomposing Path Coefficients 503

15.5 Path Coefficients and Wright’s Contribution 504

15.6 Path Analysis in R—A Quick Overview: Modeling Galton’s Data 505

15.6.1 Path Model in AMOS 508

15.7 Confirmatory Factor Analysis: The Measurement Model 510

15.7.1 Confirmatory Factor Analysis as a Means of Evaluating Construct Validity and Assessing Psychometric Qualities 512

15.8 Structural Equation Models 514

15.9 Direct, Indirect, and Total Effects 515

15.10 Theory of Statistical Modeling: A Deeper Look Into Covariance Structures and General Modeling 516

15.11 The Discrepancy Function and Chi-Square 518

15.12 Identification 519

15.13 Disturbance Variables 520

15.14 Measures and Indicators of Model Fit 521

15.15 Overall Measures of Model Fit 522

15.15.1 Root Mean Square Residual and Standardized Root Mean Square Residual 522

15.15.2 Root Mean Square Error of Approximation 523

15.16 Model Comparison Measures: Incremental Fit Indices 523

15.17 Which Indicator of Model Fit is Best? 525

15.18 Structural Equation Model in R 526

15.19 How All Variables Are Latent: A Suggestion for Resolving the Manifest-Latent Distinction 528

15.20 The Structural Equation Model as a General Model: Some Concluding Thoughts on Statistics and Science 529

15.21 Chapter Summary and Highlights 530

Review Exercises 531

Further Discussion and Activities 533

References 534

Index 548

Place of publication: New York
Language: English
Weight: 454 g
ISBN-10: 1-119-58304-7 / 1119583047
ISBN-13: 978-1-119-58304-2 / 9781119583042