Statistics and Probability for Engineers and Scientists - Bhisham C. Gupta, Irwin Guttman

Book | Softcover
984 pages
2011 | Preliminary Edition
John Wiley & Sons Inc (publisher)
978-1-118-09872-1 (ISBN)
33.60 incl. VAT
A newer edition of this title is available.
Wiley is excited to provide you with a sneak preview of a ground-breaking new text for the Engineering Statistics course.

All statistical concepts are supported by a large number of examples using data encountered in real-life situations, and the text illustrates how the statistical packages MINITAB®, Microsoft Excel®, and JMP® may be used to aid in the analysis of various data sets.

The text also covers the design of experiments at an appropriate and accessible level, including randomized block designs, one- and two-way designs, Latin square designs, factorial designs, response surface designs, and others.

This text is suitable for a one- or two-semester calculus-based undergraduate statistics course for engineers and scientists, and the presentation of material gives instructors flexibility to pick and choose topics for their particular courses.
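
The numerical measures introduced in Chapter 2 (centrality, dispersion, and relative position) are illustrated in the book with MINITAB, Excel, and JMP. As a rough, package-neutral taste of that kind of summary, the short Python sketch below computes the same quantities for a small invented data set; the data values and the use of Python here are illustrative assumptions only and are not taken from the text.

    # Minimal sketch of the descriptive summary covered in Chapter 2.
    # The measurements below are invented for illustration; the book itself
    # demonstrates this analysis with MINITAB, Microsoft Excel, and JMP.
    import statistics

    data = [66.2, 71.5, 68.9, 70.1, 73.4, 69.8, 72.0, 67.3, 70.6, 71.1]

    mean = statistics.mean(data)                  # measure of centrality
    median = statistics.median(data)              # robust measure of centrality
    s = statistics.stdev(data)                    # sample standard deviation (dispersion)
    q1, q2, q3 = statistics.quantiles(data, n=4)  # quartiles (relative position)

    print(f"mean = {mean:.2f}, median = {median:.2f}, s = {s:.2f}")
    print(f"Q1 = {q1:.2f}, Q2 = {q2:.2f}, Q3 = {q3:.2f}")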

Chapter 1: Introduction
Chapter 2: Describing Data Graphically and Numerically
2.1 Getting Started With Statistics
2.1.1 What is Statistics?
2.1.2 Population and Sample in a Statistical Study
2.2 Classification of Various Types of Data
2.3 Frequency Distribution Tables for Qualitative and Quantitative Data
2.4 Graphical Description of Qualitative and Quantitative Data
2.4.1 Dot Plot
2.4.2 Pie Chart
2.4.3 Bar Chart
2.4.4 Histograms
2.4.5 Line Graph
2.4.6 Stem-and-Leaf Plot
2.5 Numerical Measures of Quantitative Data
2.5.1 Measures of Centrality
2.5.2 Measures of Dispersion
2.6 Numerical Measures of Grouped Data
2.7 Measures of Relative Position
2.8 Box-Whisker Plot
2.9 Measures of Association
2.10 Case Studies
2.11 Using JMP
Review Practice Problems

Chapter 3 Elements of Probability
3.1 Random Experiments, Sample Spaces, and Events
3.2 Concepts of Probability
3.3 Techniques of Counting Sample Points
3.3.1 Tree Diagrams
3.3.2 Permutations
3.3.3 Combinations
3.3.4 Arrangements of n Objects Involving Several Kinds of Objects
3.3.5 Application of Combinations to Probability Problems
3.4 Conditional Probability
3.5 Bayes' Theorem
3.6 Introducing Random Variables
Review Practice Problems

Chapter 4 Discrete Random Variables and Some Important Discrete Probability Distributions

4.1 Graphical Descriptions of Discrete Distributions
4.2 Mean and Variance of a Discrete Random Variable
4.2.1 The Moment-Generating Function - Expectation of a Special Function
4.3 The Discrete Uniform Distribution
4.4 The Hypergeometric Distribution
4.5 The Bernoulli Distribution
4.6 The Binomial Distribution
4.7 The Multinomial Distribution
4.8 The Poisson Distribution
4.8.1 Poisson Distribution as a Limiting Form of the Binomial
4.9 The Negative Binomial Distribution
4.10 Some Derivations and Proofs (Optional)
4.10.1 Proof that the Probability Function of the Hypergeometric Distribution Sums to 1
4.10.2 Mean and the Variance of the Hypergeometric Distribution
4.10.3 Mean and the Variance of the Binomial Distribution
4.10.4 Mean and the Variance of the Poisson Distribution
4.10.5 Derivation of the Poisson Distribution
4.11 A Case Study
4.12 Using JMP
Review Practice Problems

Chapter 5 Continuous Random Variables and Some Important Continuous Probability Distributions

5.1 Continuous Random Variables
5.2 Mean and Variance of Continuous Random Variables
5.2.1 The Moment-Generating Function - Expectation of a Special Function
5.3 Chebychev's Inequality
5.4 The Uniform Distribution
5.5 The Normal Distribution
5.5.1 Definition and Properties
5.5.2 The Standard Normal Distribution
5.5.3 The Moment-Generating Function of the Normal Distribution
5.6 Distribution of Linear Combinations of Independent Normal Variables
5.7 Approximation of the Binomial Distribution by the Normal Distribution
5.8 A Test of Normality
5.9 The Lognormal Distribution
5.10 The Exponential Distribution
5.11 The Gamma Distribution
5.12 The Weibull Distribution
5.13 A Case Study
5.14 Using JMP
Review Practice Problems

Chapter 6 Distribution Functions of Random Variables

6.1 Distribution Functions of Two Random Variables
6.1.1 Case of Two Discrete Random Variables
6.1.2 Case of Two Continuous Random Variables
6.1.3 The Mean Value and Variance of Functions of Two Random Variables
6.1.4 Conditional Distributions
6.1.5 Correlation Between Two Random Variables
6.1.6 Bivariate Normal Distribution
6.2 Extension to Several Random Variables
6.3 The Moment-Generating Function Revisited
Review Practice Problems

Chapter 7 Sampling Distribution

7.1 Random Sampling
7.1.1 Random Sampling from an Infinite Population
7.1.2 Random Sampling from a Finite Population
7.2 The Sampling Distribution of the Mean
7.2.1 The Central Limit Theorem
7.3 Sampling from a Normal Population
7.3.1 The Chi-Square Distribution
7.3.2 The Student t Distribution
7.3.3 Snedecor's F Distribution
7.4 Order Statistics
7.4.1 Distribution of the Largest Element in a Sample
7.4.2 Distribution of the Smallest Element in a Sample
7.4.3 Distribution of the Median of a Sample and of the kth-Order Statistic
7.4.4 The Range as an Estimate of σ in Normal Samples
7.5 Using JMP
Review Practice Problems

Chapter 8 Estimation of Population Parameters

8.1 Introduction
8.2 Point Estimators for the Population Mean and Variance
8.2.1 Properties of Point Estimators
8.2.2 Methods of Finding Point Estimators
8.3 Interval Estimators for the Mean of a Normal Population
8.3.1 σ² Known
8.3.2 σ² Unknown
8.3.3 Sample Size is Large
8.4 Interval Estimators for the Difference of Means of Two Normal Populations
8.4.1 Variances are Known
8.4.2 Variances are Unknown
8.5 Interval Estimators for the Variance of a Normal Population
8.6 Interval Estimators for the Ratio of Variances of Two Normal Populations
8.7 Point and Interval Estimators for the Parameters of Binomial Populations
8.7.1 One Binomial Population
8.7.2 Two Binomial Populations
8.8 Determination of Sample Size
8.9 Some Supplemental Information (Optional)
8.9.1 Proof of
8.9.2 Predicting an Arbitrary Observation
8.10 A Case Study
8.11 Using JMP
Review Practice Problems
Chapter 9 Hypothesis Testing

9.1 Introduction
9.2 Basic Concepts of Testing a Statistical Hypothesis
9.3 Tests Concerning the Mean of a Normal Distribution Having Known Variance
9.4 Tests Concerning the Mean of a Normal Population Having Unknown Variance
9.5 Large Sample Theory
9.6 Tests Concerning the Difference of Means of Two Populations Having Distributions with Known Variances
9.7 Tests Concerning the Difference of Means of Two Populations Having Distributions with Unknown Variances
9.7.1 Two Population Variances Are Equal
9.7.2 Two Population Variances Are Not Equal
9.7.3 The Paired t-Test
9.8 Testing Population Proportions
9.8.1 Testing Concerning One Population Proportion
9.8.2 Testing Concerning the Difference Between Two Population Proportions
9.9 Tests Concerning the Variance of a Normal Distribution
9.10 Tests Concerning the Ratio of Variances of Two Normal Populations
9.11 An Alternative Technique for Testing of Statistical Hypotheses: Using Confidence Intervals
9.12 Sequential Tests of Hypotheses (Optional)
9.12.1 A One-Sided Sequential Testing Procedure
9.12.2 A Two-Sided Sequential Testing Procedure
9.13 Case Studies
9.14 Using JMP
Review Practice Problems

Chapter 10 Elements of Reliability Theory
10.1 The Reliability Function
10.1.1 The Hazard Rate
10.1.2 Employing the Hazard Function
10.2 Estimation: Exponential Distribution
10.3 Hypothesis Testing: Exponential Distribution
10.4 Estimation: Weibull Distribution
10.5 Case Studies
10.6 Using JMP
Review Practice Problems

Chapter 11 Statistical Quality Control and Phase I Control Charts
11.1 Basic Concepts of Quality and Its Benefits
11.2 What Is a Process?
11.3 Common and Assignable Causes
11.4 Control Charts
11.5 Control Charts for Variables
11.5.1 Shewhart X̄ and R Control Chart
11.5.2 Shewhart X̄ and R Control Chart When Process Mean and Process Standard Deviation Are Known
11.5.3 The Shewhart X̄ and S Control Chart
11.6 Control Charts for Attributes
11.6.1 The p Chart: Control Chart for the Fraction of Nonconforming Units
11.6.2 The p Chart: Control Chart for the Fraction of Nonconforming Units with Variable Sample Sizes
11.6.3 The np Control Chart: Control Chart for Number of Nonconforming Units
11.6.4 The C Control Chart
11.6.5 The U Control Chart
11.7 Process Capability
11.8 Case Studies
11.9 Using JMP
Review Practice Problems

Chapter 12 Statistical Quality Control and Phase II Control Charts

12.1 Basic Concepts of CUSUM Control Chart
12.2 Designing a CUSUM Control Chart
12.2.1 Two-Sided CUSUM Control Chart Using a Numerical Procedure
12.2.2 The Fast Initial Response (FIR) Feature for the CUSUM Control Chart
12.2.3 The Combined Shewhart-CUSUM Control Chart
12.2.4 The CUSUM Control Chart for Controlling Process Variability
12.3 The Moving Average (MA) Control Chart
12.4 The Exponentially Weighted Moving Average (EWMA) Control Chart
12.5 Case Studies
12.6 Using JMP
Review Practice Problems

Chapter 13 Analysis of Categorical Data

13.1 Introduction
13.2 The Chi-Square Goodness of Fit Test
13.3 Contingency Tables
13.3.1 The 2 × 2 Case - Parameters Known
13.3.2 The 2 × 2 Case - Parameters Unknown
13.3.3 The r × s Contingency Table
13.4 Chi-Square Test for Homogeneity
13.5 Comments on the Distribution of the Lack-of-Fit Statistic (Optional)
13.6 Case Studies
13.7 Using JMP
Review Practice Problems

Chapter 14 Nonparametric Tests

14.1 Introduction
14.2 The Sign Test
14.2.1 One-Sample Test
14.2.2 The Wilcoxon Signed-Rank Test
14.2.3 Two-Sample Test
14.3 The Mann-Whitney (Wilcoxon) W Test for Two Samples
14.4 Run Tests
14.4.1 Runs Above and Below the Median
14.4.2 The Wald-Wolfowitz Run Test
14.5 Spearman Rank Correlation
14.6 Using JMP
Review Practice Problems

Chapter 15 Simple Linear Regression Analysis
15.1 Introduction
15.2 Fitting the Simple Linear Regression Model
15.2.1 Simple Linear Regression Model
15.2.2 Fitting a Straight Line by Least Squares
15.2.3 Sampling Distributions of the Estimators of Regression Coefficients
15.3 Unbiased Estimator of σ²
15.4 Further Inferences Concerning Regression Coefficients and E(Y|X)
15.4.1 Confidence Interval for β₁ with Confidence Coefficient (1 − α)
15.4.2 Confidence Interval for β₀ with Confidence Coefficient (1 − α)
15.4.3 Confidence Interval for E(Y|X) with Confidence Coefficient (1 − α)
15.4.4 Prediction Interval for a Future Observation with Confidence Coefficient (1 − α)
15.5 Test of Hypotheses for β₀ and β₁
15.6 Analysis of Variance Approach to Simple Regression Analysis
15.7 Residual Analysis
15.8 Transformations
15.9 Inference About ρ
15.10 A Case Study
15.11 Using JMP
Review Practice Problems

Chapter 16 Multiple Linear Regression Analysis

16.1 Introduction
16.2 The Multiple Linear Regression Model
16.3 Estimation of Regression Coefficients
16.3.1 Estimation of Regression Coefficients Using Matrix Notation
16.3.2 Properties of the Least-Squares Estimators
16.3.3 The Analysis of Variance Table
16.3.4 More Inferences About Regression Parameters
16.4 The Multiple Linear Regression Model Using Qualitative or Categorical Predictor Variables
16.5 Standardized Regression Coefficients
16.6 Building Regression Type Prediction Models
16.7 Residual Analysis
16.7.1 Certain Criteria for Model Selection
16.8 Logistic Regression
16.9 Using JMP
16.10 Case Studies
Review Practice Problems

Chapter 17 Analysis of Variance

17.1 Introduction
17.2 Design Models
17.3 One-Way Experimental Layouts
17.3.1 Confidence Intervals for Treatment Means
17.3.2 Multiple Comparisons
17.3.3 Determination of Sample Size
17.3.4 The Kruskal-Wallis Test for One-Way Layouts (Nonparametric Method)
17.4 Randomized Complete Block Designs
17.4.1 The Friedman Test for Randomized Complete Block Designs
17.4.2 Experiments with One Missing Observation in an RCB Design Experiment
17.4.3 Experiments with Several Missing Observations in an RCB Design Experiment
17.5 Two-Way Experimental Design
17.5.1 Two-Way Experimental Layouts with One Observation per Cell
17.5.2 Two-Way Experimental Layouts with r > 1 Observations per Cell
17.5.3 Blocking in Two-Way Experimental Designs
17.5.4 Extending Two-Way Experimental Designs to n-way Experimental Designs
17.6 Latin Square Designs
17.7 Random Effects Model
17.7.1 Mixed Effects Model
17.7.2 Nested (Hierarchical) Designs
17.8 Case Study
17.9 Using JMP
Review Practice Problems

Chapter 18 The 2^k Factorial Designs
18.1 The Factorial Designs
18.2 The 2^k Factorial Design
18.3 Unreplicated 2^k Factorial Designs
18.4 Blocking the 2^k Factorial Design
18.4.1 Confounding in the 2^k Factorial Design
18.4.2 Yates' Algorithm for the 2^k Factorial Designs
18.5 The Fractional Factorial Designs
18.5.1 One-Half Replicate of a Factorial Design
18.5.2 One-Quarter Replicate of a Factorial Design
18.6 Case Studies
18.7 Using JMP
Review Practice Problems

Chapter 19 Response Surfaces

19.1 Basic Concepts of Response Surface Methodology
19.2 First-Order Designs
19.3 Second-Order Designs
19.3.1 Central Composite Designs (CCD)
19.3.2 Some Other First-Order and Second-Order Designs
19.4 Determination of the Optimum or Near-Optimum Point
19.4.1 The Method of Steepest Ascent
19.4.2 Analysis of a Fitted Second-Order Response Surface
19.5 ANOVA Table for a Second-Order Model
19.6 Case Studies
19.7 Using JMP
Review Practice Problems
Appendix A: Statistical Tables and Charts
Appendix B: Answers to Selected Problems
Appendix C: Bibliography
Index

 

Publication date (per publisher): 13 September 2011
Place of publication: New York
Language: English
Dimensions: 216 x 276 mm
Weight: 1821 g
Subject area: Engineering / Mechanical Engineering
ISBN-10: 1-118-09872-2 / 1118098722
ISBN-13: 978-1-118-09872-1 / 9781118098721
Condition: New