Applied Statistical Methods - Sachs

Applied Statistical Methods

(Author)

Book | Hardcover
706 pages
1982 | 1982 ed.
Springer-Verlag New York Inc.
978-0-387-90558-7 (ISBN)
85.55 incl. VAT
A newer edition of this title exists.
An English translation now joins the Russian and Spanish versions. It is based on the newly revised fifth edition of the German version of the book. The original edition has become very popular as a learning and reference source, with easy-to-follow recipes and cross references, for scientists in fields such as engineering, chemistry, and the life sciences. Little mathematical background is required of the reader, and some important topics, like the logarithm, are dealt with in the preliminaries preceding chapter one. The usefulness of the book as a reference is enhanced by a number of convenient tables and by references to other tables and methods, both in the text and in the bibliography. The English edition contains more material than the German original. I am most grateful to all who have in conversations, letters, or reviews suggested improvements in or criticized earlier editions. Comments and suggestions will continue to be welcome. We are especially grateful to Mrs. Dorothy Aeppli of St. Paul, Minnesota, for providing numerous valuable comments during the preparation of the English manuscript. The author and the translator are responsible for any remaining faults and imperfections. I welcome any suggestions for improvement. My greatest personal gratitude goes to the translator, Mr. Zenon Reynarowych, whose skills have done much to clarify the text, and to Springer-Verlag.

Introduction to Statistics.- 0 Preliminaries.- 0.1 Mathematical Abbreviations.- 0.2 Arithmetical Operations.- 0.3 Computational Aids.- 0.4 Rounding Off.- 0.5 Computations with Inaccurate Numbers.
1 Statistical Decision Techniques.- 1.1 What Is Statistics? Statistics and the Scientific Method.- 1.2 Elements of Computational Probability.- 1.2.1 Statistical probability.- 1.2.2 The addition theorem of probability theory.- 1.2.3 Conditional probability and statistical independence.- 1.2.4 Bayes' theorem.- 1.2.5 The random variable.- 1.2.6 The distribution function and the probability function.- 1.3 The Path to the Normal Distribution.- 1.3.1 The population and the sample.- 1.3.2 The generation of random samples.- 1.3.3 A frequency distribution.- 1.3.4 Bell-shaped curves and the normal distribution.- 1.3.5 Deviations from the normal distribution.- 1.3.6 Parameters of unimodal distributions.- 1.3.7 The probability plot.- 1.3.8 Additional statistics for the characterization of a one-dimensional frequency distribution.- 1.3.9 The lognormal distribution.- 1.4 The Road to the Statistical Test.- 1.4.1 The confidence coefficient.- 1.4.2 Null hypotheses and alternative hypotheses.- 1.4.3 Risk I and risk II.- 1.4.4 The significance level and the hypotheses are, if possible, to be specified before collecting the data.- 1.4.5 The statistical test.- 1.4.6 One-sided and two-sided tests.- 1.4.7 The power of a test.- 1.4.8 Distribution-free procedures.- 1.4.9 Decision principles.- 1.5 Three Important Families of Test Distributions.- 1.5.1 The Student's t-distribution.- 1.5.2 The χ2 distribution.- 1.5.3 The F-distribution.- 1.6 Discrete Distributions.- 1.6.1 The binomial coefficient.- 1.6.2 The binomial distribution.- 1.6.3 The hypergeometric distribution.- 1.6.4 The Poisson distribution.- 1.6.5 The Thorndike nomogram.- 1.6.6 Comparison of means of Poisson distributions.- 1.6.7 The dispersion index.- 1.6.8 The multinomial coefficient.- 1.6.9 The multinomial distribution.
2 Statistical Methods in Medicine and Technology.- 2.1 Medical Statistics.- 2.1.1 Critique of the source material.- 2.1.2 The reliability of laboratory methods.- 2.1.3 How to get unbiased information and how to investigate associations.- 2.1.4 Retrospective and prospective comparisons.- 2.1.5 The therapeutic comparison.- 2.1.6 The choice of appropriate sample sizes for the clinical trial.- 2.2 Sequential Test Plans.- 2.3 Evaluation of Biologically Active Substances Based on Dosage-Dichotomous Effect Curves.- 2.4 Statistics in Engineering.- 2.4.1 Quality control in industry.- 2.4.2 Life span and reliability of manufactured products.- 2.5 Operations Research.- 2.5.1 Linear programming.- 2.5.2 Game theory and the war game.- 2.5.3 The Monte Carlo method and computer simulation.
3 The Comparison of Independent Data Samples.- 3.1 The Confidence Interval of the Mean and of the Median.- 3.1.1 Confidence interval for the mean.- 3.1.2 Estimation of sample sizes.- 3.1.3 The mean absolute deviation.- 3.1.4 Confidence interval for the median.- 3.2 Comparison of an Empirical Mean with the Mean of a Normally Distributed Population.- 3.3 Comparison of an Empirical Variance with Its Parameter.- 3.4 Confidence Interval for the Variance and for the Coefficient of Variation.- 3.5 Comparison of Two Empirically Determined Variances of Normally Distributed Populations.- 3.5.1 Small to medium sample size.- 3.5.2 Medium to large sample size.- 3.5.3 Large to very large sample size (n1, n2 ≥ 100).- 3.6 Comparison of Two Empirical Means of Normally Distributed Populations.- 3.6.1 Unknown but equal variances.- 3.6.2 Unknown, possibly unequal variances.- 3.7 Quick Tests Which Assume Nearly Normally Distributed Data.- 3.7.1 The comparison of the dispersions of two small samples according to Pillai and Buenaventura.- 3.7.2 The comparison of the means of two small samples according to Lord.- 3.7.3 Comparison of the means of several samples of equal size according to Dixon.- 3.8 The Problem of Outliers and Some Tables Useful in Setting Tolerance Limits.- 3.9 Distribution-Free Procedures for the Comparison of Independent Samples.- 3.9.1 The rank dispersion test of Siegel and Tukey.- 3.9.2 The comparison of two independent samples: Tukey's quick and compact test.- 3.9.3 The comparison of two independent samples according to Kolmogoroff and Smirnoff.- 3.9.4 Comparison of two independent samples: The U-test of Wilcoxon, Mann, and Whitney.- 3.9.5 The comparison of several independent samples: The H-test of Kruskal and Wallis.
4 Further Test Procedures.- 4.1 Reduction of Sampling Errors by Pairing Observations: Paired Samples.- 4.2 Observations Arranged in Pairs.- 4.2.1 The t-test for data arranged in pairs.- 4.2.2 The Wilcoxon matched-pair signed-rank test.- 4.2.3 The maximum test for pair differences.- 4.2.4 The sign test of Dixon and Mood.- 4.3 The χ2 Goodness of Fit Test.- 4.3.1 Comparing observed frequencies with their expectations.- 4.3.2 Comparison of an empirical distribution with the uniform distribution.- 4.3.3 Comparison of an empirical distribution with the normal distribution.- 4.3.4 Comparison of an empirical distribution with the Poisson distribution.- 4.4 The Kolmogoroff-Smirnoff Goodness of Fit Test.- 4.5 The Frequency of Events.- 4.5.1 Confidence limits of an observed frequency for a binomially distributed population; the comparison of a relative frequency with the underlying parameter.- 4.5.2 Clopper and Pearson's quick estimation of the confidence intervals of a relative frequency.- 4.5.3 Estimation of the minimum size of a sample with counted data.- 4.5.4 The confidence interval for rare events.- 4.5.5 Comparison of two frequencies; testing whether they stand in a certain ratio.- 4.6 The Evaluation of Fourfold Tables.- 4.6.1 The comparison of two percentages: the analysis of fourfold tables.- 4.6.2 Repeated application of the fourfold χ2 test.- 4.6.3 The sign test modified by McNemar.- 4.6.4 The additive property of χ2.- 4.6.5 The combination of fourfold tables.- 4.6.6 The Pearson contingency coefficient.- 4.6.7 The exact Fisher test of independence, as well as an approximation for the comparison of two binomially distributed populations (based on very small samples).- 4.7 Testing the Randomness of a Sequence of Dichotomous Data or of Measured Data.- 4.7.1 The mean square successive difference.- 4.7.2 The run test for testing whether a sequence of dichotomous data or of measured data is random.- 4.7.3 The phase frequency test of Wallis and Moore.- 4.8 The S3 Sign Test of Cox and Stuart for Detection of a Monotone Trend.
5 Measures of Association: Correlation and Regression.- 5.1 Preliminary Remarks and Survey.- 5.1.1 The Bartlett procedure.- 5.1.2 The Kerrich procedure.- 5.2 Hypotheses on Causation Must Come from Outside, Not Necessarily from Statistics.- 5.3 Distribution-Free Measures of Association.- 5.3.1 The Spearman rank correlation coefficient.- 5.3.2 Quadrant correlation.- 5.3.3 The corner test of Olmstead and Tukey.- 5.4 Estimation Procedures.- 5.4.1 Estimation of the correlation coefficient.- 5.4.2 Estimation of the regression line.- 5.4.3 The estimation of some standard deviations.- 5.4.4 Estimation of the correlation coefficients and the regression lines from a correlation table.- 5.4.5 Confidence limits of correlation coefficients.- 5.5 Test Procedures.- 5.5.1 Testing for the presence of correlation and some comparisons.- 5.5.2 Further applications of the z-transformation.- 5.5.3 Testing the linearity of a regression.- 5.5.4 Testing the regression coefficient against zero.- 5.5.5 Testing the difference between an estimated and a hypothetical regression coefficient.- 5.5.6 Testing the difference between an estimated and a hypothetical axis intercept.- 5.5.7 Confidence limits for the regression coefficient, for the axis intercept, and for the residual variance.- 5.5.8 Comparing two regression coefficients and testing the equality of more than two regression lines.- 5.5.9 Confidence interval for the regression line.- 5.6 Nonlinear Regression.- 5.7 Some Linearizing Transformations.- 5.8 Partial and Multiple Correlations and Regressions.
6 The Analysis of k × 2 and Other Two Way Tables.- 6.1 Comparison of Several Samples of Dichotomous Data and the Analysis of a k × 2 Two Way Table.- 6.1.1 k × 2 tables: The binomial homogeneity test.- 6.1.2 Comparison of two independent empirical distributions of frequency data.- 6.1.3 Partitioning the degrees of freedom of a k × 2 table.- 6.1.4 Testing a k × 2 table for trend: The share of linear regression in the overall variation.- 6.2 The Analysis of r × c Contingency and Homogeneity Tables.- 6.2.1 Testing for independence or homogeneity.- 6.2.2 Testing the strength of the relation between two categorically itemized characteristics; the comparison of several contingency tables with respect to the strength of the relation by means of the corrected contingency coefficient of Pawlik.- 6.2.3 Testing for trend: The component due to linear regression in the overall variation; the comparison of regression coefficients of corresponding two way tables.- 6.2.4 Testing square tables for symmetry.- 6.2.5 Application of the minimum discrimination information statistic in testing two way tables for independence or homogeneity.
7 Analysis of Variance Techniques.- 7.1 Preliminary Discussion and Survey.- 7.2 Testing the Equality of Several Variances.- 7.2.1 Testing the equality of several variances of equally large groups of samples.- 7.2.2 Testing the equality of several variances according to Cochran.- 7.2.3 Testing the equality of the variances of several samples of the same or different sizes according to Bartlett.- 7.3 One Way Analysis of Variance.- 7.3.1 Comparison of several means by analysis of variance.- 7.3.2 Assessment of linear contrasts according to Scheffé, and related topics.- 7.3.3 Transformations.- 7.4 Two Way and Three Way Analysis of Variance.- 7.4.1 Analysis of variance for 2ab observations.- 7.4.2 Multiple comparison of means according to Scheffé, according to Student, Newman and Keuls, and according to Tukey.- 7.4.3 Two way analysis of variance with a single observation per cell; a model without interaction.- 7.5 Rapid Tests of Analysis of Variance.- 7.5.1 Rapid test of analysis of variance and multiple comparisons of means according to Link and Wallace.- 7.5.2 Distribution-free multiple comparisons of independent samples according to Nemenyi: Pairwise comparisons of all possible pairs of treatments.- 7.6 Rank Analysis of Variance for Several Correlated Samples.- 7.6.1 The Friedman test: Double partitioning with a single observation per cell.- 7.6.2 Multiple comparisons of correlated samples according to Wilcoxon and Wilcox: Pairwise comparisons of several treatments which are repeated under a number of different conditions or in a number of different classes of subjects.- 7.7 Principles of Experimental Design.
Bibliography and General References.- Exercises.- Solutions to the Exercises.- Author Index.

Series: Springer Series in Statistics
Place of publication: New York, NY
Language: English
Subject areas: Mathematics / Computer Science - Mathematics - General / Reference
Mathematics / Computer Science - Mathematics - Applied Mathematics
Mathematics / Computer Science - Mathematics - Statistics
Mathematics / Computer Science - Mathematics - Probability / Combinatorics
ISBN-10 0-387-90558-8 / 0387905588
ISBN-13 978-0-387-90558-7 / 9780387905587
Condition: New