Data Analysis Methods in Physical Oceanography -  William J. Emery,  Richard E. Thomson

Data Analysis Methods in Physical Oceanography (eBook)

eBook Download: PDF | EPUB
2014 | 3rd edition
728 pages
Elsevier Science (publisher)
978-0-12-387783-3 (ISBN)
System requirements
€98.95 incl. VAT
  • Download available immediately
Data Analysis Methods in Physical Oceanography, Third Edition is a practical reference to established and modern data analysis techniques in earth and ocean sciences. Its five major sections address data acquisition and recording, data processing and presentation, statistical methods and error handling, analysis of spatial data fields, and time series analysis methods. The revised Third Edition updates the instrumentation used to collect and analyze physical oceanic data and adds new techniques, including Kalman filtering. Additionally, the sections covering spectral, wavelet, and harmonic analysis techniques are completely revised, since these techniques have attracted significant attention over the past decade as more accurate and efficient data gathering and analysis methods.
  • Completely updated and revised to reflect new filtering techniques and a major update of the instrumentation used to collect and analyze data
  • Co-authored by scientists from academia and industry, both of whom have more than 30 years of experience in oceanographic research and field work
  • Significant revision of the sections covering spectral, wavelet, and harmonic analysis techniques
  • Examples address typical data analysis problems yet provide the reader with formulaic 'recipes' for working with their own data
  • Significant expansion to 350 figures, illustrations, diagrams, and photos

Richard E. Thomson is a researcher in coastal and deep-sea physical oceanography within the Ocean Sciences Division. His research interests include coastal oceanographic processes on the continental shelf and slope, including coastally trapped waves, upwelling, and baroclinic instability; hydrothermal venting and the physics of buoyant plumes; the linkage between circulation and zooplankton biomass aggregations at hydrothermal venting sites; the analysis and modelling of landslide-generated tsunamis; and paleoclimate reconstruction using tree-ring records and sediment cores from coastal inlets and basins.

Front Cover 1
DATA ANALYSIS METHODS IN PHYSICAL OCEANOGRAPHY 4
Copyright 5
Dedication 6
Contents 8
Preface 10
Acknowledgments 12
Chapter 1 - Data Acquisition and Recording 14
1.1 INTRODUCTION 14
1.2 BASIC SAMPLING REQUIREMENTS 16
1.3 TEMPERATURE 23
1.4 SALINITY 50
1.5 DEPTH OR PRESSURE 61
1.6 SEA-LEVEL MEASUREMENT 74
1.7 EULERIAN CURRENTS 92
1.8 LAGRANGIAN CURRENT MEASUREMENTS 128
1.9 WIND 157
1.10 PRECIPITATION 165
1.11 CHEMICAL TRACERS 168
1.12 TRANSIENT CHEMICAL TRACERS 188
Chapter 2 - Data Processing and Presentation 200
2.1 INTRODUCTION 200
2.2 CALIBRATION 202
2.3 INTERPOLATION 203
2.4 DATA PRESENTATION 204
Chapter 3 - Statistical Methods and Error Handling 232
3.1 INTRODUCTION 232
3.2 SAMPLE DISTRIBUTIONS 233
3.3 PROBABILITY 235
3.4 MOMENTS AND EXPECTED VALUES 239
3.5 COMMON PDFS 241
3.6 CENTRAL LIMIT THEOREM 245
3.7 ESTIMATION 247
3.8 CONFIDENCE INTERVALS 249
3.9 SELECTING THE SAMPLE SIZE 256
3.10 CONFIDENCE INTERVALS FOR ALTIMETER-BIAS ESTIMATES 257
3.11 ESTIMATION METHODS 258
3.12 LINEAR ESTIMATION (REGRESSION) 263
3.13 RELATIONSHIP BETWEEN REGRESSION AND CORRELATION 270
3.14 HYPOTHESIS TESTING 275
3.15 EFFECTIVE DEGREES OF FREEDOM 282
3.16 EDITING AND DESPIKING TECHNIQUES: THE NATURE OF ERRORS 288
3.17 INTERPOLATION: FILLING THE DATA GAPS 300
3.18 COVARIANCE AND THE COVARIANCE MATRIX 312
3.19 THE BOOTSTRAP AND JACKKNIFE METHODS 315
Chapter 4 - The Spatial Analyses of Data Fields 326
4.1 TRADITIONAL BLOCK AND BULK AVERAGING 326
4.2 OBJECTIVE ANALYSIS 330
4.3 KRIGING 341
4.4 EMPIRICAL ORTHOGONAL FUNCTIONS 348
4.5 EXTENDED EMPIRICAL ORTHOGONAL FUNCTIONS 369
4.6 CYCLOSTATIONARY EOFS 376
4.7 FACTOR ANALYSIS 380
4.8 NORMAL MODE ANALYSIS 381
4.9 SELF ORGANIZING MAPS 392
4.10 KALMAN FILTERS 409
4.11 MIXED LAYER DEPTH ESTIMATION 419
4.12 INVERSE METHODS 427
Chapter 5 - Time Series Analysis Methods 438
5.1 BASIC CONCEPTS 438
5.2 STOCHASTIC PROCESSES AND STATIONARITY 440
5.3 CORRELATION FUNCTIONS 441
5.4 SPECTRAL ANALYSIS 446
5.5 SPECTRAL ANALYSIS (PARAMETRIC METHODS) 502
5.6 CROSS-SPECTRAL ANALYSIS 516
5.7 WAVELET ANALYSIS 534
5.8 FOURIER ANALYSIS 549
5.9 HARMONIC ANALYSIS 560
5.10 REGIME SHIFT DETECTION 570
5.11 VECTOR REGRESSION 581
5.12 FRACTALS 593
Chapter 6 - Digital Filters 606
6.1 INTRODUCTION 606
6.2 BASIC CONCEPTS 607
6.3 IDEAL FILTERS 609
6.4 DESIGN OF OCEANOGRAPHIC FILTERS 617
6.5 RUNNING-MEAN FILTERS 620
6.6 GODIN-TYPE FILTERS 622
6.7 LANCZOS-WINDOW COSINE FILTERS 625
6.8 BUTTERWORTH FILTERS 630
6.9 KAISER–BESSEL FILTERS 637
6.10 FREQUENCY-DOMAIN (TRANSFORM) FILTERING 640
References 652
Appendix A - Units in Physical Oceanography 678
Appendix B - Glossary of Statistical Terminology 682
Appendix C - Means, Variances and Moment-Generating Functions for Some Common Continuous Variables 686
Appendix D - Statistical Tables 688
Appendix E - Correlation Coefficients at the 5% and 1% Levels of Significance for Various Degrees of Freedom 700
Appendix F - Approximations and Nondimensional Numbers in Physical Oceanography 702
References 708
Appendix G - Convolution 710
CONVOLUTION AND FOURIER TRANSFORMS 710
CONVOLUTION OF DISCRETE DATA 710
CONVOLUTION AS TRUNCATION OF AN INFINITE TIME SERIES 711
DECONVOLUTION 713
Index 714

1.2. Basic Sampling Requirements


A primary concern in most observational work is the accuracy of the measurement device, a common performance statistic for the instrument. Absolute accuracy requires frequent instrument calibration to detect and correct for any shifts in behavior. The inconvenience of frequent calibration often causes the scientist to substitute instrument precision for absolute accuracy as the stated measurement capability of an instrument. Unlike absolute accuracy, precision is a relative term that simply represents the ability of the instrument to repeat an observation without deviation. Absolute accuracy further requires that the observation be consistent in magnitude with some universally accepted reference standard. In most cases, the user must be satisfied with good precision and repeatability of the measurement rather than absolute measurement accuracy. Any instrument that fails to maintain its precision fails to provide data that can be handled in any meaningful statistical fashion. The best instruments are those that provide both high precision and defensible absolute accuracy. It is sometimes advantageous to measure the same variable simultaneously with more than one reliable instrument. However, if the instruments have the same precision but not the same absolute accuracy, we are reminded of the saying that “a man with two watches does not know the time”.
Digital instrument resolution is measured in bits, where a resolution of N bits means that the full range of the sensor is partitioned into 2^N equal segments (N = 1, 2, …). For example, eight-bit resolution means that the specified full-scale range of the sensor, say V = 10 V, is divided into 2^8 = 256 increments, with a bit resolution of V/256 = 0.039 V. Whether the instrument can actually measure to a resolution or accuracy of V/2^N units is another matter. The sensor range can always be divided into an increasing number of smaller increments, but eventually one reaches a point where the value of each bit is buried in the noise level of the sensor and is no longer significant.
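As a sanity check, the bit-resolution arithmetic above is easy to reproduce (a minimal sketch; the 10-V full-scale range and 8-bit depth are just the example values from the text):

```python
def bit_resolution(full_scale, n_bits):
    """Smallest increment resolvable by an n-bit digitizer over full_scale."""
    return full_scale / 2 ** n_bits

# Eight-bit digitization of a 10-V full-scale sensor: 2^8 = 256 increments
print(bit_resolution(10.0, 8))  # 0.0390625 V, i.e., about 0.039 V
```

Doubling the bit depth halves the increment each time, which is why adding bits eventually buries each increment in the sensor's noise floor.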

1.2.1. Sampling Interval


Assuming the instrument selected can produce reliable and useful data, the next highest priority sampling requirement is that the measurements be collected often enough in space and time to resolve the phenomena of interest. For example, in the days when oceanographers were only interested in the mean stratification of the world ocean, water property profiles from discrete-level hydrographic (bottle) casts were adequate to resolve the general vertical density structure. On the other hand, these same discrete-level profiles failed to resolve the detailed structure associated with interleaving and mixing processes, including those associated with thermohaline staircases (salt fingering and diffusive convection), that are now resolved by the rapid vertical sampling provided by modern conductivity-temperature-depth (CTD) probes. The need for higher resolution assumes that the oceanographer has some prior knowledge of the process of interest. Often this prior knowledge has been collected with instruments incapable of resolving the true variability and may, therefore, only be suggested by highly aliased (distorted) data collected using earlier techniques. In addition, laboratory and theoretical studies may provide information on the scales that must be resolved by the measurement system.
For discrete digital data x(ti) measured at times ti, the choice of the sampling increment Δt (or Δx in the case of spatial measurements) is the quantity of importance. In essence, we want to sample often enough that we can pick out the highest frequency component of interest in the time series but not oversample so that we fill up the data storage file, use up all the battery power, or become swamped with unnecessary data. In the case of real-time cabled observatories, it is also possible to sample so rapidly (hundreds of times per second) that inserting the essential time stamps in the data string can disrupt the cadence of the record. We might also want to sample at irregular intervals to avoid built-in bias in our sampling scheme. If the sampling interval is too large to resolve higher frequency components, it becomes necessary to suppress these components during sampling using a sensor whose response is limited to frequencies equal to that of the sampling frequency. As we discuss in our section on processing satellite-tracked drifter data, these lessons are often learned too late—after the buoys have been cast adrift in the sea.
The important aspect to keep in mind is that, for a given sampling interval Δt, the highest frequency we can hope to resolve is the Nyquist (or folding) frequency, fN, defined as

fN = 1/(2Δt)

(1.1)

We cannot resolve any higher frequencies than this. For example, if we sample every 10 h, the highest frequency we can hope to see in the data is fN = 0.05 cph (cycles per hour). Equation (1.1) states the obvious—that it takes at least two sampling intervals (or three data points) to resolve a sinusoidal-type oscillation with period 1/fN (Figure 1.1). In practice, we need to contend with noise and sampling errors so that it takes something like three or more sampling increments (i.e., ≥ four data points) to accurately determine the highest observable frequency. Thus, fN is an upper limit. The highest frequency we can resolve for a sampling of Δt = 10 h in Figure 1.1 is closer to 1/(3Δt) ≈ 0.033 cph. (Replacing Δt with Δx in the case of spatial sampling increments allows us to interpret these limitations in terms of the highest wavenumber (Nyquist wavenumber) the data are able to resolve.)
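Equation (1.1), together with the more conservative practical limit of roughly three sampling increments per cycle, can be sketched in a few lines (the 10-h interval is the example from the text):

```python
def nyquist_frequency(dt):
    """Nyquist (folding) frequency fN = 1/(2*dt) for sampling interval dt."""
    return 1.0 / (2.0 * dt)

dt = 10.0                        # sampling interval in hours
fN = nyquist_frequency(dt)       # 0.05 cph: theoretical upper limit
f_practical = 1.0 / (3.0 * dt)   # ~0.033 cph: noise pushes the usable limit down
print(fN, f_practical)
```

The same function applies unchanged to spatial sampling: pass Δx instead of Δt and the result is the Nyquist wavenumber.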
An important consequence of Equation (1.1) is the problem of aliasing. In particular, if there is energy at frequencies f > fN—which we obviously cannot resolve because of the Δt we picked—this energy gets folded back into the range of frequencies, f < fN, which we are attempting to resolve (hence, the alternate name “folding frequency” for fN). This unresolved energy does not disappear but gets redistributed within the frequency range of interest. To make matters worse, the folded-back energy is disguised (or aliased) within frequency components different from those of its origin. We cannot distinguish this folded-back energy from that which actually belongs to the lower frequencies. Thus, we end up with erroneous (aliased) estimates of the spectral energy variance over the resolvable range of frequencies. An example of highly aliased data would be current meter data collected using 13-h sampling in a region dominated by strong semidiurnal (12.42-h period) tidal currents. More will be said on this topic in Chapter 5.
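The semidiurnal example can be worked through numerically. The helper below uses the standard alias-frequency relation (a sketch, not taken from the text): a signal at true frequency f, sampled at rate fs = 1/Δt, appears at its distance from the nearest integer multiple of fs, which always lands in the resolvable band [0, fs/2].

```python
def aliased_frequency(f, fs):
    """Apparent frequency of a signal of true frequency f sampled at rate fs
    (both in the same units, e.g., cycles per hour)."""
    k = round(f / fs)        # nearest integer multiple of the sampling rate
    return abs(f - k * fs)   # folded into [0, fs/2]

fs = 1.0 / 13.0      # 13-h sampling, in cycles per hour
f_m2 = 1.0 / 12.42   # semidiurnal tidal frequency, in cycles per hour
f_alias = aliased_frequency(f_m2, fs)
print(1.0 / f_alias)  # apparent period in hours: ~278 h, about 11.6 days
```

The unresolved 12.42-h tide thus masquerades as a slow oscillation of roughly eleven and a half days, indistinguishable in the record from genuine low-frequency variability.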
As a general rule, one should plan a measurement program based on the frequencies and wavenumbers (estimated from the corresponding periods and wavelengths) of the parameters of interest over the study domain. This requirement may then dictate the selection of the measurement tool or technique. If the instrument cannot sample rapidly enough to resolve the frequencies of concern it should not be used. It should be emphasized that the Nyquist frequency concept applies to both time and space and the Nyquist wavenumber is a valid means of determining the fundamental wavelength that must be sampled.

FIGURE 1.1 Plot of the function F(n) = sin(2πn/20 + φ), where time is given by the integer n = −1, 0, …, 24. The period 2Δt = 1/fN is 20 units and φ is a random phase with a small magnitude in the range ±0.1 radians. Open circles denote measured points and solid points the curve F(n). Noise makes it necessary to use more than three data values to accurately define the oscillation period.
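For readers who want to reproduce the sampled points in Figure 1.1, a minimal sketch follows (the seed and the uniform phase draw are assumptions; the caption specifies only that the phase is random within ±0.1 radians):

```python
import math
import random

random.seed(1)
phi = random.uniform(-0.1, 0.1)  # small random phase, as in the caption

# F(n) = sin(2*pi*n/20 + phi), sampled at the integers n = -1, 0, ..., 24
samples = [math.sin(2.0 * math.pi * n / 20.0 + phi) for n in range(-1, 25)]
print(len(samples))  # 26 sample points spanning just over one 20-unit period
```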

1.2.2. Sampling Duration


The next concern is that one samples long enough to establish a statistically significant determination of the process being studied. For time-series measurements, this amounts to a requirement that the data be collected over a period sufficiently long that repeated cycles of the phenomenon are observed. This also applies to spatial sampling where statistical considerations require a large enough sample to define...

Publication date (per publisher) 14.7.2014
Language English
Subject areas Natural Sciences / Biology / Ecology & Conservation
Natural Sciences / Biology / Zoology
Natural Sciences / Earth Sciences / Hydrology & Oceanography
Natural Sciences / Physics & Astronomy
Technology / Environmental Technology & Biotechnology
ISBN-10 0-12-387783-0 / 0123877830
ISBN-13 978-0-12-387783-3 / 9780123877833
PDF (Adobe DRM)
Size: 54.3 MB

Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to prevent misuse of the eBook. The eBook is authorized to your personal Adobe ID at download time and can then be read only on devices registered to that Adobe ID.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly well suited to technical books with columns, tables, and figures. A PDF can be displayed on almost any device but is only of limited use on small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need an Adobe ID and the free Adobe Digital Editions software. We advise against using the OverDrive Media Console, as it frequently causes problems with Adobe DRM.
eReader: This eBook can be read on (almost) all eBook readers. It is not, however, compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You need an Adobe ID and a free app.

Buying eBooks from abroad
For tax law reasons we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.

EPUB (Adobe DRM)
Size: 35.7 MB

Copy protection: Adobe DRM

File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly well suited to fiction and general nonfiction. The text reflows dynamically to fit the display and font size, so EPUB is also a good fit for mobile reading devices.

