Statistical Significance Testing for Natural Language Processing
2020
Morgan & Claypool Publishers (publisher)
978-1-68173-797-3 (ISBN)
This book discusses the main aspects of statistical significance testing in Natural Language Processing (NLP). The guiding assumption throughout is that the basic question NLP researchers and engineers face is whether one algorithm can be considered better than another.
Data-driven experimental analysis has become the main evaluation tool of Natural Language Processing (NLP) algorithms. Indeed, in the last decade it has become rare to see an NLP paper, particularly one that proposes a new algorithm, that does not include extensive experimental analysis, and the number of tasks, datasets, domains, and languages involved is constantly growing. This emphasis on empirical results highlights the role of statistical significance testing in NLP research: if we, as a community, rely on empirical evaluation to validate our hypotheses and to reveal the correct language processing mechanisms, we had better be sure that our results are not coincidental.
The goal of this book is to discuss the main aspects of statistical significance testing in NLP. Our guiding assumption throughout is that the basic question NLP researchers and engineers face is whether one algorithm can be considered better than another. This question drives the field forward, as it enables steady progress toward better technology for language processing challenges. In practice, researchers and engineers want to draw the right conclusion from a limited set of experiments, and that conclusion should hold for other experiments with datasets they do not have at their disposal or cannot run due to limited time and resources. The book hence discusses the opportunities and challenges of statistical significance testing in NLP from the point of view of an experimental comparison between two algorithms. We cover topics such as choosing an appropriate significance test for the major NLP tasks, dealing with the unique aspects of significance testing for non-convex deep neural networks, accounting for a large number of comparisons between two NLP algorithms in a statistically valid manner (multiple hypothesis testing), and, finally, the unique challenges posed by the nature of the data and the practices of the field.
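As a concrete illustration of the two-algorithm comparison setup described above, the sketch below implements a paired bootstrap test, one widely used significance test for deciding whether system A's advantage over system B on a shared test set could be coincidental. This is a minimal sketch under simplifying assumptions (per-example scores, a one-sided test); the function name `paired_bootstrap_test` and the toy data are illustrative, not taken from the book.

```python
import numpy as np

def paired_bootstrap_test(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Paired bootstrap test for the difference in mean per-example scores
    between two systems evaluated on the same test set.

    Returns the observed mean difference (A minus B) and a one-sided
    bootstrap p-value: the fraction of resampled test sets on which
    system A's observed advantage disappears or reverses.
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    n = len(diffs)
    observed_delta = diffs.mean()

    # Resample test-set indices with replacement and count how often the
    # mean paired difference drops to zero or below.
    count = 0
    for _ in range(n_resamples):
        sample = diffs[rng.integers(0, n, size=n)]
        if sample.mean() <= 0:
            count += 1
    return observed_delta, count / n_resamples

if __name__ == "__main__":
    # Toy data: 1 = correct, 0 = incorrect, on the same 200 test examples.
    rng = np.random.default_rng(42)
    system_a = (rng.random(200) < 0.78).astype(int)
    system_b = (rng.random(200) < 0.72).astype(int)
    delta, p = paired_bootstrap_test(system_a, system_b)
    print(f"observed delta = {delta:.3f}, bootstrap p-value = {p:.4f}")
```

Resampling the paired per-example differences, rather than each system's scores independently, preserves the coupling between the two systems on every test example, which is what makes the test paired.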
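The multiple-comparisons issue mentioned above arises when the same pair of systems is compared on many datasets. The sketch below shows the Holm-Bonferroni step-down procedure, one standard way to control the family-wise error rate in that setting; the p-values in the usage example are made up for illustration.

```python
import numpy as np

def holm_bonferroni(p_values, alpha=0.05):
    """Holm-Bonferroni step-down procedure.

    Given p-values from N comparisons (e.g. the same pair of systems
    evaluated on N datasets), returns a boolean array marking which null
    hypotheses can be rejected while keeping the family-wise error rate
    at alpha. Valid under arbitrary dependence between the tests.
    """
    p = np.asarray(p_values, dtype=float)
    order = np.argsort(p)            # examine the smallest p-value first
    n = len(p)
    reject = np.zeros(n, dtype=bool)
    for rank, idx in enumerate(order):
        # Threshold alpha / (n - rank) loosens as more hypotheses are
        # rejected; stop at the first p-value that fails its threshold.
        if p[idx] <= alpha / (n - rank):
            reject[idx] = True
        else:
            break
    return reject

if __name__ == "__main__":
    # p-values for one system pair across five hypothetical datasets.
    pvals = [0.001, 0.012, 0.021, 0.04, 0.3]
    print(holm_bonferroni(pvals))    # [ True  True False False False]
```

Unlike the plain Bonferroni correction, the step-down procedure applies its strictest threshold only to the smallest p-value, so it rejects at least as many hypotheses while still controlling the family-wise error rate.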
Technion - Israel Institute of Technology
Preface
Acknowledgments
Introduction
Statistical Hypothesis Testing
Statistical Significance Tests
Statistical Significance in NLP
Deep Significance
Replicability Analysis
Open Questions and Challenges
Conclusions
Bibliography
Authors' Biographies
Publication date | May 19, 2020 |
---|---|
Series | Synthesis Lectures on Human Language Technologies |
Place of publication | San Rafael |
Language | English |
Dimensions | 191 x 235 mm |
Subject area | Computer Science ► Software Development ► Quality / Testing |
| Computer Science ► Theory / Studies ► Artificial Intelligence / Robotics |
ISBN-10 | 1-68173-797-3 / 1681737973 |
ISBN-13 | 978-1-68173-797-3 / 9781681737973 |
Condition | New |
Discover more from this category
Aus- und Weiterbildung zum Certified Tester – Foundation Level nach …
Book | Hardcover (2024)
dpunkt (publisher)
39,90 €
Die Softwaretest-Normen verstehen und anwenden
Book | Hardcover (2024)
dpunkt (publisher)
44,90 €
Methoden und Techniken für Softwarequalität in der agilen Welt
Book | Hardcover (2023)
dpunkt (publisher)
39,90 €