The Four Generations of Entity Resolution
2021
Morgan & Claypool Publishers (publisher)
978-1-63639-058-1 (ISBN)
This book organizes entity resolution (ER) into four generations based on the challenges posed by "the four Vs": Veracity, Volume, Variety, and Velocity. Entity resolution lies at the core of data integration and cleaning, and thus the bulk of the research examines ways to improve its effectiveness and time efficiency.
For each generation, we outline the corresponding ER workflow, discuss the state-of-the-art methods for each workflow step, and present current research directions. The discussion takes a historical perspective, explaining how the methods have evolved over time along with their similarities and differences. The lecture also covers the available ER tools and benchmark datasets, which allow both expert and novice users to make use of the available solutions.
The initial ER methods primarily target Veracity in the context of structured (relational) data described by a schema of well-known quality and meaning. To achieve high effectiveness, they leverage schema, expert, and/or external knowledge. Some of these methods have been extended to address Volume, processing large datasets through multi-core or massively parallel approaches such as the MapReduce paradigm. However, these early schema-based approaches are inapplicable to Web data, which abound in voluminous, noisy, semi-structured, and highly heterogeneous information. To address the additional challenge of Variety, recent works on ER adopt a novel, loosely schema-aware functionality that emphasizes scalability and robustness to noise. Another line of current research focuses on the additional challenge of Velocity, aiming to process data collections of continuously increasing volume. The latest works, though, take advantage of significant breakthroughs in Deep Learning and Crowdsourcing, incorporating external knowledge to enhance the existing workflows to a significant extent.
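To make the workflow described above concrete, the following is a minimal sketch of the classic two-step ER pipeline (schema-aware blocking followed by pairwise matching) that the first generation builds on. The toy records, the blocking key, and the similarity threshold are illustrative assumptions for this sketch, not methods taken from the book.

```python
# A minimal, hypothetical first-generation ER workflow:
# schema-aware blocking, then pairwise similarity matching.
from collections import defaultdict
from difflib import SequenceMatcher

# Toy relational records with a known schema (assumed for illustration).
records = [
    {"id": 1, "name": "Jon Smith",  "city": "London"},
    {"id": 2, "name": "John Smith", "city": "London"},
    {"id": 3, "name": "Mary Jones", "city": "Leeds"},
]

def blocking_key(record):
    # Schema-aware blocking: only records sharing the first letter of the
    # name and the same city are compared, pruning the quadratic pair space.
    return (record["name"][0], record["city"])

blocks = defaultdict(list)
for r in records:
    blocks[blocking_key(r)].append(r)

def is_match(a, b, threshold=0.8):
    # Pairwise matching: a simple string similarity with an assumed threshold.
    return SequenceMatcher(None, a["name"], b["name"]).ratio() >= threshold

matches = []
for block in blocks.values():
    for i in range(len(block)):
        for j in range(i + 1, len(block)):
            if is_match(block[i], block[j]):
                matches.append((block[i]["id"], block[j]["id"]))

print(matches)  # [(1, 2)] -- the two "J... Smith, London" records match
```

Blocking is what makes the Volume generation tractable: the expensive pairwise comparison runs only inside each block, and the blocks themselves can be processed independently, which is why the step parallelizes naturally under paradigms such as MapReduce.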
| Publication date | 03.04.2021 |
|---|---|
| Series | Synthesis Lectures on Human-Centered Informatics |
| Place of publication | San Rafael |
| Language | English |
| Dimensions | 191 x 235 mm |
| Weight | 333 g |
| Subject area | Computer Science ► Software Development ► User Interfaces (HCI) |
| | Mathematics / Computer Science ► Computer Science ► Theory / Studies |
| ISBN-10 | 1-63639-058-7 / 1636390587 |
| ISBN-13 | 978-1-63639-058-1 / 9781636390581 |
| Condition | New |