Entity Resolution and Information Quality

Author: John R. Talburt
Publisher: Elsevier
Total Pages: 254
Release: 2011-01-14
Genre: Computers
ISBN: 0123819733

Entity Resolution and Information Quality presents topics and definitions and clarifies confusing terminology in entity resolution and information quality. It takes a very wide view of IQ, including its six-domain framework and the professional skills defined by the International Association for Information and Data Quality (IAIDQ). The book includes chapters that cover the principles of entity resolution and the principles of information quality, along with their concepts and terminology. It also discusses the Fellegi-Sunter theory of record linkage, the Stanford Entity Resolution Framework, and the Algebraic Model for Entity Resolution, the major theoretical models that support entity resolution. Building on these, the book briefly discusses entity-based data integration (EBDI) and its model, which extends the Algebraic Model for Entity Resolution. There is also an explanation of how three commercial ER systems operate and a description of the non-commercial open-source system known as OYSTER. The book concludes by discussing trends in entity resolution research and practice. Students taking IT courses and IT professionals will find this book invaluable.

- First authoritative reference explaining entity resolution and how to use it effectively
- Provides practical system design advice to help you gain a competitive advantage
- Includes a companion site with synthetic customer data for hands-on exercises, as well as access to a Java-based entity resolution program
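
The Fellegi-Sunter model mentioned above scores a candidate record pair by summing log-likelihood ratios of field agreements and comparing the total against two thresholds. A minimal sketch of that scoring scheme, assuming illustrative field names, m/u probabilities, and thresholds (none of these values come from the book):

```python
import math

# Illustrative m/u probabilities per field: m = P(agree | same entity),
# u = P(agree | different entities). Real values are estimated from data.
FIELD_PARAMS = {
    "surname":    {"m": 0.95, "u": 0.01},
    "given_name": {"m": 0.90, "u": 0.05},
    "zip":        {"m": 0.85, "u": 0.02},
}

def match_weight(rec_a, rec_b):
    """Sum of log2 likelihood ratios over fields (Fellegi-Sunter weight)."""
    total = 0.0
    for field, p in FIELD_PARAMS.items():
        if rec_a.get(field) == rec_b.get(field):
            total += math.log2(p["m"] / p["u"])              # agreement weight
        else:
            total += math.log2((1 - p["m"]) / (1 - p["u"]))  # disagreement weight
    return total

def classify(weight, upper=8.0, lower=0.0):
    """Two thresholds partition pairs into link / possible link / non-link."""
    if weight >= upper:
        return "link"
    if weight <= lower:
        return "non-link"
    return "possible link"

a = {"surname": "Talburt", "given_name": "John", "zip": "72204"}
b = {"surname": "Talburt", "given_name": "Jon",  "zip": "72204"}
w = match_weight(a, b)
print(round(w, 2), classify(w))
```

In practice the m and u probabilities are estimated from labeled data or via the EM algorithm rather than set by hand.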

Entity Information Life Cycle for Big Data

Author: John R. Talburt
Publisher: Morgan Kaufmann
Total Pages: 255
Release: 2015-04-20
Genre: Computers
ISBN: 012800665X

Entity Information Life Cycle for Big Data walks you through the ins and outs of managing entity information so you can successfully achieve master data management (MDM) in the era of big data. This book explains big data’s impact on MDM and the critical role of an entity information management system (EIMS) in successful MDM. Expert authors Dr. John R. Talburt and Dr. Yinle Zhou provide a thorough background in the principles of managing the entity information life cycle, along with practical tips and techniques for implementing an EIMS, strategies for exploiting distributed processing to handle big data in an EIMS, and examples from real applications. Additional material on the theory of EIIM and on methods for assessing and evaluating EIMS performance also makes this book suitable as a textbook for courses on entity and identity management, data management, customer relationship management (CRM), and related topics.

- Explains the business value and impact of an entity information management system (EIMS) and directly addresses EIMS design and operation, a critical issue organizations face when implementing MDM systems
- Offers practical guidance to help you design and build an EIM system that will successfully handle big data
- Details how to measure and evaluate entity integrity in MDM systems and explains the principles and processes that comprise EIM
- Provides an understanding of the features and functions an EIM system should have, to assist in evaluating commercial EIM systems
- Includes chapter review questions, exercises, tips, and free downloads of demonstrations that use the OYSTER open-source EIM system
- Executable code (Java .jar files), control scripts, and synthetic input data illustrate aspects of the CSRUD life cycle such as identity capture, identity update, and assertions
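
The identity capture and update operations named in the last bullet revolve around maintaining persistent entity identifiers in an identity knowledge base. A minimal sketch of that idea, assuming a toy in-memory store and a hypothetical matcher interface (this is not the OYSTER API):

```python
import itertools

class IdentityKnowledgeBase:
    """Toy EIM store: maps a persistent identifier to the records
    captured for that entity (hypothetical API, for illustration only)."""

    def __init__(self, matcher):
        self.matcher = matcher            # pairwise match function
        self.identities = {}              # eid -> list of records
        self._ids = itertools.count(1)

    def resolve(self, record):
        """Return the eid of a matching identity, or None."""
        for eid, records in self.identities.items():
            if any(self.matcher(record, r) for r in records):
                return eid
        return None

    def capture(self, record):
        """Identity capture/update: attach the record to an existing
        identity if one matches, otherwise create a new identity."""
        eid = self.resolve(record)
        if eid is None:
            eid = next(self._ids)         # mint a new persistent identifier
            self.identities[eid] = []
        self.identities[eid].append(record)
        return eid

# Simple matcher: exact name agreement (a stand-in for a real rule set).
kb = IdentityKnowledgeBase(lambda a, b: a["name"] == b["name"])
print(kb.capture({"name": "Mary Smith", "src": "crm"}))   # -> 1 (new identity)
print(kb.capture({"name": "Mary Smith", "src": "web"}))   # -> 1 (update)
print(kb.capture({"name": "J. Doe",     "src": "crm"}))   # -> 2 (new identity)
```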

Innovative Techniques and Applications of Entity Resolution

Author: Wang, Hongzhi
Publisher: IGI Global
Total Pages: 433
Release: 2014-02-28
Genre: Computers
ISBN: 1466651997

Entity resolution is an essential tool for processing and analyzing data in order to draw precise conclusions from the information being presented. Further research in entity resolution is needed to promote information quality and improve data reporting in the many disciplines that require accurate data representation. Innovative Techniques and Applications of Entity Resolution draws upon interdisciplinary research on the tools, techniques, and applications of entity resolution. It provides a detailed analysis of entity resolution applied to various types of data, together with the relevant techniques and applications, and is designed for students, researchers, information professionals, and system developers.

Data Matching

Author: Peter Christen
Publisher: Springer Science & Business Media
Total Pages: 279
Release: 2012-07-04
Genre: Computers
ISBN: 3642311644

Data matching (also known as record or data linkage, entity resolution, object identification, or field matching) is the task of identifying, matching, and merging records that correspond to the same entities from several databases or even within one database. Based on research in various domains, including applied statistics, health informatics, data mining, machine learning, artificial intelligence, database management, and digital libraries, significant advances have been achieved over the last decade in all aspects of the data matching process, especially in improving the accuracy of data matching and its scalability to large databases. Peter Christen’s book is divided into three parts: Part I, “Overview”, introduces the subject by presenting several sample applications and their special challenges, as well as a general overview of a generic data matching process. Part II, “Steps of the Data Matching Process”, then details the main steps of that process: pre-processing, indexing, field and record comparison, classification, and quality evaluation. Part III, “Further Topics”, deals with specific aspects such as privacy, real-time matching, and matching unstructured data, and briefly describes the main features of many research and open-source systems available today. By providing the reader with a broad range of data matching concepts and techniques, and touching on all aspects of the data matching process, this book helps researchers as well as students specializing in data quality or data matching to familiarize themselves with recent research advances and to identify open research challenges. To this end, each chapter includes a final section with pointers to further background and research material. Practitioners will better understand the current state of the art in data matching, as well as the internal workings and limitations of current systems. In particular, they will learn that it is often not feasible to simply deploy an existing off-the-shelf data matching system without substantial adaptation and customization; such practical considerations are discussed for each of the major steps in the data matching process.
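
The steps listed for Part II form a pipeline that can be sketched compactly. The following minimal illustration covers pre-processing, indexing (blocking), field comparison, and threshold classification; the field names, block key, threshold, and sample records are assumptions made for this sketch, not examples from the book:

```python
from difflib import SequenceMatcher
from itertools import combinations

def normalize(rec):
    """Pre-processing: lowercase and strip the fields used for matching."""
    return {k: v.strip().lower() for k, v in rec.items()}

def block_key(rec):
    """Indexing/blocking: only records sharing a key are compared."""
    return rec["surname"][:2]

def similarity(a, b):
    """Field comparison: average string similarity over shared fields."""
    fields = ["surname", "given_name", "city"]
    return sum(SequenceMatcher(None, a[f], b[f]).ratio() for f in fields) / len(fields)

def match(records, threshold=0.85):
    """Classification: pairs scoring above the threshold are declared matches."""
    blocks = {}
    for rec in map(normalize, records):
        blocks.setdefault(block_key(rec), []).append(rec)
    matches = []
    for block in blocks.values():
        for a, b in combinations(block, 2):
            if similarity(a, b) >= threshold:
                matches.append((a, b))
    return matches

records = [
    {"surname": "Christen", "given_name": "Peter", "city": "Canberra"},
    {"surname": "Christen", "given_name": "Petra", "city": "Canberra"},
    {"surname": "Smith",    "given_name": "John",  "city": "Sydney"},
]
print(len(match(records)))   # 1: the two Christen records form a candidate match
```

A real system would replace SequenceMatcher with purpose-built comparators (e.g., Jaro-Winkler) and tune the threshold against labeled data, which is exactly the kind of adaptation the book argues is unavoidable.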

The Four Generations of Entity Resolution

Author: George Papadakis
Publisher: Springer Nature
Total Pages: 152
Release: 2022-06-01
Genre: Computers
ISBN: 3031018788

Entity Resolution (ER) lies at the core of data integration and cleaning, and a large body of research therefore examines ways of improving its effectiveness and time efficiency. The initial ER methods primarily target Veracity in the context of structured (relational) data described by a schema of well-known quality and meaning. To achieve high effectiveness, they leverage schema, expert, and/or external knowledge. Some of these methods have been extended to address Volume, processing large datasets through multi-core or massively parallel approaches such as the MapReduce paradigm. However, these early schema-based approaches are inapplicable to Web data, which abounds in voluminous, noisy, semi-structured, and highly heterogeneous information. To address the additional challenge of Variety, recent works on ER adopt a novel, loosely schema-aware functionality that emphasizes scalability and robustness to noise. Another line of current research focuses on the additional challenge of Velocity, aiming to process data collections of continuously increasing volume. The latest works take advantage of significant breakthroughs in deep learning and crowdsourcing, incorporating external knowledge to substantially enhance the existing workflows. This synthesis lecture organizes ER methods into four generations based on the challenges posed by these four Vs. For each generation, it outlines the corresponding ER workflow, discusses the state-of-the-art methods per workflow step, and presents current research directions. The discussion takes a historical perspective, explaining the evolution of the methods over time along with their similarities and differences. The lecture also covers the available ER tools and benchmark datasets that allow expert as well as novice users to make use of the available solutions.
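
The loosely schema-aware functionality adopted for Variety is typified by token blocking, which ignores attribute names entirely: every token appearing in any attribute value becomes a block key. A minimal sketch under that assumption (the sample entities are invented for illustration):

```python
import re
from collections import defaultdict

def token_blocking(entities):
    """Schema-agnostic token blocking: each token in each attribute value
    becomes a block key, so no schema knowledge is required.
    `entities` maps an entity id to a dict of attribute -> value."""
    blocks = defaultdict(set)
    for eid, description in entities.items():
        for value in description.values():
            for token in re.findall(r"\w+", str(value).lower()):
                blocks[token].add(eid)
    # Keep only blocks that actually suggest comparisons.
    return {t: ids for t, ids in blocks.items() if len(ids) > 1}

entities = {
    "e1": {"name": "Barack Obama", "role": "president"},
    "e2": {"label": "Obama, Barack", "country": "USA"},
    "e3": {"name": "Angela Merkel", "role": "chancellor"},
}
print(token_blocking(entities))   # 'obama' and 'barack' group e1 with e2
```

Note how e1 and e2 end up in shared blocks even though they use different attribute names, which is precisely why this style of blocking tolerates the heterogeneity of Web data.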

Entity Resolution in the Web of Data

Author: Vassilis Christophides
Publisher: Springer Nature
Total Pages: 106
Release: 2022-05-31
Genre: Mathematics
ISBN: 3031794680

In recent years, several knowledge bases have been built to enable large-scale knowledge sharing as well as entity-centric Web search that mixes structured data and text querying. These knowledge bases offer machine-readable descriptions of real-world entities (e.g., persons, places) published on the Web as Linked Data. However, due to the different information extraction tools and curation policies employed by knowledge bases, multiple, complementary, and sometimes conflicting descriptions of the same real-world entities may be provided. Entity resolution aims to identify different descriptions that refer to the same entity, appearing either within or across knowledge bases. The objective of this book is to present the new entity resolution challenges stemming from the openness of the Web of data, in which entities are described by an unbounded number of knowledge bases; from the semantic and structural diversity of the descriptions provided across domains, even for the same real-world entities; and from the autonomy of knowledge bases in terms of the processes they adopt for creating and curating entity descriptions. The scale, diversity, and graph structuring of entity descriptions in the Web of data fundamentally challenge both how two descriptions can be effectively compared for similarity and how resolution algorithms can efficiently avoid examining all descriptions pairwise. The book covers a wide spectrum of entity resolution issues at Web scale, including basic concepts and data structures, the main resolution tasks and workflows, and state-of-the-art algorithmic techniques and experimental trade-offs.
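
One simple way to compare two such entity descriptions for similarity without assuming a shared schema is to flatten each description into a bag of tokens and take a set overlap. A minimal sketch using Jaccard similarity (the descriptions and the choice of measure are illustrative assumptions, not the book's specific algorithms):

```python
def tokens(description):
    """Flatten an entity description (attribute -> value dict) into a
    set of lowercase tokens, ignoring the schema entirely."""
    out = set()
    for value in description.values():
        out.update(str(value).lower().split())
    return out

def jaccard(desc_a, desc_b):
    """Set-based similarity of two descriptions: |A ∩ B| / |A ∪ B|."""
    a, b = tokens(desc_a), tokens(desc_b)
    return len(a & b) / len(a | b) if a | b else 0.0

d1 = {"name": "Mount Everest", "elevation": "8848 m"}
d2 = {"label": "Everest", "height": "8848 m", "country": "Nepal"}
print(round(jaccard(d1, d2), 3))   # -> 0.6
```

Measures like this pair naturally with the blocking techniques the book discusses, since blocking determines which description pairs are worth scoring at all.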

Entity Resolution for Large-Scale Databases

Author: Kunho Kim
Publisher:
Total Pages:
Release: 2019
Genre:
ISBN:

Entity resolution is the problem of identifying, matching, and grouping records that refer to the same entities within a single data collection or across multiple ones. Real-world databases often comprise data from multiple sources; hence, this process is an essential preprocessing step for correctly answering queries about a particular entity. An example of entity resolution is finding a person's medical records across multiple hospital record systems. Two main problems commonly arise in entity resolution. One is disambiguation (or deduplication), which involves clustering the records within a database that correspond to the same entity. The other is record linkage, which involves matching records between multiple databases. In this dissertation, we study several aspects of entity resolution on large-scale structured data such as CiteSeerX, PubMed, and the United States Patent and Trademark Office (USPTO) patent database. First, we review our proposed entity resolution framework and discuss how to apply it to two practical problems: inventor name disambiguation on the USPTO patent database and financial entity record linkage. Second, we investigate building a web service to make entity resolution results easier to use in several scenarios. We define two types of queries, attribute-based and record-based, and discuss how we design the web service to handle them efficiently. We demonstrate that our algorithm accelerates record-based queries by a factor of 4.01 compared to a naive baseline approach. Third, we discuss improving entity resolution in two directions. One direction is improving the blocking method to remove unnecessary comparisons and thereby improve the scalability of author name disambiguation. We show that our proposed conjunctive normal form (CNF) blocking, tested on the entire PubMed database of 80 million author mentions, efficiently removes 82.17% of all author record pairs. The other direction is improving accuracy: we study enhancing pairwise classification, which estimates the probability that a pair of records refers to the same named entity. Our proposed hybrid method, using both structure-aware and global features, improves mean average precision by up to 7.45 percentage points. Finally, we discuss entity and attribute extraction. Entity extraction is important for improving the quality of the input data for entity resolution and can also be used to extract useful entities from external sources. We study the problem of extracting entities for task-oriented spoken language understanding in human-to-human conversation scenarios. Our proposed bidirectional LSTM architecture, with supplemental knowledge extracted from web data, search engine query logs, prior sentences, and task transfer, demonstrates an improvement in F1-score of up to 2.92% compared to existing approaches.
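
The CNF blocking mentioned above expresses a blocking rule as a conjunction of disjunctions of atomic predicates over a record pair: a pair becomes a comparison candidate only if every clause is satisfied. A minimal sketch of evaluating such a rule (the predicates, fields, and example rule are illustrative assumptions, not the dissertation's learned rules):

```python
# Atomic predicates over a record pair; each returns True/False.
def same_first_initial(a, b):
    return a["given_name"][:1] == b["given_name"][:1]

def same_surname(a, b):
    return a["surname"] == b["surname"]

def same_affiliation(a, b):
    return a["affiliation"] == b["affiliation"]

# A CNF rule: (same_surname) AND (same_first_initial OR same_affiliation).
# Outer list = conjunction of clauses; inner lists = disjunctions.
CNF_RULE = [
    [same_surname],
    [same_first_initial, same_affiliation],
]

def candidate_pair(a, b, rule=CNF_RULE):
    """True only if every clause has at least one satisfied predicate."""
    return all(any(pred(a, b) for pred in clause) for clause in rule)

a = {"given_name": "John", "surname": "Smith", "affiliation": "UALR"}
b = {"given_name": "Jane", "surname": "Smith", "affiliation": "UALR"}
print(candidate_pair(a, b))   # True: both clauses are satisfied
```

Pairs that fail any clause are never compared, which is how such rules prune the vast majority of record pairs before the expensive pairwise classification step.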

Handbook of Data Quality

Author: Shazia Sadiq
Publisher: Springer Science & Business Media
Total Pages: 440
Release: 2013-08-13
Genre: Computers
ISBN: 3642362575

The issue of data quality is as old as data itself. However, the proliferation of diverse, large-scale, and often publicly available data on the Web has increased the risk of poor data quality and misleading data interpretations. At the same time, data is now exposed at a much more strategic level, e.g., through business intelligence systems, greatly raising the stakes for individuals, corporations, and government agencies. In this setting, a lack of knowledge about data accuracy, currency, or completeness can have erroneous and even catastrophic results. These changes challenge traditional approaches to data management in general, and to data quality control specifically. There is an evident need to incorporate data quality considerations into the whole data cycle, encompassing managerial/governance as well as technical aspects. Data quality experts from research and industry agree that a unified framework for data quality management should bring together organizational, architectural, and computational approaches. Accordingly, Sadiq structured this handbook in four parts: Part I covers organizational solutions, i.e., the development of data quality objectives for the organization and of strategies to establish the roles, processes, policies, and standards required to manage and ensure data quality. Part II, on architectural solutions, covers the technology landscape required to deploy the developed data quality management processes, standards, and policies. Part III, on computational solutions, presents effective and efficient tools and techniques related to record linkage, lineage and provenance, data uncertainty, and advanced integrity constraints. Finally, Part IV is devoted to case studies of successful data quality initiatives that highlight the various aspects of data quality in action. The individual chapters present an overview of the respective topic in terms of historical research and/or practice and the state of the art, as well as specific techniques, methodologies, and frameworks developed by the individual contributors. Researchers and students of computer science, information systems, or business management, as well as data professionals and practitioners, will benefit most from this handbook not only by focusing on the sections relevant to their research area or practical work, but also by studying chapters they may initially consider not directly relevant to them, as there they will learn about new perspectives and approaches.
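
Of the quality dimensions named above (accuracy, currency, completeness), completeness is the simplest to measure computationally. A minimal sketch of a column-level completeness profile (the field names and records are invented for illustration, not taken from the handbook):

```python
def completeness(records, fields):
    """Fraction of non-missing values per field across a record set:
    a simple column-level completeness measure."""
    scores = {}
    for f in fields:
        filled = sum(1 for r in records if r.get(f) not in (None, ""))
        scores[f] = filled / len(records) if records else 0.0
    return scores

rows = [
    {"name": "Ann", "email": "ann@example.com", "phone": ""},
    {"name": "Bob", "email": "",                "phone": "555-0100"},
]
print(completeness(rows, ["name", "email", "phone"]))
# {'name': 1.0, 'email': 0.5, 'phone': 0.5}
```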