An Architecture for Fast and General Data Processing on Large Clusters

Author: Matei Zaharia
Publisher: Morgan & Claypool
Total Pages: 141
Release: 2016-05-01
Genre: Computers
ISBN: 1970001577

The past few years have seen a major change in computing systems, as growing data volumes and stalling processor speeds require more and more applications to scale out to clusters. Today, a myriad of data sources, from the Internet to business operations to scientific instruments, produce large and valuable data streams. However, the processing capabilities of single machines have not kept up with the size of data. As a result, organizations increasingly need to scale out their computations over clusters. At the same time, the speed and sophistication required of data processing have grown. In addition to simple queries, complex algorithms like machine learning and graph analysis are becoming common. And in addition to batch processing, streaming analysis of real-time data is required to let organizations take timely action. Future computing platforms will need to not only scale out traditional workloads, but also support these new applications.

This book, a revised version of the dissertation that won the 2014 ACM Doctoral Dissertation Award, proposes an architecture for cluster computing systems that can tackle emerging data processing workloads at scale. Whereas early cluster computing systems, like MapReduce, handled batch processing, our architecture also enables streaming and interactive queries, while keeping MapReduce's scalability and fault tolerance. And whereas most deployed systems only support simple one-pass computations (e.g., SQL queries), ours also extends to the multi-pass algorithms required for complex analytics like machine learning. Finally, unlike the specialized systems proposed for some of these workloads, our architecture allows these computations to be combined, enabling rich new applications that intermix, for example, streaming and batch processing.

We achieve these results through a simple extension to MapReduce that adds primitives for data sharing, called Resilient Distributed Datasets (RDDs). We show that this is enough to capture a wide range of workloads. We implement RDDs in the open source Spark system, which we evaluate using synthetic and real workloads. Spark matches or exceeds the performance of specialized systems in many domains, while offering stronger fault tolerance properties and allowing these workloads to be combined. Finally, we examine the generality of RDDs from both a theoretical modeling perspective and a systems perspective.

This version of the dissertation makes corrections throughout the text and adds a new section on the evolution of Apache Spark in industry since 2014. In addition, editing, formatting, and links for the references have been added.
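
As a concrete illustration of the data-sharing primitive described above, the following sketch caches an RDD in cluster memory and rescans it across the iterations of a toy multi-pass computation, using Spark's Scala API. The input path and the update rule are illustrative assumptions, not code from the book.

    // A minimal sketch of RDD-based data sharing in Spark (Scala API).
    // The input path and the toy update rule are illustrative assumptions.
    import org.apache.spark.sql.SparkSession

    object RddSharingSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("rdd-sharing-sketch").getOrCreate()
        val sc = spark.sparkContext

        // Load the dataset once and keep it in cluster memory: the data-sharing primitive.
        val points = sc.textFile("hdfs:///data/points.txt")   // assumed path
          .map(_.split(",").map(_.toDouble))
          .cache()

        // A multi-pass computation: each iteration rescans the cached RDD
        // instead of rereading it from stable storage, as MapReduce would.
        var weight = 0.0
        for (_ <- 1 to 10) {
          val gradient = points.map(p => p(0) * (p(1) - weight)).sum() / points.count()
          weight += 0.1 * gradient
        }

        println(s"final weight: $weight")
        spark.stop()
      }
    }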

Streaming Systems

Author: Tyler Akidau
Publisher: "O'Reilly Media, Inc."
Total Pages: 362
Release: 2018-07-16
Genre: Computers
ISBN: 1491983825

Streaming data is a big deal in big data these days. As more and more businesses seek to tame the massive unbounded data sets that pervade our world, streaming systems have finally reached a level of maturity sufficient for mainstream adoption. With this practical guide, data engineers, data scientists, and developers will learn how to work with streaming data in a conceptual and platform-agnostic way. Expanded from Tyler Akidau’s popular blog posts "Streaming 101" and "Streaming 102", this book takes you from an introductory level to a nuanced understanding of the what, where, when, and how of processing real-time data streams. You’ll also dive deep into watermarks and exactly-once processing with co-authors Slava Chernyak and Reuven Lax. You’ll explore:
- How streaming and batch data processing patterns compare
- The core principles and concepts behind robust out-of-order data processing
- How watermarks track progress and completeness in infinite datasets
- How exactly-once data processing techniques ensure correctness
- How the concepts of streams and tables form the foundations of both batch and streaming data processing
- The practical motivations behind a powerful persistent state mechanism, driven by a real-world example
- How time-varying relations provide a link between stream processing and the world of SQL and relational algebra
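
The book itself is platform-agnostic, so no single system's API defines these concepts. Purely as one concrete illustration of event-time windows and watermarks, here is a sketch using Spark Structured Streaming in Scala; the socket source, input format, window size, and watermark delay are assumptions made for the example.

    // Sketch of event-time windowing with a watermark (Spark Structured Streaming, Scala).
    // The socket source, input format, window size, and watermark delay are assumptions.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{col, window}

    object WatermarkSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("watermark-sketch").getOrCreate()
        import spark.implicits._

        // Each input line is assumed to be "<epoch-millis>,<key>".
        val events = spark.readStream
          .format("socket").option("host", "localhost").option("port", "9999").load()
          .as[String]
          .map { line =>
            val Array(ts, key) = line.split(",", 2)
            (new java.sql.Timestamp(ts.toLong), key)
          }
          .toDF("eventTime", "key")

        // The watermark tells the engine how long to wait for late, out-of-order
        // records before a 10-minute event-time window is considered complete.
        val counts = events
          .withWatermark("eventTime", "5 minutes")
          .groupBy(window(col("eventTime"), "10 minutes"), col("key"))
          .count()

        counts.writeStream.outputMode("update").format("console").start().awaitTermination()
      }
    }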

Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2020

Author: Aboul Ella Hassanien
Publisher: Springer Nature
Total Pages: 893
Release: 2020-09-19
Genre: Technology & Engineering
ISBN: 3030586693

This book presents the proceedings of the 6th International Conference on Advanced Intelligent Systems and Informatics 2020 (AISI2020), which took place in Cairo, Egypt, from October 19 to 21, 2020. This international and interdisciplinary conference, which highlighted essential research and developments in the fields of informatics and intelligent systems, was organized by the Scientific Research Group in Egypt (SRGE). The book is divided into several sections, covering the following topics: Intelligent Systems, Deep Learning Technology, Document and Sentiment Analysis, Blockchain and Cyber-Physical Systems, Health Informatics and AI against COVID-19, Data Mining, Power and Control Systems, Business Intelligence, Social Media and Digital Transformation, Robotics, Control Design, and Smart Systems.

Big Data and HPC: Ecosystem and Convergence

Author: L. Grandinetti
Publisher: IOS Press
Total Pages: 338
Release: 2018-08-22
Genre: Computers
ISBN: 1614998825

Due to the increasing need to solve complex problems, high-performance computing (HPC) is now one of the most fundamental infrastructures for scientific development in all disciplines, and it has progressed massively in recent years as a result. HPC facilitates the processing of big data, but tremendous research challenges remain, including: the scalability of computing performance for high-velocity, high-variety, and high-volume big data; deep learning with massive-scale datasets; big data programming paradigms on multi-core, GPU, and hybrid distributed environments; and unstructured data processing with high-performance computing. This book presents 19 selected papers from the TopHPC2017 congress on Advances in High-Performance Computing and Big Data Analytics in the Exascale era, held in Tehran, Iran, in April 2017. The book is divided into three sections: State of the Art and Future Scenarios, Big Data Challenges, and HPC Challenges. It will be of interest to all those whose work involves the processing of Big Data and the use of HPC.

Big Data Technology and Applications

Author: Wenguang Chen
Publisher: Springer
Total Pages: 335
Release: 2016-02-02
Genre: Computers
ISBN: 9811004579

This book constitutes the refereed proceedings of the First National Conference on Big Data Technology and Applications, BDTA 2015, held in Harbin, China, in December 2015. The 26 revised papers presented were carefully reviewed and selected from numerous submissions. The papers address issues such as the storage technology of Big Data; analysis of Big Data and data mining; visualization of Big Data; parallel computing frameworks for Big Data; the architecture and basic theory of Big Data; collection and preprocessing of Big Data; and innovative applications in areas such as the Internet of Things and cloud computing.

Big Data Processing with Apache Spark

Author: Srini Penchikala
Publisher: Lulu.com
Total Pages: 106
Release: 2018-03-13
Genre: Computers
ISBN: 1387659952

Apache Spark is a popular open-source big-data processing framework that's built around speed, ease of use, and a unified distributed computing architecture. Not only does it support developing applications in different languages like Java, Scala, Python, and R, it's also up to a hundred times faster in memory, and up to ten times faster even when running on disk, than traditional data processing frameworks. Whether you are currently working on a big data project or interested in learning more about topics like machine learning, streaming data processing, and graph data analytics, this book is for you. You can learn about Apache Spark and develop Spark programs for various use cases in big data analytics using the code examples provided. This book covers all the libraries in the Spark ecosystem: Spark Core, Spark SQL, Spark Streaming, Spark ML, and Spark GraphX.
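
As a small taste of those libraries, the sketch below touches Spark SQL (a DataFrame and a SQL query) and Spark Core (the underlying RDD) from the Scala API; the file path, schema, and column names are illustrative assumptions rather than examples from the book.

    // A small sketch of Spark SQL and Spark Core from the Scala API.
    // The input path and the customer/amount columns are illustrative assumptions.
    import org.apache.spark.sql.SparkSession

    object SparkLibrariesSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("spark-libraries-sketch").getOrCreate()

        // Spark SQL: load a CSV into a DataFrame and run a declarative aggregation.
        val orders = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("hdfs:///data/orders.csv")   // assumed path; amount assumed numeric
        orders.createOrReplaceTempView("orders")
        spark.sql("SELECT customer, SUM(amount) AS total FROM orders GROUP BY customer").show()

        // Spark Core: the same data is reachable as an RDD for lower-level transformations.
        val bigSpenders = orders.rdd
          .map(row => (row.getAs[String]("customer"), row.getAs[Double]("amount")))
          .reduceByKey(_ + _)
          .filter { case (_, total) => total > 1000.0 }
        println(s"customers over 1000: ${bigSpenders.count()}")

        spark.stop()
      }
    }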

Data Analytics

Author: Mohiuddin Ahmed
Publisher: CRC Press
Total Pages: 451
Release: 2018-09-21
Genre: Computers
ISBN: 0429820917

Large data sets arriving at ever-increasing speeds require a new set of efficient data analysis techniques. Data analytics is becoming an essential component of every organization, as well as of technologies such as health care, financial trading, the Internet of Things, smart cities, and cyber-physical systems. However, these diverse application domains give rise to new research challenges. In this context, the book provides a broad picture of the concepts, techniques, applications, and open research directions in this area. In addition, it serves as a single source of reference for acquiring knowledge of emerging Big Data analytics technologies.

Big Data in Engineering Applications

Author: Sanjiban Sekhar Roy
Publisher: Springer
Total Pages: 381
Release: 2018-05-02
Genre: Technology & Engineering
ISBN: 9811084769

This book presents the current trends, technologies, and challenges in Big Data across the diversified fields of engineering and the sciences. It covers applications of Big Data ranging from conventional fields such as mechanical and civil engineering, through electronics, electrical, and computer science, to areas in the pharmaceutical and biological sciences. The book consists of contributions from authors across academia and industry, demonstrating the imperative application of Big Data for the decision-making process in sectors where the volume, variety, and velocity of information keep increasing. The book is a useful reference for graduate students, researchers, and scientists interested in exploring the potential of Big Data in engineering applications.

Shared-Memory Parallelism Can be Simple, Fast, and Scalable

Author: Julian Shun
Publisher: Morgan & Claypool
Total Pages: 445
Release: 2017-06-01
Genre: Computers
ISBN: 1970001895

Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools that enable them to develop solutions easily, and at the same time to emphasize the theoretical and practical aspects of algorithm design so that the solutions developed run efficiently under many different settings. This thesis addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The thesis provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results developed in this thesis serve to ease the transition into the multicore era.

The first part of this thesis introduces tools and techniques for deterministic parallel programming, including means for encapsulating nondeterminism via powerful commutative building blocks, as well as a novel framework for executing sequential iterative loops in parallel, which lead to deterministic parallel algorithms that are efficient both in theory and in practice.

The second part of this thesis introduces Ligra, the first high-level shared-memory framework for parallel graph traversal algorithms. The framework allows programmers to express graph traversal algorithms using very short and concise code, delivers performance competitive with that of highly optimized code, and is up to orders of magnitude faster than existing systems designed for distributed memory. This part of the thesis also introduces Ligra+, which extends Ligra with graph compression techniques to reduce space usage and improve parallel performance at the same time, and is also the first graph processing system to support in-memory graph compression.

The third and fourth parts of this thesis bridge the gap between theory and practice in parallel algorithm design by introducing the first algorithms for a variety of important problems on graphs and strings that are efficient both in theory and in practice. For example, the thesis develops the first linear-work and polylogarithmic-depth algorithms for suffix tree construction and graph connectivity that are also practical, as well as a work-efficient, polylogarithmic-depth, and cache-efficient shared-memory algorithm for triangle computations that achieves a 2–5x speedup over the best existing algorithms on 40 cores.

This is a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award.
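
Ligra itself is a C++ framework, but the frontier-based ("edgeMap") traversal pattern described above can be sketched compactly. The Scala code below is only an illustrative approximation of that pattern, using parallel collections and atomic compare-and-swap; it is not Ligra's actual API.

    // An illustrative Scala approximation of the frontier-based ("edgeMap") traversal
    // pattern that Ligra provides in C++; this is not Ligra's actual API.
    // Uses Scala parallel collections (the scala-parallel-collections module on Scala 2.13+).
    import java.util.concurrent.atomic.AtomicIntegerArray
    import scala.collection.parallel.CollectionConverters._

    object FrontierBfsSketch {
      // graph(v) lists the out-neighbours of vertex v (adjacency lists).
      def bfs(graph: Array[Array[Int]], source: Int): Array[Int] = {
        val n = graph.length
        val parent = new AtomicIntegerArray(n)
        for (v <- 0 until n) parent.set(v, -1)
        parent.set(source, source)

        var frontier = Array(source)
        while (frontier.nonEmpty) {
          // "edgeMap" step: in parallel, each frontier vertex tries to claim its
          // unvisited neighbours with an atomic compare-and-swap; the winners form
          // the next frontier, so every vertex is visited exactly once.
          frontier = frontier.par.flatMap { u =>
            graph(u).filter(v => parent.compareAndSet(v, -1, u))
          }.toArray
        }
        Array.tabulate(n)(parent.get)   // parent(v) == -1 means v is unreachable
      }

      def main(args: Array[String]): Unit = {
        // Tiny example graph: 0 -> {1, 2}, 1 -> {3}, 2 -> {3}, 3 -> {}.
        val g = Array(Array(1, 2), Array(3), Array(3), Array.empty[Int])
        println(bfs(g, 0).mkString(" ")) // a valid BFS parent array, e.g. "0 0 0 1"
      }
    }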