Demand-based Data Stream Gathering, Processing, and Transmission

Author: Jonas Traub
Publisher: BoD – Books on Demand
Total Pages: 206
Release: 2021-04-28
Genre: Computers
ISBN: 3753488941

This book presents an end-to-end architecture for demand-based data stream gathering, processing, and transmission. The Internet of Things (IoT) consists of billions of devices that form a cloud of network-connected sensor nodes. These sensor nodes supply a vast number of data streams with massive amounts of sensor data. Real-time sensor data enables diverse applications, including traffic-aware navigation, machine monitoring, and home automation. Current stream processing pipelines are demand-oblivious: they gather, transmit, and process as much data as possible. In contrast, a demand-based processing pipeline uses the requirement specifications of data consumers, such as failure tolerances and latency limitations, to save resources.

Our solution unifies the way applications express their data demands, i.e., their requirements with respect to their input streams. This unification allows for multiplexing the data demands of all concurrently running applications. On sensor nodes, we schedule sensor reads based on the data demands of all applications, which saves up to 87% of sensor reads and data transfers in our experiments with real-world sensor data. Our demand-based control layer optimizes the data acquisition from thousands of sensors. We introduce time coherence as a fundamental data characteristic: the delay between the first and the last sensor read that contribute values to a tuple. A large-scale parameter exploration shows that our solution scales to large numbers of sensors and operates reliably under varying latency and coherence constraints.

On stream analysis systems, we tackle the problem of efficient window aggregation. We contribute a general aggregation technique which adapts to four key workload characteristics: stream (dis)order, aggregation types, window types, and window measures. Our experiments show that our solution outperforms alternative solutions by an order of magnitude in throughput, which prevents expensive system scale-out. We further derive data demands from the visualization needs of applications and make these data demands available to streaming systems such as Apache Flink. This enables streaming systems to pre-process data with respect to changing visualization needs. Experiments show that our solution reliably prevents overloads when data rates increase.
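
The book's own algorithms are not reproduced in this blurb, but the central idea of multiplexing data demands lends itself to a small illustration. The following Python sketch is a hypothetical simplification (the Demand model, the single max-period tolerance per application, and all names are assumptions, not the book's API): it merges the sampling deadlines of several applications so that one physical sensor read serves every application whose freshness tolerance it satisfies.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Demand:
    next_due: float                        # next time this app needs a fresh reading
    period: float = field(compare=False)   # max tolerated gap between readings
    app: str = field(compare=False)

def schedule_reads(demands, horizon):
    """Multiplex per-application sampling demands into one shared read schedule.

    Instead of each application sampling the sensor independently, a single
    read at time t serves every application whose deadline falls at or
    before t, pushing that application's next deadline one period ahead.
    """
    queue = list(demands)
    heapq.heapify(queue)
    reads = []  # list of (time, [apps served by this one read])
    while queue and queue[0].next_due <= horizon:
        t = queue[0].next_due              # read at the earliest pending deadline
        served = []
        while queue and queue[0].next_due <= t:
            d = heapq.heappop(queue)
            served.append(d.app)
            d.next_due = t + d.period      # next reading due one period later
            heapq.heappush(queue, d)
        reads.append((t, served))
    return reads

# Three hypothetical applications with different freshness demands.
demands = [Demand(0.0, 1.0, "navigation"),
           Demand(0.0, 2.0, "monitoring"),
           Demand(0.0, 3.0, "dashboard")]
for t, apps in schedule_reads(demands, horizon=6.0):
    print(f"t={t:.1f}s read -> {apps}")
```

Under this toy demand set, the shared schedule issues 7 reads over six seconds where three independently sampling applications would issue 14; that is the flavor of saving (up to 87% in the book's experiments) that demand multiplexing targets.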

Database and Expert Systems Applications

Author: Sourav S. Bhowmick
Publisher: Springer
Total Pages: 890
Release: 2009-08-25
Genre: Computers
ISBN: 3642035736

This book constitutes the refereed proceedings of the 20th International Conference on Database and Expert Systems Applications, DEXA 2009, held in Linz, Austria, in August/September 2009. The 35 revised full papers and 35 short papers presented were carefully reviewed and selected from 202 submissions. The papers are organized in topical sections on XML and databases; Web, semantics and ontologies; temporal, spatial, and high dimensional databases; database and information system architecture, performance and security; query processing and optimisation; data and information integration and quality; data and information streams; data mining algorithms; data and information modelling; information retrieval and database systems; and database and information system architecture and performance.

Handbook of Natural Gas Transmission and Processing

Author: Saeid Mokhatab
Publisher: Gulf Professional Publishing
Total Pages: 830
Release: 2012-07-02
Genre: Technology & Engineering
ISBN: 0123869145

A unique, well-documented, and forward-thinking work, the second edition of Handbook of Natural Gas Transmission and Processing continues to present a thoroughly updated, authoritative, and comprehensive description of all major aspects of natural gas transmission and processing. It provides an ideal platform for engineers, technologists, and operations personnel working in the natural gas industry to gain a better understanding of the special requirements for optimal design and operation of natural gas transmission pipelines and processing plants.

- First book of its kind to cover all aspects of natural gas transmission and processing
- Provides pivotal updates on the latest technologies, which have not been addressed in depth in any existing book
- Offers practical advice for design and operation based on sound engineering principles and established techniques
- Examines ways to select the best processing route for optimal design of gas-processing plants
- Contains new discussions on process modeling, control, and optimization in the gas-processing industry

Registries for Evaluating Patient Outcomes

Author: Agency for Healthcare Research and Quality/AHRQ
Publisher: Government Printing Office
Total Pages: 385
Release: 2014-04-01
Genre: Medical
ISBN: 1587634333

This User’s Guide is intended to support the design, implementation, analysis, interpretation, and quality evaluation of registries created to increase understanding of patient outcomes. For the purposes of this guide, a patient registry is an organized system that uses observational study methods to collect uniform data (clinical and other) to evaluate specified outcomes for a population defined by a particular disease, condition, or exposure, and that serves one or more predetermined scientific, clinical, or policy purposes. A registry database is a file (or files) derived from the registry.

Although registries can serve many purposes, this guide focuses on registries created for one or more of the following purposes: to describe the natural history of disease, to determine clinical effectiveness or cost-effectiveness of health care products and services, to measure or monitor safety and harm, and/or to measure quality of care.

Registries are classified according to how their populations are defined. For example, product registries include patients who have been exposed to biopharmaceutical products or medical devices. Health services registries consist of patients who have had a common procedure, clinical encounter, or hospitalization. Disease or condition registries are defined by patients having the same diagnosis, such as cystic fibrosis or heart failure.

The User’s Guide was created by researchers affiliated with AHRQ’s Effective Health Care Program, particularly those who participated in AHRQ’s DEcIDE (Developing Evidence to Inform Decisions About Effectiveness) program. Chapters were subject to multiple internal and external independent reviews.

Data Streams

Author: S. Muthukrishnan
Publisher: Now Publishers Inc
Total Pages: 136
Release: 2005
Genre: Computers
ISBN: 193301914X

In the data stream scenario, input arrives very rapidly and there is limited memory to store it. Algorithms have to work with only one or a few passes over the data, using space less than linear in the input size, or time significantly less than the input size. In the past few years, a new theory has emerged for reasoning about algorithms that work within these constraints on space, time, and number of passes. Some of the methods rely on metric embeddings, pseudo-random computations, sparse approximation theory, and communication complexity. Applications of this scenario include IP network traffic analysis, mining text message streams, and processing massive data sets in general. Researchers in Theoretical Computer Science, Databases, IP Networking, and Computer Systems are working on these data stream challenges.
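
As a concrete taste of this model, the following Python sketch implements Misra-Gries frequent-items summaries, a classic one-pass algorithm of exactly this kind (the sketch itself is illustrative and not taken from the book): it finds every item occurring more than n/k times in a stream of n items while keeping at most k-1 counters in memory.

```python
import random

def misra_gries(stream, k):
    """One-pass frequent-items summary in O(k) space (Misra-Gries).

    Any item occurring more than n/k times in a stream of n items is
    guaranteed to survive among the at most k-1 counters kept here, and
    each kept count undercounts its true frequency by at most n/k.
    """
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # No free counter: decrement all, dropping any that hit zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

# Skewed synthetic stream: one heavy hitter among 400 one-off items.
stream = ["heavy"] * 600 + [f"noise-{i}" for i in range(400)]
random.shuffle(stream)
print(misra_gries(stream, k=10))  # "heavy" survives with a large count
```

The guarantee degrades gracefully: a smaller k means less memory but a coarser frequency threshold, which is the space-accuracy trade-off at the heart of the streaming literature this book surveys.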