IBM InfoSphere Streams: Assembling Continuous Insight in the Information Revolution

Author: Chuck Ballard
Publisher: IBM Redbooks
Total Pages: 456
Release: 2012-05-02
Genre: Computers
ISBN: 0738436151

In this IBM® Redbooks® publication, we discuss and describe the positioning, functions, capabilities, and advanced programming techniques for IBM InfoSphere™ Streams (V2), a new paradigm and key component of the IBM Big Data platform. Data has traditionally been stored in files or databases and then analyzed by queries and applications. With stream computing, analysis is performed moment by moment as the data is in motion; in fact, the data might never be stored (perhaps only the analytic results are). The ability to analyze data in motion is called real-time analytic processing (RTAP). IBM InfoSphere Streams takes a fundamentally different approach to Big Data analytics and differentiates itself with its distributed runtime platform, programming model, and tools for developing and debugging analytic applications that handle a high volume and wide variety of data types. Using in-memory techniques and analyzing record by record enables high velocity. Volume, variety, and velocity are the key attributes of Big Data. The data streams that InfoSphere Streams can consume might originate from sensors, cameras, news feeds, stock tickers, and a variety of other sources, including traditional databases. InfoSphere Streams provides an execution platform and services for applications that ingest, filter, analyze, and correlate potentially massive volumes of continuous data streams. This book is intended for professionals who need to understand how to process high volumes of streaming data or how to implement systems that satisfy those requirements. For the IBM InfoSphere Streams (V1) release, see http://www.redbooks.ibm.com/abstracts/sg247865.html.
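To make the record-by-record, in-memory processing model concrete, here is a minimal plain-Python sketch of an RTAP-style pipeline: readings are analyzed as they arrive, only a small window is held in memory, and only the analytic result is emitted. This is not InfoSphere Streams SPL; the sensor source, window size, and alert threshold are invented for illustration.

```python
# Illustrative sketch only: a plain-Python analogue of the record-by-record,
# in-memory processing model described above (this is not InfoSphere Streams SPL).
import random
from collections import deque

def sensor_source(count=20):
    """Simulated data in motion: readings are produced continuously, not read from storage."""
    for seq in range(count):
        yield {"sensor": "temp-1", "seq": seq, "value": 20.0 + random.gauss(0, 5)}

def rolling_average(records, window=5):
    """Analyze each record the moment it arrives, keeping only a small window in memory."""
    recent = deque(maxlen=window)
    for rec in records:
        recent.append(rec["value"])
        rec["rolling_avg"] = sum(recent) / len(recent)
        yield rec

def alerts(records, limit=24.0):
    """Emit only the analytic result; the raw stream itself is never stored."""
    for rec in records:
        if rec["rolling_avg"] > limit:
            yield f"ALERT seq={rec['seq']} rolling_avg={rec['rolling_avg']:.1f}"

for alert in alerts(rolling_average(sensor_source())):
    print(alert)
```

In InfoSphere Streams, an equivalent pipeline would be expressed as operators in its programming model and deployed across the distributed runtime platform.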

IBM InfoSphere Streams: Accelerating Deployments with Analytic Accelerators

Author: Chuck Ballard
Publisher: IBM Redbooks
Total Pages: 556
Release: 2014-02-07
Genre: Computers
ISBN: 0738439193

This IBM® Redbooks® publication describes visual development, visualization, adapters, analytics, and accelerators for IBM InfoSphere® Streams (V3), a key component of the IBM Big Data platform. Streams was designed to analyze data in motion, and it can perform analysis on extremely high data volumes at high velocity, using a wide variety of analytic functions and data types. The Visual Development environment extends Streams Studio with drag-and-drop development, provides round-tripping with existing text editors, and is ideal for rapid prototyping. Adapters facilitate getting data in and out of Streams, and V3 supports WebSphere MQ, the Apache Hadoop Distributed File System, and IBM InfoSphere DataStage. Significant analytics include the native Streams Processing Language, SPSS Modeler analytics, Complex Event Processing, the TimeSeries Toolkit for machine learning and predictive analytics, the Geospatial Toolkit for location-based applications, and the Annotation Query Language for natural language processing applications. The Social Media Analysis and Telecommunications Event Data Analysis accelerators provide sample programs that can be modified to build production-level applications. Want to learn how to analyze high volumes of streaming data or implement systems requiring high performance across nodes in a cluster? Then this book is for you.
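As a rough illustration of the kind of pattern detection that Complex Event Processing supports, the short sketch below flags a run of consecutive price increases in a tick stream. It is plain Python rather than the Streams CEP support or SPL, and the tick values and streak length are invented.

```python
# Illustrative sketch only: a simple event pattern ("N consecutive price increases")
# detected in plain Python, in the spirit of Complex Event Processing; this is not
# the Streams CEP toolkit or SPL, and the tick values are invented.
def rising_streak(ticks, rises=3):
    """Emit the tick that completes `rises` consecutive price increases."""
    streak, previous = 0, None
    for tick in ticks:
        if previous is not None and tick > previous:
            streak += 1
            if streak == rises:
                yield tick
                streak = 0
        else:
            streak = 0
        previous = tick

ticks = [101.0, 100.5, 100.9, 101.2, 101.6, 101.1, 101.3]
for price in rising_streak(ticks):
    print(f"pattern matched at price {price}")
```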

Addressing Data Volume, Velocity, and Variety with IBM InfoSphere Streams V3.0

Author: Mike Ebbers
Publisher: IBM Redbooks
Total Pages: 326
Release: 2013-03-12
Genre: Computers
ISBN: 0738437808

There are multiple uses for big data in every industry: from analyzing larger volumes of data than was previously possible to drive more precise answers, to analyzing data at rest and data in motion to capture opportunities that were previously lost. A big data platform enables your organization to tackle complex problems that previously could not be solved using traditional infrastructure. As the amount of data available to enterprises and other organizations dramatically increases, more and more companies are looking to turn this data into actionable information and intelligence in real time. Meeting these requirements calls for applications that can analyze potentially enormous volumes and varieties of continuous data streams and provide decision makers with critical information almost instantaneously. IBM® InfoSphere® Streams provides a development platform and runtime environment where you can develop applications that ingest, filter, analyze, and correlate potentially massive volumes of continuous data streams, based on defined, proven, analytical rules that alert you to take appropriate action, all within a time frame that is appropriate for your organization. This IBM Redbooks® publication is written for decision makers, consultants, IT architects, and IT professionals who will be implementing a solution with IBM InfoSphere Streams.
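One piece of the "ingest, filter, analyze, and correlate" workload described above is correlating events from different continuous streams. The following is a minimal plain-Python sketch of a time-windowed correlation between two streams; the stream contents, key, and window length are invented, and a real InfoSphere Streams application would express this with its own windowing and join facilities.

```python
# Illustrative sketch only: correlating two continuous, time-ordered streams on a
# shared key within a short window, in plain Python. The stream contents, key, and
# window length are invented; a real InfoSphere Streams application would use its
# own windowing and join operators.
import heapq
from collections import defaultdict

WINDOW_SECONDS = 60  # invented correlation window

def merge_by_time(orders, payments):
    """Merge two time-ordered streams into one, tagging each event with its kind."""
    tagged_orders = (("order", e) for e in orders)
    tagged_payments = (("payment", e) for e in payments)
    return heapq.merge(tagged_orders, tagged_payments, key=lambda item: item[1]["ts"])

def correlate(orders, payments):
    """Pair each payment with any order from the same customer seen within the window."""
    recent_orders = defaultdict(list)  # customer id -> recent order events
    for kind, event in merge_by_time(orders, payments):
        if kind == "order":
            recent_orders[event["customer"]].append(event)
        else:
            for order in recent_orders[event["customer"]]:
                if event["ts"] - order["ts"] <= WINDOW_SECONDS:
                    yield {"customer": event["customer"],
                           "order_id": order["id"],
                           "amount": event["amount"]}

orders = [{"customer": "c1", "id": 7, "ts": 10}]
payments = [{"customer": "c1", "amount": 99.0, "ts": 40}]
print(list(correlate(orders, payments)))
```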

Implementing IBM InfoSphere BigInsights on IBM System x

Author: Mike Ebbers
Publisher: IBM Redbooks
Total Pages: 224
Release: 2013-06-12
Genre: Computers
ISBN: 0738438286

As world activities become more integrated, the rate of data growth has been increasing exponentially. As a result of this data explosion, current data management methods can become inadequate. People are using the term big data (sometimes referred to as Big Data) to describe this latest industry trend. IBM® is preparing the next generation of technology to meet these data management challenges. To provide the capability of incorporating big data sources and analytics of those sources, IBM developed an analytics product that is based on the open source computing framework Apache Hadoop. Each product in the framework provides unique capabilities to the data management environment and further enhances the value of your data warehouse investment. In this IBM Redbooks® publication, we describe the need for big data in an organization. We then introduce IBM InfoSphere® BigInsights™ and explain how it differs from standard Hadoop. BigInsights provides a packaged Hadoop distribution, a greatly simplified installation of Hadoop, and corresponding open source tools for application development, data movement, and cluster management. BigInsights also brings more options for data security, and, as a component of the IBM big data platform, it provides potential integration points with the other components of the platform. A new chapter has been added to this edition: Chapter 11 describes IBM Platform Symphony®, a scheduling product that brings low-latency scheduling and multi-tenancy to IBM InfoSphere BigInsights. The book is designed for clients, consultants, and other technical professionals.
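For a sense of the application-development side of a Hadoop distribution such as BigInsights, here is a minimal word-count job written against the standard Hadoop Streaming contract (mappers and reducers read standard input and write tab-separated key/value pairs to standard output). The script name and the way it would be submitted to a specific cluster are assumptions for illustration, not taken from the book.

```python
# Minimal sketch of a word-count job that follows the Hadoop Streaming contract
# (mappers and reducers read stdin and write tab-separated key/value pairs to
# stdout). The script name and the way it is submitted to a specific BigInsights
# cluster are assumptions for illustration.
import sys

def mapper():
    """Emit (word, 1) for every word read from standard input."""
    for line in sys.stdin:
        for word in line.split():
            print(f"{word.lower()}\t1")

def reducer():
    """Sum the counts per word; Hadoop delivers keys to the reducer already sorted."""
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    # Run as "wordcount.py map" for the map phase and "wordcount.py reduce" for reduce.
    mapper() if sys.argv[1] == "map" else reducer()
```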

IBM FlashSystem 5200 Product Guide

Author: Aldo Araujo Fonseca
Publisher: IBM Redbooks
Total Pages: 68
Release: 2022-07-22
Genre: Computers
ISBN: 0738459666

This IBM® Redbooks® Product Guide publication describes the IBM FlashSystem® 5200 solution, a next-generation IBM FlashSystem control enclosure. It is an NVMe end-to-end platform that is targeted at the entry and midrange market and delivers the full capabilities of IBM FlashCore® technology. It also provides a rich set of software-defined storage (SDS) features that are delivered by IBM Spectrum® Virtualize, including data reduction and deduplication, dynamic tiering, thin provisioning, snapshots, cloning, replication, data copy services, Transparent Cloud Tiering, and IBM HyperSwap®, including 3-site replication for high availability (HA). Scale-out and scale-up configurations further enhance capacity and throughput for better availability. The IBM FlashSystem 5200 is a high-performance storage solution that is based on a revolutionary 1U form factor. It consists of 12 NVMe flash devices in a 1U storage enclosure drawer with fully redundant canister components and no single point of failure. It is designed for businesses of all sizes, including small offices, remote and branch offices, and regional clients. It is a smarter, self-optimizing solution that requires less management, which enables organizations to overcome their storage challenges. Flash has come of age, and price reductions mean that the lower end of the storage market is seeing the value of moving over to flash and NVMe-based solutions. The IBM FlashSystem 5200 advances this transition by providing incredibly dense tiers of flash in a more affordable package. With the benefit of IBM FlashCore Module compression and new QLC flash-based technology becoming available, a compelling argument exists to move away from Nearline SAS storage and on to NVMe. With the release of IBM FlashSystem 5200 Software V8.4, extra functions and features are available, including support for new Distributed RAID1 (DRAID1) features, GUI enhancements, redirect-on-write for Data Reduction Pool (DRP) snapshots, and 3-site replication capabilities. This book is aimed at pre-sales and post-sales technical support, marketing, and storage administrators.

Information Governance Principles and Practices for a Big Data Landscape

Author: Chuck Ballard
Publisher: IBM Redbooks
Total Pages: 280
Release: 2014-03-31
Genre: Computers
ISBN: 0738439592

This IBM® Redbooks® publication describes how the IBM Big Data Platform provides the integrated capabilities that are required for the adoption of Information Governance in the big data landscape. As organizations embark on new use cases, such as Big Data Exploration, an enhanced 360-degree view of customers, or Data Warehouse modernization, and absorb ever-growing volumes and variety of data with accelerating velocity, the principles and practices of Information Governance become ever more critical to ensure trust in data and to help organizations overcome the inherent risks and achieve the desired value. The introduction of big data changes the information landscape. Data arrives faster than humans can react to it, and issues can quickly escalate into significant events. The variety of data now poses new privacy and security risks. The high volume of information, spread across many places, makes it harder to locate these issues and risks, and even the useful information that can drive new value and revenue. Information Governance provides organizations with a framework that can align their desired outcomes with their strategic management principles, the people who can implement those principles, and the architecture and platform that are needed to support their big data use cases. The IBM Big Data Platform, coupled with a framework for Information Governance, provides an approach to build, manage, and gain significant value from the big data landscape.

Implementing an Advanced Application Using Processes, Rules, Events, and Reports

Author: Ahmed Abdel-Gayed
Publisher: IBM Redbooks
Total Pages: 318
Release: 2012-10-12
Genre: Computers
ISBN: 0738437387

In this IBM® Redbooks® publication, we describe how to build an advanced business application from end to end. We use a fictional scenario to define the application, document the deployment methodology, and confirm the roles that are needed to support its development and deployment. Through step-by-step instructions, you learn how to:
- Define the project lifecycle using IBM Solution for Collaborative Lifecycle Management
- Build a logical and physical data model in IBM InfoSphere® Data Architect
- Confirm business rules and business events using IBM WebSphere® Operational Decision Management
- Map a business process and mediation using IBM Business Process Manager
- Use IBM Cognos® Business Intelligence to develop business insight
In addition, we articulate a testing strategy using IBM Rational® Quality Manager and deployment options using IBM Workload Deployer. Taken together, these steps provide comprehensive guidance for building and testing a solution using core IBM Rational, Information Management, WebSphere, Cognos, and Business Process Management software. The book seeks to dispel the notion that developing and deploying advanced solutions is taxing. It will appeal to IT architects and specialists who seek straightforward guidance on how to build comprehensive solutions, and they will be able to adapt these materials to kick-start their own end-to-end projects.

Enabling Real-time Analytics on IBM z Systems Platform

Author: Lydia Parziale
Publisher: IBM Redbooks
Total Pages: 218
Release: 2016-08-08
Genre: Computers
ISBN: 0738441864

For online transaction processing (OLTP) workloads, the IBM® z Systems™ platform, with IBM DB2®, data sharing, Workload Manager (WLM), geoplex, and other high-end features, is the widely acknowledged leader. Most customers now integrate business analytics with OLTP, for example by running scoring functions from a transactional context for real-time analytics or by applying machine-learning algorithms to enterprise data that is kept on the mainframe. As a result, IBM continues to invest so that clients can keep the complete lifecycle of data analysis, modeling, and scoring under z Systems control in a cost-efficient way, preserving the qualities of service in availability, security, and reliability that z Systems solutions offer. Because of the changed architecture and tighter integration, IBM has shown in a customer proof of concept that a particular client was able to achieve an orders-of-magnitude improvement in performance, allowing that client's data scientists to investigate the data in a more interactive way. Open technologies, such as the Predictive Model Markup Language (PMML), can help customers update single components instead of being forced to replace everything at once. As a result, you can combine your preferred tool for model generation (such as SAS Enterprise Miner or IBM SPSS® Modeler) with a different technology for model scoring (such as Zementis, a company focused on PMML scoring). IBM SPSS Modeler is a leading data mining workbench that can apply various algorithms in data preparation, cleansing, statistics, visualization, machine learning, and predictive analytics. It reflects over 20 years of experience and continued development, and it is integrated with z Systems. With IBM DB2 Analytics Accelerator 5.1 and SPSS Modeler 17.1, complete predictive model creation, including data transformation, can be done within DB2 Analytics Accelerator. So, instead of moving the data to a distributed environment, algorithms can be pushed to the data, using the cost-efficient DB2 Analytics Accelerator for the resource-intensive operations. This IBM Redbooks® publication explains the overall z Systems architecture, how the components can be installed and customized, how the new IBM DB2 Analytics Accelerator Loader can help with efficient loading of z Systems data and external data, how in-database transformation, in-database modeling, and in-transaction real-time scoring can be used, and what other related technologies are available. This book is intended for technical specialists, architects, and data scientists who want to use the technology on the z Systems platform. Most of the technologies described in this book require IBM DB2 for z/OS®. For acceleration of the data investigation, data transformation, and data modeling process, DB2 Analytics Accelerator is required. Most value can be achieved if most of the data already resides on z Systems platforms, although adding external data (such as data from social sources) poses no problem at all.
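To illustrate the real-time scoring pattern the book describes (scoring a record inside the transactional flow rather than in a later batch job), here is a minimal plain-Python sketch. It is not SPSS Modeler, DB2 Analytics Accelerator, or a PMML engine; the model coefficients, feature names, and threshold are invented, and in a real deployment the model would typically be exported (for example as PMML) and scored by a dedicated scoring engine or in-database function.

```python
# Illustrative sketch only: scoring a record at transaction time rather than in a
# later batch job, which is the pattern this book describes. A real deployment
# would export the model (for example as PMML from SPSS Modeler) and score it with
# a PMML engine or in-database scoring; the coefficients and feature names here
# are invented.
import math

INTERCEPT = -2.1  # pretend these were exported from a trained churn model
COEFFICIENTS = {"monthly_spend": 0.004, "support_calls": 0.55, "tenure_months": -0.03}

def score(record):
    """Return a churn-risk probability for one record (simple logistic model)."""
    z = INTERCEPT + sum(weight * record[name] for name, weight in COEFFICIENTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def process_transaction(record):
    """Score the record the moment the transaction is processed and react immediately."""
    risk = score(record)
    action = "offer retention deal" if risk > 0.5 else "none"
    return {"customer": record["customer"], "risk": round(risk, 2), "action": action}

print(process_transaction(
    {"customer": "c42", "monthly_spend": 180.0, "support_calls": 4, "tenure_months": 6}))
```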

Building the Hyperconnected Society

Author: Ovidiu Vermesan
Publisher: River Publishers
Total Pages: 330
Release: 2015-06-16
Genre: Computers
ISBN: 8793237995

This book aims to provide a broad overview of various topics of the Internet of Things (IoT), ranging from research, innovation, and development priorities to enabling technologies, nanoelectronics, cyber-physical systems, architecture, interoperability, and industrial applications. All of this is happening in a global context, building towards intelligent, interconnected decision making as an essential driver for new growth and co-competition across a wider set of markets. It is intended to be a standalone book in a series that covers the Internet of Things activities of the IERC (Internet of Things European Research Cluster), from research to technological innovation, validation, and deployment. The book builds on the ideas put forward by the European Research Cluster on the Internet of Things Strategic Research and Innovation Agenda, and it presents global views and state-of-the-art results on the challenges facing the research, innovation, development, and deployment of IoT in the coming years. The concept of IoT could disrupt consumer and industrial product markets, generating new revenues and serving as a growth driver for semiconductor, networking equipment, and service provider end markets globally. This will create new application and product end markets, change the value chain of the companies that create IoT technology and deploy it in various end sectors, and affect the business models of semiconductor, software, device, communication, and service provider stakeholders. The proliferation of intelligent devices at the edge of the network, the introduction of embedded software and app-driven hardware into manufactured devices, and the ability, through embedded software and hardware development, to monetize device functions and features by offering novel solutions could generate completely new types of revenue streams. Intelligent and IoT devices use software, software licensing, entitlement management, and Internet connectivity in ways that address many of the societal challenges that we will face in the next decade.