DuckDB in Action

Author: Mark Needham
Publisher: Simon and Schuster
Total Pages: 310
Release: 2024-09-10
Genre: Computers
ISBN: 1638355592

Dive into DuckDB and start processing gigabytes of data with ease—all with no data warehouse. DuckDB is a cutting-edge SQL database that makes it incredibly easy to analyze big data sets right from your laptop. In DuckDB in Action you’ll learn everything you need to know to get the most out of this awesome tool, keep your data secure on prem, and save hundreds on your cloud bill. From data ingestion to advanced data pipelines, you’ll learn everything you need to get the most out of DuckDB—all through hands-on examples. Open up DuckDB in Action and learn how to:
• Read and process data from CSV, JSON, and Parquet sources, both locally and remotely
• Write analytical SQL queries, including aggregations, common table expressions, window functions, special types of joins, and pivot tables
• Use DuckDB from Python, both with SQL and its "Relational" API, interacting with both databases and data frames
• Prepare, ingest, and query large datasets
• Build cloud data pipelines
• Extend DuckDB with custom functionality
Pragmatic and comprehensive, DuckDB in Action introduces the DuckDB database and shows you how to use it to solve common data workflow problems. You won’t need to read through pages of documentation—you’ll learn as you work. Get to grips with DuckDB's unique SQL dialect, learning to seamlessly load, prepare, and analyze data using SQL queries. Extend DuckDB with Python and with tools such as MotherDuck, and gain practical insights into building robust and automated data pipelines.
About the technology
DuckDB makes data analytics fast and fun! You don’t need to set up a Spark cluster or run a cloud data warehouse just to process a few hundred gigabytes of data. DuckDB is easily embeddable in any data analytics application, runs on a laptop, and processes data from almost any source, including JSON, CSV, Parquet, SQLite, and Postgres.
About the book
DuckDB in Action guides you example by example from setup, through your first SQL query, to advanced topics like building data pipelines and embedding DuckDB as a local data store for a Streamlit web app. You’ll explore DuckDB’s handy SQL extensions, get to grips with aggregation, analysis, and data without persistence, and use Python to customize DuckDB. A hands-on project accompanies each new topic, so you can see DuckDB in action.
What's inside
• Prepare, ingest, and query large datasets
• Build cloud data pipelines
• Extend DuckDB with custom functionality
• Fast-paced SQL recap: from simple queries to advanced analytics
About the reader
For data pros comfortable with Python and CLI tools.
About the author
Mark Needham is a blogger and video creator at @LearnDataWithMark. Michael Hunger leads product innovation for the Neo4j graph database. Michael Simons is a Java Champion, author, and Engineer at Neo4j.
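
To give a flavor of the workflow the book teaches, here is a minimal sketch of querying raw files with DuckDB from Python; the file names, URL, and columns are hypothetical placeholders rather than examples taken from the book:

    import duckdb

    # Query a CSV file directly with SQL; no loading step or database server required.
    top_cities = duckdb.sql("""
        SELECT city, count(*) AS trips, avg(fare) AS avg_fare
        FROM 'taxi_trips.csv'          -- hypothetical local file
        GROUP BY city
        ORDER BY trips DESC
        LIMIT 5
    """)
    print(top_cities)

    # The same SQL works against remote Parquet over HTTP(S)
    # (needs the httpfs extension, which recent DuckDB versions load automatically).
    duckdb.sql("SELECT count(*) FROM 'https://example.com/trips.parquet'").show()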

Getting Started with DuckDB

Author: Simon Aubury
Publisher: Packt Publishing Ltd
Total Pages: 382
Release: 2024-06-24
Genre: Computers
ISBN: 1803232536

Analyze and transform data efficiently with DuckDB, a versatile, modern, in-process SQL database.
Key Features
• Use DuckDB to rapidly load, transform, and query data across a range of sources and formats
• Gain practical experience using SQL, Python, and R to effectively analyze data
• Learn how open source tools and cloud services in the broader data ecosystem complement DuckDB’s versatile capabilities
• Purchase of the print or Kindle book includes a free PDF eBook
Book Description
DuckDB is a fast in-process analytical database. Getting Started with DuckDB offers a practical overview of its usage. You'll learn to load, transform, and query various data formats, including CSV, JSON, and Parquet. The book covers DuckDB's optimizations, SQL enhancements, and extensions for specialized applications. Working with examples in SQL, Python, and R, you'll explore analyzing public datasets and discover tools that enhance DuckDB workflows. This guide suits both experienced and new data practitioners, quickly equipping you to apply DuckDB's capabilities in analytical projects. You'll gain proficiency in using DuckDB for diverse tasks, enabling effective integration into your data workflows.
What you will learn
• Understand the properties and applications of a columnar in-process database
• Use SQL to load, transform, and query a range of data formats
• Discover DuckDB's rich extensions and learn how to apply them
• Use nested data types to model semi-structured data, and extract and model JSON data
• Integrate DuckDB into your Python and R analytical workflows
• Effectively leverage DuckDB's convenient SQL enhancements
• Explore the wider ecosystem and pathways for building DuckDB-powered data applications
Who this book is for
If you’re interested in expanding your analytical toolkit, this book is for you. It will be particularly valuable for data analysts wanting to rapidly explore and query complex data, data and software engineers looking for a lean and versatile data processing tool, and data scientists needing a scalable data manipulation library that integrates seamlessly with Python and R. You will get the most from this book if you have some familiarity with SQL and foundational database concepts, as well as exposure to a programming language such as Python or R.
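
As a rough illustration of the kind of Python and semi-structured-data work the book covers, the sketch below queries a pandas DataFrame and a JSON file from DuckDB; the data, file name, and columns are invented for illustration:

    import duckdb
    import pandas as pd

    # DuckDB can query an in-memory pandas DataFrame by name (replacement scan).
    df = pd.DataFrame({"station": ["A", "B", "A"], "temp_c": [12.1, 9.4, 13.0]})
    print(duckdb.sql("SELECT station, avg(temp_c) AS avg_temp FROM df GROUP BY station"))

    # Semi-structured JSON can be ingested and nested lists unnested with SQL.
    duckdb.sql("""
        SELECT id, unnest(tags) AS tag
        FROM read_json_auto('readings.json')   -- hypothetical file with id and tags fields
    """).show()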

DuckDB in Action

Author: Mark Needham
Publisher: Simon and Schuster
Total Pages: 310
Release: 2024-08-27
Genre: Computers
ISBN: 1633437256

Dive into DuckDB and start processing gigabytes of data with ease—all with no data warehouse. You don’t need expensive hardware or to spin up a whole new cluster whenever you want to analyze a big data set. You just need DuckDB! This modern and fast embedded database runs on a laptop, and lets you easily process data from almost any source, including JSON, CSV, Parquet, SQLite, and Postgres. In DuckDB in Action you’ll learn everything you need to know to get the most out of this awesome tool, keep your data secure on prem, and save hundreds on your cloud bill. Open up DuckDB in Action and learn how to:
• Read and process data from CSV, JSON, and Parquet sources, both locally and remotely
• Write analytical SQL queries, including aggregations, common table expressions, window functions, special types of joins, and pivot tables
• Use DuckDB from Python, both with SQL and its "Relational" API, interacting with both databases and data frames
• Prepare, ingest, and query large datasets
• Build cloud data pipelines
• Extend DuckDB with custom functionality
DuckDB in Action introduces the DuckDB database and shows you how to use it to solve common data workflow problems. It’s full of quick wins—right from chapter one, you’ll be finding new ways that DuckDB can speed up your work as a data professional. Each new concept is paired with a hands-on project example, so you can easily see how DuckDB works in action. Purchase of the print book includes a free eBook in PDF and ePub formats from Manning Publications.
About the book
DuckDB in Action will show you how to quickly get your hands dirty with DuckDB. You won’t need to read through pages of documentation—you’ll learn as you work. Begin with DuckDB’s CLI embedded mode, then dive straight into modern SQL queries and DuckDB’s handy SQL extensions. From there, you’ll explore the different ways you can analyze data with DuckDB, including advanced aggregation and analysis, data without persistence, and DuckDB’s underlying architecture. Learn how to combine DuckDB with the Python ecosystem for even greater customization, and how to extend DuckDB with its own tools. You’ll take to DuckDB like a duck to water, rapidly solving almost any relational data task with zero friction.
About the reader
For data scientists, data engineers, and developers interested in analyzing structured data. You’ll need some knowledge of Python, CLI tools, and SQL to get the most out of this guide.
About the author
Mark Needham is a blogger and video creator at @LearnDataWithMark, where his series on DuckDB offers viewers hands-on insights into practical database applications. Michael Hunger works on the open source Neo4j graph database in many roles and leads its product innovation and developer product strategy. Michael Simons is a Java Champion, author, and Staff Software Engineer at Neo4j who has been working professionally as a developer for more than 20 years.
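
The "Relational" API mentioned above can be sketched roughly as follows; this is an illustrative example with a hypothetical CSV file and columns, not code taken from the book:

    import duckdb

    # Build a query step by step with the relational API instead of a SQL string.
    orders = duckdb.read_csv("orders.csv")     # hypothetical file
    summary = (
        orders
        .filter("status = 'shipped'")
        .aggregate("customer_id, sum(amount) AS total", "customer_id")
        .order("total DESC")
        .limit(10)
    )
    print(summary)

    # The resulting relation converts directly to a pandas DataFrame.
    df = summary.df()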

Advanced Data Analytics with AWS

Author: Joseph Conley
Publisher: Orange Education Pvt Ltd
Total Pages: 268
Release: 2024-04-17
Genre: Computers
ISBN: 8197081891

Master the Fundamentals of Data Analytics at Scale
KEY FEATURES
● Comprehensive guide to constructing data engineering workflows spanning diverse data sources
● Expert techniques for transforming and visualizing data to extract actionable insights
● Advanced methodologies for analyzing data and employing machine learning to uncover intricate patterns
DESCRIPTION
Embark on a transformative journey into the realm of data analytics with AWS with this practical and incisive handbook. Begin your exploration with an insightful introduction to the fundamentals of data analytics, setting the stage for your AWS adventure. The book then covers collecting data efficiently and effectively on AWS, laying the groundwork for insightful analysis. It will dive deep into processing data, uncovering invaluable techniques to harness the full potential of your datasets. The book will equip you with advanced data analysis skills, unlocking the ability to discern complex patterns and insights. It covers additional use cases for data analysis on AWS, from predictive modeling to sentiment analysis, expanding your analytical horizons. The final section of the book will utilize the power of data visualization and interaction, revolutionizing the way you engage with and derive value from your data. Gain valuable insights into emerging trends and technologies shaping the future of data analytics, and conclude your journey with actionable next steps, empowering you to continue your data analytics odyssey with confidence.
WHAT WILL YOU LEARN
● Construct streamlined data engineering workflows capable of ingesting data from diverse sources and formats.
● Employ data transformation tools to efficiently cleanse and reshape data, priming it for analysis.
● Perform ad-hoc queries for preliminary data exploration, uncovering initial insights.
● Utilize prepared datasets to craft compelling, interactive data visualizations that communicate actionable insights.
● Develop advanced machine learning and Generative AI workflows to delve into intricate aspects of complex datasets, uncovering deeper insights.
WHO IS THIS BOOK FOR?
This book is ideal for aspiring data engineers, analysts, and data scientists seeking to deepen their understanding and practical skills in data engineering, data transformation, visualization, and advanced analytics. It is also beneficial for professionals and students looking to leverage AWS services for their data-related tasks.
TABLE OF CONTENTS
1. Introduction to Data Analytics and AWS
2. Getting Started with AWS
3. Collecting Data with AWS
4. Processing Data on AWS
5. Descriptive Analytics on AWS
6. Advanced Data Analysis on AWS
7. Additional Use Cases for Data Analysis
8. Data Visualization and Interaction on AWS
9. The Future of Data Analytics
10. Conclusion and Next Steps
Index
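
The description does not tie itself to specific AWS services, but an ad-hoc query step of the kind listed under "What will you learn" might look roughly like the boto3 Athena sketch below; Athena itself is an assumption rather than something this book is confirmed to use, and the database, table, and bucket names are hypothetical:

    import time
    import boto3

    # Submit an ad-hoc Athena query and poll until it finishes.
    athena = boto3.client("athena", region_name="us-east-1")
    qid = athena.start_query_execution(
        QueryString="SELECT event_type, count(*) AS n FROM events GROUP BY event_type",
        QueryExecutionContext={"Database": "analytics_db"},                  # hypothetical database
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},   # hypothetical bucket
    )["QueryExecutionId"]

    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
        print(rows[:5])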

In-Memory Analytics with Apache Arrow

Author: Matthew Topol
Publisher: Packt Publishing Ltd
Total Pages: 406
Release: 2024-09-30
Genre: Computers
ISBN: 183546968X

Harness the power of Apache Arrow to optimize tabular data processing and develop robust, high-performance data systems with its standardized, language-independent columnar memory format.
Key Features
• Explore Apache Arrow's data types and integration with pandas, Polars, and Parquet
• Work with Arrow libraries such as Flight SQL, the Acero compute engine, and the Dataset APIs for tabular data
• Enhance and accelerate machine learning data pipelines using Apache Arrow and its subprojects
• Purchase of the print or Kindle book includes a free PDF eBook
Book Description
Apache Arrow is an open source, columnar in-memory data format designed for efficient data processing and analytics. This book harnesses the author’s 15 years of experience to show you a standardized way to work with tabular data across various programming languages and environments, enabling high-performance data processing and exchange. This updated second edition gives you an overview of the Arrow format, highlighting its versatility and benefits through real-world use cases. It guides you through enhancing data science workflows, optimizing performance with Apache Parquet and Spark, and ensuring seamless data translation. You’ll explore data interchange and storage formats, and Arrow's relationships with Parquet, Protocol Buffers, FlatBuffers, JSON, and CSV. You’ll also discover Apache Arrow subprojects, including Flight SQL, Database Connectivity, and nanoarrow. You’ll learn to streamline machine learning workflows, use the Arrow Dataset APIs, and integrate with popular analytical data systems such as Snowflake, Dremio, and DuckDB. The latter chapters provide real-world examples and case studies of products powered by Apache Arrow, giving practical insights into its applications. By the end of this book, you’ll have all the building blocks to create efficient and powerful analytical services and utilities with Apache Arrow.
What you will learn
• Use the Apache Arrow libraries to access data files, both locally and in the cloud
• Understand the zero-copy elements of the Apache Arrow format
• Improve the read performance of data pipelines by memory-mapping Arrow files
• Produce and consume Apache Arrow data efficiently by sharing memory with the C API
• Leverage the Arrow compute engine, Acero, to perform complex operations
• Create Arrow Flight servers and clients for transferring data quickly
• Build the Arrow libraries locally and contribute to the community
Who this book is for
This book is for developers, data engineers, and data scientists looking to explore the capabilities of Apache Arrow from the ground up. Whether you’re building utilities for data analytics and query engines, or building full pipelines with tabular data, this book can help you out regardless of your preferred programming language. A basic understanding of data analysis concepts is helpful, but not necessary. Code examples are provided in C++, Python, and Go throughout the book.
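
As a small illustration of the memory-mapping technique listed under "What you will learn", here is a rough pyarrow sketch; the file name and data are invented for illustration:

    import pyarrow as pa
    import pyarrow.ipc as ipc

    # Write a small table to an Arrow IPC file on disk.
    table = pa.table({"id": [1, 2, 3], "value": [0.5, 1.5, 2.5]})
    with pa.OSFile("data.arrow", "wb") as sink:          # hypothetical file name
        with ipc.new_file(sink, table.schema) as writer:
            writer.write_table(table)

    # Memory-map the file and read it back; the record batches reference the
    # mapped memory rather than being copied into the Python heap.
    with pa.memory_map("data.arrow", "r") as source:
        loaded = ipc.open_file(source).read_all()
    print(loaded.num_rows, loaded.schema)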

DevOps for Data Science

Author: Alex Gold
Publisher: CRC Press
Total Pages: 274
Release: 2024-06-19
Genre: Business & Economics
ISBN: 104003442X

Data scientists are experts at analyzing, modelling and visualizing data but, at one point or another, have all encountered difficulties in collaborating with or delivering their work to the people and systems that matter. Born out of the agile software movement, DevOps is a set of practices, principles and tools that help software engineers reliably deploy work to production. This book takes the lessons of DevOps and applies them to creating and delivering production-grade data science projects in Python and R.
This book’s first section explores how to build data science projects that deploy to production with no frills or fuss. Its second section covers the rudiments of administering a server, including Linux, application, and network administration. The final section demystifies the concerns of enterprise IT/Administration, making it possible for data scientists to communicate and collaborate with their organization’s security, networking, and administration teams.
Key Features:
• Start-to-finish labs take readers through creating projects that meet DevOps best practices and creating a server-based environment to work on and deploy them.
• Provides an appendix of cheatsheets so that readers will never be without the reference they need to remember a Git, Docker, or Command Line command.
• Distills what a data scientist needs to know about Docker, APIs, CI/CD, Linux, DNS, SSL, HTTP, Auth, and more.
• Written specifically to address the concerns of a data scientist who wants to take their Python or R work to production.
There are countless books on creating data science work that is correct. This book, on the other hand, aims to go beyond that, targeted at data scientists who want their work to be more than merely accurate and to deliver work that matters.
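
One recurring pattern in taking data science work to production is exposing a model behind an HTTP API. The sketch below illustrates that idea with FastAPI and a placeholder scoring function; the framework choice, endpoint, and payload fields are assumptions for illustration, not necessarily what the book uses:

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Features(BaseModel):
        # Hypothetical model inputs.
        bedrooms: int
        square_feet: float

    @app.post("/predict")
    def predict(features: Features) -> dict:
        # Placeholder scoring logic standing in for a real trained model.
        price = 50_000 + 120 * features.square_feet + 10_000 * features.bedrooms
        return {"predicted_price": price}

    # Run locally with: uvicorn app:app --reload   (assuming this file is app.py)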

High Performance Python

Author: Micha Gorelick
Publisher: O'Reilly Media
Total Pages: 469
Release: 2020-04-30
Genre: Computers
ISBN: 1492054992

Your Python code may run correctly, but you need it to run faster. Updated for Python 3, this expanded edition shows you how to locate performance bottlenecks and significantly speed up your code in high-data-volume programs. By exploring the fundamental theory behind design choices, High Performance Python helps you gain a deeper understanding of Python’s implementation. How do you take advantage of multicore architectures or clusters? Or build a system that scales up and down without losing reliability? Experienced Python programmers will learn concrete solutions to many issues, along with war stories from companies that use high-performance Python for social media analytics, productionized machine learning, and more.
• Get a better grasp of NumPy, Cython, and profilers
• Learn how Python abstracts the underlying computer architecture
• Use profiling to find bottlenecks in CPU time and memory usage
• Write efficient programs by choosing appropriate data structures
• Speed up matrix and vector computations
• Use tools to compile Python down to machine code
• Manage multiple I/O and computational operations concurrently
• Convert multiprocessing code to run on local or remote clusters
• Deploy code faster using tools like Docker
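
As a tiny illustration of the profile-then-parallelize workflow the list above describes, the following sketch uses cProfile and multiprocessing; the workload is a toy stand-in for a real hotspot:

    import cProfile
    from multiprocessing import Pool

    def slow_sum_of_squares(n: int) -> int:
        # Deliberately naive loop, standing in for a real bottleneck.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        # Step 1: profile to confirm where the time actually goes.
        cProfile.run("slow_sum_of_squares(2_000_000)", sort="cumulative")

        # Step 2: spread independent work across cores with a process pool.
        with Pool() as pool:
            results = pool.map(slow_sum_of_squares, [2_000_000] * 8)
        print(sum(results))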

Beginning Android Application Development

Author: Wei-Meng Lee
Publisher: John Wiley & Sons
Total Pages: 448
Release: 2011-03-10
Genre: Computers
ISBN: 1118087801

Create must-have applications for the latest Android OS
The Android OS is a popular and flexible platform for many of today's most in-demand mobile devices. This full-color guide offers you a hands-on introduction to creating Android applications for the latest mobile devices. Veteran author Wei Meng Lee accompanies each lesson with real-world examples to drive home the content he covers. Beginning with an overview of core Android features and tools, he moves at a steady pace while teaching everything you need to know to successfully develop your own Android applications.
• Explains what an activity is and reviews its lifecycle
• Zeroes in on customizing activities by applying styles and themes
• Looks at the components of a screen, including LinearLayout, AbsoluteLayout, and RelativeLayout, among others
• Details ways to adapt to different screen sizes and adjust display orientation
• Reviews the variety of views such as TextView, ProgressBar, TimePicker, and more
Beginning Android Application Development pares down the most essential steps you need to know so you can start creating Android applications today.

Main Memory Database Systems

Author: Frans Faerber
Publisher: Foundations and Trends in Databases
Total Pages: 144
Release: 2017-07-20
Genre: Probabilistic databases
ISBN: 9781680833249

With growing memory sizes and memory prices dropping by a factor of 10 every 5 years, data having a "primary home" in memory is now a reality. Main-memory databases eschew many of the traditional architectural pillars of relational database systems that were optimized for disk-resident data. These memory-optimized designs result in systems that feature several innovative approaches to fundamental issues (e.g., concurrency control, query processing) and achieve orders-of-magnitude performance improvements over traditional designs. This monograph provides an overview of recent developments in main-memory database systems. It covers five main issues and architectural choices that need to be made when building a high-performance main-memory-optimized database: data organization and storage, indexing, concurrency control, durability and recovery techniques, and query processing and compilation. The monograph focuses on four commercial and research systems: H-Store/VoltDB, Hekaton, HyPer, and SAP HANA. These systems are diverse in their design choices and form a representative sample of the state of the art in main-memory database systems. It also covers other commercial and academic systems, along with current and future research trends.