Data Engineering in Medical Imaging: First MICCAI Workshop, DEMI 2023, Held in Conjunction with MICCAI 2023, Vancouver, BC, Canada, October 8, 2023, Proceedings (Lecture Notes in Computer Science #14314)
by Binod Bhattarai Sharib Ali Anita Rau Anh Nguyen Ana Namburete Razvan Caramalau Danail Stoyanov

Volume LNCS 14314 constitutes the refereed proceedings of the First MICCAI Workshop on Data Engineering in Medical Imaging, DEMI 2023, held in conjunction with the 26th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2023, in Vancouver, BC, Canada, in October 2023.

The DEMI 2023 proceedings contain 11 high-quality papers of 9 to 15 pages, pre-selected through a rigorous peer review process (with an average of three reviews per paper). All submissions were peer-reviewed through a double-blind process by at least three members of the scientific review committee, which comprised 16 experts in the field of medical imaging. The accepted manuscripts cover various medical image analysis methods and applications.
Data Engineering on Azure
by Vlad Riscutia

Build a data platform to the industry-leading standards set by Microsoft's own infrastructure.

Summary
In Data Engineering on Azure you will learn how to:
- Pick the right Azure services for different data scenarios
- Manage data inventory
- Implement production-quality data modeling, analytics, and machine learning workloads
- Handle data governance
- Use DevOps to increase reliability
- Ingest, store, and distribute data
- Apply best practices for compliance and access control

Data Engineering on Azure reveals the data management patterns and techniques that support Microsoft's own massive data infrastructure. Author Vlad Riscutia, a data engineer at Microsoft, teaches you to bring engineering rigor to your data platform and ensure that your data prototypes function just as well under the pressures of production. You'll implement common data modeling patterns, stand up cloud-native data platforms on Azure, and get to grips with DevOps for both analytics and machine learning. Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.

About the technology
Build secure, stable data platforms that can scale to loads of any size. When a project moves from the lab into production, you need confidence that it can stand up to real-world challenges. This book teaches you to design and implement cloud-based data infrastructure that you can easily monitor, scale, and modify.

About the book
In Data Engineering on Azure you'll learn the skills you need to build and maintain big data platforms in massive enterprises. This invaluable guide includes clear, practical guidance for setting up infrastructure, orchestration, workloads, and governance. As you go, you'll set up efficient machine learning pipelines, and then master time-saving automation and DevOps solutions. The Azure-based examples are easy to reproduce on other cloud platforms.

What's inside
- Data inventory and data governance
- Assuring data quality, compliance, and distribution
- Building automated pipelines to increase reliability
- Ingesting, storing, and distributing data
- Production-quality data modeling, analytics, and machine learning

About the reader
For data engineers familiar with cloud computing and DevOps.

About the author
Vlad Riscutia is a software architect at Microsoft.

Table of Contents
1 Introduction
PART 1 INFRASTRUCTURE
2 Storage
3 DevOps
4 Orchestration
PART 2 WORKLOADS
5 Processing
6 Analytics
7 Machine learning
PART 3 GOVERNANCE
8 Metadata
9 Data quality
10 Compliance
11 Distributing data
Data Engineering with Alteryx: Helping data engineers apply DataOps practices with Alteryx
by Paul Houghton

Build and deploy data pipelines with Alteryx by applying practical DataOps principles.

Key Features
- Learn DataOps principles to build data pipelines with Alteryx
- Build robust data pipelines with Alteryx Designer
- Use Alteryx Server and Alteryx Connect to share and deploy your data pipelines

Book Description
Alteryx is a GUI-based development platform for data analytic applications. Data Engineering with Alteryx will help you leverage Alteryx's code-free aspects, which increase development speed, while still enabling you to make the most of the code-based skills you have.

This book will teach you the principles of DataOps and how they can be used with the Alteryx software stack. You'll build data pipelines with Alteryx Designer and incorporate the error handling and data validation needed for reliable datasets. Next, you'll take the data pipeline from raw data, transform it into a robust dataset, and publish it to Alteryx Server following a continuous integration process.

By the end of this Alteryx book, you'll be able to build systems for validating datasets, monitoring workflow performance, managing access, and promoting the use of your data sources.

What you will learn
- Build a working pipeline to integrate an external data source
- Develop monitoring processes for the pipeline example
- Understand and apply DataOps principles to an Alteryx data pipeline
- Gain skills for data engineering with the Alteryx software stack
- Work with spatial analytics and machine learning techniques in an Alteryx workflow
- Explore Alteryx workflow deployment strategies using metadata validation and continuous integration
- Organize content on Alteryx Server and secure user access

Who this book is for
If you're a data engineer, data scientist, or data analyst who wants to set up a reliable process for developing data pipelines using Alteryx, this book is for you. You'll also find this book useful if you are trying to make the development and deployment of datasets more robust by following DataOps principles. Familiarity with Alteryx products will be helpful but is not necessary.
Data Engineering with Apache Spark, Delta Lake, and Lakehouse: Create scalable pipelines that ingest, curate, and aggregate complex data in a timely and secure way
by Manoj Kukreja Danil Zburivsky

Understand the complexities of modern-day data engineering platforms and explore strategies to deal with them, with the help of use-case scenarios led by an industry expert in big data.

Key Features
- Become well-versed with the core concepts of Apache Spark and Delta Lake for building data platforms
- Learn how to ingest, process, and analyze data that can later be used for training machine learning models
- Understand how to operationalize data models in production using curated data

Book Description
In the world of ever-changing data and schemas, it is important to build data pipelines that can auto-adjust to changes. This book will help you build scalable data platforms that managers, data scientists, and data analysts can rely on.

Starting with an introduction to data engineering, along with its key concepts and architectures, this book will show you how to use Microsoft Azure Cloud services effectively for data engineering. You'll cover data lake design patterns and the different stages through which the data needs to flow in a typical data lake. Once you've explored the main features of Delta Lake to build data lakes with fast performance and governance in mind, you'll advance to implementing the lambda architecture using Delta Lake. Packed with practical examples and code snippets, this book takes you through real-world examples based on production scenarios faced by the author in his 10 years of experience working with big data. Finally, you'll cover data lake deployment strategies that play an important role in provisioning the cloud resources and deploying the data pipelines in a repeatable and continuous way.

By the end of this data engineering book, you'll know how to effectively deal with ever-changing data and create scalable data pipelines to streamline data science, ML, and artificial intelligence (AI) tasks.

What you will learn
- Discover the challenges you may face in the data engineering world
- Add ACID transactions to Apache Spark using Delta Lake
- Understand effective design strategies to build enterprise-grade data lakes
- Explore architectural and design patterns for building efficient data ingestion pipelines
- Orchestrate a data pipeline for preprocessing data using Apache Spark and Delta Lake APIs
- Automate deployment and monitoring of data pipelines in production
- Get to grips with securing, monitoring, and managing data pipelines and models efficiently

Who this book is for
This book is for aspiring data engineers and data analysts who are new to the world of data engineering and are looking for a practical guide to building scalable data platforms. If you already work with PySpark and want to use Delta Lake for data engineering, you'll find this book useful. Basic knowledge of Python, Spark, and SQL is expected.
Data Engineering with AWS: Learn how to design and build cloud-based data transformation pipelines using AWS
by Gareth Eagar Rafael Pecora Marcos Amorim

The missing expert-led manual for the AWS ecosystem: go from foundations to building data engineering pipelines effortlessly. Purchase of the print or Kindle book includes a free eBook in PDF format.

Key Features
- Learn about common data architectures and modern approaches to generating value from big data
- Explore AWS tools for ingesting, transforming, and consuming data, and for orchestrating pipelines
- Learn how to architect and implement data lakes and data lakehouses for big data analytics from a data lakes expert

Book Description
Written by a Senior Data Architect with over twenty-five years of experience in the business, Data Engineering with AWS is a book whose sole aim is to make you proficient in using the AWS ecosystem. Using a thorough and hands-on approach to data, this book will give aspiring and new data engineers a solid theoretical and practical foundation to succeed with AWS. As you progress, you'll be taken through the services and the skills you need to architect and implement data pipelines on AWS.

You'll begin by reviewing important data engineering concepts and some of the core AWS services that form a part of the data engineer's toolkit. You'll then architect a data pipeline, review raw data sources, transform the data, and learn how the transformed data is used by various data consumers. You'll also learn about populating data marts and data warehouses, along with how a data lakehouse fits into the picture. Later, you'll be introduced to AWS tools for analyzing data, including those for ad-hoc SQL queries and creating visualizations. In the final chapters, you'll understand how the power of machine learning and artificial intelligence can be used to draw new insights from data. By the end of this AWS book, you'll be able to carry out data engineering tasks and implement a data pipeline on AWS independently.

What you will learn
- Understand data engineering concepts and emerging technologies
- Ingest streaming data with Amazon Kinesis Data Firehose
- Optimize, denormalize, and join datasets with AWS Glue Studio
- Use Amazon S3 events to trigger a Lambda process to transform a file
- Run complex SQL queries on data lake data using Amazon Athena
- Load data into a Redshift data warehouse and run queries
- Create a visualization of your data using Amazon QuickSight
- Extract sentiment data from a dataset using Amazon Comprehend

Who this book is for
This book is for data engineers, data analysts, and data architects who are new to AWS and looking to extend their skills to the AWS cloud. Anyone new to data engineering who wants to learn about the foundational concepts while gaining practical experience with common data engineering services on AWS will also find this book useful. A basic understanding of big data-related topics and Python coding will help you get the most out of this book, but it's not a prerequisite. Familiarity with the AWS console and core services will also help you follow along.
Data Engineering with Databricks Cookbook: Build effective data and AI solutions using Apache Spark, Databricks, and Delta Lake
by Pulkit Chadha

Work through 70 recipes for implementing reliable data pipelines with Apache Spark, optimally store and process structured and unstructured data in Delta Lake, and use Databricks to orchestrate and govern your data.

Key Features
- Learn data ingestion, data transformation, and data management techniques using Apache Spark and Delta Lake
- Gain practical guidance on using Delta Lake tables and orchestrating data pipelines
- Implement reliable DataOps and DevOps practices, and enforce data governance policies on Databricks
- Purchase of the print or Kindle book includes a free PDF eBook

Book Description
Written by a Senior Solutions Architect at Databricks, Data Engineering with Databricks Cookbook will show you how to effectively use Apache Spark, Delta Lake, and Databricks for data engineering, starting with a comprehensive introduction to data ingestion and loading with Apache Spark. What makes this book unique is its recipe-based approach, which will help you put your knowledge to use straight away and tackle common problems. You'll be introduced to various data manipulation and data transformation solutions that can be applied to data, find out how to manage and optimize Delta tables, and get to grips with ingesting and processing streaming data. The book will also show you how to address performance problems in Apache Spark apps and Delta Lake. Advanced recipes later in the book will teach you how to use Databricks to implement DataOps and DevOps practices, as well as how to orchestrate and schedule data pipelines using Databricks Workflows. You'll also go through the full process of setting up and configuring Unity Catalog for data governance.

By the end of this book, you'll be well-versed in building reliable and scalable data pipelines using modern data engineering technologies.

What you will learn
- Perform data loading, ingestion, and processing with Apache Spark
- Discover data transformation techniques and custom user-defined functions (UDFs) in Apache Spark
- Manage and optimize Delta tables with Apache Spark and Delta Lake APIs
- Use Spark Structured Streaming for real-time data processing
- Optimize Apache Spark application and Delta table query performance
- Implement DataOps and DevOps practices on Databricks
- Orchestrate data pipelines with Delta Live Tables and Databricks Workflows
- Implement data governance policies with Unity Catalog

Who this book is for
This book is for data engineers, data scientists, and data practitioners who want to learn how to build efficient and scalable data pipelines using Apache Spark, Delta Lake, and Databricks. To get the most out of this book, you should have basic knowledge of data architecture, SQL, and Python programming.
Data Engineering with Google Cloud Platform: A practical guide to operationalizing scalable data analytics systems on GCP
by Adi Wijaya

Build and deploy your own data pipelines on GCP, make key architectural decisions, and gain the confidence to boost your career as a data engineer.

Key Features
- Understand data engineering concepts, the role of a data engineer, and the benefits of using GCP for building your solution
- Learn how to use the various GCP products to ingest, consume, and transform data and orchestrate pipelines
- Discover tips to prepare for and pass the Professional Data Engineer exam

Book Description
With this book, you'll understand how the highly scalable Google Cloud Platform (GCP) enables data engineers to create end-to-end data pipelines, right from storing and processing data and workflow orchestration to presenting data through visualization dashboards.

Starting with a quick overview of the fundamental concepts of data engineering, you'll learn the various responsibilities of a data engineer and how GCP plays a vital role in fulfilling those responsibilities. As you progress through the chapters, you'll be able to leverage GCP products to build a sample data warehouse using Cloud Storage and BigQuery and a data lake using Dataproc. The book gradually takes you through operations such as data ingestion, data cleansing, transformation, and integrating data with other sources. You'll learn how to design IAM for data governance, deploy ML pipelines with Vertex AI, leverage pre-built GCP models as a service, and visualize data with Google Data Studio to build compelling reports. Finally, you'll find tips on how to boost your career as a data engineer, take the Professional Data Engineer certification exam, and get ready to become an expert in data engineering with GCP.

By the end of this data engineering book, you'll have developed the skills to perform core data engineering tasks and build efficient ETL data pipelines with GCP.

What you will learn
- Load data into BigQuery and materialize its output for downstream consumption
- Build data pipeline orchestration using Cloud Composer
- Develop Airflow jobs to orchestrate and automate a data warehouse
- Build a Hadoop data lake, create ephemeral clusters, and run jobs on the Dataproc cluster
- Leverage Pub/Sub for messaging and ingestion for event-driven systems
- Use Dataflow to perform ETL on streaming data
- Unlock the power of your data with Data Studio
- Calculate the GCP cost estimation for your end-to-end data solutions

Who this book is for
This book is for data engineers, data analysts, and anyone looking to design and manage data processing pipelines using GCP. You'll find this book useful if you are preparing to take Google's Professional Data Engineer exam. A beginner-level understanding of data science, the Python programming language, and Linux commands is necessary. A basic understanding of data processing and cloud computing, in general, will help you make the most out of this book.
Data Engineering with Python: Work with massive datasets to design data models and automate data pipelines using Python
by Paul Crickard

Build, monitor, and manage real-time data pipelines to create data engineering infrastructure efficiently using open-source Apache projects.

Key Features
- Become well-versed in data architectures, data preparation, and data optimization skills with the help of practical examples
- Design data models and learn how to extract, transform, and load (ETL) data using Python
- Schedule, automate, and monitor complex data pipelines in production

Book Description
Data engineering provides the foundation for data science and analytics, and forms an important part of all businesses. This book will help you to explore various tools and methods that are used for understanding the data engineering process using Python.

The book will show you how to tackle challenges commonly faced in different aspects of data engineering. You'll start with an introduction to the basics of data engineering, along with the technologies and frameworks required to build data pipelines to work with large datasets. You'll learn how to transform and clean data and perform analytics to get the most out of your data. As you advance, you'll discover how to work with big data of varying complexity and production databases, and build data pipelines. Using real-world examples, you'll build architectures on which you'll learn how to deploy data pipelines.

By the end of this Python book, you'll have gained a clear understanding of data modeling techniques, and will be able to confidently build data engineering pipelines for tracking data, running quality checks, and making necessary changes in production.

What you will learn
- Understand how data engineering supports data science workflows
- Discover how to extract data from files and databases and then clean, transform, and enrich it
- Configure processors for handling different file formats as well as both relational and NoSQL databases
- Find out how to implement a data pipeline and dashboard to visualize results
- Use staging and validation to check data before landing in the warehouse
- Build real-time pipelines with staging areas that perform validation and handle failures
- Get to grips with deploying pipelines in the production environment

Who this book is for
This book is for data analysts, ETL developers, and anyone looking to get started with or transition to the field of data engineering or refresh their knowledge of data engineering using Python. This book will also be useful for students planning to build a career in data engineering or IT professionals preparing for a transition. No previous knowledge of data engineering is required.
Data Envelopment Analysis: A Handbook of Modeling Internal Structure and Network (International Series in Operations Research & Management Science #208)
by Joe Zhu Wade D. Cook

This handbook serves as a complement to the Handbook on Data Envelopment Analysis (eds. W. W. Cooper, L. M. Seiford, and J. Zhu, 2011, Springer) in an effort to extend the frontier of DEA research. It provides a comprehensive source for state-of-the-art DEA modeling on internal structures and network DEA.

Chapter 1 provides a survey on two-stage network performance decomposition and modeling techniques. Chapter 2 discusses the pitfalls in network DEA modeling. Chapter 3 discusses efficiency decompositions in network DEA under three types of structures, namely series, parallel, and dynamic. Chapter 4 studies the determination of the network DEA frontier. In Chapter 5, additive efficiency decomposition in network DEA is discussed. An approach to scale efficiency measurement in two-stage networks is presented in Chapter 6. Chapter 7 further discusses scale efficiency decomposition in two-stage networks. Chapter 8 offers a bargaining game approach to modeling two-stage networks. Chapter 9 studies shared resources and efficiency decomposition in two-stage networks. Chapter 10 introduces an approach to computing the technical efficiency scores for a dynamic production network and its sub-processes. Chapter 11 presents a slacks-based network DEA. Chapter 12 discusses a DEA modeling technique for a two-stage network process where the inputs of the second stage include both the outputs from the first stage and additional inputs to the second stage. Chapter 13 presents an efficiency measurement methodology for multi-stage production systems. Chapter 14 discusses network DEA models, both static and dynamic; the discussion also explores various useful objective functions that can be applied to the models to find the optimal allocation of resources for processes within the black box that are normally invisible to DEA. Chapter 15 provides a comprehensive review of various types of network DEA modeling techniques.

Chapter 16 presents shared-resources models for deriving aggregate measures of bank-branch performance, with accompanying component measures that make up that aggregate value. Chapter 17 examines a set of manufacturing plants operating under a single umbrella, with the objective being to use the component or function measures to decide what might be considered as each plant's core business. Chapter 18 considers problem settings where there may be clusters or groups of DMUs that form a hierarchy; the specific case of a set of electric power plants is examined in this context. Chapter 19 models bad outputs in two-stage network DEA. Chapter 20 presents an application of network DEA to performance measurement of Major League Baseball (MLB) teams. Chapter 21 presents an application of a two-stage network DEA model for examining the performance of 30 U.S. airline companies. Chapter 22 then presents two distinct network efficiency models that are applied to engineering systems.
Data Envelopment Analysis with R (Studies in Fuzziness and Soft Computing #386)
by Farhad Hosseinzadeh Lotfi Ali Ebrahimnejad Mohsen Vaez-Ghasemi Zohreh Moghaddas

This book introduces readers to the use of R code for optimization problems. First, it provides the necessary background to understand data envelopment analysis (DEA), with a special emphasis on fuzzy DEA. It then describes DEA models, including fuzzy DEA models, and shows how to use them to solve optimization problems with R. Further, it discusses the main advantages of R in optimization problems, and provides R code based on real-world data sets throughout. Offering a comprehensive review of DEA and fuzzy DEA models and the corresponding R code, this practice-oriented reference guide is intended for master's and Ph.D. students in various disciplines, as well as practitioners and researchers.
Data Ethics: Practical Strategies for Implementing Ethical Information Management and Governance
by Katherine O'Keefe Daragh O Brien

Data-gathering technology is more sophisticated than ever, as are the ethical standards for using this data. This second edition shows how to navigate this complex environment.

Data Ethics provides a practical framework for the implementation of ethical principles into information management systems. It shows how to assess the types of ethical dilemmas organizations might face as they become more data-driven. This fully updated edition includes guidance on sustainability and environmental management and on how ethical frameworks can be standardized across cultures that have conflicting values. There is also discussion of data colonialism and the challenge of ethical trade-offs with ad-tech and analytics such as Covid-19 tracking systems, as well as case studies on Smart Cities and Deming's Principles.

As the pace of developments in data-processing technology continues to increase, it is vital to capitalize on the opportunities this affords while ensuring that ethical standards and ideals are not compromised. Written by internationally regarded experts in the field, Data Ethics is the essential guide for students and practitioners to optimizing ethical data standards in organizations.
Data Ethics and Challenges (SpringerBriefs in Applied Sciences and Technology)
by Samiksha Shukla Jossy P. George Kapil Tiwari Joseph Varghese Kureethara

This book gives a thorough and systematic introduction to data, data sources, dimensions of data, the privacy and security challenges associated with data, ethics, laws, IPR, copyright, and technology law. It will help students, scholars, and practitioners understand the challenges of dealing with data and its ethical and legal aspects. The book focuses on emerging issues in working with data.
Data Exfiltration Threats and Prevention Techniques: Machine Learning and Memory-Based Data Security
by Zahir Tari Nasrin Sohrabi Yasaman Samadi Jakapan Suaboot

A comprehensive resource covering threat prevention techniques for data exfiltration and applying machine learning applications to aid in identification and prevention.

Data Exfiltration Threats and Prevention Techniques gives readers the knowledge needed to prevent and protect against malware attacks by introducing existing and recently developed methods in malware protection using AI, memory forensics, and pattern matching. It presents various data exfiltration attack vectors and advanced memory-based data leakage detection, and discusses ways in which machine learning methods have a positive impact on malware detection.

Providing detailed descriptions of the recent advances in data exfiltration detection methods and technologies, the authors also discuss details of data breach countermeasures and attack scenarios to show how the reader may identify a potential cyber attack in the real world. Composed of eight chapters, this book presents a better understanding of the core issues related to cyber-attacks as well as the recent methods that have been developed in the field.

In Data Exfiltration Threats and Prevention Techniques, readers can expect to find detailed information on:
- Sensitive data classification, covering text pre-processing, supervised text classification, automated text clustering, and other sensitive text detection approaches
- Supervised machine learning technologies for intrusion detection systems, covering taxonomy and benchmarking of supervised machine learning techniques
- Behavior-based malware detection using API-call sequences, covering API-call extraction techniques and detecting data-stealing behavior based on API-call sequences
- Memory-based sensitive data monitoring for real-time data exfiltration detection, and advanced time-delay data exfiltration attack and detection

Aimed at professionals and students alike, Data Exfiltration Threats and Prevention Techniques highlights a range of machine learning methods that can be used to detect potential data theft, and identifies research gaps and the potential for change in the future as technology continues to grow.
Data Fabric and Data Mesh Approaches with AI: A Guide to AI-based Data Cataloging, Governance, Integration, Orchestration, and Consumption
by Eberhard Hechler Maryela Weihrauch Yan (Catherine) Wu

Understand modern data fabric and data mesh concepts using AI-based self-service data discovery and delivery capabilities, a range of intelligent data integration styles, and automated unified data governance, all designed to deliver "data as a product" within hybrid cloud landscapes.

This book teaches you how to successfully deploy state-of-the-art data mesh solutions and gives a comprehensive overview of how a data fabric architecture uses artificial intelligence (AI) and machine learning (ML) for automated metadata management and self-service data discovery and consumption. You will learn how data fabric and data mesh relate to other concepts such as DataOps, MLOps, AIDevOps, and more. Many examples are included to demonstrate how to modernize the consumption of data to enable a shopping-for-data (data as a product) experience.

By the end of this book, you will understand the data fabric concept and architecture as it relates to themes such as automated unified data governance and compliance, enterprise information architecture, AI and hybrid cloud landscapes, and intelligent cataloging and metadata management.

What You Will Learn
- Discover best practices and methods to successfully implement a data fabric architecture and data mesh solution
- Understand key data fabric capabilities, e.g., self-service data discovery, intelligent data integration techniques, intelligent cataloging and metadata management, and trustworthy AI
- Recognize the importance of data fabric to accelerate digital transformation and democratize data access
- Dive into important data fabric topics, addressing current data fabric challenges
- Conceive data fabric and data mesh concepts holistically within an enterprise context
- Become acquainted with the business benefits of data fabric and data mesh

Who This Book Is For
Anyone who is interested in deploying modern data fabric architectures and data mesh solutions within an enterprise, including IT and business leaders, data governance and data office professionals, data stewards and engineers, data scientists, and information and data architects. Readers should have a basic understanding of enterprise information architecture.
Data Flood: Helping the Navy Address the Rising Tide of Sensor Information
by Isaac R. Porche III Bradley Wilson Erin-Elizabeth Johnson Shane Tierney Evan Saltzman

Navy analysts are struggling to keep pace with the growing flood of data collected by intelligence, surveillance, and reconnaissance sensors. This challenge is sure to intensify as the Navy continues to field new and additional sensors. The authors explore options for solving the Navy's "big data" challenge, considering changes across four dimensions: people, tools and technology, data and data architectures, and demand and demand management.
Data Flow Analysis: Theory and Practice
by Uday Khedker Amitabha Sanyal Bageshri Sathe

Data flow analysis is used to discover information for a wide variety of useful applications, ranging from compiler optimizations to software engineering and verification. Modern compilers apply it to produce performance-maximizing code, and software engineers use it to re-engineer or reverse engineer programs and verify the integrity of their programs.

Unlike most comparable books, many of which are limited to bit vector frameworks and classical constant propagation, Data Flow Analysis: Theory and Practice offers comprehensive coverage of both classical and contemporary data flow analysis. It prepares foundations useful for both researchers and students in the field by standardizing and unifying various existing research, concepts, and notations. It also presents the mathematical foundations of data flow analysis and includes a study of data flow analysis implementation through use of the GNU Compiler Collection (GCC). Divided into three parts, this unique text combines discussions of inter- and intraprocedural analysis and then describes the implementation of a generic data flow analyzer (gdfa) for bit vector frameworks in GCC.

Supplementary Online Materials to Strengthen Understanding
Through the inclusion of case studies and examples to reinforce material, this text equips readers with a combination of mutually supportive theory and practice, and they will be able to access the authors' accompanying web page. Here they can experiment with the analyses described in the book and can make use of updated features, including:
- Slides used in the authors' courses
- The source of the generic data flow analyzer (gdfa)
- An errata that features errors as they are discovered
- Additional updated relevant material discovered in the course of research
Data Fluency
by Richard Galentino Zach Gemignani Patrick Schuermann Chris Gemignani

A dream come true for those looking to improve their data fluency.

Analytical data is a powerful tool for growing companies, but what good is it if it hides in the shadows? Bring your data to the forefront with effective visualization and communication approaches, and let Data Fluency: Empowering Your Organization with Effective Communication show you the best tools and strategies for getting the job done right. Learn the best practices of data presentation and the ways that reporting and dashboards can help organizations effectively gauge performance, identify areas for improvement, and communicate results.

Topics covered in the book include data reporting and communication, audience and user needs, data presentation tools, layout and styling, and common design failures. Those responsible for analytics, reporting, or BI implementation will find a refreshing take on data and visualization in this resource, as will report, data visualization, and dashboard designers.

- Conquer the challenge of making valuable data approachable and easy to understand
- Develop unique skills required to shape data to the needs of different audiences
- Full-color book links to bonus content at juiceanalytics.com
- Written by well-known and highly esteemed authors in the data presentation community

Data Fluency: Empowering Your Organization with Effective Communication focuses on user experience, making reports approachable, and presenting data in a compelling, inspiring way. The book helps to dissolve the disconnect between your data and those who might use it, and can help make an impact on the people who are most affected by data. Use Data Fluency today to develop the skills necessary to turn data into effective displays for decision-making.
Data for All
by John K. Thompson

Do you know what happens to your personal data when you are browsing, buying, or using apps? Discover how your data is harvested and exploited, and what you can do to access, delete, and monetize it.

Data for All empowers everyone, from tech experts to the general public, to control how third parties use personal data. Read this eye-opening book to learn:
- The types of data you generate with every action, every day
- Where your data is stored, who controls it, and how much money they make from it
- How you can manage access and monetization of your own data
- Restricting data access to only companies and organizations you want to support
- The history of how we think about data, and why that is changing
- The new data ecosystem being built right now for your benefit

The data you generate every day is the lifeblood of many large companies, and they make billions of dollars using it. In Data for All, bestselling author John K. Thompson outlines how this one-sided data economy is about to undergo a dramatic change. Thompson pulls back the curtain to reveal the true nature of data ownership, and how you can turn your data from a revenue stream for companies into a financial asset for your benefit. Foreword by Thomas H. Davenport.

About the Technology
Do you know what happens to your personal data when you're browsing and buying? New global laws are turning the tide on companies who make billions from your clicks, searches, and likes. This eye-opening book provides an inspiring vision of how you can take back control of the data you generate every day.

About the Book
Data for All gives you a step-by-step plan to transform your relationship with data and start earning a "data dividend" of hundreds or thousands of dollars, paid out simply for your online activities. You'll learn how to oversee who accesses your data, how much different types of data are worth, and how to keep private details private.

What's Inside
- The types of data you generate with every action, every day
- How you can manage access and monetization of your own data
- The history of how we think about data, and why that is changing
- The new data ecosystem being built right now for your benefit

About the Reader
For anyone who is curious or concerned about how their data is used. No technical knowledge required.

About the Author
John K. Thompson is an international technology executive with over 37 years of experience in the fields of data, advanced analytics, and artificial intelligence.

Table of Contents
1 A history of data
2 How data works today
3 You and your data
4 Trust
5 Privacy
6 Moving from Open Data to Our Data
7 Derived data, synthetic data, and analytics
8 Looking forward: What's next for our data?
Data for the People: How to Make Our Post-Privacy Economy Work for You
by Andreas Weigend

A long-time chief data scientist at Amazon shows how open data can make everyone, not just corporations, richer.

Every time we Google something, Facebook someone, Uber somewhere, or even just turn on a light, we create data that businesses collect and use to make decisions about us. In many ways this has improved our lives, yet we as individuals do not benefit from this wealth of data as much as we could. Moreover, whether it is a bank evaluating our creditworthiness, an insurance company determining our risk level, or a potential employer deciding whether we get a job, it is likely that this data will be used against us rather than for us.

In Data for the People, Andreas Weigend draws on his years as a consultant for commerce, education, healthcare, travel, and finance companies to outline how Big Data can work better for all of us. As of today, how much we benefit from Big Data depends on how closely the interests of big companies align with our own. Too often, outdated standards of control and privacy force us into unfair contracts with data companies, but it doesn't have to be this way. Weigend makes a powerful argument that we need to take control of how our data is used to actually make it work for us. Only then can we the people get back more from Big Data than we give it.

Big Data is here to stay. Now is the time to find out how we can be empowered by it.
Data for the Public Good: How Data Can Help Citizens and Government
by Alex Howard

As we move into an era of unprecedented volumes of data and computing power, the benefits aren't for business alone. Data can help citizens access government, hold it accountable, and build new services to help themselves. Simply making data available is not sufficient: the use of data for the public good is being driven by a distributed community of media, nonprofits, academics, and civic advocates.

This report from O'Reilly Radar highlights the principles of data in the public good and surveys areas where data is already being used to great effect, covering consumer finance, transit data, government transparency, data journalism, aid and development, crisis and emergency response, and healthcare.
Data Forecasting and Segmentation Using Microsoft Excel: Perform data grouping, linear predictions, and time series machine learning statistics without using code
by Fernando Roque

Perform time series forecasts, linear predictions, and data segmentation with no-code Excel machine learning.

Key Features
- Segment data and make regression predictions and time series forecasts without writing any code
- Group multiple variables with K-means using an Excel plugin, without programming
- Build, validate, and predict with a multiple linear regression model and time series forecasts

Book Description
Data Forecasting and Segmentation Using Microsoft Excel guides you through basic statistics to test whether your data can be used to perform regression predictions and time series forecasts. The exercises covered in this book use real-life data from Kaggle, such as demand for seasonal air tickets and credit card fraud detection.

You'll learn how to apply the grouping K-means algorithm, which helps you find segments of your data that are impossible to see with other analyses, such as business intelligence (BI) and pivot analysis. By analyzing groups returned by K-means, you'll be able to detect outliers that could indicate possible fraud or a bad function in network packets.

By the end of this Microsoft Excel book, you'll be able to use the classification algorithm to group data with different variables. You'll also be able to train linear and time series models to perform predictions and forecasts based on past data.

What you will learn
- Understand why machine learning is important for classifying data segmentation
- Focus on basic statistics tests for regression variable dependency
- Test time series autocorrelation to build a useful forecast
- Use Excel add-ins to run K-means without programming
- Analyze segment outliers for possible data anomalies and fraud
- Build, train, and validate multiple regression models and time series forecasts

Who this book is for
This book is for data and business analysts as well as data science professionals. MIS, finance, and auditing professionals working with MS Excel will also find this book beneficial.
Data Governance: From the Fundamentals to Real Cases
by Ismael Caballero Mario Piattini

This book presents a set of models, methods, and techniques that allow the successful implementation of data governance (DG) in an organization, and reports real experiences of data governance in different public and private sectors. To this end, the book is composed of two parts.

Part I on "Data Governance Fundamentals" begins with an introduction to the concept of data governance that stresses that DG is not primarily focused on databases, clouds, or other technologies, but that the DG framework must be understood by business users, systems personnel, and the systems themselves alike. Next, Chapter 2 addresses crucial topics for DG, such as the evolution of data management in organizations, data strategy and policies, and defensive and offensive approaches to data strategy. Chapter 3 then details the central role that human resources play in DG, analysing the key responsibilities of the different DG-related roles and boards, while Chapter 4 discusses the most common barriers to DG in practice. Chapter 5 summarizes the paradigm shifts in DG from control to value creation. Subsequently, Chapter 6 explores the needs, characteristics, and key functionalities of DG tools, before this part ends with a chapter on maturity models for data governance.

Part II on "Data Governance Applied" consists of five chapters which review the situation of DG in different sectors and industries. Details about DG in the banking sector, public administration, insurance companies, healthcare, and telecommunications are each presented in one chapter.

The book is aimed at academics, researchers, and practitioners (especially CIOs, Data Governors, or Data Stewards) involved in DG. It can also serve as a reference for courses on data governance in information systems.
Data Governance: A Guide
by Dimitrios Sargiotis

This book is a comprehensive resource designed to demystify the complex world of data governance for professionals across various sectors. It provides in-depth insights, methodologies, and best practices to help organizations manage their data effectively and securely. It covers essential topics such as data quality, privacy, security, and management, ensuring that readers gain a holistic understanding of how to establish and maintain a robust data governance framework. Through a blend of theoretical knowledge and practical applications, the book addresses the challenges and benefits of data governance, equipping readers with the tools needed to navigate the evolving data landscape.

In addition to foundational principles, the book explores real-world case studies that illustrate the tangible benefits and common pitfalls of implementing data governance. Emerging trends and technologies, including artificial intelligence, machine learning, and blockchain, are also examined to prepare readers for future developments in the field. Whether you are a seasoned data management professional or new to the discipline, this book serves as an invaluable resource for mastering the intricacies of data governance and leveraging data as a strategic asset for organizational success.

This guide targets data management professionals, IT managers, compliance officers, data stewards, data owners, data governance managers, and more. Business leaders, executives, academic researchers, and students focused on computer science in data-related fields will also find this book a useful resource.