Databases Theory and Applications: 34th Australasian Database Conference, ADC 2023, Melbourne, VIC, Australia, November 1-3, 2023, Proceedings (Lecture Notes in Computer Science #14386)
by Zhifeng Bao, Renata Borovica-Gajic, Ruihong Qiu, Farhana Choudhury and Zhengyi Yang. This book constitutes the refereed proceedings of the 34th Australasian Database Conference on Databases Theory and Applications, ADC 2023, held in Melbourne, VIC, Australia, during November 1-3, 2023. The 26 full papers presented in this volume were carefully reviewed and selected from 41 submissions. They are organized in topical sections named: Mining Complex Types of Data, Natural Language Processing and Text Analysis, Machine Learning and Computer Vision, Database Systems and Data Storage, Data Quality and Fairness for Graphs, and Graph Mining and Graph Algorithms.
Databases Theory and Applications: 30th Australasian Database Conference, ADC 2019, Sydney, NSW, Australia, January 29 – February 1, 2019, Proceedings (Lecture Notes in Computer Science #11393)
by Lijun Chang, Junhao Gan and Xin Cao. This book constitutes the refereed proceedings of the 30th Australasian Database Conference, ADC 2019, held in Sydney, NSW, Australia, in January/February 2019. The 9 full papers presented together with one demo paper were carefully reviewed and selected from 19 submissions. The Australasian Database Conference is an annual international forum for sharing the latest research progress and novel applications of database systems, data management, data mining and data analytics among researchers and practitioners in these areas from Australia, New Zealand and around the world.
Databases Theory and Applications: 27th Australasian Database Conference, ADC 2016, Sydney, NSW, September 28-29, 2016, Proceedings (Lecture Notes in Computer Science #9877)
by Muhammad Aamir Cheema, Wenjie Zhang and Lijun Chang. This book constitutes the refereed proceedings of the 27th Australasian Database Conference, ADC 2016, held in Sydney, NSW, Australia, in September 2016. The 24 full papers presented together with 5 demo papers were carefully reviewed and selected from 43 submissions. The Australasian Database Conference is an annual international forum for sharing the latest research advancements and novel applications of database systems, data-driven applications and data analytics between researchers and practitioners from around the globe, particularly Australia and New Zealand. The mission of ADC is to share novel research solutions to problems of today's information society that fulfill the needs of heterogeneous applications and environments and to identify new issues and directions for future research. ADC seeks papers from academia and industry presenting research on all practical and theoretical aspects of advanced database theory and applications, as well as case studies and implementation experiences.
Databases Theory and Applications: 35th Australasian Database Conference, ADC 2024, Gold Coast, QLD, Australia, December 16–18, 2024, Proceedings (Lecture Notes in Computer Science #15449)
by Tong Chen, Yang Cao, Quoc Viet Hung Nguyen and Thanh Tam Nguyen. This LNCS volume constitutes the refereed proceedings of the 35th Australasian Database Conference, ADC 2024, held in Gold Coast, QLD, Australia, during December 16-18, 2024. The 38 full papers included in these proceedings were carefully reviewed and selected from 90 submissions. They focus on the latest advancements and innovative applications in database systems, data-driven applications, and data analytics.
Databases Theory and Applications: 33rd Australasian Database Conference, ADC 2022, Sydney, NSW, Australia, September 2–4, 2022, Proceedings (Lecture Notes in Computer Science #13459)
by Wen Hua, Hua Wang and Lei Li. This book constitutes the refereed proceedings of the 33rd Australasian Database Conference, ADC 2022, held in Sydney, Australia, in September 2022. The conference was co-located with the 48th International Conference on Very Large Data Bases, VLDB 2022. The 9 full papers presented together with 8 short papers were carefully reviewed and selected from 36 submissions. ADC focuses on database systems, data-driven applications, and data analytics.
Databases Theory and Applications: 31st Australasian Database Conference, ADC 2020, Melbourne, VIC, Australia, February 3–7, 2020, Proceedings (Lecture Notes in Computer Science #12008)
by Jianzhong Qi, Renata Borovica-Gajic and Weiqing Wang. This book constitutes the refereed proceedings of the 31st Australasian Database Conference, ADC 2020, held in Melbourne, VIC, Australia, in February 2020. The 14 full and 5 short papers presented were carefully reviewed and selected from 30 submissions. The Australasian Database Conference is an annual international forum for sharing the latest research advancements and novel applications of database systems, data-driven applications and data analytics between researchers and practitioners from around the globe, particularly Australia and New Zealand.
Databases Theory and Applications: 26th Australasian Database Conference, ADC 2015, Melbourne, VIC, Australia, June 4-7, 2015, Proceedings (Theoretical Computer Science and General Issues #9093)
by Jianzhong Qi, Jinjun Chen, Gao Cong and Junhu Wang. This book constitutes the refereed proceedings of the 26th Australasian Database Conference, ADC 2015, held in Melbourne, VIC, Australia, in June 2015. The 23 full papers plus 6 short papers presented together with 3 demo papers were carefully reviewed and selected from 53 submissions. The Australasian Database Conference is an annual international forum for sharing the latest research advancements and novel applications of database systems, data-driven applications, and data analytics between researchers and practitioners from around the globe, particularly Australia and New Zealand.
Databases Theory and Applications: 32nd Australasian Database Conference, ADC 2021, Dunedin, New Zealand, January 29 – February 5, 2021, Proceedings (Lecture Notes in Computer Science #12610)
by Miao Qiao, Gottfried Vossen, Sen Wang and Lei Li. This book constitutes the refereed proceedings of the 32nd Australasian Database Conference, ADC 2021, held in Dunedin, New Zealand, in January/February 2021. The 17 full papers presented were carefully reviewed and selected from 21 submissions. The Australasian Database Conference is an annual international forum for sharing the latest research advancements and novel applications of database systems, data-driven applications, and data analytics between researchers and practitioners from around the globe, particularly Australia and New Zealand. ADC shares novel research solutions to problems of today's information society that fulfill the needs of heterogeneous applications and environments and identifies new issues and directions for future research and development work.
Databases Theory and Applications: 28th Australasian Database Conference, ADC 2017, Brisbane, QLD, Australia, September 25–28, 2017, Proceedings (Lecture Notes in Computer Science #10538)
by Zi Huang, Xiaokui Xiao and Xin Cao. This book constitutes the refereed proceedings of the 28th Australasian Database Conference, ADC 2017, held in Brisbane, QLD, Australia, in September 2017. The 20 full papers presented together with 2 demo papers were carefully reviewed and selected from 32 submissions. The mission of ADC is to share novel research solutions to problems of today’s information society that fulfill the needs of heterogeneous applications and environments and to identify new issues and directions for future research and development work. The topics of the presented papers are related to all practical and theoretical aspects of advanced database theory and applications, as well as case studies and implementation experiences.
Databases with Access
by Moira Stephen. This handy textbook covers all you need to know to begin to use databases such as Microsoft Access. Learning Made Simple books give readers skills without frills. They are matched to the main qualifications, and written by experienced teachers and authors to make often tricky subjects simple to learn. Every book is designed carefully to provide bite-sized lessons matched to learners’ needs. Building on the multi-million-copy success of the previous Made Simple series, Learning Made Simple titles provide both a new colourful way to study and a useful adjunct to any training course. Using full colour throughout, and written by leading teachers and writers, Learning Made Simple books will help readers learn new skills and develop their talents. Whether studying at college, training at work, or reading at home, aiming for a qualification or simply getting up to speed, Learning Made Simple books give readers the advantage of easy, well-organised training materials in a handy volume, with two- or four-page sections for each topic for ease of use.
Databricks Data Intelligence Platform: Unlocking the GenAI Revolution
by Nikhil Gupta and Jason Yip. This book is your comprehensive guide to building robust Generative AI solutions using the Databricks Data Intelligence Platform. Databricks is the fastest-growing data platform offering unified analytics and AI capabilities within a single governance framework, enabling organizations to streamline their data processing workflows, from ingestion to visualization. Additionally, Databricks provides features to train a high-quality large language model (LLM), whether you are looking for Retrieval-Augmented Generation (RAG) or fine-tuning. Databricks offers a scalable and efficient solution for processing large volumes of both structured and unstructured data, facilitating advanced analytics, machine learning, and real-time processing. In today's GenAI world, Databricks plays a crucial role in empowering organizations to extract value from their data effectively, driving innovation and gaining a competitive edge in the digital age. This book will not only help you master the Data Intelligence Platform but also help power your enterprise to the next level with a bespoke LLM unique to your organization. Beginning with foundational principles, the book starts with a platform overview and explores features and best practices for ingestion, transformation, and storage with Delta Lake. Advanced topics include leveraging Databricks SQL for querying and visualizing large datasets, ensuring data governance and security with Unity Catalog, and deploying machine learning and LLMs using Databricks MLflow for GenAI. Through practical examples, insights, and best practices, this book equips solution architects and data engineers with the knowledge to design and implement scalable data solutions, making it an indispensable resource for modern enterprises. Whether you are new to Databricks and trying to learn a new platform, a seasoned practitioner building data pipelines, data science models, or GenAI applications, or even an executive who wants to communicate the value of Databricks to customers, this book is for you. With its extensive feature and best practice deep dives, it also serves as an excellent reference guide if you are preparing for Databricks certification exams. What you will learn: foundational principles of Lakehouse architecture; key features including Unity Catalog, Databricks SQL (DBSQL), and Delta Live Tables; the Databricks Intelligence Platform and its key functionalities; building and deploying GenAI applications from data ingestion to model serving; Databricks pricing, platform security, DBRX, and many more topics. Who this book is for: solution architects, data engineers, data scientists, Databricks practitioners, and anyone who wants to deploy their GenAI solutions with the Data Intelligence Platform. This is also a handbook for senior executives who need to communicate the value of Databricks to customers. People who are new to the Databricks platform and want comprehensive insights will find the book accessible.
The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines (Second Edition) (Synthesis Lectures on Computer Architecture #Lecture #24)
by Luiz Andre Barroso, Jimmy Clidaras and Urs Holzle. As computation continues to move into the cloud, the computing platform of interest no longer resembles a pizza box or a refrigerator, but a warehouse full of computers. These new large datacenters are quite different from traditional hosting facilities of earlier times and cannot be viewed simply as a collection of co-located servers. Large portions of the hardware and software resources in these facilities must work in concert to efficiently deliver good levels of Internet service performance, something that can only be achieved by a holistic approach to their design and deployment. The first major update to this lecture is presented after nearly four years of substantial academic and industrial developments in warehouse-scale computing.
Datacenter Connectivity Technologies: Principles and Practice
by Frank Chang. In recent years, investments by cloud companies in mega data centers and associated network infrastructure have created a very active and dynamic segment in the optical components and modules market. Optical interconnect technologies at high speed play a critical role in the growth of mega data centers, which flood the networks with unprecedented amounts of data traffic. Datacenter Connectivity Technologies: Principles and Practice provides a comprehensive and in-depth look at the development of various optical connectivity technologies which are making an impact on the building of data centers. The technologies span from short-range connectivity, as low as 100 meters with multi-mode fiber (MMF) links inside data centers, to long distances of hundreds of kilometers with single-mode fiber (SMF) links between data centers. This book is the first of its kind to address the various advanced technologies connecting data centers. It represents a collection of achievements and the latest developments from well-known industry experts and academic researchers active in this field.
Dataclismo: Amor, sexo, raza e identidad; lo que nuestra vida online cuenta de nosotros
by Christian Rudder. Have you ever thought about the consequences of posting your private life on social networks? Dataclismo is an audacious, irreverent investigation of human behavior, and a new take on a revolution in the making. For decades, the only tools for studying human behavior were surveys or small-scale experiments. Today a new approach is possible. Because of the Internet we reveal ever more aspects of our lives, and researchers can observe our behavior directly, at massive scale and without filters. Data analysts are today's new sociologists. In Dataclismo, Christian Rudder explains in an original way how Facebook likes can predict, with surgical precision, a person's sexual orientation and even their intelligence; how attractive women receive more interview requests; and why you have to have haters to be hot. He shows how people express themselves in public and in private, and much more. Dataclismo is a new way of seeing ourselves, a book of alchemy in which mathematics becomes human and numbers become the troubadours of our time. A New York Times bestseller. Critics have said: "Rudder shows us that the information we provide as individuals says a great deal about who we are as a collective. A visually captivating read and a fascinating subject make this book an ideal choice." (Library Journal) "Demographers, entrepreneurs, students of history and sociology, and ordinary citizens alike will find plenty of provocations and, yes, plenty of data in the pages of this book." (Kirkus) "Studying human behavior is like exploring a jungle: it is laborious and messy, and it is easy to get lost. But Christian Rudder is an expert guide who reveals fundamental truths about who we are." (Dan Ariely, author of Predictably Irrational) "Dataclismo is a book full of juicy secrets: secrets about whom we love, what we long for, why we like what we like, and how we change one another's minds and lives, often without even knowing it." (Jane McGonigal, author of Reality is Broken) "A fascinating, almost voyeuristic look at who we really are." (Steven Strogatz, professor at Cornell University and author of The Joy of X) "A funny and profound book about important subjects. Race, love, sex... Are we the sum of the data we generate? Read this book right away and see whether you can answer that question." (Errol Morris)
Datacloud: Toward a New Theory of Online Work
by Johndan Johnson-Eilola. This book offers new dimensions in Computers and Composition. It expands on: The Changing Shapes of Computer Spaces; Tendential Forces; Changing Articulations of Interface Design; and Interface Overflow.
Dataclysm: Love, Sex, Race, and Identity--What Our Online Lives Tell Us about Our Offline Selves
by Christian Rudder. An audacious, irreverent investigation of human behavior--and a first look at a revolution in the making. Our personal data has been used to spy on us, hire and fire us, and sell us stuff we don't need. In Dataclysm, Christian Rudder uses it to show us who we truly are. For centuries, we've relied on polling or small-scale lab experiments to study human behavior. Today, a new approach is possible. As we live more of our lives online, researchers can finally observe us directly, in vast numbers, and without filters. Data scientists have become the new demographers. In this daring and original book, Rudder explains how Facebook "likes" can predict, with surprising accuracy, a person's sexual orientation and even intelligence; how attractive women receive exponentially more interview requests; and why you must have haters to be hot. He charts the rise and fall of America's most reviled word through Google Search and examines the new dynamics of collaborative rage on Twitter. He shows how people express themselves, both privately and publicly. What is the least Asian thing you can say? Do people bathe more in Vermont or New Jersey? What do black women think about Simon & Garfunkel? (Hint: they don't think about Simon & Garfunkel.) Rudder also traces human migration over time, showing how groups of people move from certain small towns to the same big cities across the globe. And he grapples with the challenge of maintaining privacy in a world where these explorations are possible. Visually arresting and full of wit and insight, Dataclysm is a new way of seeing ourselves--a brilliant alchemy, in which math is made human and numbers become the narrative of our time.
Dataclysm
by Christian Rudder. Provocative, illuminating, and visually arresting, Dataclysm is a portrait of how big data reveals our essential selves--and a first look at a revolution in the making. What is the secret to a stable marriage? How many gay people are still in the closet? Do we truly live in a postracial society? Has Twitter made us dumber? These are just a few of the questions Christian Rudder answers in Dataclysm, a smart, funny, irreverent look at how we act when we think no one's looking. For centuries we've relied on polling or small-scale lab experiments to study human behavior. Today a new approach is possible. As we live more of our lives online, researchers can finally observe us directly, in vast numbers and without filters. Data scientists can quantify the formerly unquantifiable and show with unprecedented precision how we fight, how we age, how we love, and how we change. Our personal data has been used to spy on us, hire and fire us, and sell us stuff we don't need. In Dataclysm, Rudder uses it to show us who we are as people. He reveals how Facebook "likes" can predict, with surprising accuracy, a person's sexual orientation and even intelligence; how attractive women receive exponentially more job interview requests; and why you have to have haters to be hot. He charts the rise and fall of America's most reviled word through Google Search and examines the new dynamics of collaborative rage on Twitter. He shows how people express themselves, both privately and publicly. What is the least Asian thing you can say? Do people bathe more in Vermont or New Jersey? What do black women think about Simon & Garfunkel? Hint: They don't think about Simon & Garfunkel. Rudder also tracks human migration in real time, showing how groups of people move from certain small towns to the same big cities across the globe. And he grapples with the challenge of maintaining privacy in a world where these explorations are possible.
Datadog Cloud Monitoring Quick Start Guide: Proactively create dashboards, write scripts, manage alerts, and monitor containers using Datadog
by Thomas Kurian Theakanath. A comprehensive guide to rolling out Datadog to monitor infrastructure and applications running in both cloud and datacenter environments. Key features: learn Datadog to proactively monitor your infrastructure and cloud services; use Datadog as a platform for aggregating monitoring efforts in your organization; leverage Datadog's alerting service to implement on-call and site reliability engineering (SRE) processes. Book description: Datadog is an essential cloud monitoring and operational analytics tool which enables the monitoring of servers, virtual machines, containers, databases, third-party tools, and application services. IT and DevOps teams can easily leverage Datadog to monitor infrastructure and cloud services, and this book will show you how. The book starts by describing basic monitoring concepts and the types of monitoring that are rolled out in a large-scale IT production engineering environment. Moving on, the book covers how standard monitoring features are implemented on the Datadog platform and how they can be rolled out in a real-world production environment. As you advance, you'll discover how Datadog is integrated with popular software components that are used to build cloud platforms. The book also provides details on how to use monitoring standards such as Java Management Extensions (JMX) and StatsD to extend the Datadog platform. Finally, you'll get to grips with monitoring fundamentals, learn how monitoring can be rolled out proactively using Datadog, and find out how to extend and customize the Datadog platform. By the end of this Datadog book, you will have gained the skills needed to monitor your cloud infrastructure and the software applications running on it using Datadog. What you will learn: understand monitoring fundamentals, including metrics, monitors, alerts, and thresholds; implement core monitoring requirements using Datadog features; explore Datadog's integration with cloud platforms and tools; extend Datadog using custom scripting and standards such as JMX and StatsD; discover how proactive monitoring can be rolled out using various Datadog features; understand how Datadog can be used to monitor microservices in both Docker and Kubernetes environments; get to grips with advanced Datadog features such as APM and Security Monitoring. Who this book is for: DevOps engineers, site reliability engineers (SREs), IT production engineers, software developers and architects, cloud engineers, system administrators, and anyone looking to monitor and visualize their infrastructure and applications with Datadog. Basic working knowledge of cloud and infrastructure is useful. Working experience of a Linux distribution and some scripting knowledge are required to take full advantage of the material provided in the book.
DataFlow Supercomputing Essentials: Research, Development and Education (Computer Communications and Networks)
by Dejan Markovic, Jakob Salom, Veljko Milutinovic, Dragan Veljovic, Nenad Korolija and Luka Petrovic. This informative text/reference highlights the potential of DataFlow computing in research requiring high speeds, low power requirements, and high precision, while also benefiting from a reduction in the size of the equipment. The cutting-edge research and implementation case studies provided in this book will help the reader to develop their practical understanding of the advantages and unique features of this methodology. This work serves as a companion title to DataFlow Supercomputing Essentials: Algorithms, Applications and Implementations, which reviews the key algorithms in this area, and provides useful examples. Topics and features: reviews the library of tools, applications, and source code available to support DataFlow programming; discusses the enhancements to DataFlow computing yielded by small hardware changes, different compilation techniques, debugging, and optimizing tools; examines when a DataFlow architecture is best applied, and for which types of calculation; describes how converting applications to a DataFlow representation can result in an acceleration in performance, while reducing the power consumption; explains how to implement a DataFlow application on Maxeler hardware architecture, with links to a video tutorial series available online. This enlightening volume will be of great interest to all researchers investigating supercomputing in general, and DataFlow computing in particular. Advanced undergraduate and graduate students involved in courses on Data Mining, Microprocessor Systems, and VLSI Systems, will also find the book to be a helpful reference.
DataFlow Supercomputing Essentials: Algorithms, Applications and Implementations (Computer Communications and Networks)
by Nemanja Trifunovic, Veljko Milutinovic, Milos Kotlar, Marko Stojanovic, Igor Dundic and Zoran Babovic. This informative text/reference highlights the potential of DataFlow computing in research requiring high speeds, low power requirements, and high precision, while also benefiting from a reduction in the size of the equipment. The cutting-edge research and implementation case studies provided in this book will help the reader to develop their practical understanding of the advantages and unique features of this methodology. This work serves as a companion title to DataFlow Supercomputing Essentials: Research, Development and Education. Topics and features: reviews the library of tools, applications, and source code available to support DataFlow programming; discusses the enhancements to DataFlow computing yielded by small hardware changes, different compilation techniques, debugging, and optimizing tools; examines when a DataFlow architecture is best applied, and for which types of calculation; describes how converting applications to a DataFlow representation can result in an acceleration in performance, while reducing the power consumption; explains how to implement a DataFlow application on Maxeler hardware architecture, with links to a video tutorial series available online. This enlightening volume will be of great interest to all researchers investigating supercomputing in general, and DataFlow computing in particular. Advanced undergraduate and graduate students involved in courses on Data Mining, Microprocessor Systems, and VLSI Systems, will also find the book to be a helpful reference.
The DataOps Revolution: Delivering the Data-Driven Enterprise
by Simon Trewin. DataOps is a new way of delivering data and analytics that is proven to get results. It enables IT and users to collaborate in the delivery of solutions that help organisations to embrace a data-driven culture. The DataOps Revolution: Delivering the Data-Driven Enterprise is a narrative about real-world issues involved in using DataOps to make data-driven decisions in modern organisations. The book is built around real delivery examples based on the author’s own experience and lays out principles and a methodology for business success using DataOps. Presenting practical design patterns and DataOps approaches, the book shows how DataOps projects are run and presents the benefits of using DataOps to implement data solutions. Best practices are introduced in this book through the telling of a story, which relates how a lead manager must find a way through complexity to turn an organisation around. This narrative vividly illustrates DataOps in action, enabling readers to incorporate best practices into everyday projects. The book tells the story of an embattled CIO who turns to a new and untested project manager charged with a wide remit to roll out DataOps techniques to an entire organisation. It illustrates a different approach to addressing the challenges in bridging the gap between IT and the business. The approach presented in this story aligns with the six IMPACT pillars of the DataOps model that Kinaesis (www.kinaesis.com) has been using through its consultants to deliver successful projects and turn around failing deliveries. The pillars help to organise thinking and structure an approach to project delivery. They are broken down and translated into steps that can be applied to real-world projects, delivering satisfaction and fulfilment to customers and project team members.
The Datapreneurs: The Promise of AI and the Creators Building Our Future
by Bob Muglia and Steve Hamm. A leader in the data economy explains how we arrived at AI, and how we can navigate its future. In The Datapreneurs, Bob Muglia helps us understand how innovation in data and information technology has led us to AI, and how this technology must shape our future. The long-time Microsoft executive, former CEO of Snowflake, and current tech investor maps the evolution of the modern data stack and how it has helped build today's economy and society. And he explains how humanity must create a new social contract for the artificial general intelligence (AGI), autonomous machines as intelligent as people, that he expects to arrive in less than a decade. Muglia details his personal experience in the foundational years of computing and data analytics, including with Bill Gates and Sam Altman, the CEO of OpenAI, the creator of ChatGPT, and others who are not household names yet. He builds upon Isaac Asimov's Laws of Robotics to explore the moral, ethical, and legal implications of today's smart machines, and how a combination of human and machine intelligence could create an era of progress and prosperity in which all the people on Earth can have what they need and want without destroying our natural environment. The Datapreneurs is a call to action. AGI is surely coming. Muglia believes that tech business leaders, ethicists, policy leaders, and even the general public must collaborate to answer the short- and long-term questions raised by its emergence. And he argues that we had better get going, because advances are coming so fast that society risks getting caught flat-footed, with potentially disastrous consequences.
Daten-Teams: Ein einheitliches Managementmodell für erfolgreiche, datenorientierte Teams
by Jesse Anderson. Learn how to run successful big data projects, how to equip your teams with resources, and how the teams should work with one another to be cost-efficient. This book introduces the three teams that are required for successful projects and explains what each of these teams does. Most companies fail with big data projects, and the failure is almost always blamed on the technologies used. To succeed, companies must focus on both the technology and the management. Using data is a team sport. It takes different people with different skills, all of whom must work together to get anything done. In all but the smallest projects, people should be organized into multiple teams to avoid project failures and underperformance. This book focuses on management. A few years ago, little to nothing was written or said about managing big data projects or teams. Data Teams shows why management mistakes are the cause of so many project failures and how you can proactively prevent such failures in your own project. What you will learn: discover the three teams you need to be successful with big data; understand what a data scientist is and what a data science team does; understand what a data engineer is and what a data engineering team does; understand what an operations engineer is and what an operations team does; know how the teams and titles differ and why you need all three teams; recognize the role the business plays in working with data teams and how the rest of the organization contributes to successful data projects. Who this book is for: managers at all levels, including those with some technical skills, who want to take on a big data project or have already started one. It is especially helpful for those whose projects are not progressing and who do not know why, or who have attended a conference or read about big data and are now starting to examine what it takes to deliver a project. This book is also relevant for senior staff or technical architects who work on a team that has been tasked by the business with finding out what it takes to start a project, who are in a project that is not progressing, or who need to determine whether non-technical issues are affecting their project.
Daten- und Informationsqualität: Auf dem Weg zur Information Excellence
by Knut Hildebrand, Marcus Gebauer, Holger Hinrichs and Michael Mielke. The first German book on data and information quality, now in its third, expanded edition. Scientifically grounded and written by practitioners, it presents the current state of research and practical application across the key facets of this important topic. A must for all IT professionals.
Daten- und Informationsqualität: Auf dem Weg zur Information Excellence
by Knut Hildebrand, Marcus Gebauer, Holger Hinrichs and Michael Mielke. Improving and assuring information quality (IQ) is recognized in more and more companies as a distinct and important management task. IQ management has become an elementary building block of system integration projects. For ongoing processes with heterogeneous data and users, too, high information quality is the basic prerequisite for functioning business operations. The first German-language book on the subject covers data and information quality comprehensively: from definitions of data quality, through methods and frameworks for its management, to its anchoring in the organization, with case studies from numerous companies. In the introductory chapter the authors first explain the fundamentals. They present scientific models from information theory and explain the role of data in knowledge and information management and as a production factor. A further foundational chapter is devoted to the various dimensions of information quality. Using 15 terms and illustrative examples, IQ dimensions such as accessibility, appropriate amount of data, and believability are described precisely. This chapter is also the result of the work of a project group within the DGIQ (Deutsche Gesellschaft für Informations- und Datenqualität). The second part of the book explains the methods, tools, and techniques for managing data quality. These include data quality metrics, methods such as Total Data Quality Management, structured data analysis, and measures such as data cleansing. The volume has been expanded for the fourth edition and revised in numerous places. Scientifically grounded and written by practitioners, it presents the current state of research and application. The book is aimed at corporate management, IT managers, for example in banks and insurance companies, and all data specialists. A must for all IT professionals.