Browse Results

Showing 19,626 through 19,650 of 54,275 results

Beginning GraphQL: Fetch data faster and more efficiently whilst improving the overall performance of your web application

by Brian Kimokoti

Over-fetching and under-fetching data can negatively impact the performance of your web application. Future-proof your API structure and handle key development requirements by correctly defining types and schemas in GraphQL.

Key Features
- Includes server-side implementations using GraphQL.js, Apollo, Graphcool, and Prisma
- Understand an example client-side implementation of GraphQL in ReactJS using Apollo
- Implement over 20 practical activities and exercises across 5 topics that enable you to efficiently use GraphQL in production

Book Description
Though fairly new, GraphQL is quickly rising in popularity when it comes to API development. This book will teach you everything you need to know to start building efficient APIs with GraphQL. You'll begin by learning to create a simple scaffold application using Node.js and Express. Then, you'll explore core GraphQL concepts and study how GraphQL integrates with other frameworks in real-life business applications. By the end of the book, you will be able to successfully create efficient client-server REST-like applications.

What you will learn
- Understand core GraphQL concepts that can be used across different languages
- Understand the overall structure of GraphQL applications
- Use Apollo GraphQL for both server and client JavaScript applications
- Understand the key differences between GraphQL and REST

Who this book is for
This book is ideal for developers who want to broaden their understanding of API development. Prior experience with JavaScript is required; any prior work with React or Node.js is beneficial.
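
The over-fetching problem this blurb opens with can be sketched in a few lines (a toy illustration, not code from the book; the record and field names are invented): a fixed REST-style endpoint always returns the whole resource, while a GraphQL-style resolver returns only the fields the client selects.

```python
# Toy comparison of REST-style vs GraphQL-style data fetching.
# The user record and its field names are invented for illustration.

USER = {
    "id": 1,
    "name": "Ada",
    "email": "ada@example.com",
    "bio": "a long biography ...",
    "avatar": "a large image blob ...",
}

def rest_get_user():
    """REST endpoint: always returns the full resource."""
    return dict(USER)

def graphql_get_user(requested_fields):
    """GraphQL-style resolver: returns only the requested fields."""
    return {f: USER[f] for f in requested_fields if f in USER}

# A client that only needs a display name over-fetches via REST ...
full = rest_get_user()             # includes bio and avatar, unused
# ... but gets exactly what it asked for via the GraphQL-style call.
slim = graphql_get_user(["name"])  # {'name': 'Ada'}
```

A real GraphQL server derives the requested fields from the query document's selection set; the dictionary comprehension here merely stands in for that mechanism.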

Agile Business Rule Development: Process, Architecture, and JRules Examples

by Hafedh Mili, Jérôme Boyer

Business rules are everywhere. Every enterprise process, task, activity, or function is governed by rules. However, some of these rules are implicit and thus poorly enforced, others are written but not enforced, and still others are perhaps poorly written and obscurely enforced. The business rule approach looks for ways to elicit, communicate, and manage business rules in a way that all stakeholders can understand, and to enforce them within the IT infrastructure in a way that supports their traceability and facilitates their maintenance. Boyer and Mili will help you to adopt the business rules approach effectively. While most business rule development methodologies put a heavy emphasis on up-front business modeling and analysis, agile business rule development (ABRD) as introduced in this book is incremental, iterative, and test-driven. Rather than spending weeks discovering and analyzing rules for a complete business function, ABRD puts the emphasis on producing executable, tested rule sets early in the project without jeopardizing the quality, longevity, and maintainability of the end result. The authors' presentation covers all four aspects required for a successful application of the business rules approach: (1) foundations, to understand what business rules are (and are not) and what they can do for you; (2) methodology, to understand how to apply the business rules approach; (3) architecture, to understand how rule automation impacts your application; (4) implementation, to actually deliver the technical solution within the context of a particular business rule management system (BRMS). Throughout the book, the authors use an insurance case study that deals with claim processing. Boyer and Mili cater to different audiences: Project managers will find a pragmatic, proven methodology for delivering and maintaining business rule applications. Business analysts and rule authors will benefit from guidelines and best practices for rule discovery and analysis. 
Application architects and software developers will appreciate an exploration of the design space for business rule applications, proven architectural and design patterns, and coding guidelines for using JRules.

Community-Built Databases: Research and Development

by Eric Pardede

Wikipedia, Flickr, YouTube, Facebook, and LinkedIn are all examples of large community-built databases, albeit with quite diverse purposes and collaboration patterns. Their usage and dissemination will grow further, introducing e.g. new semantics, personalization, or interactive media. Pardede delivers the first comprehensive research reference on community-built databases. The contributions discuss various technical and social aspects of research and development in areas such as Web science, social networks, and collaborative information systems.

Conditionals and Modularity in General Logics (Cognitive Technologies)

by Dov M. Gabbay, Karl Schlechta

This text centers on three main subjects. The first is the concept of modularity and independence in classical logic, nonmonotonic logic, and other nonclassical logics, and the consequences for syntactic and semantic interpolation and language change. In particular, we will show the connection between interpolation for nonmonotonic logic and the manipulation of an abstract notion of size. Modularity is essentially the ability to put partial results achieved independently together for a global result. The second aspect of the book is the authors' uniform picture of conditionals, including many-valued logics and structures on the language elements themselves and on the truth value set. The third topic explained by the authors is neighbourhood semantics, their connection to independence, and their common points and differences for various logics, e.g., for defaults and deontic logic, for the limit version of preferential logics, and for general approximation. The book will be of value to researchers and graduate students in logic and theoretical computer science.

Current Challenges in Patent Information Retrieval (The Information Retrieval Series #29)

by Katja Mayer, Mihai Lupu, Anthony J. Trippe, John Tait

Intellectual property in the form of patents plays a vital role in today's increasingly knowledge-based economy. This book assembles state-of-the art research and is intended to illustrate innovative approaches to patent information retrieval.

Data Management and Query Processing in Semantic Web Databases

by Sven Groppe

The Semantic Web, which is intended to establish a machine-understandable Web, is currently changing from an emerging trend to a technology used in complex real-world applications. A number of standards and techniques have been developed by the World Wide Web Consortium (W3C), e.g., the Resource Description Framework (RDF), which provides a general method for conceptual descriptions of Web resources, and SPARQL, an RDF querying language. Recent examples of large RDF data sets with billions of facts include the UniProt comprehensive catalog of protein sequence, function, and annotation data, the RDF data extracted from Wikipedia, and Princeton University's WordNet. Clearly, querying performance has become a key issue for Semantic Web applications. In his book, Groppe details various aspects of high-performance Semantic Web data management and query processing. His presentation fills the gap between Semantic Web and database books, which either fail to take into account the performance issues of large-scale data management or fail to exploit the special properties of Semantic Web data models and queries. After a general introduction to the relevant Semantic Web standards, he presents specialized indexing and sorting algorithms, adapted approaches for logical and physical query optimization, optimization possibilities when using the parallel database technologies of today's multicore processors, and visual and embedded query languages. Groppe primarily targets researchers, students, and developers of large-scale Semantic Web applications. On the complementary book webpage, readers will find additional material, such as an online demonstration of a query engine, as well as exercises and their solutions that challenge their comprehension of the topics presented.
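
For readers new to the RDF/SPARQL pairing the blurb assumes: an RDF dataset is a set of subject-predicate-object triples, and a SPARQL basic graph pattern binds variables by matching those triples. A minimal sketch (toy data and predicate names, unrelated to the book's examples):

```python
# Toy triple store: RDF data as (subject, predicate, object) triples,
# queried SPARQL-style with '?'-prefixed variables.
# Data and predicate names are invented for illustration.

TRIPLES = {
    ("Berlin", "isCapitalOf", "Germany"),
    ("Paris", "isCapitalOf", "France"),
    ("Germany", "partOf", "Europe"),
}

def match(pattern):
    """Return one variable binding per matching triple.
    (Assumes each variable occurs at most once in the pattern.)"""
    results = []
    for triple in TRIPLES:
        binding = {}
        for p, t in zip(pattern, triple):
            if p.startswith("?"):
                binding[p] = t  # bind variable to this component
            elif p != t:
                break           # constant mismatch: no match
        else:
            results.append(binding)
    return results

# Roughly: SELECT ?city ?country WHERE { ?city isCapitalOf ?country }
capitals = match(("?city", "isCapitalOf", "?country"))
```

Real engines answer such patterns over billions of triples, which is exactly why the indexing and query-optimization techniques the book covers matter.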

Context and Semantics for Knowledge Management: Technologies for Personal Productivity

by John Davies, Paul Warren, Elena Simperl

Knowledge and information are among the biggest assets of enterprises and organizations. However, efficiently managing, maintaining, accessing, and reusing this intangible treasure is difficult. Information overload makes it difficult to focus on the information that really matters; the fact that much corporate knowledge only resides in employees' heads seriously hampers reuse. The work described in this book is motivated by the need to increase the productivity of knowledge work. Based on results from the EU-funded ACTIVE project and complemented by recent related results from other researchers, the application of three approaches is presented: the synergy of Web 2.0 and semantic technology; context-based information delivery; and the use of technology to support informal user processes. The contributions are organized in five parts. Part I comprises a general introduction and a description of the opportunities and challenges faced by organizations in exploiting Web 2.0 capabilities. Part II looks at the technologies, and also some methodologies, developed in ACTIVE. Part III describes how these technologies have been evaluated in three case studies within the project. Part IV starts with a chapter describing the principal market trends for knowledge management solutions, and then includes a number of chapters describing work complementary to ACTIVE. Finally, Part V draws conclusions and indicates further areas for research. Overall, this book mainly aims at researchers in academia and industry looking for a state-of-the-art overview of the use of semantic and Web 2.0 technologies for knowledge management and personal productivity. Practitioners in industry will also benefit, in particular from the case studies which highlight cutting-edge applications in these fields.

Computer Science and Educational Software Design: A Resource for Multidisciplinary Work in Technology Enhanced Learning

by Pierre Tchounikine

Developing educational software requires thinking, problematizing, representing, modeling, implementing and analyzing pedagogical objectives and issues, as well as conceptual models and software architectures. Computer scientists face the difficulty of understanding the particular issues and phenomena to be taken into account in educational software projects and of avoiding a naïve technocentered perspective. On the other hand, actors with backgrounds in human or social sciences face the difficulty of understanding software design and implementation issues, and how computer scientists engage in these tasks. Tchounikine argues that these difficulties cannot be solved by building a kind of "general theory" or "general engineering methodology" to be adopted by all actors for all projects: educational software projects may correspond to very different realities, and may be conducted within very different perspectives and with very different matters of concern. Thus the issue of understanding each other's perspectives and elaborating some common ground is to be considered in context, within the considered project or perspective. To this end, he provides the reader with a framework and means for actively taking into account the relationships between pedagogical settings and software, and for working together in a multidisciplinary way to develop educational software. His book is for actors engaged in research or development projects which require inventing, designing, adapting, implementing or analyzing educational software. The core audience is Master's and PhD students, researchers and engineers from computer science or human and social sciences (e.g., education, psychology, pedagogy, philosophy, communications or sociology) interested in the issues raised by educational software design and analysis and in the variety of perspectives that may be adopted.

Building and Using Comparable Corpora (Theory and Applications of Natural Language Processing)

by Reinhard Rapp, Serge Sharoff, Pierre Zweigenbaum, Pascale Fung

The 1990s saw a paradigm change in the use of corpus-driven methods in NLP. In the field of multilingual NLP (such as machine translation and terminology mining) this implied the use of parallel corpora. However, parallel resources are relatively scarce: many more texts are produced daily by native speakers of any given language than are translated. This situation resulted in a natural drive towards the use of comparable corpora, i.e., non-parallel texts in the same domain or genre. Nevertheless, this research direction has not produced a single authoritative source suitable for researchers and students coming to the field. This volume provides such a reference source, identifying the state of the art in the field as well as future trends. The book is intended for specialists and students in natural language processing, machine translation and computer-assisted translation.

Strategic Information Management: Challenges And Strategies In Managing Information Systems (Management Reader Ser.)

by Robert D. Galliers, Dorothy E. Leidner

'Strategic Information Management' has been completely updated to reflect the rapid changes in IT and the business environment since the publication of the second edition. Half of the readings in the book have been replaced to address current issues and the latest thinking in information management. It goes without saying that information technology has had a major impact on individuals, organizations and society over the past 50 years or so. There are few organizations that can afford to ignore IT and few individuals who would prefer to be without it. As managerial tasks become more complex, so the nature of the required information systems (IS) changes: from structured, routine support to ad hoc, unstructured, complex enquiries at the highest levels of management. As with the first and second editions, this third edition of 'Strategic Information Management: Challenges and Strategies in Managing Information Systems' aims to present the many complex and inter-related issues associated with the management of information systems. The book provides a rich source of material reflecting recent thinking on the key issues facing executives in information systems management. It draws from a wide range of contemporary articles written by leading experts from North America and Europe. 'Strategic Information Management' is designed as a course text for MBA, Master's level and senior undergraduate students taking courses in information management. In addition, it provides a wealth of information and references for researchers.

Architecture Principles: The Cornerstones of Enterprise Architecture (The Enterprise Engineering Series)

by Danny Greefhorst, Erik Proper

It can be argued that architecture principles form the cornerstone of any architecture. The focus of this book is on the role of architecture principles. It offers a balanced perspective on architecture principles, and it is the first book devoted to the topic.

Collaborative Financial Infrastructure Protection: Tools, Abstractions, and Middleware

by Gregory Chockler, Roberto Baldoni

The Critical Infrastructure Protection Survey recently released by Symantec found that 53% of interviewed IT security experts from international companies experienced at least ten cyber attacks in the last five years, and financial institutions were often subject to some of the most sophisticated and large-scale cyber attacks and frauds. The book by Baldoni and Chockler analyzes the structure of software infrastructures found in the financial domain, their vulnerabilities to cyber attacks and the existing protection mechanisms. It then shows the advantages of sharing information among financial players in order to detect and quickly react to cyber attacks. Various aspects associated with information sharing are investigated from the organizational, cultural and legislative perspectives. The presentation is organized in two parts: Part I explores general issues associated with information sharing in the financial sector and is intended to set the stage for the vertical IT middleware solution proposed in Part II. Nonetheless, it is self-contained and details a survey of various types of critical infrastructure along with their vulnerability analysis, which has not yet appeared in a textbook-style publication elsewhere. Part II then presents the CoMiFin middleware for collaborative protection of the financial infrastructure. The material is presented in an accessible style and does not require specific prerequisites. It appeals to both researchers in the areas of security, distributed systems, and event processing working on new protection mechanisms, and practitioners looking for a state-of-the-art middleware technology to enhance the security of their critical infrastructures in e.g. banking, military, and other highly sensitive applications. The latter group will especially appreciate the concrete usage scenarios included.

Cloud Computing: Web-Based Dynamic IT Services (Informatik Im Fokus Ser.)

by Christian Baun, Stefan Tai, Marcel Kunze, Jens Nimis

Cloud computing is a buzzword in today's information technology (IT) that nobody can escape. But what is really behind it? There are many interpretations of this term, but no standardized or even uniform definition. Instead, as a result of the multi-faceted viewpoints and the diverse interests expressed by the various stakeholders, cloud computing is perceived as a rather fuzzy concept. With this book, the authors deliver an overview of cloud computing architecture, services, and applications. Their aim is to bring readers up to date on this technology and thus to provide a common basis for discussion, new research, and novel application scenarios. They first introduce the foundation of cloud computing with its basic technologies, such as virtualization and Web services. After that they discuss the cloud architecture and its service modules. The following chapters then cover selected commercial cloud offerings (including Amazon Web Services and Google App Engine) and management tools, and present current related open-source developments (including Hadoop, Eucalyptus, and Open Cirrus™). Next, economic considerations (cost and business models) are discussed, and an evaluation of the cloud market situation is given. Finally, the appendix contains some practical examples of how to use cloud resources or cloud applications, and a glossary provides concise definitions of key terms. The authors' presentation does not require in-depth technical knowledge. It is equally intended as an introduction for students in software engineering, web technologies, or business development, for professional software developers or system architects, and for future-oriented decision-makers like top executives and managers.

Advanced Topics in Information Retrieval (The Information Retrieval Series #33)

by Massimo Melucci, Ricardo Baeza-Yates

Information retrieval is the science concerned with the effective and efficient retrieval of documents starting from their semantic content. It is employed to fulfill some information need from a large number of digital documents. Given the ever-growing amount of documents available and the heterogeneous data structures used for storage, information retrieval has recently faced and tackled novel applications. In this book, Melucci and Baeza-Yates present a wide-spectrum illustration of recent research results in advanced areas related to information retrieval. Readers will find chapters on e.g. aggregated search, digital advertising, digital libraries, discovery of spam and opinions, information retrieval in context, multimedia resource discovery, quantum mechanics applied to information retrieval, scalability challenges in web search engines, and interactive information retrieval evaluation. All chapters are written by well-known researchers, are completely self-contained and comprehensive, and are complemented by an integrated bibliography and subject index. With this selection, the editors provide the most up-to-date survey of topics usually not addressed in depth in traditional (text)books on information retrieval. The presentation is intended for a wide audience of people interested in information retrieval: undergraduate and graduate students, post-doctoral researchers, lecturers, and industrial researchers.

Advanced Statistical Methods for the Analysis of Large Data-Sets (Studies in Theoretical and Applied Statistics)

by Jose Miguel Angulo Ibanez, Mauro Coli, Agostino Di Ciaccio

The theme of the meeting was "Statistical Methods for the Analysis of Large Data-Sets". In recent years there has been increasing interest in this subject; in fact, a huge quantity of information is often available, but standard statistical techniques are usually not well suited to managing this kind of data. The conference serves as an important meeting point for European researchers working on this topic, and a number of European statistical societies participated in the organization of the event. The book includes 45 papers selected from the 156 accepted for presentation and discussion at the conference.

Biomechanics of the Gravid Human Uterus

by Roustem N. Miftahof, Hong Gil Nam

The complexity of human uterine function and regulation is one of the great wonders of nature and represents a daunting challenge to unravel. This book is dedicated to the biomechanical modeling of the gravid human uterus and gives an example of the application of the mechanics of solids and the theory of soft shells to explore medical problems of labor and delivery. After a brief overview of the anatomy, physiology and biomechanics of the uterus, the authors focus mainly on electromechanical wave processes, their origin, dynamics, and neuroendocrine and pharmacological modulations. In the last chapter applications, pitfalls and problems related to modeling and computer simulations of the pregnant uterus and pelvic floor structures are discussed. A collection of exercises is added at the end of each chapter to help readers with self-evaluation. The book serves as an invaluable source of information for researchers, instructors and advanced undergraduate and graduate students interested in systems biology, applied mathematics and biomedical engineering.

Data-Warehouse-Systeme kompakt: Aufbau, Architektur, Grundfunktionen (Xpert.press)

by Kiumars Farkisch

This book examines data warehouse systems as a unified, central, complete, historized, and analytical IT platform and describes their role in data analysis and in decision-making processes. The author covers the individual components that are important for the construction, architecture, and operation of a data warehouse system. Multidimensional data modeling, the ETL process, and analysis methods are discussed, along with measures for improving the performance of data warehouse systems.

Cusped Shell-Like Structures (SpringerBriefs in Applied Sciences and Technology)

by George Jaiani

The book is devoted to an updated exploratory survey of results concerning elastic cusped shells, plates, and beams, and cusped prismatic shell-fluid interaction problems. It also contains some previously unpublished results. Mathematically, the corresponding problems lead to non-classical, in general, boundary value and initial-boundary value problems for governing degenerate elliptic and hyperbolic systems in the static and dynamical cases, respectively. It uses two fundamentally different approaches of investigation: (1) deriving results for two-dimensional and one-dimensional problems from results for the corresponding three-dimensional problems, and (2) directly investigating the governing degenerate and singular systems of the 2D and 1D problems. In both cases, it is important to study the relation of the 2D and 1D problems to the 3D problems.

Correction Formulae for the Stress Distribution in Round Tensile Specimens at Neck Presence (SpringerBriefs in Applied Sciences and Technology)

by Andreas Öchsner, Magdalena Gromada, Gennady Mishuris

The monograph deals with methods to determine mechanical properties and evaluate the flow curve of ductile materials from the tensile test. It presents classical hypotheses concerning the onset of neck creation as well as the state of the art in determining the mechanical properties from the tensile test, with emphasis on the consequences of the neck formation. It revises derivations of formulae for the stress distribution in the minimal cross-section of the axisymmetrical specimen in the classical approaches proposed by Bridgman, Davidenkov/Spiridonova and Siebel, as well as in the less famous formulae derived by Szczepinski and Malinin/Petrosjan. The revision is completed with solutions evaluated by the authors. In the monograph, the simplifying assumptions utilised in the classical approaches were carefully verified by numerical simulations accompanied by theoretical analysis. Errors introduced into the evaluation of the average axial stress acting on the minimal cross-section as a result of each particular simplification are estimated. The accuracy of all formulae for evaluating the flow curve is discussed. The significance of a highly accurate determination can be seen in the context of numerical simulation (e.g. finite element computations), where the total error and accuracy are partly based on the accuracy of the material input.
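
For orientation, the classical Bridgman correction discussed in such treatments is usually quoted in the following form (standard literature notation, stated here from the general literature rather than reproduced from the monograph; a is the radius of the minimal cross-section, R the radius of curvature of the neck profile, and the equivalent flow stress is obtained from the average axial stress):

```latex
\sigma_{\mathrm{eq}}
  = \frac{\bar{\sigma}_{z}}
         {\left(1 + \dfrac{2R}{a}\right)\,
          \ln\!\left(1 + \dfrac{a}{2R}\right)}
```

The simplifying assumptions behind this formula (e.g. uniform strain over the minimal cross-section) are exactly the kind the authors verify numerically.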

Business Aspects of Web Services

by Benjamin Blau, Christof Weinhardt, Wibke Michalk, Lilia Filipova-Neumann, Thomas Meinl, Tobias Conte

Driven by maturing Web service technologies and the wide acceptance of the service-oriented architecture paradigm, the software industry's traditional business models and strategies have begun to change: software vendors are turning into service providers. In addition, in the Web service market, a multitude of small and highly specialized providers offer modular services of almost any kind and economic value is created through the interplay of various distributed service providers that jointly contribute to form individualized and integrated solutions. This trend can be optimally catalyzed by universally accessible service orchestration platforms - service value networks (SVNs) - which are the underlying organizational form of the coordination mechanisms presented in this book. Here, the authors focus on providing comprehensive business-oriented insights into today's trends and challenges that stem from the transition to a service-led economy. They investigate current and future Web service business models and provide a framework for Web service value networks. Pricing mechanism basics are introduced and applied to the specific area of SVNs. Strategies for platform providers are analyzed from the viewpoint of a single provider, and so are pricing mechanisms in service value networks which are optimal from a network perspective. The extended concept of pricing Web service derivatives is also illustrated. The presentation concludes with a vision of how Web service markets in the future could be structured and what further developments can be expected to happen. This book will be of interest to researchers in business development and practitioners such as managers of SMEs in the service sector, as well as computer scientists familiar with Web technologies. 
The book's comprehensive content provides readers with a thorough understanding of the organizational, economic and technical implications of dealing with Web services as the nucleus of modern business models, which can be applied to Web services in general and Web service value networks specifically.

Coarse-to-Fine Natural Language Processing (Theory and Applications of Natural Language Processing)

by Eugene Charniak, Slav Petrov

The impact of computer systems that can understand natural language will be tremendous. To develop this capability we need to be able to automatically and efficiently analyze large amounts of text. Manually devised rules are not sufficient to provide coverage to handle the complex structure of natural language, necessitating systems that can automatically learn from examples. To handle the flexibility of natural language, it has become standard practice to use statistical models, which assign probabilities, for example, to the different meanings of a word or the plausibility of grammatical constructions. This book develops a general coarse-to-fine framework for learning and inference in large statistical models for natural language processing. Coarse-to-fine approaches exploit a sequence of models which introduce complexity gradually. At the top of the sequence is a trivial model in which learning and inference are both cheap. Each subsequent model refines the previous one, until a final, full-complexity model is reached. Applications of this framework to syntactic parsing, speech recognition and machine translation are presented, demonstrating the effectiveness of the approach in terms of accuracy and speed. The book is intended for students and researchers interested in statistical approaches to Natural Language Processing. "Slav's work Coarse-to-Fine Natural Language Processing represents a major advance in the area of syntactic parsing, and a great advertisement for the superiority of the machine-learning approach." (Eugene Charniak, Brown University)
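
The coarse-to-fine idea the blurb describes (cheap models prune the search space, expensive models decide among the survivors) can be illustrated in a few lines. This is a toy with invented scorers, not the book's parsing models:

```python
# Toy coarse-to-fine inference: a cheap coarse scorer prunes candidates,
# and an expensive fine scorer is run only on the survivors.
# The candidate set and both scorers are invented for illustration.

CANDIDATES = list(range(1000))

def coarse_score(x):
    """Cheap approximation (stands in for a small, fast model)."""
    return -abs(x - 500)

def fine_score(x):
    """Expensive, accurate score (stands in for the full model)."""
    return -abs(x - 503)

def coarse_to_fine(candidates, keep=10):
    # Pass 1: rank everything with the cheap model; keep the best few.
    survivors = sorted(candidates, key=coarse_score, reverse=True)[:keep]
    # Pass 2: pick the winner with the expensive model, survivors only.
    return max(survivors, key=fine_score)

best = coarse_to_fine(CANDIDATES)  # fine model scores 10 items, not 1000
```

The efficiency gain comes from running the full-complexity model on a tiny fraction of the candidates, at the risk of the coarse pass pruning away the true best answer.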

A Feature-Centric View of Information Retrieval (The Information Retrieval Series #27)

by Donald Metzler

Commercial Web search engines such as Google, Yahoo, and Bing are used every day by millions of people across the globe. With their ever-growing refinement and usage, it has become increasingly difficult for academic researchers to keep up with the collection sizes and other critical research issues related to Web search, which has created a divide between the information retrieval research being done within academia and industry. Such large collections pose a new set of challenges for information retrieval researchers. In this work, Metzler describes highly effective information retrieval models for both smaller, classical data sets, and larger Web collections. In a shift away from heuristic, hand-tuned ranking functions and complex probabilistic models, he presents feature-based retrieval models. The Markov random field model he details goes beyond the traditional yet ill-suited bag of words assumption in two ways. First, the model can easily exploit various types of dependencies that exist between query terms, eliminating the term independence assumption that often accompanies bag of words models. Second, arbitrary textual or non-textual features can be used within the model. As he shows, combining term dependencies and arbitrary features results in a very robust, powerful retrieval model. In addition, he describes several extensions, such as an automatic feature selection algorithm and a query expansion framework. The resulting model and extensions provide a flexible framework for highly effective retrieval across a wide range of tasks and data sets. A Feature-Centric View of Information Retrieval provides graduate students, as well as academic and industrial researchers in the fields of information retrieval and Web search with a modern perspective on information retrieval modeling and Web searches.
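
The feature-based ranking idea described above can be sketched as a simple linear scoring function (a toy illustration, not Metzler's actual Markov random field model; the features and weights are invented): term-dependency and non-textual signals enter the score alongside plain term matches.

```python
# Toy feature-based retrieval score: a weighted sum of arbitrary features,
# including a term-dependency (bigram) feature and a non-textual one.
# Feature definitions and weights are invented for illustration.

def features(query, doc):
    """Feature vector for a (query, document) pair of token lists."""
    doc_bigrams = set(zip(doc, doc[1:]))
    return {
        "unigrams": sum(t in doc for t in query),        # bag-of-words matches
        "bigrams": sum(b in doc_bigrams                  # adjacent query terms
                       for b in zip(query, query[1:])),  # kept adjacent in doc
        "length_prior": 1.0 / (1 + len(doc)),            # non-textual feature
    }

def score(query, doc, weights):
    f = features(query, doc)
    return sum(weights[name] * f[name] for name in weights)

query = ["information", "retrieval"]
doc_a = ["feature", "based", "information", "retrieval", "models"]
doc_b = ["retrieval", "of", "stored", "information"]
weights = {"unigrams": 1.0, "bigrams": 2.0, "length_prior": 0.1}
# doc_a keeps the query terms adjacent, so the dependency feature rewards it.
```

Because any function of the query and document can be added as a feature, the same scoring skeleton accommodates term dependencies and arbitrary non-textual evidence, which is the point the blurb makes.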

Abgründe der Informatik: Geheimnisse und Gemeinheiten

by Alois Potton

What you always wanted to know about computer science and "computer scientists" but never dared to ask: Alois Potton has written it down. For more than two decades he has looked behind the scenes and cast his anecdotes into 80 satirical columns. Unsparingly, maliciously, and at times not entirely politically correctly, he analyzes the everyday madness and the absurdities of the IT scene. Written to be accessible to a general audience, it will also amuse non-computer-scientists who recognize analogous goings-on in their own fields of work, or else leave them feeling insulted.

Computergestützte Audio- und Videotechnik: Multimediatechnik in der Anwendung

by Dieter Stotz

This introduction to modern audio and video technology gives readers with a basic technical understanding an easy entry point, even into complex subject matter. The author conveys detailed knowledge in a practical and comprehensible way: from the fundamentals of audio and video technology, through sampling and digitization, spatial hearing, data compression, the MIDI standard and MIDI signals, and digital audio measurement technology, to high-resolution video technology, genlock, chroma keying, editing systems, and animation. With many graphics and illustrations.

BOINC: Hochleistungsrechnen mit Berkeley Open Infrastructure for Network Computing (Xpert.press)

by Christian Benjamin Ries

With BOINC, complex and computationally intensive problems can be solved in the spirit of public resource computing: volunteers sign up with a BOINC project and make their computing resources available. The author explains the individual development steps so that a fully functional BOINC project can be built and efficiently maintained. Complex relationships are illustrated using the Unified Modeling Language (UML). With detailed descriptions of the programming interface and practical case studies.

