Browse Results

Showing 23,401 through 23,425 of 28,127 results

Statistical Learning for Big Dependent Data (Wiley Series in Probability and Statistics)

by Ruey S. Tsay Daniel Peña

Master advanced topics in the analysis of large, dynamically dependent datasets with this insightful resource. Statistical Learning for Big Dependent Data delivers a comprehensive presentation of the statistical and machine learning methods useful for analyzing and forecasting large and dynamically dependent data sets. The book presents automatic procedures for modelling and forecasting large sets of time series data. Beginning with some visualization tools, the book discusses procedures and methods for finding outliers, clusters, and other types of heterogeneity in big dependent data. It then introduces various dimension reduction methods, including regularization and factor models such as regularized Lasso in the presence of dynamical dependence and dynamic factor models. The book also covers other forecasting procedures, including index models, partial least squares, boosting, and now-casting. It further presents machine-learning methods, including neural networks, deep learning, classification and regression trees, and random forests. Finally, procedures for modelling and forecasting spatio-temporal dependent data are also presented. Throughout the book, the advantages and disadvantages of the methods discussed are given. The book uses real-world examples to demonstrate applications, including use of many R packages. In addition, an R package associated with the book is available to assist readers in reproducing the analyses of examples and to facilitate real applications. Statistical Learning for Big Dependent Data includes a wide variety of topics for modeling and understanding big dependent data, such as: new ways to plot large sets of time series; an automatic procedure to build univariate ARMA models for individual components of a large data set; powerful outlier detection procedures for large sets of related time series; new methods for finding the number of clusters of time series, and discrimination methods, including support vector machines, for time series; broad coverage of dynamic factor models, including new representations and estimation methods for generalized dynamic factor models; discussion of the usefulness of lasso with time series and an evaluation of several machine learning procedures for forecasting large sets of time series; forecasting large sets of time series with exogenous variables, including discussions of index models, partial least squares, and boosting; and an introduction to modern procedures for modeling and forecasting spatio-temporal data. Perfect for PhD students and researchers in business, economics, engineering, and science, Statistical Learning for Big Dependent Data also belongs on the bookshelves of practitioners in these fields who hope to improve their understanding of statistical and machine learning methods for analyzing and forecasting big dependent data.

Statistical Learning for Biomedical Data

by James D. Malley Karen G. Malley Sinisa Pajevic

This book is for anyone who has biomedical data and needs to identify variables that predict a two-group outcome, such as tumor/not-tumor, survival/death, or response to treatment. Statistical learning machines are ideally suited to these types of prediction problems, especially if the variables being studied may not meet the assumptions of traditional techniques. Learning machines come from the world of probability and computer science but are not yet widely used in biomedical research. This introduction brings learning machine techniques to the biomedical world in an accessible way, explaining the underlying principles in nontechnical language and using extensive examples and figures. The authors connect these new methods to familiar techniques by showing how to use the learning machine models to generate smaller, more easily interpretable traditional models. Coverage includes single decision trees, multiple-tree techniques such as Random Forests(TM), neural nets, support vector machines, nearest neighbors and boosting.

Statistical Learning from a Regression Perspective

by Richard A. Berk

This textbook considers statistical learning applications when interest centers on the conditional distribution of the response variable, given a set of predictors, and when it is important to characterize how the predictors are related to the response. As a first approximation, this can be seen as an extension of nonparametric regression. This fully revised new edition includes important developments over the past 8 years. Consistent with modern data analytics, it emphasizes that a proper statistical learning data analysis derives from sound data collection, intelligent data management, appropriate statistical procedures, and an accessible interpretation of results. A continued emphasis on the implications for practice runs through the text. Among the statistical learning procedures examined are bagging, random forests, boosting, support vector machines and neural networks. Response variables may be quantitative or categorical. As in the first edition, a unifying theme is supervised learning that can be treated as a form of regression analysis. Key concepts and procedures are illustrated with real applications, especially those with practical implications. A principal instance is the need to explicitly take into account asymmetric costs in the fitting process. For example, in some situations false positives may be far less costly than false negatives. Also provided is helpful craft lore, such as not automatically ceding data analysis decisions to a fitting algorithm. In many settings, subject-matter knowledge should trump formal fitting criteria. Yet another important message is to appreciate the limitations of one's data and not apply statistical learning procedures that require more than the data can provide. The material is written for upper-undergraduate and graduate students in the social and life sciences and for researchers who want to apply statistical learning procedures to scientific and policy problems. The author uses this book in a course on modern regression for the social, behavioral, and biological sciences. Intuitive explanations and visual representations are prominent. All of the analyses included are done in R with code routinely provided.

Statistical Learning from a Regression Perspective (Springer Texts in Statistics)

by Richard A. Berk

This textbook considers statistical learning applications when interest centers on the conditional distribution of a response variable, given a set of predictors, and in the absence of a credible model that can be specified before the data analysis begins. Consistent with modern data analytics, it emphasizes that a proper statistical learning data analysis depends in an integrated fashion on sound data collection, intelligent data management, appropriate statistical procedures, and an accessible interpretation of results. The unifying theme is that supervised learning can properly be seen as a form of regression analysis. Key concepts and procedures are illustrated with a large number of real applications and their associated code in R, with an eye toward practical implications. The growing integration of computer science and statistics is well represented, including the occasional, but salient, tensions that result. Throughout, there are links to the big picture. The third edition considers significant advances in recent years, among which are: the development of overarching, conceptual frameworks for statistical learning; the impact of “big data” on statistical learning; the nature and consequences of post-model-selection statistical inference; deep learning in various forms; the special challenges to statistical inference posed by statistical learning; the fundamental connections between data collection and data analysis; and interdisciplinary ethical and political issues surrounding the application of algorithmic methods in a wide variety of fields, each linked to concerns about transparency, fairness, and accuracy. This edition features new sections on accuracy, transparency, and fairness, as well as a new chapter on deep learning. Precursors to deep learning get an expanded treatment. The connections between fitting and forecasting are considered in greater depth. Discussion of the estimation targets for algorithmic methods is revised and expanded throughout to reflect the latest research. Resampling procedures are emphasized. The material is written for upper-undergraduate and graduate students in the social, psychological and life sciences and for researchers who want to apply statistical learning procedures to scientific and policy problems.

Statistical Learning in Genetics: An Introduction Using R (Statistics for Biology and Health)

by Daniel Sorensen

This book provides an introduction to computer-based methods for the analysis of genomic data. Breakthroughs in molecular and computational biology have contributed to the emergence of vast data sets, where millions of genetic markers for each individual are coupled with medical records, generating an unparalleled resource for linking human genetic variation to human biology and disease. Similar developments have taken place in animal and plant breeding, where genetic marker information is combined with production traits. An important task for the statistical geneticist is to adapt, construct and implement models that can extract information from these large-scale data. An initial step is to understand the methodology that underlies the probability models and to learn the modern computer-intensive methods required for fitting these models. The objective of this book, suitable for readers who wish to develop analytic skills to perform genomic research, is to provide guidance to take this first step. This book is addressed to numerate biologists who typically lack the formal mathematical background of the professional statistician. For this reason, considerably more detail in explanations and derivations is offered. It is written in a concise style and examples are used profusely. A large proportion of the examples involve programming with the open-source package R. The R code needed to solve the exercises is provided. The Markdown interface allows the students to implement the code on their own computer, contributing to a better understanding of the underlying theory. Part I presents methods of inference based on likelihood and Bayesian methods, including computational techniques for fitting likelihood and Bayesian models. Part II discusses prediction for continuous and binary data using both frequentist and Bayesian approaches. Some of the models used for prediction are also used for gene discovery. The challenge is to find promising genes without incurring a large proportion of false positive results. Therefore, Part II includes a detour on False Discovery Rate, assuming frequentist and Bayesian perspectives. The last chapter of Part II provides an overview of a selected number of non-parametric methods. Part III consists of exercises and their solutions. Daniel Sorensen holds PhD and DSc degrees from the University of Edinburgh and is an elected Fellow of the American Statistical Association. He was professor of Statistical Genetics at Aarhus University where, at present, he is professor emeritus.

Statistical Learning of Complex Data (Studies in Classification, Data Analysis, and Knowledge Organization)

by Maurizio Vichi Francesca Greselin Laura Deldossi Luca Bagnato

This book of peer-reviewed contributions presents the latest findings in classification, statistical learning, data analysis and related areas, including supervised and unsupervised classification, clustering, statistical analysis of mixed-type data, big data analysis, statistical modeling, graphical models and social networks. It covers both methodological aspects and applications to a wide range of fields such as economics, architecture, medicine, data management, consumer behavior and the gender gap. In addition, it describes the basic features of the software behind the data analysis results, and provides links to the corresponding codes and data sets where necessary. This book is intended for researchers and practitioners who are interested in the latest developments and applications in the field of data analysis and classification. It gathers selected and peer-reviewed contributions presented at the 11th Scientific Meeting of the Classification and Data Analysis Group of the Italian Statistical Society (CLADAG 2017), held in Milan, Italy, on September 13–15, 2017.

Statistical Learning with Math and Python: 100 Exercises for Building Logic

by Joe Suzuki

The most crucial ability for machine learning and data science is not knowledge or experience but the mathematical logic needed to grasp their essence. This textbook approaches the essence of machine learning and data science by working through math problems and building Python programs. As a preliminary, Chapter 1 provides a concise introduction to linear algebra, which will help novices read the main chapters that follow. Those chapters present essential topics in statistical learning: linear regression, classification, resampling, information criteria, regularization, nonlinear regression, decision trees, support vector machines, and unsupervised learning. Each chapter mathematically formulates and solves machine learning problems and builds the corresponding programs. The body of each chapter is accompanied by proofs and programs in an appendix, with exercises at the end of the chapter. Because the book is carefully organized to provide the solutions to the exercises in each chapter, readers can solve all 100 exercises by simply following the contents of each chapter. This textbook is suitable for an undergraduate or graduate course consisting of about 12 lectures. Written in an easy-to-follow and self-contained style, this book will also be perfect material for independent learning.

Statistical Learning with Sparsity: The Lasso and Generalizations (Chapman & Hall/CRC Monographs on Statistics and Applied Probability)

by Trevor Hastie Robert Tibshirani Martin Wainwright

Discover New Methods for Dealing with High-Dimensional Data. A sparse statistical model has only a small number of nonzero parameters or weights; therefore, it is much easier to estimate and interpret than a dense model. Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data.

Statistical Literacy at School: Growth and Goals (Studies in Mathematical Thinking and Learning Series)

by Jane M. Watson

This book reveals the development of students' understanding of statistical literacy. It provides a way to "see" student thinking and gives readers a deeper sense of how students think about important statistical topics. Intended as a complement to curriculum documents and textbook series, it is consistent with the current principles and standards of the National Council of Teachers of Mathematics. The term "statistical literacy" is used to emphasize that the purpose of the school curriculum should not be to turn out statisticians but to prepare statistically literate school graduates who are prepared to participate in social decision making. Based on ten years of research, with reference to other significant research as appropriate, the book looks at students' thinking in relation to tasks based on sampling, graphical representations, averages, chance, beginning inference, and variation, which are essential to later work in formal statistics. For those students who do not proceed to formal study, as well as those who do, these concepts provide a basis for decision making or questioning when presented with claims based on data in societal settings. Statistical Literacy at School: Growth and Goals: establishes an overall framework for statistical literacy in terms of both the links to specific school curricula and the wider appreciation of contexts within which chance and data-handling ideas are applied; demonstrates, within this framework, that there are many connections among specific ideas and constructs; provides tasks, adaptable for classroom or assessment use, that are appropriate for the goals of statistical literacy; presents extensive examples of student performance on the tasks, illustrating hierarchies of achievement, to assist in monitoring gains and meeting the goals of statistical literacy; and includes a summary of analysis of survey data that suggests a developmental hierarchy for students over the years of schooling with respect to the goal of statistical literacy. Statistical Literacy at School: Growth and Goals is directed to researchers, curriculum developers, professionals, and students in mathematics education, as well as those across the curriculum who are interested in students' cognitive development within the field; to teachers who want to focus on the concepts involved in statistical literacy without the use of formal statistical techniques; and to statisticians who are interested in the development of student understanding before students are exposed to the formal study of statistics.

Statistical Literacy for Clinical Practitioners

by William H. Holmes William C. Rinaman

This textbook on statistics is written for students in medicine, epidemiology, and public health. It builds on the important role evidence-based medicine now plays in the clinical practice of physicians, physician assistants and allied health practitioners. By bringing research design and statistics to the fore, this book can integrate these skills into the curricula of professional programs. Students, particularly practitioners-in-training, will learn statistical skills that are required of today's clinicians. Practice problems at the end of each chapter and downloadable data sets provided by the authors ensure readers get practical experience that they can then apply to their own work.

Statistical Literacy: A Beginner's Guide

by Rhys Christopher Jones

In an increasingly data-centric world, we all need to know how to read and interpret statistics. But where do we begin? This book breaks statistical terms and concepts down in a clear, straightforward way. From understanding what data are telling you to exploring the value of good storytelling with numbers, it equips you with the information and skills you need to become statistically literate. It also: dispels misconceptions about the nature of statistics to help you avoid common traps; helps you put your learning into practice with over 60 Tasks and Develop Your Skills activities; and draws on real-world research to demonstrate the messiness of data, and show you a path through it. Approachable and down to earth, this guide is aimed at undergraduates across the social sciences, psychology, business and beyond who want to engage confidently with quantitative methods or statistics. It forms a reassuring aid for anyone looking to understand the foundations of statistics before their course advances, or as a refresher on key content.

Statistical Machine Learning for Engineering with Applications (Lecture Notes in Statistics #227)

by Anita Schöbel Jürgen Franke

This book offers a leisurely introduction to the concepts and methods of machine learning. Readers will learn about classification trees, Bayesian learning, neural networks and deep learning, the design of experiments, and related methods. For ease of reading, technical details are avoided as far as possible, and there is a particular emphasis on applicability, interpretation, reliability and limitations of the data-analytic methods in practice. To cover the common availability and types of data in engineering, training sets consisting of independent as well as time series data are considered. To cope with the scarceness of data in industrial problems, augmentation of training sets by additional artificial data, generated from physical models, as well as the combination of machine learning and expert knowledge of engineers are discussed. The methodological exposition is accompanied by several detailed case studies based on industrial projects covering a broad range of engineering applications from vehicle manufacturing, process engineering and design of materials to optimization of production processes based on image analysis. The focus is on fundamental ideas, applicability and the pitfalls of machine learning in industry and science, where data are often scarce. Requiring only very basic background in statistics, the book is ideal for self-study or short courses for engineering and science students.

Statistical Machine Learning: A Unified Framework (Chapman & Hall/CRC Texts in Statistical Science)

by Richard Golden

The recent rapid growth in the variety and complexity of new machine learning architectures requires the development of improved methods for designing, analyzing, evaluating, and communicating machine learning technologies. Statistical Machine Learning: A Unified Framework provides students, engineers, and scientists with tools from mathematical statistics and nonlinear optimization theory to become experts in the field of machine learning. In particular, the material in this text directly supports the mathematical analysis and design of old, new, and not-yet-invented nonlinear high-dimensional machine learning algorithms. Features: a unified empirical risk minimization framework that supports rigorous mathematical analyses of widely used supervised, unsupervised, and reinforcement machine learning algorithms; matrix calculus methods for supporting machine learning analysis and design applications; explicit conditions for ensuring convergence of adaptive, batch, minibatch, MCEM, and MCMC learning algorithms that minimize both unimodal and multimodal objective functions; and explicit conditions for characterizing asymptotic properties of M-estimators and model selection criteria such as AIC and BIC in the presence of possible model misspecification. This advanced text is suitable for graduate students or highly motivated undergraduate students in statistics, computer science, electrical engineering, and applied mathematics. The text is self-contained and only assumes knowledge of lower-division linear algebra and upper-division probability theory. Students, professional engineers, and multidisciplinary scientists possessing these minimal prerequisites will find this text challenging yet accessible. About the Author: Richard M. Golden (Ph.D., M.S.E.E., B.S.E.E.) is Professor of Cognitive Science and Participating Faculty Member in Electrical Engineering at the University of Texas at Dallas. Dr. Golden has published articles and given talks at scientific conferences on a wide range of topics in the fields of both statistics and machine learning over the past three decades. His long-term research interests include identifying conditions for the convergence of deterministic and stochastic machine learning algorithms and investigating estimation and inference in the presence of possibly misspecified probability models.

Statistical Machine Translation

by Philipp Koehn

The field of machine translation has recently been energized by the emergence of statistical techniques, which have brought the dream of automatic language translation closer to reality. This class-tested textbook, authored by an active researcher in the field, provides a gentle and accessible introduction to the latest methods and enables the reader to build machine translation systems for any language pair.

Statistical Mechanics

by Richard P. Feynman

Physics, rather than mathematics, is the focus in this classic graduate lecture note volume on statistical mechanics and the physics of condensed matter. This book provides a concise introduction to basic concepts and a clear presentation of difficult topics, while challenging the student to reflect upon as yet unanswered questions.

Statistical Mechanics of Classical and Disordered Systems: Luminy, France, August 2018 (Springer Proceedings in Mathematics & Statistics #293)

by Véronique Gayrard Nicola Kistler Louis-Pierre Arguin Irina Kourkova

These proceedings of the conference Advances in Statistical Mechanics, held in Marseille, France, August 2018, focus on fundamental issues of equilibrium and non-equilibrium dynamics for classical mechanical systems, as well as on open problems in statistical mechanics related to probability, mathematical physics, computer science, and biology. Statistical mechanics, as envisioned more than a century ago by Boltzmann, Maxwell and Gibbs, has recently undergone stunning twists and developments which have turned this old discipline into one of the most active areas of truly interdisciplinary and cutting-edge research. The contributions to this volume, with their rather unique blend of rigorous mathematics and applications, outline the state-of-the-art of this success story in key subject areas of equilibrium and non-equilibrium classical and quantum statistical mechanics of both disordered and non-disordered systems. Aimed at researchers in the broad field of applied modern probability theory, this book, and in particular the review articles, will also be of interest to graduate students looking for a gentle introduction to active topics of current research.

Statistical Mechanics of Hamiltonian Systems with Bounded Kinetic Terms: An Insight into Negative Temperature (Springer Theses)

by Marco Baldovin

Recent experimental evidence about the possibility of "absolute negative temperature" states in physical systems has triggered a stimulating debate about the consistency of such a concept from the point of view of Statistical Mechanics. It is not clear whether the usual results of this field can be safely extended to negative-temperature states; some authors even propose fundamental modifications to the Statistical Mechanics formalism, starting with the very definition of entropy, in order to avoid the occurrence of negative values of the temperature tout court. The research presented in this thesis aims to shed some light on this controversial topic. To this end, a particular class of Hamiltonian systems with bounded kinetic terms, which can assume negative temperature, is extensively studied, both analytically and numerically. Equilibrium and out-of-equilibrium properties of this kind of system are investigated, reinforcing the overall picture that the introduction of negative temperature does not lead to any contradiction or paradox.

Statistical Mechanics of Lattice Systems: A Concrete Mathematical Introduction

by Sacha Friedli Yvan Velenik

This motivating textbook gives a friendly, rigorous introduction to fundamental concepts in equilibrium statistical mechanics, covering a selection of specific models, including the Curie–Weiss and Ising models, the Gaussian free field, O(n) models, and models with Kać interactions. Using classical concepts such as Gibbs measures, pressure, free energy, and entropy, the book exposes the main features of the classical description of large systems in equilibrium, in particular the central problem of phase transitions. It treats such important topics as the Peierls argument, the Dobrushin uniqueness, Mermin–Wagner and Lee–Yang theorems, and develops from scratch such workhorses as correlation inequalities, the cluster expansion, Pirogov–Sinai Theory, and reflection positivity. Written as a self-contained course for advanced undergraduate or beginning graduate students, the detailed explanations, large collection of exercises (with solutions), and appendix of mathematical results and concepts also make it a handy reference for researchers in related areas. Builds a narrative around the driving concepts, focusing on specific examples and models. Self-contained and accessible. Features numerous exercises and solutions, as well as a comprehensive appendix.

Statistical Mechanics of Neural Networks

by Haiping Huang

This book offers a comprehensive introduction to the fundamental statistical mechanics underlying the inner workings of neural networks. The book discusses in detail important concepts and techniques including the cavity method, the mean-field theory, replica techniques, the Nishimori condition, variational methods, the dynamical mean-field theory, unsupervised learning, associative memory models, perceptron models, the chaos theory of recurrent neural networks, and eigen-spectrums of neural networks, walking new learners through the theories and must-have skillsets needed to understand and use neural networks. The book focuses on quantitative frameworks of neural network models where the underlying mechanisms can be precisely isolated by physics of mathematical beauty and theoretical predictions. It is a good reference for students, researchers, and practitioners in the area of neural networks.

Statistical Mechanics: A Concise Advanced Textbook (UNITEXT for Physics)

by Sergio Cecotti

This textbook is based on lecture notes that the author delivered at Qiuzhen College (Tsinghua University), a Chinese institution known for its exceptionally talented mathematics students. The book's intended audience shapes its character. It introduces Statistical Mechanics from the ground up, offering a fully self-contained presentation that aims for mathematical precision. It distinguishes rigorous results from controlled approximations and provides physical insights into phenomena. Despite its concise nature (suited for a one-semester basic course), this book covers several topics typically not found in introductory texts. These include Shannon's information-theoretic interpretation of entropy, the gauge approach to order-disorder duality in the Ising model, the Yang-Lee theory, and the quantum dissipation-fluctuation theorem. Additionally, it explores frustrated and quenched systems, including an introduction to the celebrated Parisi solution of the Sherrington-Kirkpatrick model of spin glasses. The path integral formalism is extensively discussed from various perspectives to suit different applications. Chapter 2 approaches path integrals through the Feynman-Kac formula and second quantization. In Chapter 5, they are examined within the context of effective field theories like Landau-Ginzburg theory, while Chapter 6 delves into their connection with Brownian motion, Langevin stochastic differential equations, and Fokker-Planck diffusion PDEs. The book also explores the relationship between stochastic processes and supersymmetry. Various techniques for computing path integrals, especially functional determinants, are introduced throughout the relevant chapters, offering the most suitable computational tools for each application.

Statistical Mechanics: An Introductory Graduate Course (Graduate Texts in Physics)

by A. J. Berlinsky A. B. Harris

In a comprehensive treatment of Statistical Mechanics from thermodynamics through the renormalization group, this book serves as the core text for a full-year graduate course in statistical mechanics at either the Masters or Ph.D. level. Each chapter contains numerous exercises, and several chapters treat special topics which can be used as the basis for student projects. The concept of scaling is introduced early and used extensively throughout the text. At the heart of the book is an extensive treatment of mean field theory, from the simplest decoupling approach, through the density matrix formalism, to self-consistent classical and quantum field theory as well as exact solutions on the Cayley tree. Proceeding beyond mean field theory, the book discusses exact mappings involving Potts models, percolation, self-avoiding walks and quenched randomness, connecting various athermal and thermal models. Computational methods such as series expansions and Monte Carlo simulations are discussed, along with exact solutions to the 1D quantum and 2D classical Ising models. The renormalization group formalism is developed, starting from real-space RG and proceeding through a detailed treatment of Wilson’s epsilon expansion. Finally the subject of Kosterlitz-Thouless systems is introduced from a historical perspective and then treated by methods due to Anderson, Kosterlitz, Thouless and Young. Altogether, this comprehensive, up-to-date, and engaging text offers an ideal package for advanced undergraduate or graduate courses or for use in self study.

Statistical Mechanics: Fundamentals and Model Solutions

by Teunis C Dorlas

Statistical Mechanics: Fundamentals and Model Solutions, Second Edition. Fully updated throughout, and with new chapters on the Mayer expansion for classical gases and on cluster expansion for lattice models, this new edition of Statistical Mechanics: Fundamentals and Model Solutions provides a comprehensive introduction to equilibrium statistical mechanics for advanced undergraduate and graduate students of mathematics and physics. The author presents a fresh approach to the subject, setting out the basic assumptions clearly and emphasizing the importance of the thermodynamic limit and the role of convexity. With problems and solutions, the book clearly explains the role of models for physical systems, and discusses and solves various models. An understanding of these models is of increasing importance as they have proved to have applications in many areas of mathematics and physics. Features: updated throughout with new content from the field; an established and well-loved textbook; contains new problems and solutions for further learning opportunities. The author, Professor Teunis C. Dorlas, is at the Dublin Institute for Advanced Studies, Ireland.

Statistical Meta-Analysis with Applications

by Bimal K. Sinha Guido Knapp Joachim Hartung

An accessible introduction to performing meta-analysis across various areas of research. The practice of meta-analysis allows researchers to obtain findings from various studies and compile them to verify and form one overall conclusion. Statistical Meta-Analysis with Applications presents the necessary statistical methodologies that allow readers to tackle the four main stages of meta-analysis: problem formulation, data collection, data evaluation, and data analysis and interpretation. Combining the authors' expertise on the topic with a wealth of up-to-date information, this book successfully introduces the essential statistical practices for making thorough and accurate discoveries across a wide array of diverse fields, such as business, public health, biostatistics, and environmental studies. Two main types of statistical analysis serve as the foundation of the methods and techniques: combining tests of effect size and combining estimates of effect size. Additional topics covered include: meta-analysis regression procedures; multiple-endpoint and multiple-treatment studies; the Bayesian approach to meta-analysis; publication bias; vote counting procedures; methods for combining individual tests and combining individual estimates; and using meta-analysis to analyze binary and ordinal categorical data. Numerous worked-out examples in each chapter provide the reader with a step-by-step understanding of the presented methods. All exercises can be computed using the R and SAS software packages, which are both available via the book's related Web site. Extensive references are also included, outlining additional sources for further study. Requiring only a working knowledge of statistics, Statistical Meta-Analysis with Applications is a valuable supplement for courses in biostatistics, business, public health, and social research at the upper-undergraduate and graduate levels. It is also an excellent reference for applied statisticians working in industry, academia, and government.

Statistical Method from the Viewpoint of Quality Control

by Walter A. Shewhart

Important text offers lucid explanation of how to regulate variables and maintain control over statistics in order to achieve quality control over manufactured products, crops and data. Topics include statistical control, establishing limits of variability, measurements of physical properties and constants, and specification of accuracy and precision. First inexpensive paperback edition.
