Analysis of Incidence Rates (Chapman & Hall/CRC Biostatistics Series)
by Peter Cummings
Incidence rates are counts divided by person-time; mortality rates are a well-known example. Analysis of Incidence Rates offers a detailed discussion of the practical aspects of analyzing incidence rates, including important pitfalls and areas of controversy. The text is aimed at graduate students, researchers, and analysts in epidemiology, biostatistics, the social sciences, economics, and psychology. Features: Compares and contrasts incidence rates with risks, odds, and hazards. Shows stratified methods, including standardization, inverse-variance weighting, and Mantel-Haenszel methods. Describes Poisson regression methods for adjusted rate ratios and rate differences. Examines linear regression for rate differences with an emphasis on common problems. Gives methods for correcting confidence intervals. Illustrates problems related to collapsibility. Explores extensions of count models for rates, including negative binomial regression, methods for clustered data, and the analysis of longitudinal data, and reviews their controversies and limitations. Presents matched cohort methods in detail. Gives marginal methods for converting adjusted rate ratios to rate differences, and vice versa. Demonstrates instrumental variable methods. Compares Poisson regression with the Cox proportional hazards model and introduces Royston-Parmar models. All data and analyses are in online Stata files which readers can download. Peter Cummings is Professor Emeritus, Department of Epidemiology, School of Public Health, University of Washington, Seattle, WA. His research was primarily in the field of injuries; he used matched cohort methods to estimate how the use of seat belts and the presence of airbags were related to death in a traffic crash. He is author or co-author of over 100 peer-reviewed articles.
Analysis of Infectious Disease Data
by N.G. Becker
The book gives an up-to-date account of various approaches available for the analysis of infectious disease data. Most of the methods have been developed only recently, and for those based on particularly modern mathematics, details of the computation are carefully illustrated. Interpretation is discussed at some length and the emphasis throughout is on making statistical inferences about epidemiologically important parameters. Niels G. Becker is Reader in Statistics at La Trobe University, Australia.
Analysis of Integrated Data (Chapman & Hall/CRC Statistics in the Social and Behavioral Sciences)
by Li-Chun Zhang and Raymond L. Chambers
The advent of "Big Data" has brought with it a rapid diversification of data sources, requiring analysis that accounts for the fact that these data have often been generated and recorded for different reasons. Data integration involves combining data residing in different sources to enable statistical inference, or to generate new statistical data for purposes that cannot be served by each source on its own. This can yield significant gains for scientific as well as commercial investigations. However, valid analysis of such data should allow for the additional uncertainty due to entity ambiguity, whenever it is not possible to state with certainty that the integrated source is the target population of interest. Analysis of Integrated Data aims to provide a solid theoretical basis for this statistical analysis in three generic settings of entity ambiguity: statistical analysis of linked datasets that may contain linkage errors; datasets created by a data fusion process, where joint statistical information is simulated using the information in marginal data from non-overlapping sources; and estimation of target population size when target units are either partially or erroneously covered in each source. Covers a range of topics under an overarching perspective of data integration. Focuses on statistical uncertainty and inference issues arising from entity ambiguity. Features state-of-the-art methods for analysis of integrated data. Identifies the important themes that will define future research and teaching in the statistical analysis of integrated data. Analysis of Integrated Data is aimed primarily at researchers and methodologists interested in statistical methods for data from multiple sources, with a focus on data analysts in the social sciences, and in the public and private sectors.
Analysis of Large and Complex Data (Studies in Classification, Data Analysis, and Knowledge Organization #0)
by Adalbert F.X. Wilhelm and Hans A. Kestler
This book offers a snapshot of the state-of-the-art in classification at the interface between statistics, computer science and application fields. The contributions span a broad spectrum, from theoretical developments to practical applications; they all share a strong computational component. The topics addressed are from the following fields: Statistics and Data Analysis; Machine Learning and Knowledge Discovery; Data Analysis in Marketing; Data Analysis in Finance and Economics; Data Analysis in Medicine and the Life Sciences; Data Analysis in the Social, Behavioural, and Health Care Sciences; Data Analysis in Interdisciplinary Domains; Classification and Subject Indexing in Library and Information Science. The book presents selected papers from the Second European Conference on Data Analysis, held at Jacobs University Bremen in July 2014. This conference unites diverse researchers in the pursuit of a common topic, creating truly unique synergies in the process.
The Analysis of Linear Economic Systems: Father Maurice Potron’s Pioneering Works (Routledge Studies In The History Of Economics #117)
by Christian Bidard, Guido Erreygers and Paul A. Samuelson
Maurice Potron (1872-1942), a French Jesuit mathematician, constructed and analyzed a highly original, but virtually unknown economic model. This book presents translated versions of all his economic writings, preceded by a long introduction which sketches his life and environment based on extensive archival research and family documents. Potron had no education in economics and almost no contact with the economists of his time. His primary source of inspiration was the social doctrine of the Church, which had been updated at the end of the nineteenth century. Faced with the ‘economic evils’ of his time, he reacted by utilizing his talents as a mathematician and an engineer to invent and formalize a general disaggregated model in which production, employment, prices and wages are the main unknowns. He introduced four basic principles or normative conditions (‘sufficient production’, the ‘right to rest’, ‘justice in exchange’, and the ‘right to live’) to define satisfactory regimes of production and labour on the one hand, and of prices and wages on the other. He studied the conditions for the existence of these regimes, both on the quantity side and the value side, and he explored the way to implement them. This book makes it clear that Potron was the first author to develop a full input-output model, to use the Perron-Frobenius theorem in economics, to state a duality result, and to formulate the Hawkins-Simon condition. These are all techniques which now belong to the standard toolkit of economists. This book will be of interest to Economics postgraduate students and researchers, and will be essential reading for courses dealing with the history of mathematical economics in general, and linear production theory in particular. Paul A. Samuelson’s short foreword to the book may have been his last academic contribution.
Analysis of Longitudinal Data with Example
by You-Gan Wang, Liya Fu and Sudhir Paul
Methodology for longitudinal data analysis is developing fast, yet there is a lack of intermediate/advanced-level textbooks that introduce students and practicing statisticians to up-to-date methods for correlated data inference. This book presents the modern approaches to inference, including the links between the theory of estimators and various types of efficient statistical models, such as likelihood-based approaches. The theory is supported with practical examples of R code and R packages applied to interesting case studies from a number of different areas. Key Features: includes the most up-to-date methods; uses simple examples to demonstrate complex methods; uses real data from a number of areas; examples utilize R code.
Analysis of Messy Data, Volume II: Nonreplicated Experiments
by Dallas E. Johnson and George A. Milliken
Researchers often do not analyze nonreplicated experiments statistically because they are unfamiliar with existing statistical methods that may be applicable. Analysis of Messy Data, Volume II details the statistical methods appropriate for nonreplicated experiments and explores ways to use statistical software to make the required computations feasible.
Analysis of Multivariate and High-Dimensional Data
by Inge Koch
'Big data' poses challenges that require both classical multivariate methods and contemporary techniques from machine learning and engineering. This modern text equips you for the new world - integrating the old and the new, fusing theory and practice and bridging the gap to statistical learning. The theoretical framework includes formal statements that set out clearly the guaranteed 'safe operating zone' for the methods and allow you to assess whether data is in the zone, or near enough. Extensive examples showcase the strengths and limitations of different methods with small classical data, data from medicine, biology, marketing and finance, high-dimensional data from bioinformatics, functional data from proteomics, and simulated data. High-dimension low-sample-size data gets special attention. Several data sets are revisited repeatedly to allow comparison of methods. Generous use of colour, algorithms, Matlab code, and problem sets complete the package. Suitable for master's/graduate students in statistics and researchers in data-rich disciplines.
Analysis of Neural Data (Springer Series in Statistics)
by Robert E. Kass, Uri T. Eden and Emery N. Brown
Continual improvements in data collection and processing have had a huge impact on brain research, producing data sets that are often large and complicated. By emphasizing a few fundamental principles, and a handful of ubiquitous techniques, Analysis of Neural Data provides a unified treatment of analytical methods that have become essential for contemporary researchers. Throughout the book ideas are illustrated with more than 100 examples drawn from the literature, ranging from electrophysiology, to neuroimaging, to behavior. By demonstrating the commonality among various statistical approaches the authors provide the crucial tools for gaining knowledge from diverse types of data. Aimed at experimentalists with only high-school level mathematics, as well as computationally-oriented neuroscientists who have limited familiarity with statistics, Analysis of Neural Data serves as both a self-contained introduction and a reference work.
Analysis of Numerical Methods (Dover Books on Mathematics)
by Herbert Bishop Keller and Eugene Isaacson
In this age of omnipresent digital computers and their capacity for implementing numerical methods, no applied mathematician, physical scientist, or engineer can be considered properly trained without some understanding of those methods. This text, suitable for advanced undergraduate and graduate-level courses, supplies the required knowledge, not just by listing and describing methods, but by analyzing them carefully and stressing techniques for developing new methods. Based on each author's more than 40 years of experience in teaching university courses, this book offers lucid, carefully presented coverage of norms, numerical solution of linear systems and matrix factoring, iterative solutions of nonlinear equations, eigenvalues and eigenvectors, polynomial approximation, numerical solution of differential equations, and more. No mathematical preparation beyond advanced calculus and elementary linear algebra (or matrix theory) is assumed. Examples and problems are given that extend or amplify the analysis in many cases.
Analysis of Operators on Function Spaces: The Serguei Shimorin Memorial Volume (Trends in Mathematics)
by Alexandru Aleman, Haakan Hedenmalm, Dmitry Khavinson and Mihai Putinar
This book contains both expository articles and original research in the areas of function theory and operator theory. The contributions include extended versions of some of the lectures by invited speakers at the conference held in honor of the memory of Serguei Shimorin at the Mittag-Leffler Institute in the summer of 2018. The book is intended for all researchers in the fields of function theory, operator theory and complex analysis in one or several variables. The expository articles, reflecting the current status of several well-established and highly dynamic areas of research, will be accessible and useful to advanced graduate students and young researchers in pure and applied mathematics, and also to engineers and physicists using complex analysis methods in their investigations.
Analysis of Ordinal Categorical Data (Wiley Series in Probability and Statistics #656)
by Alan Agresti
Statistical science’s first coordinated manual of methods for analyzing ordered categorical data, now fully revised and updated, continues to present applications and case studies in fields as diverse as sociology, public health, ecology, marketing, and pharmacy. Analysis of Ordinal Categorical Data, Second Edition provides an introduction to basic descriptive and inferential methods for categorical data, giving thorough coverage of new developments and recent methods. Special emphasis is placed on interpretation and application of methods, including an integrated comparison of the available strategies for analyzing ordinal data. Practitioners of statistics in government, industry (particularly pharmaceutical), and academia will want this new edition.
Analysis of Panel Data
by Cheng Hsiao
This book provides a comprehensive, coherent, and intuitive review of panel data methodologies that are useful for empirical analysis. Substantially revised from the second edition, it includes two new chapters on modeling cross-sectionally dependent data and dynamic systems of equations. Some of the more complicated concepts have been further streamlined. Other new material includes correlated random coefficient models, pseudo-panels, duration and count data models, quantile analysis, and alternative approaches for controlling the impact of unobserved heterogeneity in nonlinear panel data models.
Analysis of Panel Data (Econometric Society Monographs #Series Number 34)
by Cheng Hsiao
Now in its fourth edition, this comprehensive introduction to fundamental panel data methodologies provides insights into what is most essential in the panel data literature. A capstone to the forty-year career of a pioneer of panel data analysis, this new edition's primary contribution is its coverage of advancements in panel data analysis, a statistical method widely used to analyze two- or higher-dimensional panel data. The topics discussed in early editions have been reorganized and streamlined to comprehensively introduce panel econometric methodologies useful for identifying causal relationships among variables, supported by interdisciplinary examples and case studies. This book, featured in Cambridge's Econometric Society Monographs series, has been the leader in the field since the first edition. It is essential reading for researchers, practitioners and graduate students interested in the analysis of microeconomic behavior.
Analysis of Poverty Data by Small Area Estimation
by Monica Pratesi
A comprehensive guide to implementing SAE methods for poverty studies and poverty mapping. There is an increasingly urgent demand for poverty and living conditions data in relation to local areas and/or subpopulations. Policy makers and stakeholders need indicators and maps of poverty and living conditions in order to formulate and implement policies, (re)distribute resources, and measure the effect of local policy actions. Small Area Estimation (SAE) plays a crucial role in producing statistically sound estimates for poverty mapping. This book offers a comprehensive source of information regarding the use of SAE methods adapted to the distinctive features of poverty data derived from surveys and administrative archives. The book covers the definition of poverty indicators, data collection and integration methods, the impact of sampling design, weighting and variance estimation, the issue of SAE modelling and robustness, the spatio-temporal modelling of poverty, and the SAE of the distribution function of income and inequalities. Examples of data analyses and applications are provided, and the book is supported by a website describing scripts written in SAS or R software, which accompany the majority of the presented methods. Key features: presents a comprehensive review of SAE methods for poverty mapping; demonstrates the applications of SAE methods using real-life case studies; offers guidance on the use of routines and choice of websites from which to download them. Analysis of Poverty Data by Small Area Estimation offers an introduction to advanced techniques from both a practical and a methodological perspective, and will prove an invaluable resource for researchers actively engaged in organizing, managing and conducting studies on poverty.
Analysis of Pseudo-Differential Operators (Trends in Mathematics)
by Shahla Molahajloo and M. W. Wong
This volume, like its predecessors, is based on the special session on pseudo-differential operators, one of the many special sessions at the 11th ISAAC Congress, held at Linnaeus University in Sweden on August 14-18, 2017. It includes research papers presented at the session and invited papers by experts in fields that involve pseudo-differential operators. The first four chapters focus on the functional analysis of pseudo-differential operators on a spectrum of settings from Z to Rn to compact groups. Chapters 5 and 6 discuss operators on Lie groups and manifolds with edge, while the following two chapters cover topics related to probabilities. The final chapters then address topics in differential equations.
Analysis of Quantal Response Data
by Byron J.T. Morgan
This book takes the standard methods as the starting point and then describes a wide range of relatively new approaches and procedures designed to deal with more complicated data and experiments, including much recent research in the area. Throughout, attention is given to computing requirements, and the facilities available in large statistical packages such as BMDP, SAS and SPSS are also described.
Analysis of Questionnaire Data with R
by Bruno Falissard
While theoretical statistics relies primarily on mathematics and hypothetical situations, statistical practice is a translation of a question formulated by a researcher into a series of variables linked by a statistical tool. As with written material, there are almost always differences between the meaning of the original text and the translated text.
Analysis of Repeated Measures: A Practical Approach For Behavioural Scientists (Chapman And Hall/crc Monographs On Statistics And Applied Probability Ser. #41)
by Martin J. Crowder and David J. Hand
Repeated measures data arise when the same characteristic is measured on each case or subject at several times or under several conditions. There is a multitude of techniques available for analysing such data, and in the past this has led to some confusion. This book describes the whole spectrum of approaches, beginning with very simple and crude methods, working through intermediate techniques commonly used by consultant statisticians, and concluding with more recent and advanced methods. Those covered include multiple testing, response feature analysis, univariate analysis of variance approaches, multivariate analysis of variance approaches, regression models, two-stage linear models, approaches to categorical data, and techniques for analysing crossover designs. The theory is illustrated with examples, using real data brought to the authors during their work as statistical consultants.
Analysis of Repeated Measures Data
by M. Ataharul Islam and Rafiqul I. Chowdhury
This book presents a broad range of statistical techniques to address emerging needs in the field of repeated measures. It also provides a comprehensive overview of extensions of generalized linear models for the bivariate exponential family of distributions, which represent a new development in analysing repeated measures data. The demand for statistical models for correlated outcomes has grown rapidly in recent years, mainly due to the presence of two types of underlying associations: associations between outcomes, and associations between explanatory variables and outcomes. The book systematically addresses key problems arising in the modelling of repeated measures data, bearing in mind those factors that play a major role in estimating the underlying relationships between covariates and outcome variables for correlated outcome data. In addition, it presents new approaches to addressing current challenges in the field of repeated measures, with models based on conditional and joint probabilities. Markov models of first and higher orders are used for conditional models, in addition to conditional probabilities as a function of covariates. Similarly, joint models are developed using both marginal-conditional probabilities and joint probabilities as a function of covariates. In addition to generalized linear models for bivariate outcomes, the book highlights extended semi-parametric models for continuous failure time data and their applications, in order to include models for a broader range of outcome variables that researchers encounter in various fields. The book further discusses the problem of analysing repeated measures data for failure time in the competing risk framework, which now plays an increasingly important role in the fields of survival analysis, reliability and actuarial science. Details on how to perform the analyses are included in each chapter and supplemented with newly developed R packages and functions, along with SAS code and macros/IML. It is a valuable resource for researchers, graduate students and other users of statistical techniques for analysing repeated measures data.
Analysis of Safety Data of Drug Trials: An Update
by Ton J. Cleophas and Aeilko H. Zwinderman
In 2010, the 5th edition of the textbook "Statistics Applied to Clinical Studies" was published by Springer and has since been widely distributed. The primary object of clinical trials of new drugs is to demonstrate efficacy rather than safety. However, a trial in humans which does not adequately address safety is unethical, and the assessment of safety variables is an important element of the trial. An effective approach is to present summaries of the prevalence of adverse effects and their 95% confidence intervals. In order to estimate the probability that the differences between treatment and control groups occurred merely by chance, a statistical test can be performed. In the past few years, this rather crude method has been supplemented, and sometimes replaced, with more sophisticated and more sensitive methodologies based on machine learning clusters and networks, and on multivariate analyses. As a result, it is time that an updated version of safety data analysis was published. The issue of dependency also needs to be addressed. Adverse effects may be either dependent on or independent of the main outcome. For example, an adverse effect of alpha blockers is dizziness, and this occurs independently of the main outcome "alleviation of Raynaud's phenomenon". In contrast, the adverse effect "increased calorie intake" occurs with "increased exercise", and this adverse effect is very dependent on the main outcome "weight loss". Random heterogeneities, outliers, confounders, and interaction factors are common in clinical trials, and all of them can be considered kinds of adverse effects of the dependent type. Random regressions and analyses of variance, high-dimensional clustering, partial correlations, structural equation models, and Bayesian methods are helpful for their analysis. The current edition was written for non-mathematicians, particularly medical and health professionals and students. It provides examples of modern analytic methods so far largely unused in safety analysis. All of the 14 chapters have two core characteristics. First, they are intended for current usage and are particularly concerned with that usage. Second, they tell readers what they need to know in order to understand and apply the methods. For that purpose, step-by-step analyses of both hypothesized and real data examples are provided.
Analysis of Socio-Economic Conditions: Insights from a Fuzzy Multi-dimensional Approach (Routledge Advances in Social Economics)
by Gianni Betti and Achille Lemmi
Showcasing fuzzy set theory, this book highlights the enormous potential of fuzzy logic in helping to analyse the complexity of a wide range of socio-economic patterns and behaviour. The contributions to this volume explore the most up-to-date fuzzy-set methods for the measurement of socio-economic phenomena in a multidimensional and/or dynamic perspective. Thus far, fuzzy-set theory has primarily been utilised in the social sciences in the field of poverty measurement. These chapters examine the latest work in this area, while also exploring further applications including social exclusion, the labour market, educational mismatch, sustainability, quality of life and violence against women. The authors demonstrate that real-world situations are often characterised by imprecision, uncertainty and vagueness, which cannot be properly described by the classical set theory which uses a simple true–false binary logic. By contrast, fuzzy-set theory has been shown to be a powerful tool for describing the multidimensionality and complexity of social phenomena. This book will be of significant interest to economists, statisticians and sociologists utilising quantitative methods to explore socio-economic phenomena.
Analysis of Survival Data
by D.R. Cox
This monograph brings together many ideas on the analysis of survival data to present a comprehensive account of the field. The value of survival analysis is not confined to medical statistics, where the benefit of analysing data on such factors as life expectancy and the duration of periods of freedom from symptoms of a disease, as related to a treatment applied, individual histories and so on, is obvious. The techniques also find important applications in industrial life testing and a range of subjects from physics to econometrics. In the eleven chapters of the book the methods and their applications are discussed and illustrated by examples.
Analysis of Survival Data with Dependent Censoring: Copula-Based Approaches (SpringerBriefs in Statistics)
by Takeshi Emura and Yi-Hau Chen
This book introduces readers to copula-based statistical methods for analyzing survival data involving dependent censoring. Primarily focusing on likelihood-based methods performed under copula models, it is the first book solely devoted to the problem of dependent censoring. The book demonstrates the advantages of the copula-based methods in the context of medical research, especially with regard to cancer patients’ survival data. Needless to say, the statistical methods presented here can also be applied to many other branches of science, especially in reliability, where survival analysis plays an important role. The book can be used as a textbook for graduate coursework or a short course aimed at (bio-)statisticians. To deepen readers’ understanding of copula-based approaches, the book provides an accessible introduction to basic survival analysis and explains the mathematical foundations of copula-based survival models.