Author Kass, Robert E., author.

Title Analysis of neural data / Robert E. Kass, Uri T. Eden, and Emery N. Brown
Published New York : Springer, [2014]

Description 1 online resource (790 pages) : illustrations
Series Springer series in statistics, ISSN 0172-7397
Contents Introduction -- Exploring Data -- Probability and Random Variables -- Random Vectors -- Important Probability Distributions -- Sequences of Random Variables -- Estimation and Uncertainty -- Estimation in Theory and Practice -- Uncertainty and the Bootstrap -- Statistical Significance -- General Methods for Testing Hypotheses -- Linear Regression -- Analysis of Variance -- Generalized Regression -- Nonparametric Regression -- Bayesian Methods -- Multivariate Analysis -- Time Series -- Point Processes -- Appendix: Mathematical Background -- Example Index -- Index -- Bibliography
Contents note continued: 9.2.3. The nonparametric bootstrap draws pseudo-data from the empirical cdf -- 9.3. Discussion of Alternative Methods -- 10.1. Chi-Squared Statistics -- 10.1.1. The chi-squared statistic compares model-fitted values to observed values -- 10.1.2. For multinomial data, the chi-squared statistic follows, approximately, a χ² distribution -- 10.1.3. The rarity of a large chi-squared is judged by its p-value -- 10.1.4. Chi-squared may be used to test independence of two traits -- 10.2. Null Hypotheses -- 10.2.1. Statistical models are often considered null hypotheses -- 10.2.2. Null hypotheses sometimes specify a particular value of a parameter within a statistical model -- 10.2.3. Null hypotheses may also specify a constraint on two or more parameters -- 10.3. Testing Null Hypotheses -- 10.3.1. The hypothesis H0: μ = μ0 for a normal random variable is a paradigm case -- 10.3.2. For large samples the hypothesis H0: θ = θ0 may be tested using the ratio (θ̂ − θ0)/SE(θ̂) -- 10.3.3. For small samples it is customary to test H0: μ = μ0 using a t statistic -- 10.3.4. For two independent samples, the hypothesis H0: μ1 = μ2 may be tested using the t-ratio -- 10.3.5. Computer simulation may be used to find p-values -- 10.3.6. The Rayleigh test can provide evidence against a uniform distribution of angles -- 10.3.7. The fit of a continuous distribution may be assessed with the Kolmogorov-Smirnov test -- 10.4. Interpretation and Properties of Tests -- 10.4.1. Statistical tests should have the correct probability of falsely rejecting H0, at least approximately -- 10.4.2. A confidence interval for θ may be used to test H0: θ = θ0 -- 10.4.3. Statistical tests are evaluated in terms of their probability of correctly rejecting H0 -- 10.4.4. The performance of a statistical test may be displayed by the ROC curve -- 10.4.5. The p-value is not the probability that H0 is true -- 10.4.6. Borderline p-values are especially worrisome with low power -- 10.4.7. The p-value is conceptually distinct from type one error -- 10.4.8. A non-significant test does not, by itself, indicate evidence in support of H0 -- 10.4.9. One-tailed tests are sometimes used -- 11.1. Likelihood Ratio Tests -- 11.1.1. The likelihood ratio may be used to test H0: θ = θ0 -- 11.1.2. P-values for the likelihood ratio test of H0: θ = θ0 may be obtained from the χ² distribution or by simulation -- 11.1.3. The likelihood ratio test of H0: (ω, θ) = (ω, θ0) plugs in the MLE of ω, obtained under H0 -- 11.1.4. The likelihood ratio test reproduces, exactly or approximately, many commonly-used significance tests -- 11.1.5. The likelihood ratio test is optimal for simple hypotheses -- 11.1.6. To evaluate alternative non-nested models the likelihood ratio statistic may be adjusted for parameter dimensionality -- 11.2. Permutation and Bootstrap Tests -- 11.2.1. Permutation tests consider all possible permutations of the data that would be consistent with the null hypothesis -- 11.2.2. The bootstrap samples with replacement -- 11.3. Multiple Tests -- 11.3.1. When multiple independent data sets are used to test the same hypothesis, the p-values are easily combined -- 11.3.2. When multiple hypotheses are considered, statistical significance should be adjusted -- 12.1. The Linear Regression Model -- 12.1.1. Linear regression assumes linearity of f(x) and independence of the noise contributions at the various observed x values -- 12.1.2. The relative contribution of the linear signal to the total response variation is summarized by R² -- 12.1.3. Theory shows that if the model were correct then the least-squares estimate would be likely to be accurate for large samples -- 12.2. Checking Assumptions -- 12.2.1. Residuals should represent unstructured noise -- 12.2.2. Graphical examination of (x, y) data can yield crucial information -- 12.2.3. Failure of independence among the errors can have substantial consequences -- 12.3. Evidence of a Linear Trend -- 12.3.1. Confidence intervals for slopes are based on SE, according to the general formula -- 12.3.2. Evidence in favor of a linear trend can be obtained from a t-test concerning the slope -- 12.3.3. The fitted relationship may not be accurate outside the range of the observed data -- 12.4. Correlation and Regression -- 12.4.1. The correlation coefficient is determined by the regression coefficient and the standard deviations of x and y -- 12.4.2. Association is not causation -- 12.4.3. Confidence intervals for ρ may be based on a transformation of r -- 12.4.4. When noise is added to two variables, their correlation diminishes -- 12.5. Multiple Linear Regression -- 12.5.1. Multiple regression estimates the linear relationship of the response with each explanatory variable, while adjusting for the other explanatory variables -- 12.5.2. Response variation may be decomposed into signal and noise sums of squares -- 12.5.3. Multiple regression may be formulated concisely using matrices -- 12.5.4. The linear regression model applies to polynomial regression and cosine regression -- 12.5.5. Effects of correlated explanatory variables cannot be interpreted separately -- 12.5.6. In multiple linear regression interaction effects are often important -- 12.5.7. Regression models with many explanatory variables often can be simplified -- 12.5.8. Multiple regression can be treacherous -- 13.1. One-Way and Two-Way ANOVA -- 13.1.1. ANOVA is based on a linear model -- 13.1.2. One-way ANOVA decomposes total variability into average group variability and average individual variability, which would be roughly equal under the null hypothesis -- 13.1.3. When there are only two groups, the ANOVA F-test reduces to a t-test -- 13.1.4. Two-way ANOVA assesses the effects of one factor while adjusting for the other factor -- 13.1.5. When the variances are inhomogeneous across conditions a likelihood ratio test may be used -- 13.1.6. More complicated experimental designs may be accommodated by ANOVA -- 13.1.7. Additional analyses, involving multiple comparisons, may require adjustments to p-values -- 13.2. ANOVA as Regression -- 13.2.1. The general linear model includes both regression and ANOVA models -- 13.2.2. In multi-way ANOVA, interactions are often of interest -- 13.2.3. ANOVA comparisons may be adjusted using analysis of covariance -- 13.3. Nonparametric Methods -- 13.3.1. Distribution-free nonparametric tests may be obtained by replacing data values with their ranks -- 13.3.2. Permutation and bootstrap tests may be used to test ANOVA hypotheses -- 13.4. Causation, Randomization, and Observational Studies -- 13.4.1. Randomization eliminates effects of confounding factors -- 13.4.2. Observational studies can produce substantial evidence -- 14.1. Logistic Regression, Poisson Regression, and Generalized Linear Models -- 14.1.1. Logistic regression may be used to analyze binary responses -- 14.1.2. In logistic regression, ML is used to estimate the regression coefficients and the likelihood ratio test is used to assess evidence of a logistic-linear trend with x -- 14.1.3. The logit transformation is one among many that may be used for binomial responses, but it is the most commonly applied -- 14.1.4. The usual Poisson regression model transforms the mean λ to log λ -- 14.1.5. In Poisson regression, ML is used to estimate coefficients and the likelihood ratio test is used to examine trends -- 14.1.6. Generalized linear models extend regression methods to response distributions from exponential families
Summary Continual improvements in data collection and processing have had a huge impact on brain research, producing data sets that are often large and complicated. By emphasizing a few fundamental principles and a handful of ubiquitous techniques, Analysis of Neural Data provides a unified treatment of analytical methods that have become essential for contemporary researchers. Throughout the book, ideas are illustrated with more than 100 examples drawn from the literature, ranging from electrophysiology to neuroimaging to behavior. By demonstrating the commonality among various statistical approaches, the authors provide the crucial tools for gaining knowledge from diverse types of data. Aimed at experimentalists with only high-school-level mathematics, as well as computationally oriented neuroscientists who have limited familiarity with statistics, Analysis of Neural Data serves as both a self-contained introduction and a reference work
Analysis life sciences
medicine
applied statistics
neuroscience
statistical analysis
statistics
Statistics (General)
Bibliography Includes bibliographical references and index
Notes Description based on print version record
Subject Neural analyzers.
Neuropsychological tests.
Neurosciences.
Neurons.
MEDICAL -- Physiology.
SCIENCE -- Life Sciences -- Human Anatomy & Physiology.
Form Electronic book
Author Eden, Uri T., author.
Brown, E. N. (Emery N.), author.
ISBN 9781461496021
1461496020