Author Knox, Steven W.

Title Machine Learning : Topics and Techniques
Published Newark : John Wiley & Sons, Incorporated, 2018

Description 1 online resource (383 pages)
Contents Intro; Title page; Copyright; Preface; Organization—How to Use This Book; Acknowledgments; About the Companion Website;
Chapter 1: Introduction—Examples from Real Life;
Chapter 2: The Problem of Learning; 2.1 Domain; 2.2 Range; 2.3 Data; 2.4 Loss; 2.5 Risk; 2.6 The Reality of the Unknown Function; 2.7 Training and Selection of Models, and Purposes of Learning; 2.8 Notation;
Chapter 3: Regression; 3.1 General Framework; 3.2 Loss; 3.3 Estimating the Model Parameters; 3.4 Properties of Fitted Values; 3.5 Estimating the Variance; 3.6 A Normality Assumption; 3.7 Computation; 3.8 Categorical Features; 3.9 Feature Transformations, Expansions, and Interactions; 3.10 Variations in Linear Regression; 3.11 Nonparametric Regression;
Chapter 4: Survey of Classification Techniques; 4.1 The Bayes Classifier; 4.2 Introduction to Classifiers; 4.3 A Running Example; 4.4 Likelihood Methods; 4.5 Prototype Methods; 4.6 Logistic Regression; 4.7 Neural Networks; 4.8 Classification Trees; 4.9 Support Vector Machines; 4.10 Postscript: Example Problem Revisited;
Chapter 5: Bias–Variance Trade-off; 5.1 Squared-Error Loss; 5.2 Arbitrary Loss;
Chapter 6: Combining Classifiers; 6.1 Ensembles; 6.2 Ensemble Design; 6.3 Bootstrap Aggregation (Bagging); 6.4 Bumping; 6.5 Random Forests; 6.6 Boosting; 6.7 Arcing; 6.8 Stacking and Mixture of Experts;
Chapter 7: Risk Estimation and Model Selection; 7.1 Risk Estimation via Training Data; 7.2 Risk Estimation via Validation or Test Data; 7.3 Cross-Validation; 7.4 Improvements on Cross-Validation; 7.5 Out-of-Bag Risk Estimation; 7.6 Akaike's Information Criterion; 7.7 Schwartz's Bayesian Information Criterion; 7.8 Rissanen's Minimum Description Length Criterion; 7.9 R² and Adjusted R²; 7.10 Stepwise Model Selection; 7.11 Occam's Razor;
Chapter 8: Consistency; 8.1 Convergence of Sequences of Random Variables; 8.2 Consistency for Parameter Estimation; 8.3 Consistency for Prediction; 8.4 There Are Consistent and Universally Consistent Classifiers; 8.5 Convergence to Asymptopia Is Not Uniform and May Be Slow;
Chapter 9: Clustering; 9.1 Gaussian Mixture Models; 9.2 k-Means; 9.3 Clustering by Mode-Hunting in a Density Estimate; 9.4 Using Classifiers to Cluster; 9.5 Dissimilarity; 9.6 k-Medoids; 9.7 Agglomerative Hierarchical Clustering; 9.8 Divisive Hierarchical Clustering; 9.9 How Many Clusters Are There? Interpretation of Clustering; 9.10 An Impossibility Theorem;
Chapter 10: Optimization; 10.1 Quasi-Newton Methods; 10.2 The Nelder–Mead Algorithm; 10.3 Simulated Annealing; 10.4 Genetic Algorithms; 10.5 Particle Swarm Optimization; 10.6 General Remarks on Optimization; 10.7 The Expectation-Maximization Algorithm;
Chapter 11: High-Dimensional Data; 11.1 The Curse of Dimensionality; 11.2 Two Running Examples; 11.3 Reducing Dimension While Preserving Information; 11.4 Model Regularization;
Chapter 12: Communication with Clients; 12.1 Binary Classification and Hypothesis Testing
Notes Print version record
Subject Machine learning.
Form Electronic book
ISBN 9781119438984
1119438985
9781119439073
1119439078