Description 
1 online resource (vi, 402 pages) : illustrations 
Contents 
Cover -- Half-title page -- Title page -- Copyright page -- Contents -- Preface -- 1 Introduction -- 1.1 Motivation: The Pitfalls of Large-Dimensional Statistics -- 1.1.1 The Big Data Era: When n Is No Longer Much Larger than p -- 1.1.2 Sample Covariance Matrices in the Large n,p Regime -- 1.1.3 Kernel Matrices of Large-Dimensional Data -- The Nontrivial Classification Regime -- Asymptotic Loss of Pairwise Distance Discrimination -- Explaining Kernel Methods with Random Matrix Theory -- Do Real Data Follow Small- or Large-Dimensional Intuitions? -- 1.1.4 Summarizing -- 1.2 Random Matrix Theory as an Answer -- 1.2.1 Which Theory and Why? -- A Point of History -- Resolvents, Gaussian Tools, and Concentration of Measure Theory -- 1.2.2 The Double Asymptotics: Turning the Curse of Dimensionality into a Dimensionality Blessing -- Why Random Matrix Theory to Study the Large n,p Regime? -- The Case of Machine Learning -- 1.2.3 Analyze, Understand, and Improve Large-Dimensional Machine Learning Methods -- From Low- to Large-Dimensional Intuitions -- Core Random Matrices in Machine Learning Algorithms -- Performance Analysis: Spectral Properties and Functionals -- Directions of Improvement and New Ideas -- 1.2.4 Exploiting Universality: From Large-Dimensional Gaussian Vectors to Real Data -- Theory versus Practice -- Concentrated Random Vectors and Real Data Modeling -- 1.3 Outline and Online Toolbox -- 1.3.1 Organization of the Book -- 1.3.2 Online Codes -- 2 Random Matrix Theory -- 2.1 Fundamental Objects -- 2.1.1 The Resolvent -- 2.1.2 Spectral Measure and Stieltjes Transform -- 2.1.3 Cauchy's Integral, Linear Eigenvalue Functionals, and Eigenspaces -- 2.1.4 Deterministic and Random Equivalents -- 2.2 Foundational Random Matrix Results -- 2.2.1 Key Lemmas and Identities -- Resolvent Identities -- Perturbation Identities -- Probability Identities -- 2.2.2 The Marčenko–Pastur and Semicircle Laws -- The Marčenko–Pastur Law -- Intuitive idea -- Detailed proof of Theorem 2.4 -- The "Gaussian Method" Alternative -- Wigner Semicircle Law -- 2.2.3 Large-Dimensional Sample Covariance Matrices and Generalized Semicircles -- Large Sample Covariance Matrix Model and its Generalizations -- Generalized Semicircle Law with a Variance Profile -- 2.3 Advanced Spectrum Considerations for Sample Covariances -- 2.3.1 Limiting Spectrum
Summary 
This book presents a unified theory of random matrices for applications in machine learning, offering a large-dimensional data vision that exploits concentration and universality phenomena. This enables a precise understanding, and possible improvements, of the core mechanisms at play in real-world machine learning algorithms. The book opens with a thorough introduction to the theoretical basics of random matrices, which serves as support for a wide scope of applications ranging from SVMs, through semi-supervised learning, unsupervised spectral clustering, and graph methods, to neural networks and deep learning. For each application, the authors discuss small- versus large-dimensional intuitions of the problem, followed by a systematic random matrix analysis of the resulting performance and possible improvements. All concepts, applications, and variations are illustrated numerically on synthetic as well as real-world data, with MATLAB and Python code provided on the accompanying website.
Bibliography 
Includes bibliographical references and index 
Notes 
Description based on online resource; title from digital title page (viewed on July 18, 2022) 
Subject 
Machine learning -- Mathematics


Matrix analytic methods.


Form 
Electronic book

Author 
Liao, Zhenyu, 1992- author.

ISBN 
9781009128490 

1009128493 
