Author Chen, Pin-Yu

Title Adversarial Robustness for Machine Learning
Published San Diego : Elsevier Science & Technology, 2022

Description 1 online resource (300 p.)
Contents Front Cover -- Adversarial Robustness for Machine Learning -- Copyright -- Contents -- Biography -- Dr. Pin-Yu Chen (1986-present) -- Dr. Cho-Jui Hsieh (1985-present) -- Preface -- Part 1 Preliminaries -- 1 Background and motivation -- 1.1 What is adversarial machine learning? -- 1.2 Mathematical notations -- 1.3 Machine learning basics -- 1.4 Motivating examples -- Adversarial robustness accuracy -- what standard accuracy fails to tell -- Fast adaptation of adversarial robustness evaluation assets for emerging machine learning models -- 1.5 Practical examples of AI vulnerabilities
1.6 Open-source Python libraries for adversarial robustness -- Part 2 Adversarial attack -- 2 White-box adversarial attacks -- 2.1 Attack procedure and notations -- 2.2 Formulating attack as constrained optimization -- 2.3 Steepest descent, FGSM and PGD attack -- 2.4 Transforming to an unconstrained optimization problem -- 2.5 Another way to define attack objective -- 2.6 Attacks with different ℓp norms -- 2.7 Universal attack -- 2.8 Adaptive white-box attack -- 2.9 Empirical comparison -- 2.10 Extended reading -- 3 Black-box adversarial attacks -- 3.1 Evasion attack taxonomy
3.2 Soft-label black-box attack -- 3.3 Hard-label black-box attack -- 3.4 Transfer attack -- 3.5 Attack dimension reduction -- 3.6 Empirical comparisons -- 3.7 Proof of Theorem 1 -- 3.8 Extended reading -- 4 Physical adversarial attacks -- 4.1 Physical adversarial attack formulation -- 4.2 Examples of physical adversarial attacks -- 4.3 Empirical comparison -- 4.4 Extended reading -- 5 Training-time adversarial attacks -- 5.1 Poisoning attack -- 5.2 Backdoor attack -- 5.3 Empirical comparison -- 5.4 Case study: distributed backdoor attacks on federated learning -- 5.5 Extended reading
6 Adversarial attacks beyond image classification -- 6.1 Data modality and task objectives -- 6.2 Audio adversarial example -- 6.3 Feature identification -- 6.4 Graph neural network -- 6.5 Natural language processing -- Sentence classification -- Sequence-to-sequence translation -- 6.6 Deep reinforcement learning -- 6.7 Image captioning -- 6.8 Weight perturbation -- 6.9 Extended reading -- Part 3 Robustness verification -- 7 Overview of neural network verification -- 7.1 Robustness verification versus adversarial attack -- 7.2 Formulations of robustness verification
7.3 Applications of neural network verification -- Safety-critical control systems -- Natural language processing -- Machine learning interpretability -- 7.4 Extended reading -- 8 Incomplete neural network verification -- 8.1 A convex relaxation framework -- 8.2 Linear bound propagation methods -- The optimal layerwise convex relaxation -- 8.3 Convex relaxation in the dual space -- 8.4 Recent progresses in linear relaxation-based methods -- 8.5 Extended reading -- 9 Complete neural network verification -- 9.1 Mixed integer programming -- 9.2 Branch and bound -- 9.3 Branch-and-bound with linear bound propagation
Notes Description based upon print version of record
Form Electronic book
Author Hsieh, Cho-Jui
ISBN 9780128242575
0128242574