AI, Machine Learning and Deep Learning A Security Perspective
Today, AI and machine/deep learning have become the hottest areas in information technology. This book aims to provide a complete picture of the challenges and solutions to the security issues in various applications. It explains how different attacks can occur in advanced AI tools and the challe...
Other Authors: |  |
---|---|
Format: | Electronic book |
Language: | English |
Published: | Boca Raton, FL : CRC Press, [2023] |
Edition: | First edition |
Subjects: |  |
View at Universitat Ramon Llull Library: | https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009785407306719 |
Table of Contents:
- Cover
- Half Title
- Title Page
- Copyright Page
- Table of Contents
- Preface
- About the Editors
- Contributors
- Part I Secure AI/ML Systems: Attack Models
- 1 Machine Learning Attack Models
- 1.1 Introduction
- 1.2 Background
- 1.2.1 Notation
- 1.2.2 Support Vector Machines
- 1.2.3 Neural Networks
- 1.3 White-Box Adversarial Attacks
- 1.3.1 L-BFGS Attack
- 1.3.2 Fast Gradient Sign Method
- 1.3.3 Basic Iterative Method
- 1.3.4 DeepFool
- 1.3.5 Fast Adaptive Boundary Attack
- 1.3.6 Carlini and Wagner's Attack
- 1.3.7 Shadow Attack
- 1.3.8 Wasserstein Attack
- 1.4 Black-Box Adversarial Attacks
- 1.4.1 Transfer Attack
- 1.4.2 Score-Based Black-Box Attacks
- ZOO Attack
- Square Attack
- 1.4.3 Decision-Based Attack
- Boundary Attack
- HopSkipJump Attack
- Spatial Transformation Attack
- 1.5 Data Poisoning Attacks
- 1.5.1 Label Flipping Attacks
- 1.5.2 Clean Label Data Poisoning Attack
- Feature Collision Attack
- Convex Polytope Attack and Bullseye Polytope Attack
- 1.5.3 Backdoor Attack
- 1.6 Conclusions
- Acknowledgment
- Note
- References
- 2 Adversarial Machine Learning: A New Threat Paradigm for Next-Generation Wireless Communications
- 2.1 Introduction
- 2.1.1 Scope and Background
- 2.2 Adversarial Machine Learning
- 2.3 Challenges and Gaps
- 2.3.1 Development Environment
- 2.3.2 Training and Test Datasets
- 2.3.3 Repeatability, Hyperparameter Optimization, and Explainability
- 2.3.4 Embedded Implementation
- 2.4 Conclusions and Recommendations
- References
- 3 Threat of Adversarial Attacks to Deep Learning: A Survey
- 3.1 Introduction
- 3.2 Categories of Attacks
- 3.2.1 White-Box Attacks
- FGSM-based Method
- JSMA-based Method
- 3.2.2 Black-Box Attacks
- Transferability-based Approach
- Gradient Estimation-Based Approach
- 3.3 Attacks Overview
- 3.3.1 Attacks On Computer-Vision-Based Applications
- 3.3.2 Attacks On Natural Language Processing Applications
- 3.3.3 Attacks On Data Poisoning Applications
- 3.4 Specific Attacks In The Real World
- 3.4.1 Attacks On Natural Language Processing
- 3.4.2 Attacks Using Data Poisoning
- 3.5 Discussions and Open Issues
- 3.6 Conclusions
- References
- 4 Attack Models for Collaborative Deep Learning
- 4.1 Introduction
- 4.2 Background
- 4.2.1 Deep Learning (DL)
- Convolutional Neural Network
- 4.2.2 Collaborative Deep Learning (CDL)
- Architecture
- Collaborative Deep Learning Workflow
- 4.2.3 Deep Learning Security and Collaborative Deep Learning Security
- 4.3 Auror: An Automated Defense
- 4.3.1 Problem Setting
- 4.3.2 Threat Model
- Targeted Poisoning Attacks
- 4.3.3 AUROR Defense
- 4.3.4 Evaluation
- 4.4 A New CDL Attack: GAN Attack
- 4.4.1 Generative Adversarial Network (GAN)
- 4.4.2 GAN Attack
- Main Protocol
- 4.4.3 Experiment Setups
- Dataset
- System Architecture
- Hyperparameter Setup
- 4.4.4 Evaluation
- 4.5 Defend Against GAN Attack In IoT
- 4.5.1 Threat Model
- 4.5.2 Defense System
- 4.5.3 Main Protocols
- 4.5.4 Evaluation
- 4.6 Conclusions
- Acknowledgment
- References
- 5 Attacks On Deep Reinforcement Learning Systems: A Tutorial
- 5.1 Introduction
- 5.2 Characterizing Attacks on DRL Systems
- 5.3 Adversarial Attacks
- 5.4 Policy Induction Attacks
- 5.5 Conclusions and Future Directions
- References
- 6 Trust and Security of Deep Reinforcement Learning
- 6.1 Introduction
- 6.2 Deep Reinforcement Learning Overview
- 6.2.1 Markov Decision Process
- 6.2.2 Value-Based Methods
- V-value Function
- Q-value Function
- Advantage Function
- Bellman Equation
- 6.2.3 Policy-Based Methods
- 6.2.4 Actor-Critic Methods
- 6.2.5 Deep Reinforcement Learning
- 6.3 The Most Recent Reviews
- 6.3.1 Adversarial Attack On Machine Learning
- 6.3.1.1 Evasion Attack
- 6.3.1.2 Poisoning Attack
- 6.3.2 Adversarial Attack On Deep Learning
- 6.3.2.1 Evasion Attack
- 6.3.2.2 Poisoning Attack
- 6.3.3 Adversarial Deep Reinforcement Learning
- 6.4 Attacks On DRL Systems
- 6.4.1 Attacks On Environment
- 6.4.2 Attacks On States
- 6.4.3 Attacks On Policy Function
- 6.4.4 Attacks On Reward Function
- 6.5 Defenses Against DRL System Attacks
- 6.5.1 Adversarial Training
- 6.5.2 Robust Learning
- 6.5.3 Adversarial Detection
- 6.6 Robust DRL Systems
- 6.6.1 Secure Cloud Platform
- 6.6.2 Robust DRL Modules
- 6.7 A Scenario of Financial Stability
- 6.7.1 Automatic Algorithm Trading Systems
- 6.8 Conclusion and Future Work
- References
- 7 IoT Threat Modeling Using Bayesian Networks
- 7.1 Background
- 7.2 Topics of Chapter
- 7.3 Scope
- 7.4 Cyber Security In IoT Networks
- 7.4.1 Smart Home
- 7.4.2 Attack Graphs
- 7.5 Modeling With Bayesian Networks
- 7.5.1 Graph Theory
- 7.5.2 Probabilities and Distributions
- 7.5.3 Bayesian Networks
- 7.5.4 Parameter Learning
- 7.5.5 Inference
- 7.6 Model Implementation
- 7.6.1 Network Structure
- 7.6.2 Attack Simulation
- Selection Probabilities
- Vulnerability Probabilities Based On CVSS Scores
- Attack Simulation Algorithm
- 7.6.3 Network Parametrization
- 7.6.4 Results
- 7.7 Conclusions and Future Work
- References
- Part II Secure AI/ML Systems: Defenses
- 8 Survey of Machine Learning Defense Strategies
- 8.1 Introduction
- 8.2 Security Threats
- 8.3 Honeypot Defense
- 8.4 Poisoned Data Defense
- 8.5 Mixup Inference Against Adversarial Attacks
- 8.6 Cyber-Physical Techniques
- 8.7 Information Fusion Defense
- 8.8 Conclusions and Future Directions
- References
- 9 Defenses Against Deep Learning Attacks
- 9.1 Introduction
- 9.2 Categories of Defenses
- 9.2.1 Modified Training Or Modified Input
- Data Preprocessing
- Data Augmentation
- 9.2.2 Modifying Networks Architecture
- Network Distillation
- Model Regularization
- 9.2.3 Network Add-On
- Defense Against Universal Perturbations
- MagNet Model
- 9.4 Discussions and Open Issues
- 9.5 Conclusions
- References
- 10 Defensive Schemes for Cyber Security of Deep Reinforcement Learning
- 10.1 Introduction
- 10.2 Background
- 10.2.1 Model-Free RL
- 10.2.2 Deep Reinforcement Learning
- 10.2.3 Security of DRL
- 10.3 Certificated Verification For Adversarial Examples
- 10.3.1 Robustness Certification
- 10.3.2 System Architecture
- 10.3.3 Experimental Results
- 10.4 Robustness On Adversarial State Observations
- 10.4.1 State-Adversarial DRL for Deterministic Policies: DDPG
- 10.4.2 State-Adversarial DRL for Q-Learning: DQN
- 10.4.3 Experimental Results
- 10.5 Conclusion And Challenges
- Acknowledgment
- References
- 11 Adversarial Attacks On Machine Learning Models in Cyber-Physical Systems
- 11.1 Introduction
- 11.2 Support Vector Machine (SVM) Under Evasion Attacks
- 11.2.1 Adversary Model
- 11.2.2 Attack Scenarios
- 11.2.3 Attack Strategy
- 11.3 SVM Under Causality Availability Attack
- 11.4 Adversarial Label Contamination on SVM
- 11.4.1 Random Label Flips
- 11.4.2 Adversarial Label Flips
- 11.5 Conclusions
- References
- 12 Federated Learning and Blockchain: An Opportunity for Artificial Intelligence With Data Regulation
- 12.1 Introduction
- 12.2 Data Security And Federated Learning
- 12.3 Federated Learning Context
- 12.3.1 Type of Federation
- 12.3.1.1 Model-Centric Federated Learning
- 12.3.1.2 Data-Centric Federated Learning
- 12.3.2 Techniques
- 12.3.2.1 Horizontal Federated Learning
- 12.3.2.2 Vertical Federated Learning
- 12.4 Challenges
- 12.4.1 Trade-Off Between Efficiency and Privacy
- 12.4.2 Communication Bottlenecks
- 12.4.3 Poisoning
- 12.5 Opportunities
- 12.5.1 Leveraging Blockchain
- 12.6 Use Case: Leveraging Privacy, Integrity, And Availability For Data-Centric Federated Learning Using A Blockchain-Based Approach
- 12.6.1 Results
- 12.7 Conclusion
- References
- Part III Using AI/ML Algorithms for Cyber Security
- 13 Using Machine Learning for Cyber Security: Overview
- 13.1 Introduction
- 13.2 Is Artificial Intelligence Enough To Stop Cyber Crime?
- 13.3 Corporations' Use Of Machine Learning To Strengthen Their Cyber Security Systems
- 13.4 Cyber Attack/Cyber Security Threats And Attacks
- 13.4.1 Malware
- 13.4.2 Data Breach
- 13.4.3 Structured Query Language Injection (SQL-I)
- 13.4.4 Cross-Site Scripting (XSS)
- 13.4.5 Denial-Of-Service (DoS) Attack
- 13.4.6 Insider Threats
- 13.4.7 Birthday Attack
- 13.4.8 Network Intrusions
- 13.4.9 Impersonation Attacks
- 13.4.10 DDoS Attacks Detection On Online Systems
- 13.5 Different Machine Learning Techniques In Cyber Security
- 13.5.1 Support Vector Machine (SVM)
- 13.5.2 K-Nearest Neighbor (KNN)
- 13.5.3 Naïve Bayes
- 13.5.4 Decision Tree
- 13.5.5 Random Forest (RF)
- 13.5.6 Multilayer Perceptron (MLP)
- 13.6 Application Of Machine Learning
- 13.6.1 ML in Aviation Industry
- 13.6.2 Cyber ML Under Cyber Security Monitoring
- 13.6.3 Battery Energy Storage System (BESS) Cyber Attack Mitigation
- 13.6.4 Energy-Based Cyber Attack Detection in Large-Scale Smart Grids
- 13.6.5 IDS for Internet of Vehicles (IoV)
- 13.7 Deep Learning Techniques In Cyber Security
- 13.7.1 Deep Auto-Encoder
- 13.7.2 Convolutional Neural Networks (CNN)
- 13.7.3 Recurrent Neural Networks (RNNs)
- 13.7.4 Deep Neural Networks (DNNs)
- 13.7.5 Generative Adversarial Networks (GANs)
- 13.7.6 Restricted Boltzmann Machine (RBM)
- 13.7.7 Deep Belief Network (DBN)
- 13.8 Applications Of Deep Learning In Cyber Security