SafeML: Safety Monitoring of Machine Learning Classifiers

Exploring techniques for safety monitoring of machine learning classifiers.

Abstract

Ensuring safety and explainability of machine learning (ML) is a topic of increasing relevance as data-driven applications venture into safety-critical domains, which are traditionally committed to high safety standards that cannot be satisfied by testing alone when the system under test is an otherwise inaccessible black box. The interaction between safety and security in particular is a central challenge, as security violations can lead to compromised safety. This project contributes to addressing both safety and security within a single concept of protection, applicable during the operation of ML systems: active monitoring of the behaviour and the operational context of the data-driven system, based on distance measures of the empirical cumulative distribution function (ECDF). We investigate abstract datasets such as XOR, Spiral, and Circle, as well as well-known security-specific datasets for intrusion detection in simulated network traffic, using distributional-shift detection measures including the Kolmogorov-Smirnov, Kuiper, Anderson-Darling, Wasserstein and mixed Wasserstein-Anderson-Darling measures. Our preliminary findings indicate that the approach can provide a basis for detecting whether the application context of an ML component is valid in the safety-security sense.
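
To make the ECDF-based measures concrete, the sketch below computes the two-sample Kolmogorov-Smirnov distance directly from the empirical CDFs. This is a minimal MATLAB illustration, not the toolbox code: the function name ecdf_ks_distance is ours (it would live in its own ecdf_ks_distance.m file), and the repository's actual implementations are under Implementation_in_MATLAB/ECDF_Based_Distance_Functions.

function d = ecdf_ks_distance(x, y)
% Two-sample Kolmogorov-Smirnov distance: the largest vertical gap between
% the empirical CDFs of the samples x and y. (The Kuiper distance would
% instead be max(Fx - Fy) + max(Fy - Fx).)
    pts = sort([x(:); y(:)]);                % pooled evaluation points
    Fx  = arrayfun(@(t) mean(x <= t), pts);  % ECDF of x at each point
    Fy  = arrayfun(@(t) mean(y <= t), pts);  % ECDF of y at each point
    d   = max(abs(Fx - Fy));
end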

Description

The following figure illustrates the flowchart of the proposed approach, which consists of two main phases: a training phase and an application phase.

A) The training phase is an offline procedure in which a trusted dataset is used to train the intelligent algorithm, which can be a machine learning or deep learning algorithm. This study focuses on the classification ability of machine learning. The classifier is therefore trained on a trusted dataset, and its performance is measured with existing KPIs. Meanwhile, the probability density function and statistical parameters of each class are estimated and stored for later comparison.

B) The application phase is an online procedure in which real-time, unlabelled data is fed to the system. For example, consider an autonomous car that has been trained to detect obstacles so that it can prevent collisions. In the application phase, the trained classifier must distinguish between the road and other objects. A critical issue in this phase is that the data has no labels, so it cannot be assured that the classifier operates as accurately as it did in the training phase. Instead, the untrusted labels produced by the classifier are used and, as in the training phase, the cumulative distribution function (CDF) and statistical parameters of each class are extracted. The CDF-based statistical difference between each class in the training phase and the application phase is then used to estimate the accuracy. If the difference between the estimated accuracy and the expected confidence is very low, the classifier's results and accuracy can be trusted (in this example, the autonomous car continues its operation). If the difference is moderate, the system can request more data and re-evaluate to confirm the distance. If the difference is large, the classifier's results and accuracy are no longer valid, and the system should use an alternative approach or notify a human agent (in this example, the system asks the driver to take control of the car).

Figure 1. Flowchart of the proposed approach
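
The decision logic at the end of the application phase might look like the following sketch. The thresholds and variable names are hypothetical, and ecdf_ks_distance is the illustrative function from the Abstract section above, not toolbox code.

% Offline: per-class samples of a feature from the trusted training set.
% Online: runtime samples grouped by the classifier's own (untrusted) labels.
d = ecdf_ks_distance(trusted_class_samples, runtime_class_samples);

low  = 0.05;  % hypothetical "trust" threshold
high = 0.15;  % hypothetical "reject" threshold

if d <= low
    disp('Distributions match: trust the classifier and continue operation.');
elseif d <= high
    disp('Borderline: buffer more runtime data and re-evaluate the distance.');
else
    disp('Large shift: use an alternative approach or notify a human agent.');
end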

From SafeML Toward Explainable AI (XAI)

The proposed method is not only suitable for safety evaluation of machine learning classifiers but can also be used at run-time as a form of eXplainable AI (XAI). In one of our examples on a security dataset, we show how SafeML can be used as XAI.
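
One way such run-time explanations could work is sketched below, assuming feature matrices X_train and X_runtime (one column per feature) and the illustrative ecdf_ks_distance defined earlier: the features whose distributions have drifted most are reported as the explanation for an alarm.

% Rank features by how far their runtime distribution has drifted from the
% trusted training distribution; the top-ranked features explain the alarm.
n_features = size(X_train, 2);
dist = zeros(n_features, 1);
for k = 1:n_features
    dist(k) = ecdf_ks_distance(X_train(:, k), X_runtime(:, k));
end
[~, order] = sort(dist, 'descend');
fprintf('Feature %d drifted most (KS distance %.3f)\n', order(1), dist(order(1)));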

Case Studies

Contributors

Publication

Aslansefat, K., Sorokos, I., Whiting, D., Tavakoli Kolagari, R., & Papadopoulos, Y. (2020). SafeML: Safety Monitoring of Machine Learning Classifiers through Statistical Difference Measure. [arXiv][ResearchGate][DeepAI]

Cite as

@article{Aslansefat2020SafeML,
   author  = {{Aslansefat}, Koorosh and {Sorokos}, Ioannis and {Whiting}, Declan and
              {Tavakoli Kolagari}, Ramin and {Papadopoulos}, Yiannis},
   title   = "{SafeML: Safety Monitoring of Machine Learning Classifiers through Statistical Difference Measure}",
   journal = {arXiv e-prints},
   year    = {2020},
   url     = {https://arxiv.org/abs/2005.13166},
   eprint  = {2005.13166},
}

Related Works

Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete Problems in AI Safety. [arXiv]

Irving, G., Christiano, P., & Amodei, D. (2018). AI Safety via Debate. [arXiv]

Schulam, P., & Saria, S. (2019). Can You Trust This Prediction? Auditing Pointwise Reliability After Learning. [arXiv]

Kläs, M., & Sembach, L. (2019). Uncertainty Wrappers for Data-driven Models. [Springer]

Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., & Vechev, M. (2018). AI2: Safety and robustness certification of neural networks with abstract interpretation. In IEEE Symposium on Security and Privacy (SP). [IEEE]

Related Projects

SafeNN Project: This project builds on the idea of SafeML and aims to evaluate the safety of Deep Neural Networks using statistical distance measures.

NN-Dependability-KIT Project: Toolbox for software dependability engineering of artificial neural networks.

Confident-NN Project: Toolbox for empirical confidence estimation in neural networks-based classification.

SafeAI Project: A collection of toolboxes, including DiffAI, DL2 and ERAN, from the SRI Lab at ETH Zürich, focusing on robust, safe and interpretable AI.

License

This framework is available under an MIT License.

Acknowledgments

We would like to thank the EDF Energy R&D UK Centre, the AURA Innovation Centre and the University of Hull for their support.

Cite As

Aslansefat, Koorosh, et al. “SafeML: Safety Monitoring of Machine Learning Classifiers Through Statistical Difference Measures.” Model-Based Safety and Assessment, Springer International Publishing, 2020, pp. 197–211, doi:10.1007/978-3-030-58920-2_13.

MATLAB Release Compatibility

Created with R2020a. Compatible with any release.

Platform Compatibility

Windows, macOS, Linux.


Implementation_in_MATLAB

Implementation_in_MATLAB/CICDDoS2019_Security_Dataset

Implementation_in_MATLAB/ECDF_Based_Distance_Functions

Implementation_in_MATLAB/Explainable_AI

Implementation_in_MATLAB/NSL_KDD_Security_Dataset

Implementation_in_MATLAB/PDF_Based_Distance_Functions

Version 1.0

To view or report issues in this GitHub add-on, visit the GitHub Repository.