Using Particle Swarm Optimization in Testing
In this talk, I will present recent work on the application of the Particle Swarm Optimization (PSO) algorithm to software testing. I will begin by reviewing the main concepts underlying PSO. Our first line of work considers the use of PSO in search spaces whose elements are trees rather than the usual vectors; we apply this framework to generate test cases by evolving a population of trees. Our second line of work uses swarms in the scope of mutation testing: given a set of mutants, we use a swarm to select hard-to-kill mutants.
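As background, the canonical global-best PSO update over vector search spaces can be sketched as follows. This is a generic illustration of the algorithm reviewed in the talk, not the tree-based variant; the function `pso`, its parameter values and the sphere objective are illustrative choices, not taken from the speaker's work.

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [-5, 5]^dim with a basic global-best PSO."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                     # personal best positions
    pbest_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (pbest) + social pull (gbest)
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

# minimize the sphere function as a toy example
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

The tree-based setting of the talk replaces the vector positions `X` and the arithmetic velocity update with operators defined over tree structures; the surrounding swarm logic stays the same.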
Manuel Núñez is a Professor in the Department of Computer Systems and Computation at the Complutense University of Madrid, Spain. He holds a doctorate in Mathematics and Computer Science, obtained in 1996, and a Master's degree in Economics, obtained in 2002. Professor Núñez has done research in the broad field of formal methods. Currently, he is interested in testing complex systems using both formal and heuristic approaches.
Professor Núñez is a member of the IEEE SMC Technical Committee on Computational Collective Intelligence, the Board of Directors of the TAROT Summer School on Software Testing, and the A-MOST, DisCoTec and ICCCI Steering Committees. He serves on the editorial boards of several scientific journals and has published more than 150 research papers in international journals and conferences.
Dynamic Predictive Maintenance with Self-Adaptive Evolving Forecast Models
Predictive maintenance relies on real-time monitoring and diagnosis of system components as well as of process and production chains. The primary strategy is to take action when items or parts show behaviors that typically precede machine failure, reduced performance or a downtrend in product quality.
In the first stage, it is thus of utmost importance to recognize potentially arising problems as early as possible. A core component of predictive maintenance systems is therefore the use of techniques from the fields of forecasting and prognostics, which can rely either on process parameter settings (static case) or on process values recorded over time (dynamic case). We will focus on the latter and demonstrate a robust learning procedure for time-series-based forecast models, which can deal with very high-dimensional batch-process modeling settings. Furthermore, our approach allows the forecast models to be updated online, on the fly, whenever required by intrinsic system dynamics (e.g., varying product types, charges, settings or environmental influences), leading to the paradigm of self-adaptive forecast models. This is achieved i) by recursive adaptation of model parameters to track permanent changes and to increase model significance and accuracy, and ii) by evolving new model components (rules) on the fly in order to account for variations in the process which require a change in the model's degree of non-linearity. We will also present enhanced methods of model adaptation for increased flexibility to properly compensate system drifts and shifts, such as dynamic forgetting, rule merging and splitting, as well as an incremental update of the latent-variable subspace as a variant of incremental feature-space transformation.
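The recursive adaptation of model parameters described above is commonly realized with recursive least squares using a forgetting factor, which down-weights old samples so the model can track drift. The sketch below illustrates this generic technique under that assumption; it is not the speaker's actual implementation, and `rls_update`, the forgetting factor `lam` and the demo constants are all illustrative.

```python
import random

def rls_update(w, P, x, y, lam=0.99):
    """One recursive-least-squares step for a linear model y = w^T x,
    with exponential forgetting factor lam < 1 (older data fades out)."""
    n = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [v / denom for v in Px]                 # gain vector
    err = y - sum(w[i] * x[i] for i in range(n))
    w = [w[i] + k[i] * err for i in range(n)]   # parameter update
    # covariance update: P <- (P - k x^T P) / lam  (P stays symmetric)
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return w, P

# stream noise-free samples from a known linear process y = 2*x1 - x2
rng = random.Random(1)
w = [0.0, 0.0]
P = [[1000.0 if i == j else 0.0 for j in range(2)] for i in range(2)]
for _ in range(200):
    x = [rng.uniform(-1, 1) for _ in range(2)]
    y = 2.0 * x[0] - 1.0 * x[1]
    w, P = rls_update(w, P, x, y)
```

In an evolving fuzzy system, an update of this kind runs per rule on the rule's local (consequent) parameters, while a separate mechanism adds, merges or splits rules as the process changes.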
In the second stage, the evolved and incrementally adaptive forecast models can be used as surrogates in a fully automated optimization procedure, avoiding operator intervention and time-intensive manual reactions to predicted downtrends. Often there are some "control wheels", usually machine parameter settings, which are able to change the behavior of the production process in order to meet the quality standards. Such settings may indeed have been optimized beforehand (based on expert knowledge or in a static optimization process), but may not take into account dynamically changing factors during production. In other cases, such settings could not be optimized beforehand at all (as this would require time-intensive design-of-experiments cycles), so a default parametrization is often used which is suboptimal for the final product quality. We will define the optimization problem, which typically leads to a multi-objective problem with a very high-dimensional input parameter space; we will then demonstrate how a reduction to smaller problems can be achieved, and how the reduced problems can be solved more quickly and robustly with multi-objective evolutionary algorithms.
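To illustrate the last step, a deliberately minimal multi-objective evolutionary loop based on Pareto dominance might look as follows. This is a toy sketch of the general technique (real applications would use a full MOEA such as NSGA-II, with the forecast model as the objective surrogate); `simple_moea` and all its parameters are invented for illustration.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def simple_moea(objectives, pop_size=30, iters=80, sigma=0.3, seed=0):
    """Minimal (mu+lambda) loop: mutate, merge, keep non-dominated solutions."""
    rng = random.Random(seed)
    pop = [rng.uniform(-5.0, 7.0) for _ in range(pop_size)]
    for _ in range(iters):
        merged = pop + [x + rng.gauss(0.0, sigma) for x in pop]  # parents + children
        scored = [(x, objectives(x)) for x in merged]
        # non-dominated front of the merged population
        front = [p for i, p in enumerate(scored)
                 if not any(dominates(q[1], p[1])
                            for j, q in enumerate(scored) if j != i)]
        # survivors: the front, padded with the best remaining by objective sum
        rest = sorted((p for p in scored if p not in front),
                      key=lambda p: sum(p[1]))
        pop = [x for x, _ in (front + rest)[:pop_size]]
    return pop

# two conflicting quality objectives; the Pareto-optimal set is the interval [0, 2]
final = simple_moea(lambda x: (x * x, (x - 2.0) * (x - 2.0)))
```

Because each candidate evaluation queries the cheap surrogate model rather than the real production process, the evolutionary search can afford the thousands of evaluations it needs.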
The talk will be concluded with a real-world application scenario from a (micro-fluidic) chip production site, where the self-adaptive time-series-based forecast models, together with the process optimization component, have been successfully applied in order to detect product-quality downtrends at an earlier stage and even to suggest modified process-value trends and associated machine parameter settings to improve and assure high-level product quality at any time.
Edwin Lughofer received his PhD degree from the Johannes Kepler University Linz (JKU) in 2005. He is currently Key Researcher with the Fuzzy Logic Laboratorium Linz / Department of Knowledge-Based Mathematical Systems (JKU) in the Softwarepark Hagenberg. He has participated in several basic and applied research projects at the European and national level, with a specific focus on topics of Industry 4.0 and FoF (Factories of the Future). He has published more than 200 publications in the fields of evolving fuzzy systems, machine learning and vision, data stream mining, chemometrics, active learning, classification and clustering, fault detection and diagnosis, quality control and predictive maintenance, including 80 journal papers in SCI-expanded impact journals, a monograph on 'Evolving Fuzzy Systems' (Springer, Heidelberg Berlin), an edited book on 'Learning in Non-stationary Environments' (Springer, New York) and an edited book on 'Predictive Maintenance in Dynamic Systems' (Springer, New York). In sum, his publications have received 6280 citations, achieving an h-index of 42. He is an associate editor of the international journals Information Sciences, IEEE Transactions on Fuzzy Systems, Evolving Systems, Information Fusion, International Journal of Big Data and Analytics in Healthcare, Soft Computing, and Complex and Intelligent Systems; he was the general chair of the IEEE Conference on EAIS 2014 in Linz, the publication chair of IEEE EAIS 2015, 2016, 2017, 2018 and 2020, the program co-chair of the International Conference on Machine Learning and Applications (ICMLA) 2018, the tutorial chair of the IEEE SSCI Conference 2018, the publication chair of the 3rd INNS Conference on Big Data and Deep Learning 2018, and the area chair of the FUZZ-IEEE 2015 conference in Istanbul. He has co-organized around 12 special issues and more than 20 special sessions in international journals and conferences.
In 2006 he received the best paper award at the International Symposium on Evolving Fuzzy Systems, in 2013 the best paper award at the IFAC Conference on Manufacturing Modelling, Management and Control (800 participants), and in 2016 the best paper award at the IEEE Intelligent Systems Conference.