
Keynote Speakers


Professor Saman K. Halgamuge

FIEEE, FIET, FAAIA, Department of Mechanical Engineering, School of Electrical, Mechanical and Infrastructure Engineering, University of Melbourne, Australia.


Homepage: https://scholar.google.com/citations?user=R_kF488AAAAJ

Title
Developing socially responsible AI for engaging with knowledge systems of Indigenous Peoples

Abstract
Several major technical issues in current AI hinder the creation of socially responsible AI with democratized access that would bring the advantages of this technology to almost all people on the planet. Unethical and socially irresponsible AI developed and used in applications evades regulation in most parts of the world; this is a multifaceted problem with widespread social, legal, political and commercial contributing factors. Can research in AI help solve this problem?
Does 21st-century AI need far better capabilities, given the serious threats it must help protect us from, such as dangerous epidemics, uncontrollable wildfires and floods? Some of these threats endanger the very existence of Indigenous Peoples around the world. Engaging with the knowledge systems of Indigenous Peoples could benefit everyone. We could use AI to understand and explore Indigenous knowledge passed on from generation to generation through oral traditions. We could also learn from the multi-objective optimization and other technologically interpretable innovations used in such societies.
Should this AI be allowed to remain an unexplainable black box? AI models are mostly designed manually using the experience of AI experts; they lack human interpretability, i.e., users do not understand the AI architectures either semantically/linguistically or mathematically/scientifically; and they are unable to change dynamically as new data are continuously acquired from the environment in which they operate. Addressing these deficiencies would answer some of the valid questions about traceability and accountability, and about the ability to integrate existing knowledge (scientific or linguistically articulated human experience) into an AI model, which in turn would help in engaging with the knowledge of Indigenous communities. To overcome some of these deficiencies, a new generation of AI is proposed: Fair, Accessible, Interpretable and Reproducible (FAIR) AI. This keynote addresses these deficiencies and FAIR AI, also describing some of our new research. Two new methods that support continuous learning, SONG and IL-VIS, are briefly introduced and their applications discussed. Unlike a static machine learning model that uses all the data at the end of an experiment, IL-VIS uses SONG to generate the progression trajectory observed thus far at each sampling timepoint of the experiment. Results on simulated data, for which the true progression trajectories are known, verified IL-VIS's ability to capture and visualize the trajectories accurately and relative to each other.
I acknowledge the Australian Research Council Grant DP220101035 on “Democratisation of Deep Learning” and grants on “Water Rights for First Nations: Exploring Cultural Economic Futures through Agent Based Modelling and AI” awarded by the Melbourne Social Equity Institute and the Faculty of Engineering and IT of the University of Melbourne.

Biodata
Prof Saman Halgamuge, Fellow of IEEE, IET and AAIA, received the B.Sc. Engineering degree in Electronics and Telecommunication from the University of Moratuwa, Sri Lanka, and the Dipl.-Ing. and Ph.D. degrees in data engineering from the Technical University of Darmstadt, Germany. He is currently a Professor in the Department of Mechanical Engineering of the School of Electrical, Mechanical and Infrastructure Engineering, The University of Melbourne (UoM). He is also an honorary professor at the Australian National University (ANU). He is listed among the top 2% most cited researchers in AI and Image Processing in the Stanford database; his papers have been cited 13,000 times, with an h-index of 50. He was a Distinguished Lecturer of the IEEE Computational Intelligence Society (2018–21). His research interests are in AI, machine learning including deep learning, optimization, big data analytics and their applications in biomedicine and engineering. He has supervised 48 PhD students and 15 postdocs at UoM and ANU to completion. His leadership roles include Head of the School of Engineering at ANU and Associate Dean of the Faculty of Engineering and IT at UoM.


Professor Jerzy Stefanowski

Institute of Computing Science, Poznan University of Technology, Poland


Homepage: https://www.cs.put.poznan.pl/jstefanowski/

Title
Multi-criteria approaches to explaining black box machine learning models

Abstract
The wide adoption of artificial intelligence and machine learning algorithms, especially in critical domains, often encounters obstacles related to their lack of interpretability. Most of the currently used machine learning methods are black-box models that provide no information about the reasons behind a given decision, nor do they explain the logic of the algorithm leading to it. Despite the recent development of many methods for explaining the predictions of black box models, their use, even within one representation of the provided explanations, is not easy. Limiting our interest in this presentation to the subset of methods designed for counterfactual explanations or rules, we note that they provide quite different explanations for a single predicted instance, and choosing one of them is a non-trivial task. For instance, a counterfactual is generally expected to be a similar example for which the model prediction changes to a more desired one. Usually, a single prediction can be explained by many different counterfactuals generated by various specialized methods. Several properties and quality measures are used to evaluate the provided counterfactuals; however, these measures present contradictory views on the possible explanations. This naturally leads us to exploit multiple criteria decision analysis to effectively compare alternative solutions and aid human users in selecting the best, most compromise one. In this presentation we will discuss its usefulness in the context of two alternative types of explanations: counterfactuals and rules. Firstly, a comprehensive discussion of various evaluation measures for a given form (either counterfactuals or rules) will be given. Secondly, instead of proposing yet another method for generating counterfactuals or rules, we claim that the already existing methods should be sufficient to provide a diversified set of explanations.
As a result, we propose to use an ensemble of multiple base explainers (these existing methods) to provide a richer set of explanations, each of which establishes a certain trade-off between the values of different quality measures (criteria). The dominance relation between pairs of explanations is then used to construct their Pareto front and filter out dominated explanations. The final explanation is selected from this front by applying multiple criteria choice methods. These approaches are illustrated by experiments performed independently for counterfactuals and rules.
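The Pareto-filtering step described above can be sketched in a few lines. This is a minimal illustration, not the speaker's implementation: it assumes each candidate explanation has already been scored on a set of quality criteria that are all to be minimized (e.g. a proximity and a sparsity measure for counterfactuals), and keeps only the non-dominated candidates.

```python
import numpy as np

def dominates(a, b):
    """True if explanation a dominates b: a is at least as good on every
    criterion and strictly better on at least one (all criteria minimized)."""
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(scores):
    """Return indices of non-dominated explanations.
    scores: (n_explanations, n_criteria) array of quality measures."""
    n = len(scores)
    return [i for i in range(n)
            if not any(dominates(scores[j], scores[i])
                       for j in range(n) if j != i)]

# Hypothetical example: 4 counterfactuals from different base explainers,
# each scored on (proximity, number of changed features), both minimized.
scores = np.array([
    [0.2, 3],   # close, but changes 3 features
    [0.5, 1],   # farther, changes only 1 feature
    [0.6, 2],   # dominated by the one above
    [0.1, 4],   # closest, most changes
])
print(pareto_front(scores))  # -> [0, 1, 3]
```

The final choice among the surviving trade-offs would then be made with a multiple criteria choice method, or left to the user.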

Biodata
Jerzy Stefanowski is a full professor at the Institute of Computing Science, Poznan University of Technology, from which he received the Ph.D. and Habilitation degrees in computer science. In 2021 he was elected a corresponding member of the Polish Academy of Sciences. His research interests include machine learning, data mining and intelligent decision support, in particular ensemble classifiers, class imbalance, rule induction, and explainable Artificial Intelligence. In addition to his research activities he has served in a number of organizational capacities: vice-president of the Polish Artificial Intelligence Society since 2014, and co-founder and co-leader of the Polish Special Interest Group on Machine Learning. More information can be found at https://www.cs.put.poznan.pl/jstefanowski/


Professor Siridech Boonsang

Dean, Faculty of Information Technology, and
Associate Professor, Department of Electrical Engineering, Faculty of Engineering,
King Mongkut's Institute of Technology Ladkrabang,
Bangkok, Thailand.


Homepage: https://www.it.kmitl.ac.th/en/staff/assoc-prof-dr-siridech-boonsang/

Title
Generative AI for industrial manufacturing applications

Abstract
This work explores the application of generative models in industrial manufacturing, specifically in the context of creating synthetic images. The use of generative models to generate synthetic images has gained traction in various industries, including the automobile and manufacturing sectors. In the automobile industry, the Latent Diffusion Model (LDM) combined with fine-tuning techniques has been proposed as an approach to generate automobile images from text input. Limited datasets can lead to overfitting, so fine-tuning a pre-trained model is used to handle smaller-scale datasets. The synthetic images are generated from conditional input, such as the brand, color, location, and automobile position. In industrial manufacturing, the development of automated surface inspection systems requires a large amount of representative product image data. However, obtaining such data, especially with defects that reflect real-world scenarios, can be challenging, making it difficult to develop robust detection algorithms. Generative models can be utilized to create synthetic datasets containing product images augmented with defects. These datasets provide images with a variety of defect shapes and positions over the surface, reflecting what would occur over longer production periods. In conclusion, the use of generative models has become essential for creating synthetic images in several industries, including industrial manufacturing. LDM and fine-tuning techniques can be used to generate automobile images from text input, while synthetically generated datasets with defects can help in the development of automated surface inspection systems.
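To make the defect-augmentation idea concrete, here is a deliberately simple sketch: instead of a trained diffusion model, a toy NumPy routine pastes a dark patch at a random position on a clean grayscale surface image, yielding image/label pairs of the kind an inspection model would be trained on. Everything here (patch shape, image size, label format) is a stand-in assumption, not the approach presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_synthetic_defect(image, defect_size=8, intensity=0.0):
    """Paste a dark square 'defect' patch at a random position on a
    grayscale product image (values in [0, 1]). Returns the augmented
    image and the (row, col) of the patch's top-left corner, usable as
    a localization label for a surface-inspection model."""
    out = image.copy()
    h, w = image.shape
    r = int(rng.integers(0, h - defect_size))
    c = int(rng.integers(0, w - defect_size))
    out[r:r + defect_size, c:c + defect_size] = intensity
    return out, (r, c)

# Build a small labelled dataset from a single clean surface image.
clean = np.full((64, 64), 0.9)   # uniform bright surface, no defects
pairs = [add_synthetic_defect(clean) for _ in range(10)]
images, labels = zip(*pairs)
print(len(images), images[0].shape)
```

A real pipeline would replace the patch-pasting with samples from a fine-tuned generative model, but the downstream training interface (defective image plus defect location) stays the same.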

Biodata
Dr. Siridech Boonsang is the current Dean of the Faculty of Information Technology at King Mongkut's Institute of Technology Ladkrabang (KMITL). He earned his Bachelor's degree in Electrical Engineering with Second Class Honours from KMITL in 1994, his Master's degree in Electrical and Electronic Engineering with a specialization in Electronic Instrumentation Systems from the University of Manchester Institute of Science and Technology (UMIST) in 2001, and his Ph.D. in Instrumentation from the same institution in 2004.
Dr. Boonsang is an expert in AI for Industrial Automation, Sensors and Actuators, and Optical and Electronic Materials. He has published numerous papers, including "A deep learning system for recognizing and recovering contaminated slider serial numbers in hard disk manufacturing processes," "Optical and Structural Properties of Insoluble and Flexible Biodegradable Regenerated Silk Films for Optically Transparent Hydrophilic Coating of Medical Devices," and "Evaluation of Micro- and Nano-Bismuth(III) Oxide Coated Fabric for Environmentally Friendly X-Ray Shielding Materials."
In his current role as the Dean of the Faculty of Information Technology at KMITL, Dr. Boonsang is responsible for overseeing the academic programs and research activities of the faculty. He is known for his dedication to promoting excellence in education and research and for his commitment to fostering innovation and creativity among his students and faculty members.


Professor Masaru Kitsuregawa

Research Organization of Information and Systems (ROIS), President /
The University of Tokyo, University Professor


Homepage: https://www.tkl.iis.u-tokyo.ac.jp/Kilab/Members/memo/kitsure_e.html

Title
Building a research data platform for Academia in Japan

Abstract
TBA

Biodata
President of the Research Organization of Information and Systems and University Professor at the University of Tokyo. He received his Ph.D. degree from the University of Tokyo in 1983. He has served in various positions, including President of the Information Processing Society of Japan (2013–2015) and Chairman of the Committee for Informatics, Science Council of Japan (2014–2016). He has wide research interests, especially in database engineering. He has received many awards, including the ACM SIGMOD E. F. Codd Innovations Award, the IEEE Innovation in Societal Infrastructure Award and the Japan Academy Award. In 2013 he was awarded the Medal with Purple Ribbon by the Japanese Government, and in 2016 he was made a Chevalier de la Légion d'Honneur. He is a fellow of the ACM, IEICE and IPSJ, a CCF honorary member, and an IEEE Life Fellow.


Contact

Please send all enquiries on matters related to the ACIIDS 2023 conference to one of the following email addresses:

Organizational issues:
aciids@pwr.edu.pl

Local assistance:
aciids2023@it.kmitl.ac.th