Title
Developing socially responsible AI for engaging with knowledge systems of Indigenous Peoples
Abstract
Several major technical issues in current AI hinder the creation of socially responsible AI with democratized access that would bring the advantages of this technology to almost all people on the planet. Unethical and socially irresponsible AI, developed and used in applications that evade regulation in most parts of the world, is a multifaceted problem with widespread social, legal, political and commercial contributing factors. Can research in AI help solve this problem?
Does 21st-century AI need to be equipped with much better capabilities, given that we face serious threats such as dangerous epidemics, uncontrollable wildfires and floods? Some of these threats endanger the very existence of indigenous peoples around the world. Engaging with the knowledge systems of indigenous peoples could be beneficial to all people. We could use AI to understand and explore indigenous knowledge passed on from generation to generation through oral traditions. We could also learn from the multi-objective optimization and other technologically interpretable innovations used in such societies.
Should this AI be allowed to remain an unexplainable black box? AI models are mostly designed manually using the experience of AI experts; they lack human interpretability, i.e., users do not understand the AI architectures either semantically/linguistically or mathematically/scientifically; and they are unable to change dynamically when new data are continuously acquired from the environment in which they operate. Addressing these deficiencies would provide answers to some of the valid questions about traceability, accountability and the ability to integrate existing knowledge (scientific or linguistically articulated human experience) into the AI model, which in turn would help in engaging with the knowledge of indigenous communities. To overcome some of these deficiencies, Fair, Accessible, Interpretable and Reproducible (FAIR) AI, a new generation of AI, is proposed. This keynote addresses these deficiencies and FAIR AI, and also describes some of our new research. Two new methods that support continuous learning are briefly introduced and their applications are discussed: SONG and IL-VIS. Unlike a static machine learning model that uses all the data at the end of an experiment, IL-VIS uses SONG to generate the progression trajectory observed thus far at each sampling timepoint of the experiment. Results on simulated data, for which the true progression trajectories are known, verified IL-VIS's ability to capture and visualize the trajectories accurately and relative to each other.
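To make the continuous-learning idea concrete, the short sketch below contrasts a static end-of-experiment model with one that is updated at every sampling timepoint and reports the trajectory observed so far. It is only an illustration of the principle under stated assumptions: scikit-learn's IncrementalPCA stands in for SONG, and the synthetic batches are placeholders, so it is not the actual IL-VIS pipeline.

    # Minimal sketch: instead of fitting one static model on all data at the end
    # of the experiment, update an incremental embedder at every sampling
    # timepoint and record the progression trajectory observed "thus far".
    # IncrementalPCA is used here only as a stand-in for SONG.
    import numpy as np
    from sklearn.decomposition import IncrementalPCA

    rng = np.random.default_rng(0)
    # Hypothetical experiment: six sampling timepoints, 50 samples x 10 features each.
    timepoints = [rng.normal(loc=t, scale=0.3, size=(50, 10)) for t in range(6)]

    embedder = IncrementalPCA(n_components=2)
    trajectory = []                      # one 2-D centroid per sampling timepoint

    for t, batch in enumerate(timepoints):
        embedder.partial_fit(batch)      # the model evolves as new data arrive
        embedded = embedder.transform(batch)
        trajectory.append(embedded.mean(axis=0))
        print(f"t={t}: trajectory so far -> {np.round(trajectory, 2)}")

The key design point is that an embedding and trajectory exist at every timepoint, not only after all data have been collected, which is what allows the progression to be visualized as the experiment unfolds.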
I acknowledge the Australian Research Council Grant DP220101035 on “Democratisation of Deep Learning” and grants on “Water Rights for First Nations: Exploring Cultural Economic Futures through Agent Based Modelling and AI” awarded by the Melbourne Social Equity Institute and the Faculty of Engineering and IT of the University of Melbourne.
Biodata
Prof Saman Halgamuge, Fellow of IEEE, IET and AAIA, received the B.Sc. Engineering degree in Electronics and Telecommunication from the University of Moratuwa, Sri Lanka, and the Dipl.-Ing. and Ph.D. degrees in data engineering from the Technical University of Darmstadt, Germany. He is currently a Professor in the Department of Mechanical Engineering of the School of Electrical, Mechanical and Infrastructure Engineering, The University of Melbourne (UoM). He is also an honorary professor at the Australian National University (ANU). He is listed among the top 2% most cited researchers for AI and Image Processing in the Stanford database, and his papers have been cited 13,000 times with an h-index of 50. He was a Distinguished Lecturer of the IEEE Computational Intelligence Society (2018-21). His research interests are in AI, machine learning including deep learning, optimization, big data analytics and their applications in biomedicine and engineering. He has supervised 48 PhD students and 15 postdocs to completion at UoM and ANU. His leadership roles include Head of the School of Engineering at ANU and Associate Dean of the Faculty of Engineering and IT at UoM.
Title
Multi-criteria approaches to explaining black box machine learning models
Abstract
The wide adoption of artificial intelligence and machine learning algorithms, especially in critical domains, often encounters obstacles related to the lack of their interpretability. Most of the currently used machine learning methods are black-box models that do not provide information about the reasons behind taking a certain decision, nor do they explain the logic of the algorithm leading to it. Despite the recent development of many methods for explaining the predictions of black box models, their use, even within one representation of the provided explanations, is not easy. Limiting our interest in this presentation to the subset of methods designed for counterfactual explanations or rules, we note that they provide quite different explanations for a single predicted instance, and choosing one of them is a non-trivial task. For instance, a counterfactual is generally expected to be a similar example for which the model prediction will be changed to a more desired one. Usually, a single prediction can be explained by many different counterfactuals generated by various specialized methods. Several properties and quality measures are considered to evaluate these counterfactuals. However, these measures present contradictory views on possible explanations. This naturally leads us to exploit multiple criteria decision analysis for effectively comparing alternative solutions and aiding human users in selecting the best compromise one. In this presentation we will discuss its usefulness in the context of two alternative types of explanations: counterfactuals and rules. Firstly, a comprehensive discussion of various evaluation measures for a given form (either counterfactuals or rules) will be given. Secondly, instead of proposing yet another method for generating counterfactuals or rules, we claim that the already existing methods should be sufficient to provide a diversified set of explanations. As a result, we propose to use an ensemble of multiple base explainers (these methods) to provide a richer set of explanations, each of which establishes a certain trade-off between the values of different quality measures (criteria). Then, the dominance relation between pairs of explanations is used to construct their Pareto front and filter out dominated explanations. The final explanation is selected from this front by applying multiple criteria choice methods. These approaches are illustrated by experiments performed independently for counterfactuals and rules.
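As a rough illustration of the dominance-based filtering step described above, the Python sketch below scores candidate explanations produced by different base explainers on several quality measures, removes dominated candidates to obtain the Pareto front, and then picks one of the remaining explanations with a simple weighted aggregation standing in for a multiple criteria choice method. The explainer names, measure names and scores are purely hypothetical, not taken from the presented experiments.

    # Sketch of Pareto-front filtering over explanation quality measures.
    from dataclasses import dataclass

    @dataclass
    class Explanation:
        name: str
        scores: dict          # quality measure -> value, higher is better

    def dominates(a: Explanation, b: Explanation) -> bool:
        """a dominates b if it is at least as good on every measure and strictly better on one."""
        keys = a.scores.keys()
        return all(a.scores[k] >= b.scores[k] for k in keys) and \
               any(a.scores[k] > b.scores[k] for k in keys)

    def pareto_front(candidates):
        return [c for c in candidates
                if not any(dominates(other, c) for other in candidates if other is not c)]

    # Candidates produced by different base explainers (illustrative values only).
    candidates = [
        Explanation("cf_method_A", {"proximity": 0.9, "sparsity": 0.4, "plausibility": 0.7}),
        Explanation("cf_method_B", {"proximity": 0.6, "sparsity": 0.8, "plausibility": 0.6}),
        Explanation("cf_method_C", {"proximity": 0.5, "sparsity": 0.3, "plausibility": 0.5}),  # dominated
    ]

    front = pareto_front(candidates)
    # Placeholder choice method: a weighted sum; real multiple criteria choice
    # methods would elicit and model the user's preferences more carefully.
    weights = {"proximity": 0.5, "sparsity": 0.3, "plausibility": 0.2}
    best = max(front, key=lambda e: sum(weights[k] * v for k, v in e.scores.items()))
    print([e.name for e in front], "->", best.name)

In this toy example cf_method_C is filtered out because cf_method_A is at least as good on every measure, while cf_method_A and cf_method_B remain on the Pareto front as different trade-offs between the criteria.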
Biodata
Jerzy Stefanowski is a full professor at the Institute of Computing Science, Poznan University of Technology. He received the Ph.D. and Habilitation degrees in computer science from this university. In 2021 he was elected a corresponding member of the Polish Academy of Sciences. His research interests include machine learning, data mining and intelligent decision support, in particular ensemble classifiers, class imbalance, rule induction, and explainable Artificial Intelligence. In addition to his research activities he has served in a number of organizational capacities: current vice-president of the Polish Artificial Intelligence Society (since 2014); co-founder and co-leader of the Polish Special Interest Group on Machine Learning. More information can be found at
https://www.cs.put.poznan.pl/jstefanowski/