Announcement of the seminar "Ethical AI"
- March 17, 2023 (Friday) / 16:00-18:00
- Graduate School of Humanities Building, Kobe University
- Can AI Be Moral? Two Different Approaches to Ethical AI
- Fei Song (Lingnan University)
- Felix S H Yeung (University of Essex & University of Hong Kong)
Developing moral machines is a pressing task for AI research. With the explosion of possible contexts of robot-human interaction, we have every reason to ensure that machines act in accordance with acceptable moral standards: failure to produce sufficiently moral behaviour in machines could be morally disastrous. Yet, despite the pressing need, the challenges to meeting it are equally daunting.
Our talk consists of two parts. In the first part, we outline the limitations of three approaches proposed in the literature for ethical machines: 1) the top-down approach (e.g., a deterministic algorithmic model); 2) the bottom-up approach (e.g., a machine learning model); and 3) hybrid systems (e.g., a deterministic algorithm combined with machine learning). We then propose a novel approach that we call a pluralist hybrid system. The pluralist hybrid system comprises two elements. First, it has a deterministic algorithmic system that impartially includes different moral rules for action guidance; this system is responsible for making explicit moral decisions. Second, it has a machine learning system responsible for estimating the values of the parameters required to apply the moral principles. We argue that the pluralist hybrid system improves on existing proposals in that (i) it better addresses the moral-disagreement problem facing top-down approaches, and (ii) it reduces the opacity of the system as a whole to a justifiable level compared with bottom-up models.
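The two-component architecture just described might be wired together as in the following minimal sketch. All names, rules, and numbers here are hypothetical illustrations, not taken from the talk: a deterministic rule module makes the explicit moral decision, while a stand-in for a learned estimator supplies the situational parameters the rules consume.

```python
# Illustrative sketch of a pluralist hybrid system (all names hypothetical).
# A deterministic rule module makes the explicit moral decision; a learned
# estimator supplies the situational parameters the rules need.

def learned_harm_estimator(situation):
    """Stand-in for a machine-learning model that estimates the
    probability of harm for each available action."""
    # A real system would query a trained model; here we use fixed values.
    return {action: situation["harm"].get(action, 0.0)
            for action in situation["actions"]}

def pluralist_decision(situation, rules):
    """Deterministic module: applies a plurality of moral rules and
    picks the action every rule permits, falling back to least harm."""
    harms = learned_harm_estimator(situation)
    permitted = [a for a in situation["actions"]
                 if all(rule(a, harms) for rule in rules)]
    candidates = permitted or situation["actions"]
    return min(candidates, key=lambda a: harms[a])

# Two toy moral rules drawn from different traditions.
no_high_harm = lambda action, harms: harms[action] < 0.5  # consequentialist cap
no_deception = lambda action, harms: action != "deceive"  # deontological constraint

situation = {"actions": ["deceive", "warn", "wait"],
             "harm": {"deceive": 0.1, "warn": 0.2, "wait": 0.8}}
print(pluralist_decision(situation, [no_high_harm, no_deception]))  # -> warn
```

The division of labour mirrors the proposal: the rule module is fully inspectable (reducing opacity), while the learned estimator handles only parameter values, and the plurality of rules leaves room for competing moral traditions.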
In the second part of the talk, we explore the plausibility of a bottom-up, rule-free approach to ethical machines that we call the "skillful expert model." This model aligns with recent virtue-ethical approaches to machine ethics, but proposes that what machines should learn are human skills rather than general human virtues. Following Dreyfus and Dreyfus, we characterize skills as domain-specific, embodied, non-principle-based, and highly adaptable processes that produce optimal (coping) responses across different situations. Such skills are reliable in tasks such as driving, caregiving, and artistic creation. Using the example of autonomous vehicles, we propose that developing artificial neural networks that mimic the skills of a human expert has great potential to resolve some of the deepest challenges in developing ethical autonomous machines.