Conference paper

Ethical preference-based decision support systems

Abstract

The future will see autonomous intelligent systems acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. Think of self-driving cars, companion robots, and medical diagnosis support systems. Humans and machines will also often need to work together and agree on common decisions, so hybrid collective decision making systems will be in great demand. In these scenarios, both machines and collective decision making systems should follow some form of moral values and ethical principles, appropriate to the context in which they act but always aligned with human values. Indeed, humans are more likely to accept and trust machines that behave as ethically as other humans in the same environment. Such principles would also make it easier for machines to determine their actions and to explain their behavior in terms humans can understand. Moreover, machines and humans will often need to make decisions together, whether by consensus or by reaching a compromise, and shared moral values and ethical principles would facilitate this. In this paper we introduce some of the issues involved in embedding morality into intelligent systems. We pose a few research questions, in the hope that the discussion they raise will shed some light on the possible answers.

Date

01 Aug 2016

Publication

CONCUR 2016
