Preferences and ethical priorities: Thinking fast and slow in AI
In AI, the ability to model and reason with preferences allows for more personalized services. Ethical priorities are also essential if we want AI systems to make decisions that are ethically acceptable. Both data-driven and symbolic methods can be used to model preferences and ethical priorities, and to combine them in the same system, as two agents that need to cooperate. We describe two approaches to designing AI systems that can reason with both preferences and ethical priorities. We then generalize this setting by following Kahneman's theory of thinking fast and slow in the human mind. According to this theory, we make decisions by employing and combining two very different systems: one accounts for intuitive, immediate, but imprecise actions, while the other models correct and complex logical reasoning. We discuss how these two systems could be exploited and adapted to design machines that support both data-driven and logical reasoning, and exhibit degrees of personalized and ethically acceptable behavior.