Recently, considerable attention has been devoted to the ethical issues arising in the design and implementation of artificial agents. This is because humans and machines increasingly need to collaborate on actions to take or decisions to make. Such decisions should not only be correct and optimal with respect to the overall goal to be reached, but should also conform to moral values that are aligned with human ones. Examples of such scenarios arise in autonomous vehicles, medical diagnosis support systems, and many other domains where humans and artificial intelligent systems cooperate. One of the main issues in this context concerns how to model and reason with moral values. In this paper we discuss the possible use of compact AI preference models as a promising approach to model, reason with, and embed moral values in decision support systems.