Artificial Intelligence (AI) in military systems is frequently portrayed as dangerous, with an errant AI system, such as the Skynet imagined in the Terminator movie series, ultimately endangering humanity. At the same time, the benefits of applying AI to such systems are numerous. We therefore need techniques that allow military systems to benefit from advances in AI while ensuring that a system like Skynet can never turn against humanity. In this paper, we examine the problem from the perspective of device management, i.e., intelligent systems that manage themselves and determine their own policies, and we discuss mechanisms that could prevent such systems from becoming malignant.