AI systems have reached, and in some cases exceeded, human performance in a range of cognitive tasks, from image recognition to strategic games and reasoning. As AI models continue to grow in size, we must rethink how computing systems are architected. Analog in-memory computing is one non-von Neumann approach, in which computational tasks are performed within memory by exploiting the physical attributes of memory devices. This talk will focus on phase-change memory-based analog in-memory computing for accelerating deep learning inference and training. The limited analog computational precision will be discussed, with particular attention to device non-idealities, and strategies to overcome them at the device, circuit, architecture, and algorithmic levels will be reviewed. Finally, analog in-memory computing-based system architectures tailored to various AI application domains, including low-power solutions, will be presented.
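To make the core idea concrete: in a crossbar array, a weight matrix is stored as device conductances, and a matrix-vector multiply is carried out in place via Ohm's and Kirchhoff's laws. The sketch below is an illustrative simulation only; the function name `analog_matvec` and the multiplicative noise level `sigma` are assumptions for demonstration, not measured phase-change memory characteristics.

```python
import numpy as np

def analog_matvec(W, x, sigma=0.05, rng=None):
    """Toy simulation of an analog in-memory matrix-vector multiply.

    Weights W are imagined as device conductances. Conductance
    programming error is modeled as a multiplicative Gaussian
    perturbation with assumed relative magnitude `sigma`.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Noisy conductances: each device deviates from its target value.
    G = W * (1.0 + sigma * rng.standard_normal(W.shape))
    # Output currents accumulate along each row (Kirchhoff's current law).
    return G @ x
```

With `sigma=0.0` the function reduces to an exact matrix-vector product; a nonzero `sigma` illustrates how device non-idealities limit the effective computational precision.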