Specialized hardware for deep learning using analog memory devices has the potential to outperform conventional GPUs by a large margin. At the core of such hardware are arrays of non-volatile memory (NVM) devices that can perform the simple matrix operations needed for deep learning in parallel and in constant time. Several implementations using different materials as memory elements can be found in the literature, including phase-change memory (PCM), resistive random-access memory (RRAM), electrochemical random-access memory (ECRAM), and ferroelectric devices. While the current focus is on demonstrating functionality, there is increasing concern about the reliability margins of this emerging technology. In this paper we briefly describe operation and device requirements, and then focus on possible reliability exposures in terms of variability, stability and drift, retention, and durability.
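The core operation described above, a crossbar of conductances computing a matrix-vector product in one step, can be illustrated with a minimal numerical sketch. This is not the paper's model: the array dimensions, the 5% programming-variability sigma, and the power-law drift exponent are illustrative assumptions, chosen only to show how the reliability exposures named above (variability, drift) perturb the ideal computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal crossbar: each weight is stored as a device conductance G[i, j].
# Applying input voltages v yields output currents I = G @ v in a single
# step (Ohm's law per device, Kirchhoff's current law per output line).
G = np.array([[1.0, 2.0],
              [3.0, 4.0]])      # target conductances (arbitrary units)
v = np.array([0.5, -1.0])       # input voltage vector

I_ideal = G @ v                 # what a perfect array would compute

# Device-to-device variability: each programmed conductance deviates
# from its target value (illustrative 5% relative sigma, an assumption).
sigma = 0.05
G_actual = G * (1.0 + sigma * rng.standard_normal(G.shape))
I_actual = G_actual @ v

# Conductance drift (e.g. in PCM): G decays roughly as (t/t0)**(-nu).
nu = 0.05                       # illustrative drift exponent (assumption)
t_over_t0 = 1e4
G_drifted = G_actual * t_over_t0 ** (-nu)
I_drifted = G_drifted @ v

print(I_ideal)                  # -> [-1.5 -2.5]
print(I_actual, I_drifted)      # perturbed by variability and drift
```

In this toy picture, variability shifts each output current at programming time, while drift scales all conductances down over time, both of which erode the reliability margin of the analog computation.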