The growing interest in automating both machine learning and deep learning has inevitably led to a wide variety of methods for automating deep learning. The choice of network architecture has proven critical, and many advances in deep learning stem from new architecture designs. However, deep learning techniques are computationally intensive, and designing architectures requires a high level of domain expertise. Even a partial automation of this process therefore helps make deep learning more accessible to everyone. In this tutorial we present a uniform formalism that enables different methods to be categorized, and we compare the approaches in terms of their performance. We achieve this through a comprehensive discussion of the commonly used architecture search spaces and of architecture optimization algorithms based on reinforcement learning and evolutionary algorithms, as well as approaches that employ surrogate and one-shot models. In addition, we discuss techniques for accelerating the search for neural architectures based on early termination and transfer learning, and we address emerging research directions, including constrained and multi-objective architecture search as well as the automated search for data augmentation policies, optimizers, and activation functions.