This paper presents improved results on classifying electroencephalography (EEG) recordings using deep learning. The task is to classify imagined movements (motor imagery) using only the electrical activity recorded on the scalp. The challenges include a poor signal-to-noise ratio; interference from numerous sources such as electrical line noise, muscle activity, and eye movements; and considerable variability across individuals and even across recording sessions. Traditional signal processing techniques such as frequency-band analysis, the common spatial pattern (CSP) algorithm, and independent component analysis (ICA) fall short due to their limited capacity. With the rise of big data in healthcare, medical recordings are now abundant, and deep learning, which relies on large amounts of training data, is becoming the new state-of-the-art tool. We report a significant improvement in classification accuracy on the BCI Competition IV dataset 2a and compare the results of several state-of-the-art neural network architectures.