In this paper, we show how pedestrian detection accuracy and efficiency can be improved for static surveillance cameras using scene context and temporal non-maximum suppression. First, using the geometry of the scene, we derive the relationship between height in the image and height in the real world. This relationship is used to learn the range of scales to evaluate for potential detections. The geometry can also be used to estimate distances on the ground plane and to predict the image-space distance a pedestrian can move between frames. Second, we show the error inherent in the standard non-maximum suppression method and demonstrate how it can be reduced using a small temporal window. The scene context information and temporal non-maximum suppression (tNMS) can be applied to any detection algorithm. We evaluate the accuracy of each method across seven different videos. For two publicly available detectors, scene context and tNMS improve accuracy on the order of ten and five percent respectively, and by over fifteen percent combined, across the seven scenes. Furthermore, both approaches significantly reduce computational cost.
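To make the idea of temporal non-maximum suppression concrete, the sketch below pools detections from a small sliding window of frames and runs standard greedy NMS on the pooled set. The pooling scheme, window size, and IoU threshold are illustrative assumptions, not the paper's exact formulation.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0


def temporal_nms(frames, window=3, thresh=0.5):
    """Hypothetical tNMS sketch: for each frame, pool detections from a
    `window`-frame neighbourhood, run greedy NMS on the pooled set, and
    report only the survivors that originate from the centre frame.

    `frames` is a list of per-frame detection lists, each detection a
    (box, score) pair with box = (x1, y1, x2, y2).
    """
    half = window // 2
    out = []
    for t in range(len(frames)):
        lo, hi = max(0, t - half), min(len(frames), t + half + 1)
        # tag each pooled detection with its frame of origin
        pooled = [(box, score, f) for f in range(lo, hi)
                  for box, score in frames[f]]
        # standard greedy NMS over the pooled detections
        kept = []
        for box, score, f in sorted(pooled, key=lambda d: d[1], reverse=True):
            if all(iou(box, kb) < thresh for kb, _, _ in kept):
                kept.append((box, score, f))
        # keep only survivors belonging to the centre frame t
        out.append([(b, s) for b, s, f in kept if f == t])
    return out
```

Because duplicate detections of the same pedestrian in neighbouring frames overlap heavily, pooling lets the highest-scoring instance within the window suppress near-duplicates that single-frame NMS would each keep.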