The goal of this project is to uniquely fingerprint an environment in the visual domain. The fingerprinting should be such that if a user takes a video swipe of the environment with a phone camera, her precise location can be immediately inferred. The problem is challenging because the user is continuously scanning the environment and the opportunity to capture unique parts is fleeting. A typical environment may consist of repeating visual patterns, such as ceilings, floors, and wall textures, and a painting hanging on the wall may be the only unique identifier of a location. Our approach to this problem is motivated by the human brain's ability to differentiate two similar-looking environments by observing subtle differences between them. Our contribution in this work is twofold: (1) we summarize the global uniqueness of a location in a few bits of information, allowing us to quickly determine whether the current phone camera view contains a unique part, and (2) we scale this notion of uniqueness to buildings of arbitrary size while delivering near real-time performance. Once fully developed, we believe our approach can accelerate mobile augmented reality and correct indoor localization errors.
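As a rough illustration of compressing a view into a few bits and checking it against stored fingerprints, the sketch below uses an average-hash over grayscale blocks with Hamming-distance matching. This is a hypothetical stand-in, not the method described above; the function names and the choice of average hashing are assumptions for illustration only.

```python
# Hedged sketch (not this work's actual method): summarize a camera view
# in a few bits via an average hash, then compare views by Hamming distance.

def average_hash(gray, grid=8):
    """Compress a 2D grayscale image (list of rows) into a grid*grid-bit int.

    Each bit records whether the mean intensity of one block exceeds the
    image's global mean, so similar views map to nearby fingerprints.
    """
    h, w = len(gray), len(gray[0])
    bh, bw = h // grid, w // grid
    # Mean intensity of each grid cell (simple block averaging).
    means = []
    for gy in range(grid):
        for gx in range(grid):
            block = [gray[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            means.append(sum(block) / len(block))
    overall = sum(means) / len(means)
    bits = 0
    for m in means:
        bits = (bits << 1) | (1 if m > overall else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")
```

Because each fingerprint is a single machine word, a candidate view can be screened against thousands of stored location fingerprints with cheap XOR-and-popcount operations, which is the kind of budget a near real-time mobile pipeline requires.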