Challenges such as resource requirements, limited on-device storage, and constrained mobile bandwidth inhibit unconstrained and ubiquitous video consumption. We propose a first-of-its-kind methodology to compress videos that stream human faces. We detect facial landmarks on-the-fly and compress the video by storing a sequence of distinct frames extracted from it, such that the facial landmarks of each pair of successively stored frames differ significantly. We use a dynamic thresholding technique to decide when a difference is significant and store meta-information for reconstructing the missing frames. To reduce glitches in the decompressed video, we use a morphing technique that smooths the transition between successive frames. We measure the objective goodness of our technique by evaluating the compression time, the entropy per frame, the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and the compression ratio. For subjective analysis, we perform a user study observing user satisfaction at different compression ratios.
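The keyframe-selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `select_keyframes`, the sliding-window size, and the multiplier `alpha` driving the dynamic threshold are assumptions, and the facial-landmark detector itself is assumed to be an external component that yields one array of landmark coordinates per frame.

```python
import numpy as np

def select_keyframes(landmarks_seq, alpha=1.5, window=10):
    """Store a frame whenever its mean landmark displacement from the
    last stored frame exceeds a dynamic threshold (here, hypothetically,
    alpha times the running mean displacement over a sliding window)."""
    kept = [0]            # the first frame is always stored
    recent = []           # recent displacements used to adapt the threshold
    last = landmarks_seq[0]
    for i in range(1, len(landmarks_seq)):
        # mean Euclidean distance between corresponding landmarks
        d = np.linalg.norm(landmarks_seq[i] - last, axis=1).mean()
        thresh = alpha * (np.mean(recent) if recent else 0.0)
        if d > thresh:
            kept.append(i)
            last = landmarks_seq[i]
        recent.append(d)
        if len(recent) > window:
            recent.pop(0)
    return kept
```

For example, a sequence of five static frames followed by one with displaced landmarks would keep only the first and the displaced frame; the intermediate frames would be reconstructed at decompression time via morphing.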