Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) have become the state-of-the-art approaches for mining Electronic Health Records (EHRs). Generally speaking, an RNN extracts the temporal dependencies among features as a time series of hidden states, whereas a CNN summarizes the local patterns among features as a set of feature maps. Many studies have leveraged their complementary effects by stacking neural network layers, with CNN layers applied to the input followed by RNN layers that produce the output. However, the feature representations learned by these two types of neural networks are often hard to visualize and interpret in a unified way. In this work, we propose a general framework that represents the extracted temporal relationships and local patterns in a unified and systematic way through facial representations whose emotional expressions evolve with a patient's health conditions. This form of feature representation not only improves the potential to visualize EHRs, but also benefits our downstream task of early prediction of septic shock. More specifically, we show that our proposed framework consistently outperformed all baseline models, including various deep learning models, on early prediction of septic shock.
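The CNN-on-input, RNN-on-top stacking described above can be illustrated with a minimal NumPy sketch: a 1-D convolution first summarizes local patterns across a window of visits, and a simple recurrent cell then extracts temporal dependencies over the resulting feature maps. All shapes, the toy data, and the plain tanh recurrence are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def conv1d(x, kernels):
    """1-D convolution over time with ReLU.
    x: (T, F) array of T visits x F clinical features.
    kernels: (K, W, F) array of K filters spanning W visits.
    Returns (T - W + 1, K) feature maps of local patterns."""
    K, W, F = kernels.shape
    T = x.shape[0]
    out = np.zeros((T - W + 1, K))
    for k in range(K):
        for t in range(T - W + 1):
            out[t, k] = max(0.0, np.sum(x[t:t + W] * kernels[k]))
    return out

def rnn(x, Wx, Wh):
    """Simple tanh recurrence over the CNN feature maps.
    x: (T', K) feature maps; Wx: (K, H); Wh: (H, H).
    Returns (T', H) hidden states capturing temporal dependencies."""
    h = np.zeros(Wh.shape[0])
    states = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ Wx + h @ Wh)
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(0)
ehr = rng.normal(size=(10, 4))        # toy EHR: 10 visits x 4 features
kernels = rng.normal(size=(3, 2, 4))  # 3 feature maps over 2-visit windows
maps = conv1d(ehr, kernels)           # (9, 3) local patterns
Wx = rng.normal(size=(3, 5))
Wh = rng.normal(size=(5, 5))
hidden = rnn(maps, Wx, Wh)            # (9, 5) temporal hidden states
```

In a real model the final hidden state would feed a classifier for the septic-shock prediction; here the sketch only shows how the two representation types compose.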