The pervasive use of wearable sensors in activity and health monitoring offers significant potential for building novel data analysis and prediction frameworks. In particular, approaches that can harness data from a diverse set of low-cost sensors for recognition are needed. Many existing approaches rely heavily on elaborate feature engineering to build robust recognition systems, and their performance is often limited by inaccuracies in the data. In this paper, we develop a novel two-stage recognition system that enables the systematic fusion of complementary information from multiple sensors in a linear graph embedding setting, followed by an ensemble classifier phase that leverages the discriminative power of different feature extraction strategies. Experimental results on a challenging dataset show that our framework substantially improves recognition performance compared to using any single sensor.