Conference paper

DeepCham: Collaborative edge-mediated adaptive deep learning for mobile object recognition

Abstract

Deep learning techniques achieve state-of-the-art performance on many computer vision tasks, e.g., large-scale object recognition. In this paper we show that recognition accuracy degrades when these techniques are used in everyday mobile scenarios, due to context variations caused by different locations, times of day, etc. To solve this problem, we present DeepCham, the first adaptive mobile object recognition framework that allows deep learning techniques to be used successfully in mobile environments. Specifically, DeepCham is mediated by an edge master server that coordinates with participating mobile users to collaboratively train a domain-aware adaptation model, which yields much better object recognition accuracy when used together with a domain-constrained deep model. DeepCham generates high-quality domain-aware training instances for adaptation from in-situ mobile photos using two major steps: (i) a distributed algorithm that identifies qualifying images stored on each mobile device for training, and (ii) a user labeling process for recognizable objects identified in the qualifying images, aided by suggestions automatically generated by a generic deep model. Using a newly collected dataset of smartphone images captured at different locations, at different times of day, and with different device types, we show that DeepCham improves object recognition accuracy by 150% compared with using a generic deep model alone. In addition, we investigate how major design factors affect DeepCham's performance. Finally, we demonstrate the feasibility of DeepCham with an implemented prototype.
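
For intuition only, the sketch below illustrates the kind of pipeline the abstract describes: a frozen generic deep model supplies features, and a lightweight domain-aware adapter is trained on the small set of in-situ, user-labeled instances collected through the edge-mediated process. The feature extractor, the use of logistic regression as the adapter, and all function names are illustrative assumptions, not the paper's implementation.

    # Minimal sketch (assumptions, not the paper's code): domain-aware adaptation
    # on top of features from a frozen generic deep model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def extract_generic_features(images):
        # Placeholder for penultimate-layer features of a pretrained generic CNN;
        # here we just flatten and truncate each image array for illustration.
        return np.stack([np.asarray(img).reshape(-1)[:256] for img in images])

    def train_domain_adapter(labeled_images, labels):
        # Fit a lightweight domain-aware classifier on the user-labeled,
        # in-situ training instances gathered via the edge master server.
        feats = extract_generic_features(labeled_images)
        adapter = LogisticRegression(max_iter=1000)
        adapter.fit(feats, labels)
        return adapter

    def recognize(adapter, image):
        # At inference, combine generic features with the domain-aware adapter.
        return adapter.predict(extract_generic_features([image]))[0]

In this simplified view, adaptation only retrains the small classifier on device- and location-specific data, which is why a modest number of locally labeled instances can substantially improve accuracy over the generic model alone.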

Date

05 Dec 2016

Publication

SEC 2016
