Publication
IJCNN 2015
Conference paper

Transfer learning between texture classification tasks using Convolutional Neural Networks

Abstract

Convolutional Neural Networks (CNNs) have set the state-of-the-art in many computer vision tasks in recent years. This type of model commonly has millions of parameters to train, which typically requires large datasets. We investigate a method to transfer learning across different texture classification problems using CNNs, in order to apply this type of architecture to problems with smaller datasets. We use a Convolutional Neural Network trained on a source dataset (with abundant data) to project the data of a target dataset (with limited data) onto another feature space, and then train a classifier on top of this new representation. Our experiments show that this technique can achieve good results in tasks with small datasets by leveraging knowledge learned from tasks with larger datasets. Testing the method on the Brodatz-32 dataset, we achieved an accuracy of 97.04% - superior to models trained with popular texture descriptors, such as Local Binary Patterns and Gabor Filters, and 6 percentage points higher than a CNN trained directly on the Brodatz-32 dataset. We also present a visual analysis of the projected dataset, showing that the data is projected to a space where samples from the same class are clustered together - suggesting that the features learned by the CNN in the source task are relevant for the target task.
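The approach described in the abstract uses a CNN trained on a source task purely as a fixed feature extractor for the target data, with a separate classifier trained on the extracted features. The sketch below illustrates that general idea in Python; it is not the authors' pipeline (they train their own CNN on a source texture dataset), and the ImageNet-pretrained ResNet-18, the `brodatz32/` folder path, and the linear SVM classifier are all illustrative assumptions.

```python
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder
from sklearn.svm import LinearSVC

# Source-task CNN used as a fixed feature extractor (here: an ImageNet-pretrained
# ResNet-18 stands in for the CNN trained on the source texture dataset).
cnn = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
cnn.fc = torch.nn.Identity()  # drop the source classifier head, keep the features
cnn.eval()

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Target texture dataset arranged as class-labelled folders (hypothetical path).
dataset = ImageFolder("brodatz32/", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=False)

features, labels = [], []
with torch.no_grad():
    for images, targets in loader:
        feats = cnn(images)  # project target images onto the learned feature space
        features.append(feats)
        labels.append(targets)

X = torch.cat(features).numpy()
y = torch.cat(labels).numpy()

# Train a simple classifier on top of the transferred representation.
clf = LinearSVC().fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In practice the extracted features would be split into training and test sets before fitting and evaluating the classifier; the snippet only shows the projection-plus-classifier structure of the transfer method.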

Date

28 Sep 2015

Publication

IJCNN 2015