Transfer Learning


Transfer Learning as a Tool for Efficient Machine Learning

Objectives
Traditionally, machine learning requires large amounts of labelled data to perform reasonably well. However, annotating data takes considerable effort; even when large volumes of data are generated, most of it remains unlabeled. Transfer learning, in its different forms, is used to exploit abundant unlabeled data or scarce labelled data.
The simplest form is unsupervised domain adaptation (UDA), where we have labelled data in a source domain but only unlabeled data in a target domain, with the same task across both domains. The goal is to find a transformation of the source domain such that a model trained on the transformed source data performs well in the target domain.
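
As a minimal sketch of this idea, the following hypothetical example uses a CORAL-style second-order alignment, which is one common choice of such a transformation; the toy data, feature dimensions, and the logistic-regression model are illustrative assumptions, not part of the proposal above.

```python
import numpy as np
from scipy.linalg import sqrtm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: same binary task, but target features are shifted and rescaled.
Xs = rng.normal(size=(500, 10))                  # labelled source features
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(int)       # source labels
Xt = 1.5 * rng.normal(size=(400, 10)) + 0.7      # target features, no labels

def coral_align(Xs, Xt, eps=1e-5):
    """Transform source features so their mean and covariance match the target."""
    Xs_c = Xs - Xs.mean(axis=0)
    Xt_c = Xt - Xt.mean(axis=0)
    Cs = np.cov(Xs_c, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt_c, rowvar=False) + eps * np.eye(Xt.shape[1])
    # Whiten with the source covariance, then re-colour with the target's.
    A = np.real(sqrtm(np.linalg.inv(Cs))) @ np.real(sqrtm(Ct))
    return Xs_c @ A + Xt.mean(axis=0)

# Train on the transformed source data, then apply directly in the target domain.
clf = LogisticRegression(max_iter=1000).fit(coral_align(Xs, Xt), ys)
target_predictions = clf.predict(Xt)
```

Only unlabeled target data enters the transformation, which is what makes the adaptation unsupervised.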
The second form is hypothesis transfer learning to novel categories (HTL). Here we have a large number of source hypotheses but no access to the source domain data. The goal is to learn a hypothesis for a new target task, which has only a few labelled samples, using the source hypotheses.
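
A minimal sketch of this setting, under the assumption that each source hypothesis is a frozen scoring function and that the target learner stacks their outputs as features (one simple instantiation of HTL; all names and data below are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
d, K = 10, 5

# Pretend we are handed K frozen source hypotheses; their training data is gone.
def make_source_hypothesis(w):
    return lambda X: X @ w                        # each returns a real-valued score

source_hyps = [make_source_hypothesis(rng.normal(size=d)) for _ in range(K)]

# Only a handful of labelled samples exist for the new target task.
Xt = rng.normal(size=(20, d))
yt = (Xt[:, 0] > 0).astype(int)

# Stack the source scores as features, so the target learner estimates only
# K + 1 parameters from the 20 labelled samples instead of d + 1.
Zt = np.column_stack([h(Xt) for h in source_hyps])
target_clf = LogisticRegression(max_iter=1000).fit(Zt, yt)

X_new = rng.normal(size=(5, d))
Z_new = np.column_stack([h(X_new) for h in source_hyps])
print(target_clf.predict(Z_new))
```

Because the source hypotheses are queried only as black boxes, no source data is ever needed, which matches the constraint of the HTL setting.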
The third form is few-shot learning (FSL), where only very few labelled samples are available. The goal is to find a relation between a model learnt from a small number of samples and the model learnt from a large number of samples, so that this relation can be used to estimate the large-sample model whenever only a few training samples are available.
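
A minimal sketch of this idea, assuming a model-regression approach: across many simulated tasks, fit a regressor that maps small-sample model weights to large-sample model weights, then apply it to a new few-shot task. The least-squares classifier, task generator, and sample sizes are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
d = 8

def fit_weights(X, y):
    """Least-squares classifier weights on +/-1 labels (always well defined)."""
    return np.linalg.lstsq(X, 2 * y - 1, rcond=None)[0]

# Across many simulated tasks, pair the model learnt from 10 samples with the
# model learnt from 500 samples of the same task.
few_w, many_w = [], []
for _ in range(200):
    w_true = rng.normal(size=d)                   # a random binary task
    X = rng.normal(size=(500, d))
    y = (X @ w_true > 0).astype(int)
    few_w.append(fit_weights(X[:10], y[:10]))     # small-sample model
    many_w.append(fit_weights(X, y))              # large-sample model

# The "relation" is a regressor from small-sample to large-sample weights.
relation = Ridge(alpha=1.0).fit(np.array(few_w), np.array(many_w))

# New task with only 10 labelled samples: predict its large-sample model.
w_task = rng.normal(size=d)
X_new = rng.normal(size=(10, d))
y_new = (X_new @ w_task > 0).astype(int)
w_few = fit_weights(X_new, y_new)
w_hat = relation.predict(w_few.reshape(1, -1)).ravel()

# Compare how well each estimate points in the direction of the true task.
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos(w_few, w_task), cos(w_hat, w_task))
```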

Potential Applications
Machine vision, knowledge transfer across tasks and domains, and data categorization.