
What Is Transfer Learning?

During transfer learning, the knowledge gained and rapid progress made on a source task are used to improve learning on a new target task.

[Image: TransferLearning.jpg. Source: qimono on Pixabay.]

Transfer learning is a machine-learning method in which knowledge obtained from a model trained on one task is reused as the foundation for another task.

Machine-learning algorithms use historical data as their input to make predictions and produce new output values. They are typically designed to conduct isolated tasks. A source task is a task from which knowledge is transferred to a target task. A target task is where improved learning occurs because of the transfer of knowledge from a source task.

During transfer learning, the knowledge leveraged and rapid progress made on a source task are used to improve learning on a new target task: the attributes and characteristics the model learned on the source task are applied and mapped onto the target task.

However, if the transfer method decreases the performance of the new target task, it is called negative transfer. One of the major challenges in transfer learning is ensuring positive transfer between related tasks while avoiding negative transfer between less-related tasks.

The What, When, and How of Transfer Learning

  • What do we transfer? To understand which parts of the learned knowledge to transfer, we need to figure out which portions of knowledge are shared by both the source and the target, so that transferring them improves the performance and accuracy of the target task. 
  • When do we transfer? Understanding when to transfer is important because we don’t want to be transferring knowledge that could, in turn, make matters worse, leading to negative transfer. The goal is to improve the performance of the target task, not make it worse. 
  • How do we transfer? Now that we have a better idea of what we want to transfer and when, we can move on to working with different techniques to transfer the knowledge efficiently.
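One common "how" is to freeze a feature extractor learned on the source task and train only a small new head on the target data. The sketch below illustrates this with plain NumPy; the data, the frozen weights, and all names are hypothetical, stand-ins for what a real pretrained network would provide.

```python
import numpy as np

# Minimal sketch of the "how": reuse a feature extractor learned on a
# source task (frozen here) and train only a new linear head on the
# target task. All data and weights are synthetic, for illustration only.

rng = np.random.default_rng(0)

# Pretend these weights were learned on the source task.
source_weights = rng.normal(size=(10, 4))  # 10-dim input -> 4-dim features

def extract(x):
    """Frozen feature extractor transferred from the source task."""
    return np.tanh(x @ source_weights)

# Tiny synthetic target task: the label depends on the first feature.
X_target = rng.normal(size=(200, 10))
F = extract(X_target)
y_target = (F[:, 0] > 0).astype(float)

# Train only a new head (logistic regression) on the transferred features.
w = np.zeros(4)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))     # sigmoid
    w -= 0.5 * F.T @ (p - y_target) / len(y_target)
    b -= 0.5 * np.mean(p - y_target)

accuracy = np.mean(((F @ w + b) > 0) == (y_target > 0.5))
print(f"target-task accuracy with transferred features: {accuracy:.2f}")
```

Because the extractor stays frozen, only a handful of head parameters are trained, which is why this recipe works even when the target task has little labeled data.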

Before we dive into the methodology behind transfer learning, it is good to know the different forms of transfer learning.
Different Types of Transfer Learning
Inductive Transfer Learning. In this type of transfer learning, the source and target domains are the same, but the source and target tasks differ from one another. The model uses inductive biases from the source task to improve the performance of the target task. The source task may or may not contain labeled data, leading the model to use multitask learning or self-taught learning, respectively.
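A simple way to see inductive bias at work is to initialize a target-task model from weights trained on a related source task and fine-tune on only a few target examples. The sketch below uses a toy logistic-regression setup; both tasks and all data are synthetic assumptions, not a real benchmark.

```python
import numpy as np

# Hedged sketch of inductive transfer: a classifier trained on a labeled
# SOURCE task supplies the starting point (an inductive bias) for a
# related TARGET task, which is then fine-tuned on a handful of examples.
# All tasks and data here are synthetic and hypothetical.

rng = np.random.default_rng(4)

def train_logreg(X, y, w=None, steps=300, lr=0.5):
    """Plain logistic regression by gradient descent, optional warm start."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Source task: plenty of labeled data, decision boundary along (1, 1).
X_src = rng.normal(size=(1000, 2))
y_src = (X_src @ np.array([1.0, 1.0]) > 0).astype(float)
w_src = train_logreg(X_src, y_src)

# Related target task: only 20 labeled examples, slightly rotated boundary.
X_tgt = rng.normal(size=(20, 2))
y_tgt = (X_tgt @ np.array([1.0, 0.8]) > 0).astype(float)

# Transfer: start from the source weights instead of from scratch.
w_transfer = train_logreg(X_tgt, y_tgt, w=w_src.copy(), steps=50)

# Evaluate on fresh target-task data.
X_test = rng.normal(size=(500, 2))
y_test = (X_test @ np.array([1.0, 0.8]) > 0).astype(float)
acc = np.mean((X_test @ w_transfer > 0) == (y_test > 0.5))
print(f"target accuracy with transferred initialization: {acc:.2f}")
```

Because the source boundary is already close to the target's, the warm start needs far fewer steps and examples than training from scratch would.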

Unsupervised Transfer Learning. Unsupervised learning is when an algorithm identifies patterns in data sets that have not been labeled or classified. In this case, the source and target domains are similar but the tasks differ, and the data is unlabeled in both the source and the target. Techniques such as dimensionality reduction and clustering are well known in unsupervised learning.
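Dimensionality reduction can itself be the transferred knowledge: a projection fit on unlabeled source data can be reused on target data without ever fitting on it. The sketch below uses PCA via SVD on synthetic data; all names and shapes are assumptions for illustration.

```python
import numpy as np

# Hedged sketch of unsupervised transfer: learn a low-dimensional
# projection (PCA via SVD) on unlabeled SOURCE data, then reuse that
# same projection on TARGET data. All data here is synthetic.

rng = np.random.default_rng(1)

# Unlabeled source data: 5-dim points that mostly vary along 2 directions.
latent = rng.normal(size=(300, 2))
mixing = rng.normal(size=(2, 5))
X_source = latent @ mixing + 0.05 * rng.normal(size=(300, 5))

# Learn the projection on the source -- this is the transferable knowledge.
mean = X_source.mean(axis=0)
_, _, vt = np.linalg.svd(X_source - mean, full_matrices=False)
components = vt[:2]                      # top-2 principal directions

def transfer_project(x):
    """Apply the source-learned projection to new (target) data."""
    return (x - mean) @ components.T

# Target data from a similar domain, reduced to 2 dims without refitting.
X_target = rng.normal(size=(50, 2)) @ mixing
Z = transfer_project(X_target)
print(Z.shape)  # (50, 2)
```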

Transductive Transfer Learning. In this last type of transfer learning, the source and target tasks are the same, but the domains differ. The source domain contains a lot of labeled data, whereas labeled data is absent in the target domain, leading the model to use domain adaptation.
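One of the simplest domain-adaptation ideas is to align the target domain's feature statistics with the source domain's, so that a model trained on labeled source data can be applied to the shifted target data. The sketch below matches per-feature mean and standard deviation on synthetic data; it is a toy stand-in for fuller methods, not a complete domain-adaptation algorithm.

```python
import numpy as np

# Hedged sketch of a simple domain-adaptation step: shift and rescale
# target-domain features so their statistics match the source domain.
# Both "domains" below are synthetic Gaussians, for illustration only.

rng = np.random.default_rng(2)

X_source = rng.normal(loc=0.0, scale=1.0, size=(500, 3))  # labeled domain
X_target = rng.normal(loc=5.0, scale=2.0, size=(500, 3))  # unlabeled, shifted

def adapt(x_target, x_source):
    """Match per-feature mean and std of the target to the source."""
    z = (x_target - x_target.mean(axis=0)) / x_target.std(axis=0)
    return z * x_source.std(axis=0) + x_source.mean(axis=0)

X_adapted = adapt(X_target, X_source)

# After adaptation, the target's statistics match the source's, so a
# source-trained model sees inputs on the scale it was trained for.
print(np.round(X_adapted.mean(axis=0), 2))
print(np.round(X_adapted.std(axis=0), 2))
```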
