Multi-Task Learning 


Multi-Task Learning (MTL) is an approach to machine learning that learns several related problems at the same time, using a shared representation. Learning tasks in parallel improves generalization because the domain information contained in the training data of related tasks acts as an inductive bias: what is learned for each task can help the other tasks be learned better, often yielding a better model for the main task than learning it in isolation.
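
The simplest way to realize a shared representation is "hard parameter sharing": a single encoder is shared by all tasks, and each task keeps its own output head, so gradients from every task shape the common features. Below is a minimal sketch of this setup in PyTorch with squared losses on synthetic data; the dimensions, class name, and training details are illustrative assumptions, not a reference implementation.

    import torch
    import torch.nn as nn

    # Illustrative sizes: 20 input features, an 8-dimensional shared
    # representation, and 3 related regression tasks.
    n_features, n_shared, n_tasks = 20, 8, 3

    class SharedRepresentationModel(nn.Module):
        """Hard parameter sharing: one shared encoder, one head per task."""
        def __init__(self, d_in, d_shared, n_tasks):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(d_in, d_shared), nn.ReLU())
            self.heads = nn.ModuleList(nn.Linear(d_shared, 1) for _ in range(n_tasks))

        def forward(self, x):
            z = self.encoder(x)                      # shared representation
            return [head(z) for head in self.heads]  # one prediction per task

    model = SharedRepresentationModel(n_features, n_shared, n_tasks)
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    loss_fn = nn.MSELoss()

    # Synthetic data: the tasks share inputs but have their own targets.
    x = torch.randn(64, n_features)
    ys = [torch.randn(64, 1) for _ in range(n_tasks)]

    for _ in range(200):
        opt.zero_grad()
        preds = model(x)
        # Summing the per-task losses trains all tasks jointly, so the
        # shared encoder is fit to information from every task at once.
        loss = sum(loss_fn(p, y) for p, y in zip(preds, ys))
        loss.backward()
        opt.step()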

In particular, the goal of MTL is to improve the performance of learning algorithms by learning classifiers for multiple tasks jointly. This works particularly well when the tasks have some commonality and each task has only a small amount of training data. The problem is important in a variety of applications, ranging from collaborative filtering and conjoint analysis to object detection in computer vision and the integration of multiple microarray data sets in computational biology, to mention just a few. A key aspect of many multi-task learning algorithms is that they implement mechanisms for learning the underlying structure of the tasks. Finding this common structure is important because it allows information to be pooled across the tasks, a property which is particularly appealing when there are many tasks but only few data per task; one common instantiation is sketched below. Moreover, knowledge of the common structure may facilitate learning new tasks (transfer learning).
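
One widely used mechanism for learning this common structure is spectral (trace-norm) regularization: stack the tasks' weight vectors as the columns of a matrix W and penalize the sum of its singular values, which drives the columns toward a shared low-dimensional subspace. The sketch below, assuming linear models with squared loss and a proximal gradient solver, is a generic illustration of this idea rather than the specific algorithms of the publications listed below.

    import numpy as np

    def prox_trace_norm(W, tau):
        """Proximal operator of tau * ||W||_*: soft-threshold singular values."""
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def mtl_trace_norm(Xs, ys, lam=0.5, step=0.01, iters=500):
        """Jointly fit T linear tasks; the trace-norm penalty on the stacked
        weight matrix W (d x T) encourages a common low-rank structure."""
        d, T = Xs[0].shape[1], len(Xs)
        W = np.zeros((d, T))
        for _ in range(iters):
            G = np.zeros_like(W)
            for t, (X, y) in enumerate(zip(Xs, ys)):
                G[:, t] = X.T @ (X @ W[:, t] - y) / len(y)  # squared-loss gradient
            W = prox_trace_norm(W - step * G, step * lam)   # proximal gradient step
        return W

    # Toy data: 5 tasks whose true weight vectors share a 2-dimensional basis.
    rng = np.random.default_rng(0)
    B = rng.standard_normal((10, 2))
    Xs = [rng.standard_normal((30, 10)) for _ in range(5)]
    ys = [X @ (B @ rng.standard_normal(2)) for X in Xs]
    W = mtl_trace_norm(Xs, ys)
    # The penalty should leave only a couple of dominant singular values.
    print(np.round(np.linalg.svd(W, compute_uv=False), 3))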

Selected Publications:

1.    A. Caponnetto, C. A. Micchelli, M. Pontil, and Y. Ying, Universal multi-task kernels, Journal of Machine Learning Research, 9 (2008), 1615-1646.

2.    A. Argyriou, C. A. Micchelli, M. Pontil, and Y. Ying, A spectral regularization framework for multi-task structure learning, Advances in Neural Information Processing Systems (NIPS), 2007.

3.    Y. Ying and C. Campbell, Learning coordinate gradients with multi-task kernels, Proceedings of the 21st Annual Conference on Learning Theory (COLT), 2008.

4.    Y. Ying, Multi-task coordinate gradient learning, ICML workshop on "Object, Functional and Structured Data: Towards Next Generation Kernel-Based Methods", 2012.

 

________________________________________________________________________________________________________________________________________

 

Return to Yiming Ying's home page