Abstract
Multi-label learning, in which each instance can belong to multiple labels simultaneously, has attracted significant attention from researchers owing to its wide range of applications, from document classification and automatic image annotation to video annotation. Many multi-label learning models have been developed to capture label dependency. Among them, the classifier chain (CC) model is one of the most popular methods due to its simplicity and promising experimental results. However, CC raises three important questions: Does the label order affect the performance of CC? Is there a globally optimal classifier chain that achieves the optimal prediction performance for CC? If so, how can the globally optimal classifier chain be found? Answering these questions is non-trivial.

Another important branch of methods for capturing label dependency is the encoding-decoding paradigm. Built on structural SVMs, maximum margin output coding (MMOC) has become one of the most representative encoding-decoding methods and has shown promising results for multi-label classification. Unfortunately, MMOC suffers from two major limitations: 1) inconsistent performance: D. McAllester has proved that structural SVMs fail to converge on the optimal decoder even with infinite training data; 2) prohibitive computational cost: training MMOC involves a complex quadratic programming (QP) problem over the combinatorial space, and its computational cost on data sets with many labels is prohibitive. It is therefore non-trivial to break these bottlenecks of MMOC and to develop efficient and consistent algorithms for multi-label learning tasks.

Finally, the prediction step of most multi-label learning methods either scales linearly with the number of labels or involves an expensive decoding process, which usually requires solving a combinatorial optimization problem. Such approaches become unacceptable when tackling thousands of labels and are impractical for real-world applications such as document annotation. It is therefore imperative to design an efficient, yet accurate, multi-label learning algorithm with the minimum number of predictions.
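To make the label-order question concrete, the following is a minimal, illustrative sketch (not the method studied in this work): it trains scikit-learn's ClassifierChain with two different label orders on a synthetic multi-label data set. The data set, base learner, and the two orders are arbitrary choices for demonstration only; the point is simply that the chain order is a free parameter whose choice can change the resulting subset accuracy.

```python
# Illustrative sketch: classifier chains trained with two different label orders.
# All modeling choices below (synthetic data, logistic regression base learner,
# the two specific orders) are assumptions made purely for demonstration.
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score  # subset (exact-match) accuracy for multi-label Y
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain

# Synthetic multi-label data with 5 labels.
X, Y = make_multilabel_classification(n_samples=1000, n_features=20,
                                      n_classes=5, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

# Train the same chain model under two different label orders and compare.
for order in ([0, 1, 2, 3, 4], [4, 3, 2, 1, 0]):
    chain = ClassifierChain(LogisticRegression(max_iter=1000), order=order)
    chain.fit(X_tr, Y_tr)
    print("order", order, "subset accuracy:",
          accuracy_score(Y_te, chain.predict(X_te)))
```

Because each link in the chain conditions on the labels predicted before it, different orders induce different feature sets for the later classifiers, which is exactly why the existence and identification of a globally optimal chain is a meaningful question.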
Short bio
Weiwei Liu is a full professor at the School of Computer Science, Wuhan University, China. His research interest is machine learning. His research results have been published in prestigious journals and at leading conferences such as JMLR, IEEE TPAMI, COLT, NeurIPS, and ICML. He has served multiple times as an Area Chair for ICML, NeurIPS, and ICLR, and as an Associate Editor for IEEE Transactions on Neural Networks and Learning Systems (TNNLS).