To better solve the low-rank and sparse decomposition problem for high-dimensional data matrices, this paper proposes a novel max-norm minimization model that uses the max-norm as the convex relaxation of the rank function, together with a corresponding solution algorithm. Based on a complexity analysis of this model, an improved max-norm constrained model is further proposed; it not only performs well on the decomposition problem but can also be solved with a fast projected gradient method. Experimental results show that both proposed models are effective for the low-rank and sparse decomposition problem.
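The max-norm constrained formulation described above can be sketched in factored form: writing the low-rank part as L = UVᵀ, the max-norm constraint becomes a bound on the row norms of U and V, which makes the projection step cheap. The following is a minimal illustrative sketch, not the paper's algorithm: it alternates a projected gradient step on (U, V) with soft-thresholding of the sparse part S. All hyperparameters (bound R, threshold lam, step size, rank) are assumptions for illustration.

```python
import numpy as np

def maxnorm_rpca(M, R=1.0, lam=0.1, rank=10, step=1e-2, iters=500, seed=0):
    """Sketch: decompose M ~ L + S with L = U @ V.T confined to a max-norm
    ball (each row of U and V has norm at most sqrt(R)) and S sparse.
    Alternates a projected gradient step on (U, V) with soft-thresholding
    of S. Hyperparameters are illustrative, not taken from the paper."""
    m, n = M.shape
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    S = np.zeros_like(M)
    bound = np.sqrt(R)
    for _ in range(iters):
        resid = U @ V.T + S - M          # gradient of 0.5*||M - L - S||_F^2
        U, V = U - step * resid @ V, V - step * resid.T @ U
        # project each row of U and V onto the ball of radius sqrt(R)
        for F in (U, V):
            norms = np.linalg.norm(F, axis=1, keepdims=True)
            np.divide(F * bound, norms, out=F, where=norms > bound)
        # soft-threshold: prox step for lam * ||S||_1
        D = M - U @ V.T
        S = np.sign(D) * np.maximum(np.abs(D) - lam, 0.0)
    return U @ V.T, S
```

Because S is updated last by soft-thresholding the residual M - L, every entry of M - L - S is clipped to [-lam, lam] after each sweep, so the fit error is controlled directly by the sparsity threshold.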