The goal of multi-view clustering (MVC) is to discover features of an object that are common across views while identifying features unique to each view. Deep neural networks excel at feature learning on large-scale unlabeled datasets, but most deep MVC methods struggle to extract and exploit the complementary information carried by view-unique features. In addition, many do not support single-sample inference, which limits their applicability. This paper presents a novel Common and Unique Representation Deep Embedded Clustering (CUR-DEC) architecture and optimization method that learns view-invariant representations and improves clustering assignments by leveraging view-unique information; the method also supports single-sample inference. We first pretrain an autoencoder to extract both view-common and view-unique features, and then learn a common cluster representation by exploiting the complementary information. Experimental results on multi-view datasets show that our method yields significant improvements over other deep multi-view clustering methods.
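
To make the two-stage idea concrete, below is a minimal sketch in PyTorch, assuming per-view autoencoders whose latent codes are split into a view-common part and a view-unique part, mean fusion of the common codes, and a standard DEC-style Student's t soft assignment. The class names (ViewEncoder, CURDECSketch), layer sizes, and fusion rule are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch of the abstract's two-stage idea; names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ViewEncoder(nn.Module):
    """Encodes one view into a view-common code and a view-unique code."""

    def __init__(self, in_dim, common_dim, unique_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, common_dim + unique_dim))
        self.decoder = nn.Sequential(nn.Linear(common_dim + unique_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
        self.common_dim = common_dim

    def forward(self, x):
        z = self.encoder(x)
        z_common, z_unique = z[:, :self.common_dim], z[:, self.common_dim:]
        x_rec = self.decoder(z)  # reconstruction uses both common and unique parts
        return z_common, z_unique, x_rec


class CURDECSketch(nn.Module):
    """Fuses per-view common codes and applies a DEC-style soft cluster assignment."""

    def __init__(self, in_dims, common_dim=32, unique_dim=16, n_clusters=10):
        super().__init__()
        self.views = nn.ModuleList(ViewEncoder(d, common_dim, unique_dim) for d in in_dims)
        self.centroids = nn.Parameter(torch.randn(n_clusters, common_dim))

    def forward(self, xs):
        outs = [enc(x) for enc, x in zip(self.views, xs)]
        commons = torch.stack([z_c for z_c, _, _ in outs], dim=0)
        fused = commons.mean(dim=0)  # shared representation across views
        # Student's t soft assignment, as in standard deep embedded clustering.
        dist2 = torch.cdist(fused, self.centroids).pow(2)
        q = (1.0 + dist2).reciprocal()
        q = q / q.sum(dim=1, keepdim=True)
        rec_loss = sum(F.mse_loss(x_rec, x) for (_, _, x_rec), x in zip(outs, xs))
        return q, rec_loss


if __name__ == "__main__":
    model = CURDECSketch(in_dims=[64, 80], n_clusters=5)
    xs = [torch.randn(8, 64), torch.randn(8, 80)]  # one batch, two views
    q, rec_loss = model(xs)
    print(q.shape, rec_loss.item())  # soft assignments per sample, reconstruction loss

Because the clustering head operates directly on the fused common code of each batch element, the same forward pass also works when a single sample is passed in, which is the single-sample inference property mentioned above.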
Details
Title
Common and Unique Representation Deep Embedded Clustering
Publication Details
2025 IEEE International Conference on Image Processing (ICIP), pp. 611–616
Resource Type
Conference proceeding
Conference
IEEE International Conference on Image Processing (ICIP 2025) (Anchorage, Alaska, USA, 09/13/2025–09/16/2025)