Links for fulltext (may require subscription):
- Publisher website (DOI): 10.1145/3340531.3411944
- Scopus: eid_2-s2.0-85095863107
- Web of Science: WOS:000749561300101
Conference Paper: Fast Attributed Multiplex Heterogeneous Network Embedding
Title | Fast Attributed Multiplex Heterogeneous Network Embedding |
---|---|
Authors | Liu, Zhijun; Huang, Chao; Yu, Yanwei; Fan, Baode; Dong, Junyu |
Keywords | attributed networks; large-scale networks; multiplex heterogeneous networks; network embedding; network representation learning; sparse random projection |
Issue Date | 2020 |
Citation | International Conference on Information and Knowledge Management, Proceedings, 2020, p. 995-1004 |
Abstract | In recent years, heterogeneous network representation learning has attracted considerable attention by taking multiple node types into account. However, most existing methods ignore the rich network attributes (attributed networks) and the different types of relations (multiplex networks), and thus can hardly recognize the multi-modal contextual signals across different interactions. While a handful of network embedding techniques have been developed for attributed multiplex heterogeneous networks, they suffer from severe scalability issues on large-scale network data due to their heavy computational and memory costs. In this work, we propose a Fast Attributed Multiplex heterogeneous network Embedding framework (FAME) for large-scale network data that maps units from different modalities (i.e., network topological structures, various node features, and relations) into the same latent space in a highly efficient way. FAME is an integrative architecture combining scalable spectral transformation with sparse random projection to automatically preserve both attribute semantics and multi-type interactions in the learned embeddings. Extensive experiments on four real-world datasets with various network analysis tasks demonstrate that FAME achieves both effectiveness and significant efficiency gains over state-of-the-art baselines. The source code is available at: https://github.com/ZhijunLiu95/FAME. |
Persistent Identifier | http://hdl.handle.net/10722/308828 |
ISI Accession Number ID | WOS:000749561300101 |
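As a rough illustration of the two ingredients the abstract names, a spectral transformation over the network followed by a sparse random projection, here is a minimal NumPy sketch. The simple row-normalized propagation, the Achlioptas-style sparse projection matrix, and all function names below are illustrative assumptions, not FAME's actual algorithm (see the linked GitHub repository for that).

```python
import numpy as np

def sparse_random_matrix(n_features, dim, s=3, rng=None):
    # Achlioptas/Li-style sparse projection: most entries are zero;
    # nonzeros are +/- sqrt(s/dim), each with probability 1/(2s).
    rng = rng or np.random.default_rng(0)
    signs = rng.choice([1.0, 0.0, -1.0], size=(n_features, dim),
                       p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
    return signs * np.sqrt(s / dim)

def embed(adj, feats, dim=8, hops=2, rng=None):
    # Smooth node attributes over the graph with a row-normalized
    # adjacency matrix (a crude spectral filter), then compress the
    # result to `dim` dimensions via the sparse random projection.
    deg = adj.sum(axis=1, keepdims=True)
    p = adj / np.maximum(deg, 1)
    h = feats
    for _ in range(hops):
        h = p @ h
    return h @ sparse_random_matrix(h.shape[1], dim, rng=rng)

# Toy example: a 4-node path graph with 16-dimensional node attributes.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.random.default_rng(1).random((4, 16))
emb = embed(adj, feats, dim=8)
print(emb.shape)  # (4, 8)
```

Random projection needs no training, which is why this family of methods scales to large networks: the projection matrix is generated once and each multiplication touches only the sparse nonzero entries.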
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Zhijun | - |
dc.contributor.author | Huang, Chao | - |
dc.contributor.author | Yu, Yanwei | - |
dc.contributor.author | Fan, Baode | - |
dc.contributor.author | Dong, Junyu | - |
dc.date.accessioned | 2021-12-08T07:50:13Z | - |
dc.date.available | 2021-12-08T07:50:13Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | International Conference on Information and Knowledge Management, Proceedings, 2020, p. 995-1004 | - |
dc.identifier.uri | http://hdl.handle.net/10722/308828 | - |
dc.description.abstract | In recent years, heterogeneous network representation learning has attracted considerable attention by taking multiple node types into account. However, most existing methods ignore the rich network attributes (attributed networks) and the different types of relations (multiplex networks), and thus can hardly recognize the multi-modal contextual signals across different interactions. While a handful of network embedding techniques have been developed for attributed multiplex heterogeneous networks, they suffer from severe scalability issues on large-scale network data due to their heavy computational and memory costs. In this work, we propose a Fast Attributed Multiplex heterogeneous network Embedding framework (FAME) for large-scale network data that maps units from different modalities (i.e., network topological structures, various node features, and relations) into the same latent space in a highly efficient way. FAME is an integrative architecture combining scalable spectral transformation with sparse random projection to automatically preserve both attribute semantics and multi-type interactions in the learned embeddings. Extensive experiments on four real-world datasets with various network analysis tasks demonstrate that FAME achieves both effectiveness and significant efficiency gains over state-of-the-art baselines. The source code is available at: https://github.com/ZhijunLiu95/FAME. | -
dc.language | eng | - |
dc.relation.ispartof | International Conference on Information and Knowledge Management, Proceedings | - |
dc.subject | attributed networks | - |
dc.subject | large-scale networks | - |
dc.subject | multiplex heterogeneous networks | - |
dc.subject | network embedding | - |
dc.subject | network representation learning | - |
dc.subject | sparse random projection | - |
dc.title | Fast Attributed Multiplex Heterogeneous Network Embedding | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1145/3340531.3411944 | - |
dc.identifier.scopus | eid_2-s2.0-85095863107 | - |
dc.identifier.spage | 995 | - |
dc.identifier.epage | 1004 | - |
dc.identifier.isi | WOS:000749561300101 | - |