Conference Paper: Sketching Transformed Matrices with Applications to Natural Language Processing

Title: Sketching Transformed Matrices with Applications to Natural Language Processing
Authors: Liang, Yingyu; Song, Zhao; Wang, Mengdi; Yang, Lin F.; Yang, Xin
Issue Date: 2020
Citation: Proceedings of Machine Learning Research, 2020, v. 108, p. 467-481
Abstract: Suppose we are given a large matrix A = (a_{i,j}) that cannot be stored in memory but resides on disk or is presented in a data stream, and we need to compute a matrix decomposition of the entrywise-transformed matrix f(A) := (f(a_{i,j})) for some function f. Is it possible to do so in a space-efficient way? Many machine learning applications need to deal with such large transformed matrices; for example, word embedding methods in NLP work with the pointwise mutual information (PMI) matrix, whose entrywise transformation makes it difficult to apply known linear-algebraic tools. Existing approaches either store the whole matrix and perform the entrywise transformation afterwards, which is space-consuming or infeasible, or redesign the learning method, which is application-specific and requires substantial remodeling. In this paper, we first propose a space-efficient sketching algorithm for computing the product of a given small matrix with the transformed matrix. It works for a general family of transformations with provably small error bounds and can thus be used as a primitive in downstream learning tasks. We then apply this primitive to a concrete application: low-rank approximation. We show that our approach obtains small error and is efficient in both space and time. We complement our theoretical results with experiments on synthetic and real data.
Persistent Identifier: http://hdl.handle.net/10722/341321
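The abstract's core primitive — multiplying a given small matrix by the entrywise-transformed matrix f(A) without ever materializing A or f(A) in memory — can be illustrated with a minimal streaming sketch. All names here are illustrative, and this naive single-pass routine computes the product exactly; it does not reproduce the paper's sketching construction or its error guarantees, only the memory-access pattern that motivates it:

```python
import numpy as np

def streamed_transformed_product(entry_stream, S, f, d):
    """Accumulate M = S @ f(A) from a stream of entries (i, j, a_ij).

    S is a small k x n matrix; A is n x d but is never stored. Each
    streamed entry contributes f(a_ij) * S[:, i] to column j of M, so
    only the k x d result and one entry at a time reside in memory.
    """
    k = S.shape[0]
    M = np.zeros((k, d))
    for i, j, a in entry_stream:
        M[:, j] += S[:, i] * f(a)
    return M

# Small dense example to check the streaming result against S @ f(A).
rng = np.random.default_rng(0)
A = rng.random((5, 4))            # stands in for a matrix too big for RAM
S = rng.standard_normal((3, 5))   # the given small matrix
stream = ((i, j, A[i, j]) for i in range(5) for j in range(4))
M = streamed_transformed_product(stream, S, np.log1p, d=4)
```

In the streaming setting only the k x d accumulator persists, which is the space saving the abstract refers to; the paper's contribution is a sketching algorithm that keeps this product accurate for a general family of transformations f with provable error bounds.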

 

DC Field: Value
dc.contributor.author: Liang, Yingyu
dc.contributor.author: Song, Zhao
dc.contributor.author: Wang, Mengdi
dc.contributor.author: Yang, Lin F.
dc.contributor.author: Yang, Xin
dc.date.accessioned: 2024-03-13T08:41:54Z
dc.date.available: 2024-03-13T08:41:54Z
dc.date.issued: 2020
dc.identifier.citation: Proceedings of Machine Learning Research, 2020, v. 108, p. 467-481
dc.identifier.uri: http://hdl.handle.net/10722/341321
dc.description.abstract: Suppose we are given a large matrix A = (a_{i,j}) that cannot be stored in memory but resides on disk or is presented in a data stream, and we need to compute a matrix decomposition of the entrywise-transformed matrix f(A) := (f(a_{i,j})) for some function f. Is it possible to do so in a space-efficient way? Many machine learning applications need to deal with such large transformed matrices; for example, word embedding methods in NLP work with the pointwise mutual information (PMI) matrix, whose entrywise transformation makes it difficult to apply known linear-algebraic tools. Existing approaches either store the whole matrix and perform the entrywise transformation afterwards, which is space-consuming or infeasible, or redesign the learning method, which is application-specific and requires substantial remodeling. In this paper, we first propose a space-efficient sketching algorithm for computing the product of a given small matrix with the transformed matrix. It works for a general family of transformations with provably small error bounds and can thus be used as a primitive in downstream learning tasks. We then apply this primitive to a concrete application: low-rank approximation. We show that our approach obtains small error and is efficient in both space and time. We complement our theoretical results with experiments on synthetic and real data.
dc.language: eng
dc.relation.ispartof: Proceedings of Machine Learning Research
dc.title: Sketching Transformed Matrices with Applications to Natural Language Processing
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85111230660
dc.identifier.volume: 108
dc.identifier.spage: 467
dc.identifier.epage: 481
dc.identifier.eissn: 2640-3498
