
Postgraduate thesis: zkSNARKs for matrix computations

Title: zkSNARKs for matrix computations
Author(s): Cong, Mingshu [叢明舒]
Advisor(s): Yiu, SM
Issue Date: 2025
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: Cong, M. [叢明舒]. (2025). zkSNARKs for matrix computations. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
Abstract: The correctness of a computation can be efficiently verified in a privacy-preserving manner without re-execution using zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs). With short transcript sizes and fast verification times, zkSNARKs enable the potential deployment of computationally intensive algorithms—such as machine learning models—on the blockchain, making them efficiently verifiable through short proofs. However, the prover time for matrix computations in these settings often fails to scale efficiently with increasing model complexity and data size. We are the first to systematically address zkSNARKs for general matrix computations with practical prover efficiency. We achieve an $O(N + nM)$ prover time, asymptotically faster than the unverified matrix computation, for computations involving $M$ matrix operations on $n \times n$ matrices with $N$ total non-zero entries. Starting with a single dense matrix multiplication, we propose zkMatrix, a special-purpose zkSNARK for verifying committed $n \times n$ matrix multiplication through projections onto random vectors. Among zkSNARKs with $O(\log n)$ transcript size and verifier time, zkMatrix is the first to achieve $O(n^2)$ prover time and $O(n^2)$ RAM usage. Batching multiple proofs together reduces the prover time for each additional multiplication to $O(n)$ group operations. Next, we design zkSNARKs for sparse matrix multiplication with $N$ non-zero entries. zkSmart reduces the prover time from $O(n^2)$ to $O(N + n)$, relying on an $O(N + n)$-prover vector-matrix-vector product argument, achieved by improving Bulletproofs. Moreover, zkSmart formulates verifiable computation as a matrix circuit of $M$ nodes, each denoting a matrix operation.
It translates the matrix circuit satisfiability (Mat-Circ-SAT) problem into the high-dimensional rank-1 constraint system (HD-R1CS), a matrix-circuit version of the rank-1 constraint system (R1CS), traditionally used for arithmetic circuits. Using zkSmart, we achieve $O(N + nM)$ prover time for general matrix computations. To reduce the cost of committing to intermediate variable matrices in zkSmart, we introduce Evalyn, which generates proofs using a pre-order tree traversal on the abstract syntax tree (AST) of a matrix expression. Evalyn ensures output and input consistency in serial matrix computations by linking randomness for zkSNARKs between parent and child nodes, eliminating the need to commit to the nodes and significantly improving prover efficiency. Our prover for R1CS outperforms state-of-the-art general-purpose zkSNARKs. As a foundational component of our framework, we optimize Bulletproofs to construct the fastest known inner product argument (IPA). Additionally, we propose a zero-knowledge transformation that commits to transcript elements with only logarithmic overhead—while maintaining compatibility with post-quantum secure, non-homomorphic commitment schemes. We apply our framework to zero-knowledge machine learning (zkML), providing zkSNARKs for neural networks. We translate floating-point truncations and non-linear activation functions into linear algebra equations that can be verified by our framework. We utilize our framework to generate efficient proofs for the attention layer in large language models (LLMs). After resolving all these challenges, we have thoroughly addressed the design of efficient zkSNARKs for matrix computations.
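The projection idea behind zkMatrix—checking a committed product through projections onto random vectors rather than recomputing it—echoes Freivalds' classical probabilistic check, which verifies $AB = C$ in $O(n^2)$ time per round instead of the $O(n^3)$ cost of recomputation. The NumPy sketch below is illustrative only: it shows the bare projection check with no commitments or zero knowledge, and the name `freivalds_check` is ours, not the thesis's.

```python
import numpy as np

rng = np.random.default_rng(0)

def freivalds_check(A, B, C, reps=20):
    """Probabilistically check A @ B == C via random projections.

    Each round costs three matrix-vector products, O(n^2), and never
    forms A @ B. With 0/1 challenge vectors, a wrong C survives one
    round with probability at most 1/2, so `reps` rounds catch it
    with probability at least 1 - 2**(-reps).
    """
    n = C.shape[1]
    for _ in range(reps):
        r = rng.integers(0, 2, size=n)       # random 0/1 challenge vector
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                     # caught a mismatch
    return True                              # all rounds passed

n = 64
A = rng.integers(0, 10, (n, n))
B = rng.integers(0, 10, (n, n))
C = A @ B                                    # honest product: accepted
C_bad = C.copy()
C_bad[0, 0] += 1                             # single wrong entry: rejected
assert freivalds_check(A, B, C)
assert not freivalds_check(A, B, C_bad)
```

In the thesis's setting the matrices are committed and the projection equalities are proven in zero knowledge via inner product arguments, but the asymptotic saving comes from the same observation: checking matrix-vector consistency is quadratic, not cubic.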
Degree: Doctor of Philosophy
Subject: Data encryption (Computer science); Matrices
Dept/Program: Computer Science
Persistent Identifier: http://hdl.handle.net/10722/356609

 

DC Field / Value
dc.contributor.advisor: Yiu, SM
dc.contributor.author: Cong, Mingshu
dc.contributor.author: 叢明舒
dc.date.accessioned: 2025-06-05T09:31:26Z
dc.date.available: 2025-06-05T09:31:26Z
dc.date.issued: 2025
dc.identifier.citation: Cong, M. [叢明舒]. (2025). zkSNARKs for matrix computations. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
dc.identifier.uri: http://hdl.handle.net/10722/356609
dc.language: eng
dc.publisher: The University of Hong Kong (Pokfulam, Hong Kong)
dc.relation.ispartof: HKU Theses Online (HKUTO)
dc.rights: The author retains all proprietary rights (such as patent rights) and the right to use in future works.
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject.lcsh: Data encryption (Computer science)
dc.subject.lcsh: Matrices
dc.title: zkSNARKs for matrix computations
dc.type: PG_Thesis
dc.description.thesisname: Doctor of Philosophy
dc.description.thesislevel: Doctoral
dc.description.thesisdiscipline: Computer Science
dc.description.nature: published_or_final_version
dc.date.hkucongregation: 2025
dc.identifier.mmsid: 991044970873703414
