Conference Paper: UNIF: United Neural Implicit Functions for Clothed Human Reconstruction and Animation

Title: UNIF: United Neural Implicit Functions for Clothed Human Reconstruction and Animation
Authors: Qian, Shenhan; Xu, Jiale; Liu, Ziwei; Ma, Liqian; Gao, Shenghua
Keywords: Clothed human reconstruction; Neural implicit functions; Non-rigid deformation; Shape representation
Issue Date: 2022
Citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2022, v. 13663 LNCS, p. 121-137
Abstract: We propose united implicit functions (UNIF), a part-based method for clothed human reconstruction and animation that takes raw scans and skeletons as input. Previous part-based methods for human reconstruction rely on ground-truth part labels from SMPL and are thus limited to minimally clothed humans. In contrast, our method learns to separate parts from body motions rather than part supervision, and can therefore be extended to clothed humans and other articulated objects. Our Partition-from-Motion is achieved by a bone-centered initialization, a bone limit loss, and a section normal loss that ensure stable part division even when the training poses are limited. We also present a minimal perimeter loss for SDFs that suppresses extra surfaces and part overlap. Another core component is an adjacent part seaming algorithm that produces non-rigid deformations to maintain the connections between parts, significantly relieving part-based artifacts. On top of this algorithm, we further propose “Competing Parts”, which defines blending weights by a point's position relative to the bones rather than its absolute position, avoiding the generalization problem of neural implicit functions under inverse LBS (linear blend skinning). We demonstrate the effectiveness of our method on clothed human body reconstruction and animation with the CAPE and ClothSeq datasets. Our code is available at https://github.com/ShenhanQian/UNIF.git.
Persistent Identifier: http://hdl.handle.net/10722/345294
ISSN: 0302-9743
2023 SCImago Journal Rankings: 0.606
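
To make the part-based representation described in the abstract concrete, the sketch below shows one way a shape can be expressed as the union (pointwise minimum) of per-part SDFs, each evaluated in its bone's canonical frame. This is a minimal NumPy sketch with hypothetical analytic sphere parts and hand-picked rigid bone transforms, not the authors' implementation: UNIF instead learns a neural SDF per part and adds its partition losses and adjacent part seaming on top.

```python
# Minimal sketch (not the authors' code): a shape as the union of per-part
# SDFs, each rigidly attached to a bone. Sphere parts and the transforms
# below are hypothetical placeholders for learned neural part SDFs.
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

def united_sdf(points, bone_transforms, part_params):
    """Evaluate each part SDF in its bone's local frame, then take the
    pointwise minimum over parts, i.e. the union of the part shapes."""
    part_values = []
    for (R, t), (center, radius) in zip(bone_transforms, part_params):
        # Map world-space query points into the bone's canonical frame by
        # inverting the rigid pose x = R @ x_local + t. Since R is
        # orthonormal, R^{-1} = R^T, and (x - t) @ R computes R^T (x - t).
        local = (points - t) @ R
        part_values.append(sphere_sdf(local, center, radius))
    return np.min(np.stack(part_values, axis=0), axis=0)

# Two "bones": identity, and a 90-degree rotation about z with an offset.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
bones = [(np.eye(3), np.zeros(3)), (Rz, np.array([0.6, 0.0, 0.0]))]
parts = [(np.zeros(3), 0.4), (np.zeros(3), 0.3)]  # canonical sphere per part

query = np.array([[0.0, 0.0, 0.0], [0.6, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(united_sdf(query, bones, parts))  # negative where inside either part
```

In this sketch, posing the shape amounts to changing only the bone transforms while the canonical part SDFs stay fixed, which is what makes a part-based representation attractive for animation; the paper's seaming algorithm and "Competing Parts" weighting then address the seams and blending that such a rigid union leaves unresolved.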

 

DC Field: Value
dc.contributor.author: Qian, Shenhan
dc.contributor.author: Xu, Jiale
dc.contributor.author: Liu, Ziwei
dc.contributor.author: Ma, Liqian
dc.contributor.author: Gao, Shenghua
dc.date.accessioned: 2024-08-15T09:26:26Z
dc.date.available: 2024-08-15T09:26:26Z
dc.date.issued: 2022
dc.identifier.citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2022, v. 13663 LNCS, p. 121-137
dc.identifier.issn: 0302-9743
dc.identifier.uri: http://hdl.handle.net/10722/345294
dc.description.abstract: We propose united implicit functions (UNIF), a part-based method for clothed human reconstruction and animation that takes raw scans and skeletons as input. Previous part-based methods for human reconstruction rely on ground-truth part labels from SMPL and are thus limited to minimally clothed humans. In contrast, our method learns to separate parts from body motions rather than part supervision, and can therefore be extended to clothed humans and other articulated objects. Our Partition-from-Motion is achieved by a bone-centered initialization, a bone limit loss, and a section normal loss that ensure stable part division even when the training poses are limited. We also present a minimal perimeter loss for SDFs that suppresses extra surfaces and part overlap. Another core component is an adjacent part seaming algorithm that produces non-rigid deformations to maintain the connections between parts, significantly relieving part-based artifacts. On top of this algorithm, we further propose “Competing Parts”, which defines blending weights by a point's position relative to the bones rather than its absolute position, avoiding the generalization problem of neural implicit functions under inverse LBS (linear blend skinning). We demonstrate the effectiveness of our method on clothed human body reconstruction and animation with the CAPE and ClothSeq datasets. Our code is available at https://github.com/ShenhanQian/UNIF.git.
dc.language: eng
dc.relation.ispartof: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.subject: Clothed human reconstruction
dc.subject: Neural implicit functions
dc.subject: Non-rigid deformation
dc.subject: Shape representation
dc.title: UNIF: United Neural Implicit Functions for Clothed Human Reconstruction and Animation
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1007/978-3-031-20062-5_8
dc.identifier.scopus: eid_2-s2.0-85144537850
dc.identifier.volume: 13663 LNCS
dc.identifier.spage: 121
dc.identifier.epage: 137
dc.identifier.eissn: 1611-3349
