Conference Paper: Compositing-Aware Image Search

Title: Compositing-Aware Image Search
Authors: Zhao, Hengshuang; Shen, Xiaohui; Lin, Zhe; Sunkavalli, Kalyan; Price, Brian; Jia, Jiaya
Issue Date: 2018
Publisher: Springer
Citation: 15th European Conference on Computer Vision (ECCV 2018), Munich, Germany, 8-14 September 2018. In Ferrari, V, Hebert, M, Sminchisescu, C, Weiss, Y (Eds.), Computer Vision – ECCV 2018: 15th European Conference, Munich, Germany, September 8–14, 2018, Proceedings, Part III, p. 517-532. Cham: Springer, 2018
Abstract: We present a new image search technique that, given a background image, returns compatible foreground objects for image compositing tasks. The compatibility of a foreground object and a background scene depends on various aspects such as semantics, surrounding context, geometry, style and color. However, existing image search techniques measure the similarities on only a few aspects, and may return many results that are not suitable for compositing. Moreover, the importance of each factor may vary for different object categories and image content, making it difficult to manually define the matching criteria. In this paper, we propose to learn feature representations for foreground objects and background scenes respectively, where image content and object category information are jointly encoded during training. As a result, the learned features can adaptively encode the most important compatibility factors. We project the features to a common embedding space, so that the compatibility scores can be easily measured using the cosine similarity, enabling very efficient search. We collect an evaluation set consisting of eight object categories commonly used in compositing tasks, on which we demonstrate that our approach significantly outperforms other search techniques.
Persistent Identifier: http://hdl.handle.net/10722/303866
ISBN: 9783030012182
ISSN: 0302-9743
2023 SCImago Journal Rankings: 0.606
ISI Accession Number ID: WOS:000594210100031
Series/Report no.: Lecture Notes in Computer Science ; 11207
Image Processing, Computer Vision, Pattern Recognition, and Graphics ; 11207
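
At query time, the retrieval described in the abstract reduces to nearest-neighbour search in the learned joint embedding: the background scene is encoded once, candidate foreground objects have precomputed embeddings in the same space, and compatibility is their cosine similarity. The sketch below illustrates only that ranking step; it is not the authors' code, and the 128-dimensional vectors, function names, and random features are assumptions made for the example.

    import numpy as np

    # Minimal sketch of cosine-similarity ranking in a shared embedding space.
    # The embedding dimension (128) and all names here are illustrative assumptions,
    # not the method's released implementation.

    def cosine_similarity(query: np.ndarray, candidates: np.ndarray) -> np.ndarray:
        """Cosine similarity between one query vector and each row of `candidates`."""
        query = query / np.linalg.norm(query)
        candidates = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
        return candidates @ query

    def rank_foregrounds(background_embedding: np.ndarray,
                         foreground_embeddings: np.ndarray,
                         top_k: int = 5) -> np.ndarray:
        """Return indices of the top-k foreground candidates most compatible with the background."""
        scores = cosine_similarity(background_embedding, foreground_embeddings)
        return np.argsort(-scores)[:top_k]

    # Toy usage with random vectors standing in for learned features.
    rng = np.random.default_rng(0)
    bg = rng.standard_normal(128)            # background-scene feature
    fgs = rng.standard_normal((1000, 128))   # candidate foreground features
    print(rank_foregrounds(bg, fgs))

Because scoring is a single dot product per candidate after normalization, the search scales to large foreground databases, which is the efficiency the abstract refers to.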

 

DC Field | Value | Language
dc.contributor.author | Zhao, Hengshuang | -
dc.contributor.author | Shen, Xiaohui | -
dc.contributor.author | Lin, Zhe | -
dc.contributor.author | Sunkavalli, Kalyan | -
dc.contributor.author | Price, Brian | -
dc.contributor.author | Jia, Jiaya | -
dc.date.accessioned | 2021-09-15T08:26:10Z | -
dc.date.available | 2021-09-15T08:26:10Z | -
dc.date.issued | 2018 | -
dc.identifier.citation | 15th European Conference on Computer Vision (ECCV 2018), Munich, Germany, 8-14 September 2018. In Ferrari, V, Hebert, M, Sminchisescu, C, Weiss, Y (Eds.), Computer Vision – ECCV 2018: 15th European Conference, Munich, Germany, September 8–14, 2018, Proceedings, Part III, p. 517-532. Cham: Springer, 2018 | -
dc.identifier.isbn | 9783030012182 | -
dc.identifier.issn | 0302-9743 | -
dc.identifier.uri | http://hdl.handle.net/10722/303866 | -
dc.description.abstract | We present a new image search technique that, given a background image, returns compatible foreground objects for image compositing tasks. The compatibility of a foreground object and a background scene depends on various aspects such as semantics, surrounding context, geometry, style and color. However, existing image search techniques measure the similarities on only a few aspects, and may return many results that are not suitable for compositing. Moreover, the importance of each factor may vary for different object categories and image content, making it difficult to manually define the matching criteria. In this paper, we propose to learn feature representations for foreground objects and background scenes respectively, where image content and object category information are jointly encoded during training. As a result, the learned features can adaptively encode the most important compatibility factors. We project the features to a common embedding space, so that the compatibility scores can be easily measured using the cosine similarity, enabling very efficient search. We collect an evaluation set consisting of eight object categories commonly used in compositing tasks, on which we demonstrate that our approach significantly outperforms other search techniques. | -
dc.language | eng | -
dc.publisher | Springer | -
dc.relation.ispartof | Computer Vision – ECCV 2018: 15th European Conference, Munich, Germany, September 8–14, 2018, Proceedings, Part III | -
dc.relation.ispartofseries | Lecture Notes in Computer Science ; 11207 | -
dc.relation.ispartofseries | Image Processing, Computer Vision, Pattern Recognition, and Graphics ; 11207 | -
dc.title | Compositing-Aware Image Search | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1007/978-3-030-01219-9_31 | -
dc.identifier.scopus | eid_2-s2.0-85055095187 | -
dc.identifier.spage | 517 | -
dc.identifier.epage | 532 | -
dc.identifier.eissn | 1611-3349 | -
dc.identifier.isi | WOS:000594210100031 | -
dc.publisher.place | Cham | -
