Conference Paper: Class-Adapted Blind Deblurring of Document Images

Title: Class-Adapted Blind Deblurring of Document Images
Authors: Ljubenovic, Marina; Zhuang, Lina; Figueiredo, Mario A.T.
Issue Date: 2017
Citation: Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, 2017, v. 1, p. 721-726
Abstract: Deblurring of document images is an important problem, with several relevant applications, such as camera-based document acquisition and processing systems. Consequently, considerable attention has been given to this problem, namely in the blind image deblurring (BID) scenario, where the blurring filter is (partially or fully) unknown. Traditional BID methods can be used for document images, but this is far from optimal, since those methods are tailored to natural images, that is, they rely on statistical properties of natural images. This has led to the proposal of a few special-purpose techniques, namely by exploiting properties of text images. In fact, in document images, the most prevalent type of content is text, but in some cases, it is not the only one, with the other types being very different from text. For example, identity documents typically contain faces and/or fingerprints, which are not adequately treated by methods designed for images of text. In this work, we propose a new method for BID of documents, supported on a class-adapted dictionary-based prior (learned from one or more sets of clean images of specific classes) for the image and a sparsity-inducing prior on the (unknown) blurring filter. This approach handles document images that contain two or more image classes (e.g., text and faces), which is a main contribution of our work. Experiments with document images containing both text and faces show the competitiveness of the proposed method in terms of restoration quality. Additionally, our experiments show that the proposed method is able to handle images with strong noise, outperforming state-of-the-art methods designed for BID of text images.
Persistent Identifier: http://hdl.handle.net/10722/298258
ISSN: 1520-5363
2020 SCImago Journal Rankings: 0.353
ISI Accession Number ID: WOS:000464822500113
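The abstract describes an alternating estimation scheme: the image is regularized by a class-adapted, dictionary-based patch prior (learned offline from clean images of a given class, e.g., text or faces), while the unknown blur filter is regularized by a sparsity-inducing prior. The Python sketch below is only a minimal illustration of that general structure, not the authors' implementation: all function names, step sizes, thresholds, and the patch/kernel sizes are illustrative assumptions, and the pre-learned dictionary D is assumed to be given (e.g., trained on clean patches of the relevant class).

```python
# Minimal, hypothetical sketch of alternating blind deblurring with a
# dictionary-based image prior and an l1 (sparsity) prior on the blur kernel.
# Illustrative only: it mirrors the general structure described in the
# abstract, not the paper's exact algorithm or parameter choices.
import numpy as np
from scipy.signal import fftconvolve


def sparse_code_patch(p, D, n_nonzero=4):
    """Greedy (OMP-style) sparse approximation of a vectorized patch p on dictionary D."""
    residual = p.copy()
    support = []
    approx = np.zeros_like(p)
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coefs, *_ = np.linalg.lstsq(D[:, support], p, rcond=None)
        approx = D[:, support] @ coefs
        residual = p - approx
    return approx


def dictionary_prior_step(x, D, patch=8):
    """Replace non-overlapping patches of x by their sparse approximations on D."""
    out = x.copy()
    H, W = x.shape
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            p = x[i:i + patch, j:j + patch].reshape(-1)
            out[i:i + patch, j:j + patch] = sparse_code_patch(p, D).reshape(patch, patch)
    return out


def soft_threshold(v, lam):
    """Proximal operator of the l1 norm (element-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)


def blind_deblur(y, D, ksize=9, iters=30, lr_x=0.5, lr_k=1e-3, lam_k=1e-3):
    """Alternate image/kernel updates for the observation model y ~ k * x + noise."""
    H, W = y.shape
    c = ksize // 2
    x = y.copy()                                   # initialize the image with the blurry input
    k = np.zeros((ksize, ksize))
    k[c, c] = 1.0                                  # start from a delta (no-blur) kernel
    for _ in range(iters):
        # --- image step: gradient on the data term, then the dictionary-based prior ---
        r = fftconvolve(x, k, mode="same") - y     # residual k * x - y
        grad_x = fftconvolve(r, np.flip(k), mode="same")   # adjoint blur applied to residual
        x = dictionary_prior_step(x - lr_x * grad_x, D)
        # --- kernel step: gradient, l1 shrinkage, positivity, sum-to-one normalization ---
        r = fftconvolve(x, k, mode="same") - y
        xp = np.pad(x, c)
        grad_k = np.empty_like(k)
        for a in range(ksize):
            for b in range(ksize):
                grad_k[a, b] = np.sum(r * xp[2 * c - a: 2 * c - a + H,
                                             2 * c - b: 2 * c - b + W])
        k = soft_threshold(k - lr_k * grad_k, lam_k)
        k = np.maximum(k, 0.0)
        if k.sum() > 0:
            k /= k.sum()
    return x, k
```

In practice, D would be learned offline from clean patches of the relevant class or classes (text, faces, etc.); how multiple class dictionaries are combined for mixed-content documents is the contribution claimed in the abstract and is not reproduced in this sketch.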

 

DC Field                  Value
dc.contributor.author     Ljubenovic, Marina
dc.contributor.author     Zhuang, Lina
dc.contributor.author     Figueiredo, Mario A.T.
dc.date.accessioned       2021-04-08T03:08:01Z
dc.date.available         2021-04-08T03:08:01Z
dc.date.issued            2017
dc.identifier.citation    Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, 2017, v. 1, p. 721-726
dc.identifier.issn        1520-5363
dc.identifier.uri         http://hdl.handle.net/10722/298258
dc.description.abstract   Deblurring of document images is an important problem, with several relevant applications, such as camera-based document acquisition and processing systems. Consequently, considerable attention has been given to this problem, namely in the blind image deblurring (BID) scenario, where the blurring filter is (partially or fully) unknown. Traditional BID methods can be used for document images, but this is far from optimal, since those methods are tailored to natural images, that is, they rely on statistical properties of natural images. This has led to the proposal of a few special-purpose techniques, namely by exploiting properties of text images. In fact, in document images, the most prevalent type of content is text, but in some cases, it is not the only one, with the other types being very different from text. For example, identity documents typically contain faces and/or fingerprints, which are not adequately treated by methods designed for images of text. In this work, we propose a new method for BID of documents, supported on a class-adapted dictionary-based prior (learned from one or more sets of clean images of specific classes) for the image and a sparsity-inducing prior on the (unknown) blurring filter. This approach handles document images that contain two or more image classes (e.g., text and faces), which is a main contribution of our work. Experiments with document images containing both text and faces show the competitiveness of the proposed method in terms of restoration quality. Additionally, our experiments show that the proposed method is able to handle images with strong noise, outperforming state-of-the-art methods designed for BID of text images.
dc.language               eng
dc.relation.ispartof      Proceedings of the International Conference on Document Analysis and Recognition, ICDAR
dc.title                  Class-Adapted Blind Deblurring of Document Images
dc.type                   Conference_Paper
dc.description.nature     link_to_subscribed_fulltext
dc.identifier.doi         10.1109/ICDAR.2017.123
dc.identifier.scopus      eid_2-s2.0-85045218374
dc.identifier.volume      1
dc.identifier.spage       721
dc.identifier.epage       726
dc.identifier.isi         WOS:000464822500113
dc.identifier.issnl       1520-5363
