Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11851/7003
Full metadata record
DC Field | Value | Language
dc.contributor.author | Shokoufandeh, A. | -
dc.contributor.author | Keselman, Y. | -
dc.contributor.author | Demirci, M. F. | -
dc.contributor.author | Macrini, D. | -
dc.contributor.author | Dickinson, S. | -
dc.date.accessioned | 2021-09-11T15:44:49Z | -
dc.date.available | 2021-09-11T15:44:49Z | -
dc.date.issued | 2012 | en_US
dc.identifier.issn | 1751-9632 | -
dc.identifier.issn | 1751-9640 | -
dc.identifier.uri | https://doi.org/10.1049/iet-cvi.2012.0030 | -
dc.identifier.uri | https://hdl.handle.net/20.500.11851/7003 | -
dc.description.abstract | The mainstream object categorisation community relies heavily on object representations consisting of local image features, due to their ease of recovery and their attractive invariance properties. Object categorisation is therefore formulated as finding, that is, 'detecting', a one-to-one correspondence between image and model features. This assumption breaks down for categories in which two exemplars may not share a single local image feature. Even when objects are represented as more abstract image features, a collection of features at one scale (in one image) may correspond to a single feature at a coarser scale (in the second image). Effective object categorisation therefore requires the ability to match features many-to-many. In this paper, we review our progress on three independent object categorisation problems, each formulated as a graph matching problem and each solving the many-to-many graph matching problem in a different way. First, we explore the problem of learning a shape class prototype from a set of class exemplars which may not share a single local image feature. Next, we explore the problem of matching two graphs in which correspondence exists only at higher levels of abstraction, and describe a low-dimensional, spectral encoding of graph structure that captures the abstract shape of a graph. Finally, we embed graphs into geometric spaces, reducing the many-to-many graph-matching problem to a weighted point matching problem, for which efficient many-to-many matching algorithms exist. | en_US
dc.description.sponsorship | Natural Sciences and Engineering Research Council of Canada (NSERC); IRIS; National Science Foundation (NSF); Office of Naval Research (ONR); Defense Advanced Research Projects Agency (DARPA); PREA | en_US
dc.description.sponsorship | The authors gratefully acknowledge the support of NSERC, IRIS, NSF, ONR, DARPA, and PREA. | en_US
dc.language.iso | en | en_US
dc.publisher | Wiley | en_US
dc.relation.ispartof | IET Computer Vision | en_US
dc.rights | info:eu-repo/semantics/closedAccess | en_US
dc.subject | [No Keywords] | en_US
dc.title | Many-to-many feature matching in object recognition: a review of three approaches | en_US
dc.type | Review | en_US
dc.department | Faculties, Faculty of Engineering, Department of Computer Engineering | en_US
dc.department | Fakülteler, Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü | tr_TR
dc.identifier.volume | 6 | en_US
dc.identifier.issue | 6 | en_US
dc.identifier.startpage | 500 | en_US
dc.identifier.endpage | 513 | en_US
dc.identifier.wos | WOS:000318228200002 | en_US
dc.identifier.scopus | 2-s2.0-84879735073 | en_US
dc.institutionauthor | Demirci, Muhammed Fatih | -
dc.identifier.doi | 10.1049/iet-cvi.2012.0030 | -
dc.relation.publicationcategory | Other | en_US
dc.identifier.scopusquality | Q3 | -
item.fulltext | No Fulltext | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.languageiso639-1 | en | -
item.cerifentitytype | Publications | -
item.openairetype | Review | -
item.grantfulltext | none | -
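
The abstract's third approach, reducing many-to-many graph matching to weighted point matching between embedded node sets, lends itself to a brief illustration. The sketch below is not the authors' implementation: the Laplacian-eigenvector embedding, the function names, and the uniform node weights are assumptions made here for concreteness. It solves the transportation (Earth Mover's Distance) linear program between the two embedded point sets; because the optimal flow may split one point's mass across several partners, the recovered correspondence is many-to-many.

    # Illustrative sketch only (Python / NumPy / SciPy); not code from the paper.
    import numpy as np
    from scipy.optimize import linprog

    def spectral_embedding(adj, d=2):
        """Embed nodes via the first d non-trivial Laplacian eigenvectors,
        a simple stand-in for a low-dimensional spectral encoding."""
        lap = np.diag(adj.sum(axis=1)) - adj
        _, vecs = np.linalg.eigh(lap)      # eigenvectors, ascending eigenvalues
        return vecs[:, 1:d + 1]            # skip the constant eigenvector

    def emd_flow(P, wp, Q, wq):
        """Optimal transportation flow between weighted point sets.
        flow[i, j] > 0 links point i of P to point j of Q; a point's mass
        may split across several partners, so the matching is many-to-many."""
        m, n = len(P), len(Q)
        cost = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2).ravel()
        A_eq = np.zeros((m + n, m * n))
        for i in range(m):                 # mass leaving source i equals wp[i]
            A_eq[i, i * n:(i + 1) * n] = 1.0
        for j in range(n):                 # mass reaching sink j equals wq[j]
            A_eq[m + j, j::n] = 1.0
        res = linprog(cost, A_eq=A_eq, b_eq=np.concatenate([wp, wq]),
                      bounds=(0, None), method="highs")
        return res.x.reshape(m, n)

    # Toy usage: a 4-node path graph vs. a 3-node path graph with uniform
    # node weights; some coarse node must absorb the mass of two fine nodes,
    # which a one-to-one assignment could not express.
    A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
    B = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
    flow = emd_flow(spectral_embedding(A), np.full(4, 0.25),
                    spectral_embedding(B), np.full(3, 1 / 3))
    print(np.round(flow, 3))

Note that eigenvector sign ambiguity can reflect one embedding relative to the other; this toy example ignores that, whereas a practical system would align or normalise the embeddings before matching.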
Appears in Collections:Bilgisayar Mühendisliği Bölümü / Department of Computer Engineering
Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
Scopus™ Citations: 22 (checked on Apr 20, 2024)
Web of Science™ Citations: 18 (checked on Apr 20, 2024)
Page view(s): 22 (checked on Apr 22, 2024)

Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.