Spratling, Michael (2005) Learning viewpoint invariant perceptual representations from cluttered images. IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (5), pp. 753-761. ISSN 0162-8828.
Abstract
In order to perform object recognition, it is necessary to form perceptual representations that are sufficiently specific to distinguish between objects, but that are also sufficiently flexible to generalize across changes in location, rotation, and scale. A standard method for learning perceptual representations that are invariant to viewpoint is to form temporal associations across image sequences showing object transformations. However, this method requires that individual stimuli be presented in isolation and is therefore unlikely to succeed in real-world applications where multiple objects can co-occur in the visual input. This paper proposes a simple modification to the learning method that can overcome this limitation and results in more robust learning of invariant representations.
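The abstract refers to the standard trace-rule approach to invariance learning: a unit's activity is smoothed over time so that weights come to respond to an object across a whole transformation sequence rather than to a single frame. The sketch below is a generic, minimal illustration of that standard idea (a Földiák-style trace rule with winner-take-all competition), not the paper's proposed modification; all function and parameter names are illustrative.

```python
import numpy as np

def trace_learning(sequences, n_units=4, eta=0.2, alpha=0.05, seed=0):
    """Illustrative trace-rule learner (NOT the paper's algorithm).

    Each unit keeps a leaky temporal trace of its activity, so a weight
    update is driven by recent frames of the same transforming object,
    encouraging viewpoint-invariant responses.
    """
    rng = np.random.default_rng(seed)
    dim = sequences[0].shape[1]
    W = rng.normal(scale=0.1, size=(n_units, dim))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    for seq in sequences:              # one sequence = one object transforming
        trace = np.zeros(n_units)      # reset the trace between objects
        for x in seq:                  # one image frame
            y = W @ x                  # linear responses
            y_act = np.zeros(n_units)
            y_act[np.argmax(y)] = 1.0  # simple competition: one winner
            trace = (1 - eta) * trace + eta * y_act       # leaky trace
            W += alpha * trace[:, None] * (x - W)          # trace-gated update
            W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded
    return W
```

Note that this baseline assumes each sequence contains a single isolated object, which is exactly the limitation the paper's modification is designed to overcome.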
Metadata
| Field | Value |
|---|---|
| Item Type | Article |
| Additional Information | This is an exact copy of a paper published in IEEE Transactions on Pattern Analysis and Machine Intelligence (ISSN 0162-8828). It is reproduced with permission from the publisher. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. © 2005 IEEE. |
| Keyword(s) / Subject(s) | computational models of vision, neural nets |
| School | Birkbeck Faculties and Schools > Faculty of Science > School of Psychological Sciences |
| Depositing User | Sandra Plummer |
| Date Deposited | 22 Mar 2006 |
| Last Modified | 02 Aug 2023 16:46 |
| URI | https://eprints.bbk.ac.uk/id/eprint/342 |