Learning viewpoint invariant perceptual representations from cluttered images
Spratling, Michael (2005) Learning viewpoint invariant perceptual representations from cluttered images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(5), pp. 753-761. ISSN 0162-8828.
In order to perform object recognition, it is necessary to form perceptual representations that are sufficiently specific to distinguish between objects, but that are also sufficiently flexible to generalize across changes in location, rotation, and scale. A standard method for learning perceptual representations that are invariant to viewpoint is to form temporal associations across image sequences showing object transformations. However, this method requires that individual stimuli be presented in isolation and is therefore unlikely to succeed in real-world applications where multiple objects can co-occur in the visual input. This paper proposes a simple modification to the learning method that can overcome this limitation and results in more robust learning of invariant representations.
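The "standard method" the abstract refers to is commonly known as trace learning (in the style of Földiák's trace rule): each unit's Hebbian update uses a temporally smoothed trace of its output rather than its instantaneous response, so views that occur close together in time become associated with the same unit. The sketch below illustrates this generic baseline rule only, not the paper's proposed modification; all names, dimensions, and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_outputs = 16, 4
W = rng.random((n_outputs, n_inputs)) * 0.1  # small random initial weights

def trace_learning_step(W, x, y_trace, eta=0.8, alpha=0.05):
    """One step of a Foldiak-style trace rule (a generic sketch,
    not the modified method proposed in the paper).

    x        : current input vector (e.g. one frame of an image sequence)
    y_trace  : running average of past outputs (the temporal 'trace')
    eta      : how strongly the trace remembers past activity
    alpha    : learning rate
    """
    y = W @ x                                   # feedforward response
    y_trace = eta * y_trace + (1 - eta) * y     # temporally smoothed output
    # Hebbian update driven by the trace instead of the instantaneous
    # output, so a unit learns to respond to inputs that follow each
    # other in time (e.g. successive views of the same object).
    W = W + alpha * np.outer(y_trace, x)
    # simple row normalisation to keep the weights bounded
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    return W, y_trace

# A toy "image sequence": noisy transformed views of one underlying pattern,
# presented in isolation (the regime the abstract says this rule requires).
pattern = rng.random(n_inputs)
y_trace = np.zeros(n_outputs)
for _ in range(50):
    x = pattern + 0.05 * rng.standard_normal(n_inputs)
    W, y_trace = trace_learning_step(W, x, y_trace)
```

The abstract's point is that this baseline breaks down when frames contain multiple co-occurring objects, since the trace then binds unrelated stimuli together; the paper proposes a modification to the learning method to handle that cluttered case.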
Additional Information: This is an exact copy of a paper published in IEEE Transactions on Pattern Analysis and Machine Intelligence (ISSN 0162-8828). It is reproduced with permission from the publisher. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes, or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE. © 2005 IEEE.
Keywords / Subjects: computational models of vision, neural nets
School: Birkbeck Schools and Departments > School of Science > Psychological Sciences
Depositing User: Sandra Plummer
Date Deposited: 22 Mar 2006
Last Modified: 17 Apr 2013 12:32