Du, Y., Yuan, C., Hu, W. and Maybank, Stephen J. (2017) Spatio-temporal self-organizing map deep network for dynamic object detection from videos. In: IEEE Conference on Computer Vision and Pattern Recognition 2017. IEEE Computer Society. ISBN 9781538604588.
Text
18383.pdf - Author's Accepted Manuscript (876kB). Restricted to Repository staff only.
Abstract
In dynamic object detection, it is challenging to construct an effective model that sufficiently characterizes the spatio-temporal properties of the background. This paper proposes a new Spatio-Temporal Self-Organizing Map (STSOM) deep network to detect dynamic objects in complex scenarios. The proposed approach makes several contributions. First, a novel STSOM shared by all pixels in a video frame is presented to model complex backgrounds efficiently. We exploit the fact that background motion varies globally in space and locally in time, training the STSOM on whole frames and on each pixel's sequence over time to handle this variation. Second, a Bayesian parameter estimation method is presented to learn per-pixel thresholds automatically for filtering out the background. Last, to model complex backgrounds more accurately, we extend the single-layer STSOM to a deep network, so the background is filtered out layer by layer. Experimental results on the CDnet 2014 dataset demonstrate that the proposed STSOM deep network outperforms numerous recently proposed methods both in overall performance and in most categories of scenarios.
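To make the SOM-based background idea concrete, the following is a minimal sketch (not the paper's STSOM; all class and parameter names here are hypothetical): a small codebook of weight vectors is trained on observed pixel samples by pulling the best-matching unit (BMU) and its topological neighbours toward each sample, and a new pixel is labelled background when its distance to the nearest codebook vector falls below a threshold.

```python
import numpy as np

class SimpleSOMBackground:
    """Toy 1-D SOM used as a per-pixel background model (illustrative only)."""

    def __init__(self, n_units=9, dim=3, lr=0.1, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.random((n_units, dim))  # codebook vectors
        self.lr = lr          # learning rate
        self.sigma = sigma    # neighbourhood width over the 1-D unit index

    def update(self, x):
        """Move the BMU and its neighbours toward sample x."""
        dists = np.linalg.norm(self.weights - x, axis=1)
        bmu = int(np.argmin(dists))
        idx = np.arange(len(self.weights))
        # Gaussian neighbourhood function over the 1-D topology
        h = np.exp(-((idx - bmu) ** 2) / (2 * self.sigma ** 2))
        self.weights += self.lr * h[:, None] * (x - self.weights)
        return dists[bmu]

    def is_background(self, x, threshold=0.2):
        """A sample matches the background model if its BMU distance is small."""
        return bool(np.min(np.linalg.norm(self.weights - x, axis=1)) < threshold)

# Train on a stable background colour, then classify new samples.
model = SimpleSOMBackground()
background_pixel = np.array([0.5, 0.5, 0.5])
for _ in range(200):
    model.update(background_pixel)
print(model.is_background(background_pixel))          # familiar pixel -> True
print(model.is_background(np.array([5.0, 5.0, 5.0])))  # outlier pixel -> False
```

In the paper this single-layer idea is extended: the threshold is learned per pixel via Bayesian parameter estimation rather than fixed, and several STSOM layers are stacked so that each layer filters out more of the background.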
Metadata
| Item Type: | Book Section |
|---|---|
| Additional Information: | Print ISSN: 1063-6919. http://cvpr2017.thecvf.com/program/workshops |
| School: | Birkbeck Faculties and Schools > Faculty of Science > School of Computing and Mathematical Sciences |
| Depositing User: | Administrator |
| Date Deposited: | 20 Mar 2017 09:30 |
| Last Modified: | 09 Aug 2023 12:41 |
| URI: | https://eprints.bbk.ac.uk/id/eprint/18383 |