BIROn - Birkbeck Institutional Research Online

    Sequence processing with recurrent neural networks

    Peng, C.C. and Magoulas, George D. (2009) Sequence processing with recurrent neural networks. In: Rabuñal Dopico, J.R. and Dorado, J. and Pazos, A. (eds.) Encyclopedia of Artificial Intelligence. Hershey, U.S.: IGI Global, pp. 1411-1417. ISBN 9781599048499.

    Full text not available from this repository.

    Abstract

    Sequence processing involves several tasks, such as clustering, classification, prediction, and transduction of sequential data, which can be symbolic, non-symbolic or mixed. Examples of symbolic data patterns occur in modelling natural (human) language, while predicting the water level of the River Thames is an example of processing non-symbolic data. If the content of a sequence varies over time steps, the sequence is called temporal or time-series. In general, a temporal sequence consists of nominal symbols from a particular alphabet, while a time-series sequence deals with continuous, real-valued elements (Antunes & Oliveira, 2001). Processing both kinds of sequence mainly consists of applying currently known patterns to produce or predict future ones, and a major difficulty is that the range of data dependencies is usually unknown. An intelligent system with memorising capability is therefore crucial for effective sequence processing and modelling. A recurrent neural network (RNN) is an artificial neural network in which self-loop and backward connections between nodes are allowed (Lin & Lee, 1996; Schalkoff, 1997). Compared with feedforward neural networks, RNNs are well known for their power to memorise time dependencies and model nonlinear systems. RNNs can be trained from examples to map input sequences to output sequences and, in principle, they can implement any kind of sequential behaviour. They are biologically more plausible and computationally more powerful than other modelling approaches, such as Hidden Markov Models (HMMs), which have non-continuous internal states, and feedforward neural networks and Support Vector Machines (SVMs), which have no internal states at all. In this article, we review RNN architectures and discuss the challenges involved in training RNNs for sequence processing. We also review learning algorithms for RNNs and discuss future trends in this area.
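
    The core idea the abstract describes, a hidden state fed back through recurrent connections so that each output depends on the whole history of inputs, can be illustrated with a minimal sketch. The following Elman-style forward pass is not taken from the chapter; all names, dimensions and weights are illustrative assumptions.

        import numpy as np

        # Minimal Elman-style RNN forward pass (illustrative sketch only,
        # not code from the chapter; sizes and names are assumptions).
        rng = np.random.default_rng(0)

        n_in, n_hidden, n_out = 3, 5, 2                          # input, hidden (state), output sizes
        W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input -> hidden
        W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden -> hidden: the recurrent "self-loop"
        W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))     # hidden -> output

        def rnn_forward(xs):
            """Map an input sequence to an output sequence.

            The hidden state h carries a memory of earlier time steps,
            which is what lets the network model time dependencies.
            """
            h = np.zeros(n_hidden)
            ys = []
            for x in xs:                          # one step per sequence element
                h = np.tanh(W_xh @ x + W_hh @ h)  # new state depends on input AND previous state
                ys.append(W_hy @ h)               # output at this time step
            return ys

        # Example: map a random 4-step input sequence to a 4-step output sequence.
        sequence = [rng.normal(size=n_in) for _ in range(4)]
        outputs = rnn_forward(sequence)

    Because the recurrent weights W_hh feed the state back into itself, unrolling this loop over time is what training algorithms such as backpropagation through time operate on, and it is also the source of the training difficulties the article discusses.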

    Metadata

    Item Type: Book Section
    School: Birkbeck Faculties and Schools > Faculty of Science > School of Computing and Mathematical Sciences
    Research Centres and Institutes: Birkbeck Knowledge Lab
    Depositing User: Administrator
    Date Deposited: 04 Apr 2011 09:23
    Last Modified: 09 Aug 2023 12:29
    URI: https://eprints.bbk.ac.uk/id/eprint/1423

    Statistics

    Activity Overview (6 month trend): 0 Downloads, 372 Hits

    Additional statistics are available via IRStats2.