Nonlinear Filters. Simon Haykin

Statistical Modeling

      Statistical modeling aims to extract information about the underlying data-generating mechanism in order to make predictions, which can then be used to make decisions. There are two cultures in deploying statistical models for data analysis [5]:

       Data modeling culture is based on the idea that a given stochastic model generates the data.

       Algorithmic modeling culture uses algorithmic models to deal with an unknown data mechanism.

      The algorithmic approach has the advantage of being able to handle large, complex datasets. Moreover, it can avoid imposing irrelevant theory or drawing questionable conclusions.

Figure: Schematic illustration of the encoder of an asymmetric autoencoder playing the role of a nonlinear filter.

      Taking an algorithmic approach, in machine learning, statistical models can be classified as [6]:

       (i) Generative models predict visible effects from hidden causes.

       (ii) Discriminative models infer hidden causes from visible effects.
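This distinction can be made concrete with a toy example. The sketch below is illustrative only (it is not from the book): for a one-dimensional binary classification problem, a generative model fits p(effect | cause) per class and inverts it with Bayes' rule, while a discriminative model fits p(cause | effect) directly, here via a few gradient-ascent steps of logistic regression. All names and parameter values are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: hidden cause z in {0, 1}, visible effect x ~ N(mu_z, 1).
z = rng.integers(0, 2, size=1000)
x = rng.normal(np.where(z == 0, -1.0, 1.0), 1.0)

# Generative route: model p(x | z) and p(z), then invert with Bayes' rule.
mu = np.array([x[z == 0].mean(), x[z == 1].mean()])
prior = np.array([(z == 0).mean(), (z == 1).mean()])

def generative_posterior(x_new):
    lik = np.exp(-0.5 * (x_new - mu) ** 2)   # Gaussian likelihoods (unit variance)
    post = lik * prior
    return post / post.sum()

# Discriminative route: model p(z | x) directly with logistic regression,
# fitted by a few gradient-ascent steps on the log-likelihood.
w, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w += 0.1 * np.mean((z - p) * x)
    b += 0.1 * np.mean(z - p)

def discriminative_posterior(x_new):
    p1 = 1.0 / (1.0 + np.exp(-(w * x_new + b)))
    return np.array([1.0 - p1, p1])

# Both routes infer the hidden cause from the visible effect.
print(generative_posterior(0.8), discriminative_posterior(0.8))
```

On this well-separated toy problem the two posteriors nearly coincide; the cultures differ in what is modeled, not necessarily in the answer.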

       Chapter 2 presents the notion of observability for deterministic and stochastic systems.

      Chapters 3–7 cover classic estimation algorithms:

       Chapter 3 is dedicated to observers as state estimators for deterministic systems.

       Chapter 4 presents the general formulation of the optimal Bayesian filtering for stochastic systems.

       Chapter 5 covers the Kalman filter as the optimal Bayesian filter in the sense of minimizing the mean‐square estimation error for linear systems with Gaussian noise. Moreover, Kalman filter variants are presented that extend its applicability to nonlinear or non‐Gaussian cases.

       Chapter 6 covers the particle filter, which handles severe nonlinearity and non‐Gaussianity by approximating the corresponding distributions using a set of particles (random samples).

       Chapter 7 covers the smooth variable‐structure filter, which provides robustness against bounded uncertainties and noise. In addition to the innovation vector, this filter benefits from a secondary set of performance indicators.
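To give a flavor of the classic algorithms above, the following minimal sketch (my own illustration, not the book's code) implements the scalar Kalman filter of Chapter 5 for a random-walk state x_k = x_{k-1} + w_k observed as y_k = x_k + v_k; the noise variances Q and R are assumed known.

```python
import numpy as np

rng = np.random.default_rng(1)
Q, R = 0.01, 0.25          # process and measurement noise variances (assumed)

# Simulate a random-walk trajectory and its noisy measurements.
x_true = np.cumsum(rng.normal(0.0, np.sqrt(Q), 100))
y = x_true + rng.normal(0.0, np.sqrt(R), 100)

x_hat, P = 0.0, 1.0        # initial estimate and its variance
estimates = []
for y_k in y:
    # Predict: the random-walk model leaves the mean unchanged; variance grows.
    P = P + Q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P / (P + R)
    x_hat = x_hat + K * (y_k - x_hat)   # (y_k - x_hat) is the innovation
    P = (1.0 - K) * P
    estimates.append(x_hat)

estimates = np.array(estimates)
# The filtered estimate tracks the state more closely than the raw data.
print(np.mean((estimates - x_true) ** 2), np.mean((y - x_true) ** 2))
```

For this linear-Gaussian model the Kalman filter is the optimal Bayesian filter in the mean-square sense, as Chapter 5 discusses.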
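The particle filter of Chapter 6 admits an equally compact sketch. The example below (again my own toy illustration, not the book's code) runs a bootstrap particle filter on a scalar random walk x_k = x_{k-1} + w_k with measurements y_k = x_k + v_k, representing the posterior by a resampled cloud of particles rather than a Gaussian.

```python
import numpy as np

rng = np.random.default_rng(3)
Q, R, N = 0.01, 0.25, 500   # noise variances and particle count (assumed)

x_true = np.cumsum(rng.normal(0.0, np.sqrt(Q), 100))
y = x_true + rng.normal(0.0, np.sqrt(R), 100)

particles = rng.normal(0.0, 1.0, N)   # initial particle cloud
estimates = []
for y_k in y:
    # Propagate each particle through the (possibly nonlinear) dynamics.
    particles = particles + rng.normal(0.0, np.sqrt(Q), N)
    # Weight each particle by the measurement likelihood p(y_k | x).
    w = np.exp(-0.5 * (y_k - particles) ** 2 / R)
    w /= w.sum()
    estimates.append(np.sum(w * particles))
    # Resample to counter weight degeneracy (multinomial resampling).
    particles = rng.choice(particles, size=N, p=w)

estimates = np.array(estimates)
print(np.mean((estimates - x_true) ** 2), np.mean((y - x_true) ** 2))
```

Nothing in the loop relies on linearity or Gaussianity, which is why the same scheme extends to the severely nonlinear, non-Gaussian settings the chapter targets.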

      Chapters 8–11 cover learning‐based estimation algorithms:

       Chapter 8 covers the basics of deep learning.

       Chapter 9 covers deep‐learning‐based filtering algorithms using supervised and unsupervised learning.

       Chapter 10 presents the expectation maximization algorithm and its variants, which are used for joint state and parameter estimation.

       Chapter 11 presents the reinforcement learning‐based filter, which is built on viewing variational inference and reinforcement learning as instances of a generic expectation maximization problem.
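As a taste of the expectation maximization algorithm of Chapter 10, the following minimal sketch (my own toy example, not the book's) estimates the unknown parameters of a two-component 1D Gaussian mixture with known unit variances from unlabeled data, alternating an E-step (posterior of the hidden label) with an M-step (re-estimation from expected sufficient statistics).

```python
import numpy as np

rng = np.random.default_rng(2)
# Unlabeled data drawn from 0.3 * N(-2, 1) + 0.7 * N(3, 1).
data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])

mu = np.array([-1.0, 1.0])   # initial guesses for the component means
pi1 = 0.5                    # initial weight of component 1

for _ in range(50):
    # E-step: responsibilities, i.e. the posterior over the hidden label.
    lik = np.exp(-0.5 * (data[:, None] - mu) ** 2)
    weighted = lik * np.array([1.0 - pi1, pi1])
    resp = weighted / weighted.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the expected sufficient statistics.
    mu = (resp * data[:, None]).sum(axis=0) / resp.sum(axis=0)
    pi1 = resp[:, 1].mean()

print(mu, pi1)   # means approach -2 and 3; the weight approaches 0.7
```

Each iteration never decreases the data log-likelihood, which is the property the chapter's variants build on for joint state and parameter estimation.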

      The last chapter is dedicated to nonparametric Bayesian models:

       Chapter 12 covers measure‐theoretic probability concepts as well as the notions of exchangeability, posterior computability, and algorithmic sufficiency. Furthermore, it provides guidelines for constructing nonparametric Bayesian models from finite parametric Bayesian models.

      In each chapter, selected applications of the presented filtering algorithms are reviewed, which cover a wide range of problems. Moreover, the last section of each chapter usually refers to a few topics for further study.

      2.1 Introduction

