Change Detection and Image Time Series Analysis 2
1. Hierarchical Markov Random Fields for High Resolution Land Cover Classification of Multisensor and Multiresolution Image Time Series
Ihsen HEDHLI1, Gabriele MOSER2, Sebastiano B. SERPICO2 and Josiane ZERUBIA3
1Institute Intelligence and Data, Université Laval, Quebec City, Canada
2University of Genoa, Italy
3INRIA, Université Côte d’Azur, Nice, France
1.1. Introduction
1.1.1. The role of multisensor data in time series classification
Accurate and time-efficient classification methods for multitemporal imagery and satellite image time series are important tools for supporting the rapid and reliable extraction of information on a monitored region, especially when an extensive area is considered. Given the substantial amount and variety of data currently available from last-generation, very-high spatial resolution satellite missions, the main difficulty is to develop a classifier that exploits the benefits of input time series possibly composed of multimission, multisensor, multiresolution and multifrequency imagery (Gómez-Chova et al. 2015). From an application-oriented viewpoint, the goal is to take advantage of this variety of input sources in order to maximize the accuracy and effectiveness of the resulting thematic mapping products. From a methodological viewpoint, this goal calls for the development of novel data fusion techniques. These techniques should be flexible enough to support the joint classification of a time series of images collected in the same area by different sensors, at different times, and associated with multiple spatial resolutions and wavelength ranges.
In this chapter, this joint fusion problem is addressed. First, an overview of the major concepts and of the recent literature in the area of remote sensing data fusion is presented (see section 1.1.3). Then, two advanced methods for the joint supervised classification of multimission image time series, including multisensor optical and Synthetic Aperture Radar (SAR) components acquired at multiple spatial resolutions, are described (see section 1.2). The two techniques address different problems of supervised classification of satellite image time series and share a common methodological formulation based on hierarchical Markov random field (MRF) models. Examples of the experimental results obtained by the proposed approaches in the application to very-high-resolution time series are also presented and discussed (see section 1.3).
On the one hand, the use of multiresolution and multiband imagery has previously been shown to improve classification results in terms of accuracy and computation time. On the other hand, integrating the temporal dimension into a classification scheme can both enhance the reliability of the results and capture the evolution in time of the monitored area. However, the joint fusion of several distinct data modalities (e.g. multitemporal, multiresolution and multisensor) has received far less attention in the remote sensing literature so far.
1.1.2. Multisensor and multiresolution classification
The availability of different kinds of sensors is very advantageous for land cover mapping applications. It allows us to capture a wide variety of properties of the objects contained in a scene, as measured by each sensor at each acquisition time. These properties can be exploited to extract richer information about the imaged area. In particular, the joint availability of SAR and optical images within a time series can offer high-resolution, all-weather, day/night, short revisit time data with polarimetric, multifrequency and multispectral acquisition capabilities. This potential is especially emphasized by current satellite missions for Earth Observation (EO), for example, Sentinel-1 and -2, Pléiades, TerraSAR-X, COSMO-SkyMed and COSMO-SkyMed Second Generation, RADARSAT-2 and the RADARSAT Constellation, GeoEye-1, WorldView-1, -2, -3 and WorldView Legion, or PRISMA, which convey a huge potential for multisensor optical and SAR observations and allow a spatially distributed and temporally repetitive view of the monitored area at multiple spatial scales. However, multisource image analysis for land cover classification has so far mostly focused on single-resolution multisensor optical–SAR imagery, whereas the joint use of multisensor and multiresolution capabilities within a time series of images of the same scene has been more scarcely investigated. Restricting the analysis to a single resolution bears the obvious advantage of simplicity but is, in general, suboptimal: when multisensor (optical and SAR) or multiresolution images of a given scene are available, using them separately discards part of the correlations among these multiple data sources and, most importantly, their complementarity.
Figure 1.1. Sensitivity to cloud cover and object size using different wavelength ranges. For a color version of this figure, see www.iste.co.uk/atto/change2.zip
As illustrated in Figure 1.1, SAR and multispectral images exhibit complementary properties in terms of wavelength range (active microwave vs. passive visible and infrared), noisy behavior (often strong in SAR due to speckle, usually less critical in optical imagery), feasibility of photo-interpretation (usually easier with optical than with SAR data), impact of atmospheric conditions and cloud cover (strong for optical acquisitions and almost negligible for SAR) and sensitivity to sun-illumination (strong for optical imagery and negligible for SAR) (Landgrebe 2003; Ulaby and Long 2015). This makes the joint use of high-resolution optical and SAR imagery particularly interesting for many applications related to environmental monitoring and risk management (Serpico et al. 2012).
Within this framework, there is a definite need for classification methods that automatically exploit the relationships among different sets of images taken at different times, in the same area, from different sensors and at different resolutions. One way to address this problem is to resort to explicit statistical modeling, i.e. to construct a joint probability distribution of the multisensor data, given the class-conditional marginal probability density function (PDF) of the data collected by each sensor (see Figure 1.2). The joint statistics can be designed by resorting to meta-Gaussian distributions (Storvik et al. 2009), multivariate statistics such as multivariate copulas (Voisin et al. 2014) or non-parametric density estimators (Fukunaga 2013). However, employing heterogeneous data (SAR–optical in our case) makes the task of finding an appropriate multivariate statistical model complex, time-demanding and possibly prone to overfitting.
Figure 1.2. Multivariate statistical modeling for optical–SAR data fusion. For a color version of this figure, see www.iste.co.uk/atto/change2.zip
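As a concrete illustration of the copula-based route mentioned above, the sketch below evaluates a joint class-conditional PDF of one SAR feature and one optical feature by coupling two marginal densities through a bivariate Gaussian (meta-Gaussian) copula. The particular marginal choices (a gamma density for the SAR amplitude, a Gaussian density for the optical reflectance) and the correlation parameter `rho` are illustrative assumptions for this sketch, not parameters taken from the methods described in this chapter.

```python
import numpy as np
from scipy.stats import norm, gamma

def gaussian_copula_density(u, v, rho):
    # Density of the bivariate Gaussian copula evaluated at marginal
    # CDF values u, v in (0, 1), with correlation parameter |rho| < 1.
    z1, z2 = norm.ppf(u), norm.ppf(v)
    return (1.0 / np.sqrt(1.0 - rho**2)
            * np.exp(-(rho**2 * (z1**2 + z2**2) - 2.0 * rho * z1 * z2)
                     / (2.0 * (1.0 - rho**2))))

def joint_pdf(x_sar, x_opt, rho=0.4):
    # Hypothetical class-conditional marginals: gamma for the SAR
    # amplitude, Gaussian for the optical reflectance.
    f1, F1 = gamma.pdf(x_sar, a=2.0), gamma.cdf(x_sar, a=2.0)
    f2 = norm.pdf(x_opt, loc=0.5, scale=0.2)
    F2 = norm.cdf(x_opt, loc=0.5, scale=0.2)
    # Sklar's theorem: joint density = copula density x marginal densities.
    return gaussian_copula_density(F1, F2, rho) * f1 * f2
```

For `rho = 0` the copula density reduces to 1 and the joint PDF factorizes into the product of the marginals, which is a convenient sanity check; a nonzero `rho` injects the optical–SAR dependence that a product model would discard.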
In this context, the rationale of both approaches described in section 1.2 is to benefit from the data fusion capabilities of hierarchical MRFs while avoiding the computation of joint statistics. An approach based on multiple quad-trees in cascade, applied to multisensor and multiresolution fusion, is described. In the first proposed method, for each sensor, the input images of the series are associated with separate quad-tree structures according to their resolutions. The goal is to generate a classification map based on a series of SAR and optical images acquired over the