The selected technique mainly depends on (a) the nature of the problem and (b) the available ancillary and ground-truth data. Hyperspectral images are a special case of imaging, since they fuse spatial, spectral and/or temporal information; flexible and adaptive processing strategies therefore need to be employed. The main methods aiming at information extraction from hyperspectral images are the following:
Classification is mainly a pixel-based process, where each pixel is assigned to a specific category or class. Pixels are classified according to their spectral signature and a reference spectral library of classes that is either manually assembled or automatically extracted from the HSI under study. The output of this process is a classification map in which each pixel carries one class label. Classification techniques can be based on various mathematical concepts such as statistical analysis (including neural networks), morphology-based approaches, hierarchical segmentation, etc. Additionally, several classification methods approach the problem not as a pixel-based process but as a segment-based or cluster-based one. These types of methods focus on either clustering pixels or segmenting images. The classification takes place after the grouping of pixels into clusters/segments and considers each one of them as a single entity, i.e. all pixels from the same cluster/segment are assigned to the same class.
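A minimal sketch of the pixel-based case is given below, assuming a linear-mixing-free setting where each pixel is simply assigned the library class with the smallest spectral angle. The function and variable names, as well as the spectral-angle metric itself, are illustrative choices, not prescribed by the text.

```python
import numpy as np

def classify_pixels(cube, library):
    """Assign each pixel of an (H, W, B) cube to the closest class in a
    reference spectral library (dict: class label -> signature of length B).
    Returns an (H, W) array of class labels."""
    h, w, b = cube.shape
    labels = list(library)
    refs = np.stack([library[k] for k in labels])          # (C, B)
    pixels = cube.reshape(-1, b)                           # (H*W, B)
    # Cosine similarity converted to spectral angle; a smaller angle
    # means a better match between pixel and reference signature.
    num = pixels @ refs.T
    den = (np.linalg.norm(pixels, axis=1, keepdims=True)
           * np.linalg.norm(refs, axis=1))
    angles = np.arccos(np.clip(num / den, -1.0, 1.0))      # (H*W, C)
    best = angles.argmin(axis=1)
    return np.array(labels, dtype=object)[best].reshape(h, w)
```

Because the spectral angle is invariant to overall brightness, a pixel that is a scaled copy of a reference signature still matches it exactly.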
Spectral unmixing decomposes the mixed spectrum of a pixel to extract sub-pixel-level information. It first detects and extracts the pure spectral signatures (endmembers) in the hyperspectral image, which, depending on the spectral/spatial resolution, may correspond to materials or land-cover classes. For each pixel, the contribution of each pure spectral signature to its formation is quantified by the so-called abundance fractions, whose estimation is a major task in spectral unmixing. Spectral unmixing is an inversion problem based on spectral mixture analysis.
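Under the common linear mixing model, abundance estimation for a single pixel can be sketched as a non-negative least-squares inversion; this assumes the endmember matrix is already known (e.g. from an endmember extraction step) and uses SciPy's `nnls` solver as one possible implementation choice.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(spectrum, endmembers):
    """Estimate abundance fractions for one mixed pixel.

    Linear mixing model: spectrum ~= endmembers @ fractions, with
    fractions >= 0; the result is renormalized so fractions sum to 1.
    `endmembers` has shape (B, M): one column per pure signature."""
    fractions, _ = nnls(endmembers, spectrum)   # enforces non-negativity
    s = fractions.sum()
    return fractions / s if s > 0 else fractions
```

The renormalization step imposes the sum-to-one constraint after the fact; stricter formulations solve the fully constrained problem directly.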
Target/anomaly detection aims at identifying a relatively small number of objects with a fixed shape or spectrum in a scene. Based on specific techniques, it is possible to detect targets of interest (target detection) or anomalies (anomaly detection) in an image scene. The desired target knowledge can be generated directly from the image in an unsupervised way using either matched filtering or quadratic forms on kernels. On the other hand, anomaly detectors search for “targets” that are:
- Not known
- Spectrally distinct from their surroundings
- Of relatively small spatial size, with a low probability of occurrence in the image scene
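The anomaly-detection case above can be sketched with the classic global RX detector, which scores each pixel by its Mahalanobis distance from background statistics estimated over the whole scene; the regularization term and array layout here are illustrative choices.

```python
import numpy as np

def rx_anomaly_scores(cube):
    """Global RX anomaly detector.

    `cube` has shape (H, W, B). Background mean and covariance are
    estimated from all pixels; each pixel's score is its Mahalanobis
    distance from that background, so spectrally distinct, rare pixels
    receive high scores. Returns an (H, W) score map."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(b)  # small regularization
    d = x - mu
    # Quadratic form d^T cov^{-1} d evaluated per pixel.
    scores = np.einsum("ij,jk,ik->i", d, np.linalg.inv(cov), d)
    return scores.reshape(h, w)
```

Thresholding the score map then flags candidate anomalies without any prior target signature, matching the unsupervised character described above.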
Matching, Labeling, and Spectroscopy
In hyperspectral image analysis, spectral matching and labeling correspond to the process of comparing spectral signatures either to each other or against a spectral library. In more technical terms, matching and labeling refer to the process where similarity values of unknown spectral signatures are calculated against a set of known spectral signatures, using mathematical metrics, and a label or ID is assigned to each unknown signature. The terms matching and labeling are used interchangeably and refer practically to the same process. The only difference is the following: if the RSL (Reference Spectral Library) includes names (labels), the process is called labeling; if the RSL includes only IDs, the process is called matching. The term label has a semantic meaning to the user, for example, it refers to a land use/cover class. The term ID refers to the unique number of the signature within the RSL, without a semantic meaning to the user.
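The matching/labeling step can be sketched as a lookup of the most similar RSL entry; the spectral angle is used here as one common similarity metric, and the RSL keys may be semantic labels (labeling) or bare IDs (matching), as described above.

```python
import numpy as np

def match_signature(unknown, rsl):
    """Match an unknown signature against a Reference Spectral Library.

    `rsl` maps a key to a known signature. If the keys are semantic
    labels the process is "labeling"; if they are bare IDs it is
    "matching". Returns (best_key, spectral_angle_in_radians)."""
    def angle(a, b):
        c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(c, -1.0, 1.0))
    scores = {key: angle(unknown, sig) for key, sig in rsl.items()}
    best = min(scores, key=scores.get)   # smallest angle = best match
    return best, scores[best]
```

Returning the similarity value alongside the key lets the caller reject weak matches with a threshold instead of always accepting the nearest entry.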
Spectroscopy in hyperspectral image analysis is the process of quantifying the relation between the spectrum of an object and a parameter. The quantification rests on exploiting the correlation of the spectral bands with the parameter, by applying spectral preprocessing algorithms, selecting spectral bands, and applying regression algorithms. The parameter can be either numeric or a label. If the parameter is numeric, spectroscopy refers to parameter quantification; if the parameter is a label, spectroscopy actually refers to subtle spectral discrimination.
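For the numeric case, the band-selection-plus-regression pipeline can be sketched with an ordinary least-squares fit on a chosen band subset; the function names and the choice of plain least squares (rather than, e.g., PLS regression) are illustrative assumptions.

```python
import numpy as np

def fit_band_regression(spectra, values, bands):
    """Least-squares fit of a numeric parameter (e.g. a hypothetical
    chlorophyll concentration) on a subset of spectral bands.

    `spectra`: (N, B) measured signatures; `values`: (N,) parameter
    measurements; `bands`: indices retained after band selection.
    Returns coefficients of length len(bands) + 1, intercept last."""
    x = spectra[:, bands]
    design = np.hstack([x, np.ones((x.shape[0], 1))])  # add intercept column
    coeffs, *_ = np.linalg.lstsq(design, values, rcond=None)
    return coeffs

def predict(spectrum, bands, coeffs):
    """Apply the fitted model to a new signature."""
    return float(np.dot(spectrum[bands], coeffs[:-1]) + coeffs[-1])
```

In practice the band subset would come from a selection step driven by the band-parameter correlation mentioned above, and preprocessing (e.g. smoothing or derivatives) would be applied before the fit.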
In remote sensing applications, and especially in hyperspectral remote sensing, a change may be considered as an alteration of the surface components. Temporal analysis of hyperspectral remote sensing images usually faces several difficulties, among them the large amount of data to be processed and the small number of temporal observations. Various methods for change detection have been presented recently. Among them, those based on Markov Random Fields, kernels, and neural networks have gained significant attention. Focusing on man-made object change detection in urban and peri-urban regions, several approaches have been proposed based on very high resolution optical and radar data.
The aim of change detection techniques is to identify areas where there is an actual change (e.g. due to land-cover change) rather than observational changes (e.g. atmospheric conditions, viewing/illumination geometry, sensor malfunction). Before any change detection technique can be applied, two main approaches exist to pre-process the images:
- Atmospheric corrections
- Relative radiometric normalization
The atmosphere scatters and absorbs solar radiation and thus affects the radiance recorded by the detectors. Since part of the radiant energy never interacts with the Earth’s surface, it contributes a component without physical meaning in terms of the spectral measurement, which should be removed. Although atmospheric correction is a daunting process, because the atmosphere’s properties vary in both space and time, it has proven a valuable tool for standardizing results, since it yields reflectance values that are independent of illumination and atmospheric conditions. The task is not trivial: in many cases there are not enough ancillary data to use as input to the atmospheric correction algorithms, which may result in errors that propagate to the detected changes.
In the second case, the images are radiometrically “normalized” to a reference image to enable comparison. The normalization is based on the selection of invariant targets/objects that serve as references for computing a numeric spectral transformation between the images; this selection can be done either automatically or manually. This is a faster process that generally gives reliable results; however, radiometric normalization does not yield reflectance values.
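The per-band transformation described above can be sketched as a linear gain/offset fit over the invariant pixels; the use of a first-degree polynomial fit and the mask-based selection of invariant pixels are illustrative implementation choices.

```python
import numpy as np

def relative_normalization(target, reference, invariant_mask):
    """Per-band linear normalization of `target` to `reference` using
    pixels flagged as invariant (pseudo-invariant features).

    Images have shape (H, W, B); `invariant_mask` is boolean (H, W).
    Returns the normalized target: gain * target + offset per band."""
    normalized = np.empty_like(target, dtype=float)
    for b in range(target.shape[2]):
        t = target[..., b][invariant_mask]
        r = reference[..., b][invariant_mask]
        gain, offset = np.polyfit(t, r, deg=1)  # least-squares line per band
        normalized[..., b] = gain * target[..., b] + offset
    return normalized
```

After normalization, the two images share a common radiometric scale over the invariant features, so differencing them highlights actual surface changes rather than acquisition-condition differences.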