There are numerous methods and techniques for extracting information from a hyperspectral image (HSI). Given the complexity of the data as well as the different objectives of each application, no single information extraction technique outperforms all others in every practical situation.

The selected technique mainly depends on (a) the nature of the problem and (b) the available ancillary and ground truth data. Hyperspectral images are a special case of imaging, since they fuse spatial, spectral and/or temporal information; flexible and adaptive processing strategies therefore need to be employed. The main methods aiming at information extraction from hyperspectral images are the following:


Classification

Classification is mainly a pixel-based process, in which each pixel is assigned to a specific category or class. Pixels are classified according to their spectral signature and a reference spectral library of classes, which is either manually assembled or automatically extracted from the HSI under study. The output of this process is a classification map in which each pixel carries one class label. Classification techniques can be based on various mathematical concepts, such as statistical analysis (including neural networks), morphology-based approaches, hierarchical segmentation, etc. Additionally, several classification methods approach the problem not as a pixel-based process but as a segment-based or cluster-based one. These types of methods focus on either clustering pixels or segmenting the image. Classification takes place after the pixels have been grouped into clusters/segments and treats each cluster/segment as a single entity, i.e. all pixels from the same cluster/segment are assigned to the same class.
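
As a minimal sketch of pixel-based classification, the following assigns each pixel to the class whose library spectrum is closest in Euclidean distance. The reference spectra and class names are purely illustrative, not taken from any real library:

```python
import numpy as np

# Hypothetical reference spectral library: one mean spectrum per class
# (rows = classes, columns = spectral bands). Values are illustrative.
library = np.array([
    [0.10, 0.15, 0.40, 0.55],   # "vegetation"
    [0.30, 0.35, 0.38, 0.40],   # "soil"
    [0.05, 0.04, 0.03, 0.02],   # "water"
])
class_names = ["vegetation", "soil", "water"]

def classify(cube, library):
    """Assign each pixel to the class with the closest library spectrum
    (minimum Euclidean distance). cube has shape (rows, cols, bands)."""
    pixels = cube.reshape(-1, cube.shape[-1])             # (n_pixels, bands)
    d = np.linalg.norm(pixels[:, None, :] - library[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(cube.shape[:2])       # classification map

# Tiny 1x2 "image": a vegetation-like pixel and a water-like pixel
cube = np.array([[[0.11, 0.16, 0.39, 0.54],
                  [0.06, 0.05, 0.03, 0.02]]])
labels = classify(cube, library)
# labels == [[0, 2]]: first pixel -> "vegetation", second -> "water"
```

Real classifiers (statistical, neural, morphological) replace the distance rule, but the output, one label per pixel, has the same form.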

Spectral Unmixing

Spectral unmixing decomposes the mixed spectrum of a pixel to extract sub-pixel information. It first detects and extracts the pure spectral signatures (endmembers) in the hyperspectral image, which, depending on the spectral/spatial resolution, may correspond to materials or land cover classes. For each pixel, the contribution of each pure spectral signature to its formation is quantified by the so-called abundance fractions, whose estimation is a major task in spectral unmixing. Spectral unmixing is an inversion problem based on spectral mixture analysis.
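
The linear inversion at the core of unmixing can be sketched as follows. The endmember spectra are illustrative, and a real unmixing problem usually adds non-negativity and sum-to-one constraints on the abundances (e.g. fully constrained least squares), which this unconstrained sketch omits:

```python
import numpy as np

# Hypothetical endmember matrix E: columns are pure spectral signatures
# (bands x endmembers). Values are illustrative reflectances.
E = np.array([
    [0.10, 0.60],
    [0.20, 0.55],
    [0.70, 0.30],
    [0.80, 0.25],
])

# Observed mixed pixel: 70% of endmember 0 and 30% of endmember 1
x = 0.7 * E[:, 0] + 0.3 * E[:, 1]

# Invert the linear mixing model x = E a by least squares to recover
# the abundance fractions a.
a, *_ = np.linalg.lstsq(E, x, rcond=None)
# a is approximately [0.7, 0.3] -- the abundance fractions
```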

Target/Anomaly Detection

Target/anomaly detection aims at identifying a relatively small number of objects with a fixed shape or spectrum in a scene. With appropriate techniques, it is possible to detect targets of interest (target detection) or anomalies (anomaly detection) in an image scene. The desired target knowledge can be generated directly from the image in an unsupervised way using either matched filtering or quadratic forms on kernels [18]. Anomaly detectors, on the other hand, search for “targets” that are:

  • Not known a priori
  • Spectrally distinct from their surroundings
  • Of relatively small spatial size, with a low probability of occurrence in the image scene
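
A classic unsupervised detector for exactly such "targets" is the Reed–Xiaoli (RX) detector, which scores each pixel by its Mahalanobis distance from the global background statistics. A minimal sketch on a synthetic scene (all values illustrative):

```python
import numpy as np

def rx_scores(cube):
    """Reed-Xiaoli (RX) anomaly detector: Mahalanobis distance of each
    pixel from the global mean/covariance. cube: (rows, cols, bands)."""
    X = cube.reshape(-1, cube.shape[-1]).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)          # pseudo-inverse for stability
    diff = X - mu
    scores = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return scores.reshape(cube.shape[:2])

# Synthetic 8x8 scene: near-constant background plus one anomalous pixel
rng = np.random.default_rng(0)
cube = 0.2 + 0.01 * rng.standard_normal((8, 8, 5))
cube[4, 4] += 0.5                          # implanted anomaly
scores = rx_scores(cube)
# The implanted pixel receives by far the highest RX score
```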

Matching, Labeling, and Spectroscopy

In hyperspectral image analysis, spectral matching and labeling correspond to the process of comparing spectral signatures either to each other or against a spectral library. In more technical terms, matching and labeling refer to the process in which similarity values of unknown spectral signatures are calculated against a set of known spectral signatures using mathematical metrics, and a label or ID is assigned to each unknown signature. The terms matching and labeling are used interchangeably and refer to practically the same process, with one difference: if the RSL (Reference Spectral Library) includes names (labels), the process is called labeling; if the RSL includes only IDs, the process is called matching. A label has a semantic meaning to the user, for example a land use/cover class. An ID is the unique number of the signature within the RSL, without a semantic meaning to the user.
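
A widely used similarity metric for such matching is the spectral angle, which compares spectral shape independently of overall brightness. The sketch below matches an unknown signature against a small hypothetical RSL keyed by IDs (all spectra are illustrative):

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two signatures; insensitive to
    overall brightness, so it compares spectral shape only."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical RSL keyed by ID; a labeled RSL would map IDs to names
rsl = {
    101: np.array([0.10, 0.15, 0.40, 0.55]),
    102: np.array([0.30, 0.35, 0.38, 0.40]),
}

unknown = np.array([0.12, 0.17, 0.42, 0.56])

# Matching: pick the RSL entry with the smallest spectral angle
best_id = min(rsl, key=lambda k: spectral_angle(unknown, rsl[k]))
# best_id == 101: the unknown signature is matched to that RSL entry
```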

Spectroscopy in hyperspectral image analysis is the process of quantifying the relation between the spectrum of an object and a parameter. Quantifying this relation relies on exploiting the correlation of the spectral bands with the parameter, which is done by applying spectral preprocessing algorithms, selecting spectral bands, and applying regression algorithms. The parameter can be either numeric or a label. If the parameter is numeric, spectroscopy refers to parameter quantification; if the parameter is a label, spectroscopy actually refers to subtle spectral discrimination.
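
For a numeric parameter, the band selection and regression steps can be sketched with ordinary least squares on synthetic data; the linear relation, band indices, and values below are assumed purely for illustration:

```python
import numpy as np

# Synthetic example: a numeric parameter (e.g. a concentration) that is
# linearly related to two of five spectral bands.
rng = np.random.default_rng(1)
spectra = rng.uniform(0.0, 1.0, size=(50, 5))        # 50 samples x 5 bands
param = 2.0 * spectra[:, 1] - 1.5 * spectra[:, 3] + 0.1

# Band selection: keep only the informative bands, then fit an ordinary
# least-squares regression mapping the selected bands to the parameter.
selected = spectra[:, [1, 3]]
A = np.column_stack([selected, np.ones(len(selected))])  # add intercept
coef, *_ = np.linalg.lstsq(A, param, rcond=None)
# coef recovers approximately [2.0, -1.5, 0.1]
```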

Change Detection

In remote sensing applications, and especially in hyperspectral remote sensing, a change may be considered an alteration of the surface components. Temporal analysis of hyperspectral remote sensing images usually faces several difficulties, among them the large amount of data to be processed and the small number of temporal observations. Various methods for change detection have been presented recently; among them, those based on Markov Random Fields, kernels, and neural networks have gained significant attention. Focusing on man-made object change detection in urban and peri-urban regions, several approaches have been proposed based on very high resolution optical and radar data.

The aim of change detection techniques is to identify areas where there is an actual change (e.g. due to land cover change) rather than an observational change (e.g. atmospheric conditions, viewing/illumination geometry, sensor malfunction). Before any change detection technique is applied, the images must be pre-processed; two main approaches exist:

  • Atmospheric corrections
  • Relative radiometric normalization

The atmosphere scatters and absorbs solar radiation and thus affects the radiance recorded by the detectors. Part of the recorded radiant energy never interacts with the Earth’s surface, so it contributes a component without physical meaning in terms of the spectral measurement, which should be removed. Although atmospheric correction is a daunting process, because the atmosphere’s properties vary both in space and time, it has proved a valuable tool for standardizing results, since it yields reflectance values that are independent of illumination and atmospheric conditions. It is not a trivial task to perform, since in many cases there are not enough ancillary data to be used as input to the atmospheric correction algorithms. This may result in errors that are transferred to the detected changes.

In the second approach, the images are radiometrically “normalized” to a reference image so that they can be compared directly. The normalization process is based on selecting invariant targets/objects that serve as references for calculating the numeric spectral transformation between the images. This selection can be done either automatically or manually. This is a faster process that yields reliable results; however, radiometric normalization does not produce reflectance values.
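
A minimal sketch of relative radiometric normalization, assuming a handful of hypothetical pseudo-invariant targets with known values in both images: a per-band linear gain and offset is fitted by least squares and then applied to the image being normalized.

```python
import numpy as np

# Values of hypothetical pseudo-invariant targets in the reference image
# (date 1) and the image to be normalized (date 2), for one band.
reference = np.array([0.20, 0.35, 0.50, 0.65, 0.80])
subject = np.array([0.15, 0.27, 0.39, 0.51, 0.63])

# Least-squares linear transformation mapping the subject radiometry
# onto the reference radiometry: reference ~ gain * subject + offset.
gain, offset = np.polyfit(subject, reference, 1)

# Applying the transformation normalizes the subject band; the invariant
# targets now match the reference values, so differences that remain
# elsewhere in the image can be attributed to actual change.
normalized = gain * subject + offset
```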


About the writer

I'm a Remote Sensing and a Surveying Engineer. I received my degree from NTUA in 2010, where I also received my Ph.D. in hyperspectral remote sensing in 2016. After graduating in 2010, my career started as a Research Associate and Teaching Associate in the Laboratory of Remote Sensing of NTUA. From that time I also worked at several private companies as a Remote Sensing Expert and Geospatial Analyst. From the beginning of 2015 I was positioned as Senior Earth Observation Expert. During these years, I have participated in more than 20 funded European Commission and European Space Agency projects, have over 16 peer-reviewed scientific publications in the field of Remote Sensing, and have an international patent in hyperspectral data compression. My main research and professional interests are in the optical remote sensing area, where I specialize in data (images, point measurements) processing and algorithm design and development. Some of the software tools that I operate to accomplish my research and business dreams are SNAP, ENVI, IDL, QGIS, ERDAS Imagine, ArcGIS, and Python. I have been working with these tools since 2008.

Dimitris Sykas

Remote Sensing Expert
