We tend to overlook the image metadata parameters...

In most introductory courses and lessons on Remote Sensing, the mechanics of the Remote Sensing system are explained, along with the basic mathematics and algorithms of image processing for satellite imagery. However, we tend to overlook the image metadata parameters, and sometimes, especially for Very High Resolution Imagery, they significantly affect the image processing workflows or can even be used to directly yield useful information from the images!

Metadata, as most of you know, means “data about the data”. And if satellite images are our data, then we refer to the data that describe these images. What time were they taken? What was the sensor’s geometry at that time? Where exactly on the Earth does the image refer to? These kinds of questions are answered by the metadata. In this tutorial, we will refer to three main types of parameters: projection info, solar angles and sensor angles. The meaning of these parameters is the same for any type of sensor (passive or active); however, for the time being, we will refer only to applications of optical remote sensing.

Projection Info

In order to get an approximate positioning of the image with respect to the globe, we need to define the following:

  1. Geodetic Datum
  2. Projection
  3. Image orientation
  4. Coordinates of one pixel (Upper Left, Upper Right, Bottom Left or Bottom Right)
  5. Pixel size


It is perfectly fine if you are a bit rusty on Geodesy or Cartography, so I will take some time to clear up some often-confusing principles regarding datums, ellipsoids and projections.

Imagine the rotating Earth, with the oceans extended through the continents via hypothetical canals. This hypothetical shape, assuming the oceans are under the influence of Earth’s gravity and rotation alone, is called the geoid, and it is an irregular equipotential surface. This means that it cannot be described by basic geometric shapes (irregular) and that the direction of gravity is perpendicular to the surface at every point (equipotential). Since it is irregular, it cannot be described by analytic mathematical equations, and therefore it cannot be used directly as a reference for position. However, we can use its equipotential property to approximate it accurately with a reference ellipsoid.

An ellipsoid, due to having standard equations, is much easier to handle and define.

It is described by its semi-major axis, semi-minor axis and flattening (only two of which are independent). An ellipsoid, together with a coordinate system, is called a datum. The datum is the reference surface for most Geomatics applications, and WGS84 is one of the most widely used datums. Its origin is located at the Earth’s center of mass (with an uncertainty of less than 2 cm) and it uses the IERS Reference Meridian as its zero meridian.


I hope I am not making this too confusing, but those who already follow me on my Geo University courses know that I do not like to jump into application without a brief presentation of the theory first.

Now, satellite images are not directly connected to the datum. They are a 2D representation of space, whereas an ellipsoid is a 3D representation. Therefore, the images need to be referred to a 2D representation that describes the 3D space: the cartographic projection. A projection is, essentially, a piece of paper wrapped around the ellipsoid (or a sphere, for further simplification). Projections use a 2D Cartesian coordinate system to give locations on the surface of the Earth, independent of vertical position. For every projection, there are formulas that transform the X, Y Cartesian coordinates to latitude and longitude (φ, λ) and vice versa.

Commonly paired with the WGS84 datum is the Universal Transverse Mercator (UTM) projection. UTM is not a single universal map projection; it is divided into zones, each covering a narrow “stripe” of the Earth’s surface, and within each zone angles (and therefore shapes) are accurately preserved in the representation. Coordinates on a projection are defined as Easting and Northing.

Projections are defined by a central meridian and central parallel, which serve as the origin of the coordinate system, and, usually, a False Easting and False Northing: constant offsets that ensure Easting and Northing are always positive.
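To make this concrete, here is a minimal sketch of the geographic-to-projected conversion, assuming the pyproj package and an arbitrary example point; the EPSG codes identify WGS84 geographic coordinates and WGS84 / UTM zone 17N.

    # A minimal sketch, assuming the pyproj package is installed.
    # Converts WGS84 geographic coordinates to UTM zone 17N Easting/Northing and back.
    from pyproj import Transformer

    lat, lon = 40.0, -80.0   # arbitrary example point, falls in UTM zone 17N

    # EPSG:4326 = WGS84 geographic, EPSG:32617 = WGS84 / UTM zone 17N
    to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32617", always_xy=True)
    to_geo = Transformer.from_crs("EPSG:32617", "EPSG:4326", always_xy=True)

    easting, northing = to_utm.transform(lon, lat)   # note the x=lon, y=lat order
    lon_back, lat_back = to_geo.transform(easting, northing)

    print(f"Easting: {easting:.2f} m, Northing: {northing:.2f} m")
    print(f"Back-transformed: lat={lat_back:.6f}, lon={lon_back:.6f}")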

Therefore, each image refers to a specific projection, which in turn refers to a datum. The projection is often a zone of UTM. For DigitalGlobe imagery metadata (e.g., WorldView), we find the following parameters (a small sketch follows the list):


  • “datumName” (e.g., “WE” for WGS84)
  • “mapProjName” (e.g., “UTM”)
  • “mapZone” (e.g., 17)
  • “mapHemi” (hemisphere: “N” or “S”)


These parameters define the datum and projection, but they still do not tell us where in the projection the image is located. This is given by additional location, rotation and spacing parameters:


  • “originX” (Easting of upper left pixel, usually equal to ULX)
  • “originY” (Northing of upper left pixel, usually equal to ULY)
  • “orientationAngle” (the angle between the image’s vertical axis and the map coordinate system’s North)
  • “colSpacing” (the pixel size in the Easting dimension)
  • “rowSpacing” (the pixel size in the Northing dimension)
  • “productGSD” (the pixel size in general)


Again, these names correspond to DigitalGlobe imagery metadata, but they can be similar or the same for other sensors. Similar to ULX and ULY, “ULlong” and “ULlat” correspond to the upper left pixel’s geodetic longitude and latitude. The image’s vertical axis is usually North Up, so the orientation angle will most often be equal to 0. The pixel size, in official metadata terms, is called the Ground Sampling Distance (GSD). During acquisition, the GSD is not the same for both dimensions and for all pixels, so in the metadata we also find parameters for the min, max and mean collected GSD for both rows and columns (e.g., “minCollectedRowGSD”). However, the delivered image will be resampled to a round value of GSD, equal in both dimensions, and that is the value of “colSpacing” and “rowSpacing” or “productGSD”, depending on the image product level.
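Putting the location, rotation and spacing parameters together, the pixel-to-map mapping can be sketched as follows. The metadata values are hypothetical, and the sign convention of the rotation term can differ between providers, so treat this as an illustration rather than a provider-specific formula.

    import math

    # A minimal sketch of the pixel-to-map mapping, with hypothetical metadata values.
    origin_x = 500000.0    # "originX": Easting of the upper-left pixel (m)
    origin_y = 4650000.0   # "originY": Northing of the upper-left pixel (m)
    col_spacing = 0.5      # "colSpacing": pixel size along Easting (m)
    row_spacing = 0.5      # "rowSpacing": pixel size along Northing (m)
    orientation = 0.0      # "orientationAngle" in degrees (most often 0 = north-up)

    def pixel_to_map(row, col):
        """Return the (Easting, Northing) of pixel (row, col)."""
        theta = math.radians(orientation)
        dx, dy = col * col_spacing, row * row_spacing
        # For the usual north-up case (theta = 0) this reduces to:
        #   E = origin_x + col * col_spacing
        #   N = origin_y - row * row_spacing
        easting = origin_x + dx * math.cos(theta) + dy * math.sin(theta)
        northing = origin_y + dx * math.sin(theta) - dy * math.cos(theta)
        return easting, northing

    print(pixel_to_map(0, 0))        # upper-left pixel -> (500000.0, 4650000.0)
    print(pixel_to_map(1000, 2000))  # -> (501000.0, 4649500.0) for a north-up image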

These parameters give an exact mapping between pixel coordinates and map coordinates. They serve as an “initial georeferencing”, which is to say that the raw data we download are not completely “raw”. If we import the delivered image into a GIS, it will be placed at a specific location, and not at (0,0), which is what happens if you import, for example, a Google Earth screenshot. However, depending on the image and the product level, this may be nowhere near accurate enough for positioning the image in space. To achieve that, the process of orthorectification is required, to adjust the image in all three dimensions of space using a reference map and a Digital Elevation Model. For example, this is necessary for an “Ortho-Ready” product level of a very high resolution image, but not for L1C or L2A Sentinel imagery. In addition to these parameters, the Rational Polynomial Coefficients (RPC) are also used for accurate orthorectification. The RPCs are the coefficients of the Rational Polynomial Functions, which accurately “connect” the pixels in the image to their ground coordinates. RPCs are also typically part of the metadata, but their analysis is out of the scope of this tutorial. In conclusion, that is where all these Projection Info parameters play their most important part in the Remote Sensing process.
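As a pointer to how this is typically done in practice, here is a minimal sketch of RPC-based orthorectification using GDAL’s Python bindings. The file names, the DEM and the target projection are assumptions; a real workflow usually adds GCP refinement and more careful resampling choices.

    from osgeo import gdal

    # A minimal sketch of RPC-based orthorectification with GDAL.
    # "image_with_rpc.tif" and "dem.tif" are hypothetical file names; the input
    # image is assumed to carry RPC metadata (e.g., an "Ortho-Ready" product).
    gdal.UseExceptions()

    gdal.Warp(
        "ortho.tif",                  # output: orthorectified image
        "image_with_rpc.tif",         # input image with RPCs in its metadata
        dstSRS="EPSG:32617",          # target projection (hypothetical UTM zone)
        rpc=True,                     # use the Rational Polynomial Coefficients
        transformerOptions=["RPC_DEM=dem.tif"],  # DEM for the height dimension
        resampleAlg="bilinear",
    )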


We made it through the most theory-heavy part of this tutorial. The other sets of parameters, the solar angles and sensor angles, mostly need a few clarifying figures to understand, but they have important applications in Remote Sensing.



Solar angles


Solar angles refer to the position of the sun in relation to the scene being captured, and there are two types:


  1. Sun (or solar) azimuth
  2. Sun (or solar) elevation (or altitude)


Sun azimuth is the angle between North and the Sun’s horizontal direction, measured clockwise. The sun elevation angle is the angle between the horizon and the center of the Sun’s disc. I hope the following figure easily explains these two angles:

They are not constant along and across the entire image. Their variation is very low in VHRI scenes (which have a smaller swath) and higher in high, medium and low resolution images. In the metadata, we will find “minSunAz”, “maxSunAz”, “meanSunAz” and “minSunEl”, “maxSunEl”, “meanSunEl”.
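If you want to sanity-check the reported mean values, the solar angles can be reproduced from the acquisition time and the scene center. A minimal sketch, assuming the third-party pysolar package and hypothetical scene values:

    from datetime import datetime, timezone

    # A minimal sketch, assuming the pysolar package; all values are hypothetical.
    from pysolar.solar import get_altitude, get_azimuth

    scene_lat, scene_lon = 40.0, -80.0                              # scene center
    acq_time = datetime(2020, 6, 15, 15, 42, tzinfo=timezone.utc)   # acquisition time (UTC)

    sun_elevation = get_altitude(scene_lat, scene_lon, acq_time)    # degrees above the horizon
    sun_azimuth = get_azimuth(scene_lat, scene_lon, acq_time)       # degrees clockwise from North

    print(f"Sun elevation ~ {sun_elevation:.1f} deg, sun azimuth ~ {sun_azimuth:.1f} deg")
    # Compare against meanSunEl / meanSunAz in the metadata.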

Solar angles play their most important part in shadow detection. Sun azimuth determines the direction in which a cloud casts its shadow, and elevation determines how far the shadow falls from the cloud (for an elevation of 90 degrees, the sun would be directly overhead, so there would be no offset shadow).


Cloud masking algorithms, such as those in ESA’s Sentinel Application Platform (SNAP), take advantage of the solar angles to mask shadows in addition to clouds. Shadow detection is also often necessary in VHRI, particularly in urban scenes with tall buildings, because shadows affect photointerpretation and classification results. However, shadows can also be useful for extracting height information: an estimate of building or tree heights can be obtained using the shadow length together with the solar angles and the sensor angles, which we will explain next. Finally, since the solar angles are determined by the date and time, they also affect the radiometry of the images; the illumination conditions differ, for example, between midday hours (high solar elevation) and early morning or late afternoon hours (low solar elevation).
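Returning to shadow masking, here is a rough illustration of how such a routine uses the solar angles: given an assumed cloud height, the expected horizontal displacement of the shadow follows from simple trigonometry (flat terrain assumed, all values hypothetical).

    import math

    # Simplified sketch: where does a cloud's shadow fall on flat terrain?
    # All values are hypothetical.
    sun_azimuth_deg = 150.0    # meanSunAz, degrees clockwise from North
    sun_elevation_deg = 35.0   # meanSunEl, degrees above the horizon
    cloud_height_m = 2000.0    # assumed cloud height above the ground

    # Horizontal distance between the cloud and its shadow
    distance = cloud_height_m / math.tan(math.radians(sun_elevation_deg))

    # The shadow is cast away from the sun, i.e. towards (azimuth + 180 degrees)
    shadow_azimuth = math.radians((sun_azimuth_deg + 180.0) % 360.0)
    d_east = distance * math.sin(shadow_azimuth)    # offset in Easting
    d_north = distance * math.cos(shadow_azimuth)   # offset in Northing

    print(f"Shadow offset: {d_east:.0f} m east, {d_north:.0f} m north "
          f"({distance:.0f} m from the cloud)")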


 


Sensor angles


Similar to the Sun azimuth and Sun elevation, there are the Satellite azimuth and Satellite elevation, defined in exactly the same way, only of course using the satellite’s position to measure the angles from North and from the horizon, respectively. They will also be found in the metadata as “minSatAz”, “maxSatAz”, “meanSatAz” and “minSatEl”, “maxSatEl”, “meanSatEl”. In addition to these angles, the sensor’s geometry is described by:


  • View angle or off-nadir angle (along track and across track)
  • Incidence angle (along track and across track)


The view angle α of the satellite is the angle between the satellite’s look direction and the nadir. It consists of two components: the along-track view angle and the across-track view angle. The geometry of the view angle is best illustrated in the Pleiades User Guide by the manufacturer, Astrium:

View angle geometry. Source: Astrium


α is the global view angle, which is decomposed into an along-track component (in the R–L plane) and an across-track component (in the T–L plane).


Similarly, the incidence angle β is the angle between the ground normal and the look direction from the satellite. Again, the sketch from Astrium is very helpful in understanding the along-track and across-track components.

Incidence angle geometry. Source: Astrium

Obviously, global incidence and view angles are related. Airbus provides an online calculator to convert between the two.

Incidence angle and view angle relation. Source: Astrium
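Under a simplified spherical-Earth assumption, the two angles are linked by the law of sines in the Earth-center / satellite / ground-point triangle: sin(incidence) = (R + h) / R · sin(view). A minimal sketch of the conversion, with a hypothetical orbit altitude:

    import math

    # Simplified spherical-Earth sketch relating view (off-nadir) and incidence angles.
    EARTH_RADIUS_KM = 6371.0

    def view_to_incidence(view_deg, orbit_height_km):
        ratio = (EARTH_RADIUS_KM + orbit_height_km) / EARTH_RADIUS_KM
        return math.degrees(math.asin(ratio * math.sin(math.radians(view_deg))))

    def incidence_to_view(incidence_deg, orbit_height_km):
        ratio = EARTH_RADIUS_KM / (EARTH_RADIUS_KM + orbit_height_km)
        return math.degrees(math.asin(ratio * math.sin(math.radians(incidence_deg))))

    # e.g. a Pleiades-like orbit of ~694 km and a 20-degree view angle
    print(view_to_incidence(20.0, 694.0))   # ~22.3 degrees of incidence
    print(incidence_to_view(22.3, 694.0))   # back to ~20 degrees of view angle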


The next figures sum up some of the sensor angles and highlight a difference between two major VHRI providers, Airbus and DigitalGlobe.

Global incidence and satellite azimuth. Source: Astrium

Global incidence and scan azimuth. Source: Astrium

Satellite azimuth refers to what we defined previously. However, we also find the scan azimuth, which is the angle between North and the scan axis. This is nothing more or less than the “orientationAngle” that we studied in the Projection Info part. However, in the case of the scan-azimuth figure above, it will be equal to 180° since the scan direction is oriented the opposite way. For further clarification, DigitalGlobe metadata use the terms “InTrackViewAngle”, “CrossTrackViewAngle” and “OffNadirViewAngle”, but do not mention the incidence angle.


High view angles, especially in VHRI, significantly affect the accuracy of orthorectification, since these images are very “side looking”. Do not forget, also, that when manually digitizing buildings from satellite images, it must always be done at ground level (wherever the bottom of the building is visible) and not at roof level.

Low view angle (Left) vs high view angle (Right)

However, as we mentioned earlier, sensor angles can be used in conjunction with the solar angles and the sensor height to estimate building heights. The following figure gives a very simplified schema of the logic behind the estimation:

Sensor/Sun/Object geometry for height estimation. Source: Windham, 2014

Theoretically, by measuring the shadow length and using trigonometry with the solar and satellite elevation angles and the satellite’s height, the height of the building can be calculated. The problem is that for wide and tall buildings the entire shadow is not visible. However, there are studies that take into account the entire geometry of the building/sensor/sun system (including the azimuths) to yield more accurate estimates of object heights (Shettigara & Sumerling, 1998).
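In the simplest case (flat terrain, a near-nadir view and the entire shadow visible and measured along the sun direction), the calculation reduces to a single line of trigonometry. A minimal sketch with hypothetical values:

    import math

    # Simplest-case sketch of building height estimation from shadow length.
    # Assumes flat terrain, a near-nadir view and a fully visible shadow;
    # the values are hypothetical.
    shadow_length_m = 25.0     # measured on the image, along the sun direction
    sun_elevation_deg = 40.0   # meanSunEl from the metadata

    height_m = shadow_length_m * math.tan(math.radians(sun_elevation_deg))
    print(f"Estimated building height: {height_m:.1f} m")   # ~21 m

    # For off-nadir acquisitions the building itself "leans" in the image, so the
    # satellite azimuth/elevation must enter the geometry as well (see Shettigara
    # & Sumerling, 1998).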

Bibliography

Astrium, 2012. Pleiades Imagery User Guide. [Online]
 Available at: http://www.cscrs.itu.edu.tr/assets/downloads/PleiadesUserGuide.pdf 

DigitalGlobe, 2014. Imagery Support Data (ISD) Documentation. [Online]
 Available at: https://dg-cms-uploads-production.s3.amazonaws.com/uploads/document/file/106/ISD_External.pdf 

Iliffe, J., 2000. Datums and Map Projections for Remote Sensing, GIS, and Surveying. Glasgow: Whittles Publishing.

Shettigara, V. K. & Sumerling, G. M., 1998. Height Determination of Extended Objects Using Shadows in SPOT Images. Photogrammetric Engineering and Remote Sensing, pp. 35-44.

USGS, n.d. Solar Illumination and Sensor Viewing Angle Coefficient Files. [Online]
 Available at: https://www.usgs.gov/land-resources/nli/landsat/solar-illumination-and-sensor-viewing-angle-coefficient-files?qt-science_support_page_related_con=1#qt-science_support_page_related_con 

Windham, C., 2014. Shadows and Angles: Measuring Object Heights from Satellite Imagery. [Online]
 Available at: https://www.gislounge.com/shadows-angles-measuring-object-heights-satellite-imagery/ 


Written by

I received my diploma (MSc equivalent) in Surveying Engineering from Aristotle University of Thessaloniki in 2017 and shortly after, I began working as a GIS and Remote Sensing Engineer for Planetek Hellas, in Athens. My main areas of occupation have been the production of reference mapping using Geographic Information Systems and satellite imagery, and performing feasibility studies on remote sensing projects. I am passionate about solving real-life problems and I have won a prize in four innovation hackathons, including Fabspace 2.0 Greece (1st place), the NASA Space Apps Challenge Thessaloniki (2nd place) and the Crowdhackathon smarticity (4th place). I am also an active researcher on the topics of urban data analysis and remote sensing. My other research interests are spatial analysis and statistics, GIS, geospatial technologies, photogrammetry and computer vision. Feel encouraged to contact me with any questions and feedback regarding my courses. I would be more than happy to find points for improvement and receive suggestions for next courses!

Alexandros Voukenas

GIS and Remote Sensing Engineer
