Thursday, October 2, 2014

Lab 3: Radiometric and Atmospheric Correction

Background and Goal
The main goal of this lab is to learn how to correct remotely sensed images by accounting for atmospheric interference. Several methods were used throughout the lab: Empirical Line Calibration (ELC), dark object subtraction (DOS), and multidate image normalization. ELC uses a spectral library to compare the spectral signature of an object in the image to the object's actual signature and corrects for the difference. DOS uses algorithms that take sensor gain, offset, solar irradiance, solar zenith angle, atmospheric scattering, and path radiance into account. These first two methods are absolute atmospheric correction, while multidate image normalization is relative atmospheric correction. This method is used when comparing two images of the same location taken on different dates. It uses radiometric ground control points to build regression equations, which are then used to correct one image to match the other.

Methods
The first step in ELC is to collect several different spectral signatures from the image. This was done using Erdas' Spectral Analysis tool. For this method to be effective, it was necessary to select points from areas of the image with different albedos. Spectral samples of asphalt, forest, grass, aluminum roofs, and water were selected from the image. Each of these samples was then paired with its respective spectral signature from the ASTER spectral library. The two spectral signatures were then used to create regression equations for each band of the image, and these equations were used to correct the image.
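As an illustration of what those per-band regression equations look like, below is a minimal sketch of the calculation in Python. The sample values and the band array are made-up placeholders; in the lab the pairing and regression were handled inside Erdas' Spectral Analysis tool.

import numpy as np

# Hypothetical mean values of the five sample targets (asphalt, forest, grass,
# aluminum roof, water) for a single band, paired with the reflectance of the
# same materials taken from the ASTER spectral library.
image_values = np.array([52.0, 31.0, 44.0, 180.0, 18.0])
library_reflectance = np.array([0.09, 0.04, 0.10, 0.55, 0.02])

# Fit a line (gain and offset) relating image values to reference reflectance;
# this is the regression equation that ELC builds for each band.
gain, offset = np.polyfit(image_values, library_reflectance, deg=1)

# Apply the equation to every pixel of the band to correct it.
band = np.random.randint(0, 256, size=(100, 100)).astype(float)  # stand-in raster
corrected_band = gain * band + offset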

Dark object subtraction is conducted in two steps. First, the satellite image is converted to at-satellite spectral radiance using the formula in Figure 1. Then this spectral radiance is converted into true surface reflectance with the formula in Figure 2. All of the data needed for the first formula are found in the image's metadata file. To execute this first step, a model was created in Erdas' Model Maker tool. For each band an input raster, a function, and an output raster were created, and the model was run with the first formula as the function and the bands of the original image as the input rasters. The second formula requires many more factors to be calculated. D is the Earth-Sun distance, which appears squared in the formula. It was determined by calculating the Julian date of the day the image was taken and looking it up in a provided table that lists the Earth-Sun distance for every day of the year. Lλ is the spectral radiance created with the first formula. Lλhaze is the estimated path radiance of the image, determined by examining the histogram for each band and locating the point on the X-axis where the histogram begins. TAUv and TAUz estimate the optical thickness of the atmosphere when the image was collected. TAUv remained constant at 1 because the sensor is at nadir (pointed straight down toward the target), and the TAUz values were given in a table. ESUNλ is the solar irradiance for each band, found in a table for the sensor that was used. θs is the sun zenith angle, determined by subtracting the sun's elevation angle, found in the metadata, from 90 degrees. All of this was then put into a model in much the same way as the previous step, and the model was run to produce the corrected image.

Figure 1: The formula used in the first step of DOS atmospheric correction.
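The figure itself is not reproduced in this post, but the first step follows the standard rescaling of raw digital numbers to at-satellite radiance using the calibration range (LMIN/LMAX) from the metadata. A rough sketch, with placeholder calibration numbers rather than the values from this image's metadata:

import numpy as np

# Placeholder calibration values; the real numbers come from the metadata file.
lmin, lmax = -1.52, 193.0        # radiance range for the band (W / m^2 / sr / um)
qcalmin, qcalmax = 1.0, 255.0    # quantized digital number range

dn = np.random.randint(1, 256, size=(100, 100)).astype(float)  # stand-in band

# Rescale digital numbers to at-satellite spectral radiance.
radiance = ((lmax - lmin) / (qcalmax - qcalmin)) * (dn - qcalmin) + lmin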

Figure 2: The formula used in the second step of DOS atmospheric correction.
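Again, the formula appears only in the figure, but it can be pieced together from the terms described above (D, Lλ, Lλhaze, TAUv, TAUz, ESUNλ, and θs). A sketch of one common form of that reflectance equation, with placeholder inputs:

import numpy as np

# Placeholder inputs; the real values come from the metadata, the Julian-date
# table (d), the band histograms (l_haze), and the provided TAUz table.
d = 1.0142           # Earth-Sun distance in astronomical units
l_haze = 2.3         # estimated path radiance from the band histogram
tau_v = 1.0          # transmittance toward the sensor (1 because the view is at nadir)
tau_z = 0.70         # transmittance in the sun's direction, from the table
esun = 1536.0        # solar irradiance for the band
sun_elevation = 56.4 # sun elevation angle from the metadata
theta_s = np.deg2rad(90.0 - sun_elevation)  # sun zenith angle

radiance = np.full((100, 100), 80.0)  # stand-in output from the first step

# Convert spectral radiance to surface reflectance, subtracting the haze term.
reflectance = (np.pi * d ** 2 * (radiance - l_haze)) / (esun * np.cos(theta_s) * tau_z * tau_v)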

For multidate image normalization, an image of Chicago from 2009 was corrected to match an image from 2000. First, 15 radiometric ground control points were matched between the two images, taken from water bodies and urban areas. The points were collected by opening the two images in separate viewers in Erdas and placing each point at the same location in both images; Figure 3 shows the final result of this step. The point data were then viewed in a table, and for each point the mean brightness value from every band was recorded in an Excel table, shown in Figure 4. From these tables, a regression equation was created for each band by plotting the 2009 means against the 2000 means in a scatter plot. Models were then created with Model Maker to normalize the 2009 image, using the regression equations from the previous step as the model functions, with x being the input 2009 image and y being the output image. The result is a normalized image that can then be compared to the image from 2000.

Figure 3: The two viewers with each paired radiometric ground control point in place.

Figure 4: The tables of the mean brightness value for each radiometric ground control point of both images.
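In place of the Excel scatter plots and Model Maker functions, the per-band normalization can be sketched in a few lines of Python. The mean brightness values below are placeholders standing in for the table in Figure 4, and only a handful of the 15 points are shown.

import numpy as np

# Placeholder mean brightness values for one band at a few of the radiometric
# ground control points (the lab used 15 points per image).
means_2009 = np.array([21.0, 35.0, 48.0, 62.0, 77.0, 90.0])
means_2000 = np.array([18.0, 30.0, 41.0, 55.0, 68.0, 80.0])

# Regressing the 2000 means on the 2009 means gives the normalization
# equation y = m * x + b for this band (the scatter-plot trendline).
m, b = np.polyfit(means_2009, means_2000, deg=1)

# x is the original 2009 band, y is the normalized band that should now
# be comparable to the 2000 image.
band_2009 = np.random.randint(0, 256, size=(200, 200)).astype(float)
band_2009_normalized = m * band_2009 + b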

Results
For the ELC correction, the output image failed to build pyramid layers. Because of this, the image could not be viewed at its full extent, although it could still be examined when zoomed in. Figure 5 shows the original image and the result. There is little visible difference between the two, but the spectral profiles show that the corrected image is slightly closer to the signatures in the spectral library.

Figure 5: The original image is on the left, and the ELC corrected image is on the right. There is little visual difference between the two images.

The DOS correction was much more successful. Figure 6 shows the original image and the result. The final image has much richer colors and higher contrast. It is easier to see differences between vegetation types, and there is clearer definition in the urban area. The original image looks washed out and hazy in comparison.

Figure 6: The original image is on the left, and the DOS corrected image is on the right. This method had a much better result than the ELC method.

The results of the multidate image normalization are hard to interpret. Figure 7 shows the image from 2000 on the left, with the original 2009 image in the upper right and the normalized 2009 image in the lower right. Visually, the original 2009 image looks more similar to the 2000 image, while the normalized 2009 image has much more vivid colors and higher contrast.

Figure 7: The image from 2000 is on the left, with the original 2009 image in the upper right and the normalized 2009 image in the lower right. The normalized image looks much better than the original, but it doesn't appear to have similar characteristics to the image from 2000.
