Thursday, October 30, 2014

Lab 7: Digital Change Detection

Background and Goal
The goal of this lab is to develop the knowledge and skills necessary to compute changes that occur over time using land use/land cover images. This digital change detection is important because it allows the monitoring of environmental and socioeconomic transitions that happen through time. In this lab we learn several techniques, ranging from a quick qualitative analysis to a statistical analysis and a model that reveals specific from-to changes between classes over time.

Methods
To do a quick qualitative change detection, we used a Write Function Memory Insertion technique. The study area was the counties surrounding Eau Claire. For this change detection, we used band 3 from a 2011 image and two copies of band 4 from a 1991 image. These images were stacked using the Layer Stacking function. The new image was opened, and the layers were adjusted so that the band 3 image was viewed through the red color gun, while the two band 4 images were viewed through the green and blue color guns. Because the 2011 data drives the red gun and the 1991 data drives the green and blue guns, pixels whose brightness changed between 1991 and 2011 take on a color cast, while unchanged pixels appear in neutral tones; this highlights any changes that occurred between the two dates.
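As a rough illustration of the stacking step, here is a minimal Python sketch using rasterio and numpy; the file names are hypothetical stand-ins, and the lab itself performed this in ERDAS rather than in code.

```python
import numpy as np
import rasterio

# Read the two dates (assumed co-registered); file names are illustrative.
with rasterio.open("ec_1991.img") as src_1991, \
     rasterio.open("ec_2011.img") as src_2011:
    b3_2011 = src_2011.read(3)   # displayed through the red color gun
    b4_1991 = src_1991.read(4)   # displayed through the green and blue guns
    profile = src_2011.profile

# Stack into a three-band composite: unchanged pixels have similar values
# in all three guns and appear in neutral tones, while changed pixels
# take on a color cast.
rgb = np.stack([b3_2011, b4_1991, b4_1991])

profile.update(count=3)
with rasterio.open("wfm_insertion.img", "w", **profile) as dst:
    dst.write(rgb)
```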

Next we learned post-classification comparison change detection. For this exercise, we used images from 2001 and 2006 of the Milwaukee Metropolitan Statistical Area (MSA). This change detection technique calculates quantitative changes and is used to create a model that shows specific class changes between the two years. To quantify the land change, we examined the histogram values for each class in both images; the histogram shows how many pixels fall within each class. The histogram values for all classes in both images were recorded in an Excel spreadsheet and converted to hectares. For this sensor the spatial resolution is 30 meters, so one pixel covers 900 square meters. With this knowledge we converted the histogram values to square meters, and then to hectares by multiplying the square meters by 0.0001. Figure 1 shows the class area data for both years. A table was then created to calculate the percent change in hectares for each class between 2001 and 2006.

Figure 1: The histogram data and conversion process for the 2001 and 2006 images.
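As a concrete illustration of the conversion, here is a small sketch using made-up pixel counts in place of the lab's actual histogram values; the class names and counts are hypothetical.

```python
# Convert per-class pixel counts to hectares and compute percent change.
PIXEL_AREA_M2 = 30 * 30      # a 30 m x 30 m Landsat pixel = 900 m^2
M2_PER_HECTARE = 10_000      # so hectares = square meters * 0.0001

counts_2001 = {"Urban": 450_000, "Agriculture": 820_000}   # hypothetical
counts_2006 = {"Urban": 475_000, "Agriculture": 790_000}   # hypothetical

for cls in counts_2001:
    ha_2001 = counts_2001[cls] * PIXEL_AREA_M2 / M2_PER_HECTARE
    ha_2006 = counts_2006[cls] * PIXEL_AREA_M2 / M2_PER_HECTARE
    pct_change = (ha_2006 - ha_2001) / ha_2001 * 100
    print(f"{cls}: {ha_2001:.0f} ha -> {ha_2006:.0f} ha ({pct_change:+.1f}%)")
```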

To calculate specific from-to change between classes, we used a rather sophisticated model in ERDAS’ Model Maker. This model consisted of two input rasters, five pairs of functions, five pairs of temporary rasters, five more functions, and finally five output rasters, connected as shown in Figure 2. The input rasters are the images from 2001 and 2006. For this exercise we were trying to reveal change from Agriculture to Urban, Wetlands to Urban, Forest to Urban, Wetland to Agriculture, and Agriculture to Bare Soil. These classes were extracted into the temporary rasters (in that order) using an EITHER/IF/OR function, which assigns a pixel a 1 if it belongs to the desired class or a 0 if it does not, creating a binary model of the desired classification. In the next function, each pair of binary models is combined with a Bitwise AND function: pixels flagged in both dates are kept, while the rest are discarded (creating another binary model). This effectively reveals the specific areas of change between the desired classes over the two time periods.

Figure 2: The complex model that was used for the from-to digital change detection.
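One branch of the model can be sketched in a few lines of numpy; the class codes and stand-in rasters below are hypothetical, but the EITHER/IF/OR extraction and the bitwise AND overlay mirror the model's logic.

```python
import numpy as np

AG, URBAN = 82, 21   # hypothetical class codes

# Stand-in class rasters for the two dates.
rng = np.random.default_rng(0)
lc_2001 = rng.choice([21, 82, 41], size=(100, 100))
lc_2006 = rng.choice([21, 82, 41], size=(100, 100))

# EITHER 1 IF (class matches) OR 0 OTHERWISE -> binary temporary rasters.
ag_2001 = (lc_2001 == AG).astype(np.uint8)
urban_2006 = (lc_2006 == URBAN).astype(np.uint8)

# Bitwise AND keeps only pixels that were Agriculture in 2001
# AND Urban in 2006 -- the from-to change of interest.
ag_to_urban = ag_2001 & urban_2006
print("pixels changed from Agriculture to Urban:", ag_to_urban.sum())
```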


Results
Figure 3 shows the final result for the Write Function Memory. This technique is quick and easy, though it doesn’t produce very concrete or specific results; all you can do with the image is visually examine it for areas of change. Figure 4 shows the final table for the quantitative analysis of the Milwaukee MSA. According to this table, Bare Soil shows the greatest change, with a 23.6% increase over the five-year period. Figure 5 shows the final map of the from-to digital change detection method. From this model it is easy to see where change occurred in the study area: Waukesha County went through the most change, while Milwaukee County went through the least.

Figure 3: The result of the Write Function Memory digital change detection technique. Areas that are a light pink color show change between 1991 and 2011, while the red and white areas show no change.

Figure 4: The results of the quantitative analysis of change between 2001 and 2006 in the Milwaukee MSA. Bare Soil changed the most, with an increase of 23.6%, while Wetlands changed the least, with a decrease of 0.7%.

Figure 5: The results of the from-to digital change detection technique. Waukesha County saw the most change, while Milwaukee County saw the least.


Sources
Earth Resources Observation and Science Center. (1991). [Landsat image of counties surrounding Eau Claire]. United States Geological Survey. Provided by Cyril Wilson.

Earth Resources Observation and Science Center. (2011). [Landsat image of counties surrounding Eau Claire]. United States Geological Survey. Provided by Cyril Wilson.

Fry, J., Xian, G., Jin, S., Dewitz, J., Homer, C., Yang, L., Barnes, C., Herold, N., & Wickham, J. (2011). Completion of the 2006 National Land Cover Database for the conterminous United States. Photogrammetric Engineering & Remote Sensing, 77(9), 858-864.

Homer, C., Dewitz, J., Fry, J., Coan, M., Hossain, N., Larson, C., Herold, N., McKerrow, A., VanDriel, J. N., & Wickham, J. (2007). Completion of the 2001 National Land Cover Database for the conterminous United States. Photogrammetric Engineering & Remote Sensing, 73(4), 337-341.





Tuesday, October 28, 2014

Lab 6: Accuracy Assessment

Background and Goal
In this lab, we learned how to evaluate the accuracy of classification maps. This knowledge is important because it is a necessary post-processing exercise that highlights the strengths and weaknesses of a classification map. The results of accuracy assessments provide an easily interpreted quality rating of the map as a whole, as well as of each class within the map.

Methods
Accuracy assessments were completed for both the unsupervised classification map and the supervised classification map that were discussed in Labs 4 and 5 respectively. The process is identical for both assessments. The first step is to generate random ground reference testing samples. These samples are collected either through field work with a GPS unit or from a high resolution image; in this lab, we used a high resolution image as our reference. The testing samples were generated using ERDAS’ Accuracy Assessment tool, with the classification map as the image to be assessed and the high resolution image as the reference (the points appear on the reference image, and the class recorded at each point is compared against the classification map). Normally the total number of testing samples should be at least 50 for each class (meaning a minimum of 250 for 5 classes), but for the sake of time we used only 125 samples for our 5 classes. We used a stratified random distribution parameter to ensure a quality sample, and we set a minimum of 15 samples per class (this makes sure classes of smaller area are still factored into the accuracy assessment). Figure 1 displays the random testing samples on the high resolution image. Once the random testing samples were generated, each sample was located and a reference classification was assigned. These reference classes were then compared to the classes on the classification map, creating a matrix from which the accuracy assessment report is produced.

Figure 1: All of the test samples that were used in one of the Accuracy Assessments. The points were randomly generated to reduce bias, and there are 125 points total.
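As an illustration of these sampling parameters, here is a rough numpy sketch of stratified random point generation with a per-class minimum; the class raster and the proportional allocation scheme are stand-ins, not ERDAS' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(42)
classified = rng.integers(1, 6, size=(500, 500))  # stand-in 5-class map

TOTAL, MIN_PER_CLASS = 125, 15
classes, counts = np.unique(classified, return_counts=True)

# Allocate samples proportionally to class area, then enforce the minimum
# so small classes are still represented.
alloc = np.maximum(np.round(TOTAL * counts / counts.sum()), MIN_PER_CLASS)

samples = []
for cls, n in zip(classes, alloc.astype(int)):
    rows, cols = np.nonzero(classified == cls)
    pick = rng.choice(len(rows), size=n, replace=False)
    samples += [(r, c, cls) for r, c in zip(rows[pick], cols[pick])]
print(len(samples), "test points generated")
```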


Results
Figure 2 displays the accuracy assessment report of the unsupervised classification map, while Figure 3 displays the accuracy assessment report of the supervised classification map. The matrix displays the overall accuracy, Kappa statistic, producer’s accuracy, and user’s accuracy. The rows in the matrix list the classes that were actually assigned on the classification map, while the columns display the classes that were recorded at the reference points. The matrix thus reveals producer’s accuracy (which reflects omission error) and user’s accuracy (which reflects commission error). Overall accuracy is the overall proportion of correctly classified pixels in the image based on the reference sample. The Kappa statistic measures the difference between the observed agreement between the two maps and the agreement that might be attained by chance alone (it calculates to what degree chance plays a role in an apparently accurate classification); a value below 0.4 indicates poor agreement, while a value above 0.8 indicates strong agreement. The producer’s accuracy for a given class examines how many of that class’s reference pixels were classified correctly on the map. The user’s accuracy for a given class examines how many of the pixels classified as that class on the map are actually what they claim to be. Overall, the unsupervised classification map is more accurate, though neither map is accurate enough for practical use. A short sketch of how these measures are computed follows the figure captions.

Figure 2: The accuracy assessment report for the unsupervised classification map. The map has a low overall accuracy, a fairly low Kappa statistic, and low producer’s and user’s accuracy (though it is slightly more accurate than the supervised classification map).


Figure 3: The accuracy assessment report for the supervised classification map. The map has a very low overall accuracy and a much lower Kappa statistic. Except for Water, the producer's and user's accuracies are both awfully low. This map was not accurate whatsoever.
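These measures can all be computed directly from the confusion matrix laid out as described above (rows are map classes, columns are reference classes); the matrix values in this sketch are invented.

```python
import numpy as np

cm = np.array([[21, 3, 1],    # toy 3-class confusion matrix:
               [4, 18, 5],    # rows = classified map, columns = reference
               [2, 6, 15]])

n = cm.sum()
overall = np.trace(cm) / n                # overall accuracy
users = np.diag(cm) / cm.sum(axis=1)      # user's accuracy (1 - commission)
producers = np.diag(cm) / cm.sum(axis=0)  # producer's accuracy (1 - omission)

# Kappa: agreement beyond what chance alone would produce.
chance = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2
kappa = (overall - chance) / (1 - chance)
print(overall, kappa, users, producers)
```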


Sources
Earth Resources Observation and Science Center. (2000). [Landsat image used to create a classification map]. United States Geological Survey. Provided by Cyril Wilson.

National Agriculture Imagery Program. (2005). [High resolution image used for reference in accuracy assessment]. United States Department of Agriculture. Provided by Cyril Wilson.



Thursday, October 16, 2014

Lab 5: Pixel-Based Supervised Classification

Background and Goal
In this lab, we expand on the classification process. Instead of performing an unsupervised classification as we did last week, we performed a pixel-based supervised classification. This process uses training samples collected from a training image to assist (or supervise) the classification process.

Methods
The first step in a supervised classification is to collect training samples for the desired classes. For this exercise, we collected a minimum of 12 samples for Water, 11 for Forest, 9 for Agriculture, 11 for Urban/Built-Up, and 7 for Bare Soil. The study area was again Eau Claire and Chippewa Counties. To collect training samples, we used Google Earth to confirm the land cover visible in our ERDAS viewer. We then used the drawing feature to create an area of interest within each class and imported its signature into ERDAS’ Signature Editor tool. This was done for each class until a total of 50 signatures were collected (keeping in mind the minimums for each class). Figure 1 shows the complete table of signatures that were collected.

Figure 1: The complete table of training samples.

It was then necessary to evaluate the quality of our training samples. First, they were visually examined by comparing the spectral signatures of the samples within each individual class. If a sample did not follow the spectral signature of the class it was supposed to belong to, it was discarded and a new sample was collected. Once all of the spectral signatures looked as they should, a signature separability test was performed to examine the statistical quality of the samples. This function finds the four bands with the best average separability of features and reports a separability score, which must be above 1900 for the signatures to be considered well separated. Once it was confirmed that the training samples were of good quality, all of the signatures collected for each class were merged to create one signature per class; these merged signatures were then used to perform the supervised classification. To do this, we simply used the Supervised Classification tool in ERDAS with the merged signatures as training samples, choosing a Maximum Likelihood classification to yield the best results. Figure 2 shows the separability score results, and Figure 3 shows the merged signatures; a sketch of the maximum likelihood decision rule follows the figures.

Figure 2: My results of the Separability test. The best average score was 1974, and the four bands with best average separability were bands 1, 2, 3, and 4.
Figure 3: The merged spectral signatures to be used in the supervised classification.
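The sketch below classifies pixels by the highest Gaussian likelihood under per-class statistics, which is the idea behind maximum likelihood classification; the band statistics and pixel values are hypothetical, and this illustrates the rule rather than ERDAS' exact implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Per-class mean vector and covariance, as estimated from merged signatures
# (hypothetical 4-band statistics for two classes).
sigs = {
    "Water":  (np.array([20., 15., 10., 5.]),  np.eye(4) * 4.0),
    "Forest": (np.array([30., 28., 25., 60.]), np.eye(4) * 9.0),
}

pixels = np.array([[21., 16., 11., 6.],
                   [31., 27., 26., 58.]])   # two pixels, 4 bands each

# Assign each pixel to the class with the highest Gaussian log-likelihood.
scores = np.column_stack([
    multivariate_normal(mean, cov).logpdf(pixels)
    for mean, cov in sigs.values()
])
labels = np.array(list(sigs))[scores.argmax(axis=1)]
print(labels)   # expected: ['Water' 'Forest']
```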

Results
Overall, this classification technique yielded poorer results than the unsupervised classification performed last week. The Water class was not represented to its full extent, and the Urban/Built-Up class was even more expansive than in the unsupervised classification. With more time to collect additional signature samples, this technique would probably show better results. Figure 4 compares the unsupervised and supervised results, and Figure 5 is a detailed map of the supervised results.

Figure 4: The unsupervised results are on the left, and the supervised results are on the right. In the unsupervised image, Forest, Water and Urban/Built-Up are much better classified.
Figure 5: A complete map of the final result of supervised classification.

Thursday, October 9, 2014

Lab 4: Unsupervised Classification

Background and Goal
The goal of this lab is to learn how to identify and classify different physical and manmade features in a remotely sensed image using an unsupervised classification algorithm. Specifically, this lab helped build an understanding of the input and execution requirements for unsupervised classification. In addition, we learned how to recode the different spectral clusters into a useful land use/land cover classification scheme.

Methods
To start, we ran a very basic unsupervised classification using the ISODATA algorithm. In ERDAS, we used the default settings in the Unsupervised Classification tool, while bringing the minimum and maximum number of classes down to 10 and increasing the iterations to 250 (to ensure that the convergence threshold is met before the iterations run out). The algorithm organizes the pixels into 10 classes according to their spectral signatures. After the model was complete, Google Earth was used to classify the output image into five classes: Water, Forest, Agriculture, Urban/Built-up, and Bare Soil. By highlighting each of the 10 created classes and comparing the highlighted areas with the Google Earth viewer, each of the 10 classes was reclassified into the five classes mentioned above. This part of the lab served as a first try at unsupervised classification.
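Common Python libraries do not ship ISODATA, but as a rough stand-in the clustering step can be sketched with plain k-means (ISODATA additionally merges and splits clusters between iterations); the six-band image here is a random placeholder.

```python
import numpy as np
from sklearn.cluster import KMeans

bands, rows, cols = 6, 200, 200
image = np.random.rand(bands, rows, cols)   # stand-in 6-band image

# Reshape to (n_pixels, n_bands) so each pixel is one feature vector.
X = image.reshape(bands, -1).T

# 10 spectral classes, capped at 250 iterations, as in the lab settings.
kmeans = KMeans(n_clusters=10, max_iter=250, n_init=10).fit(X)
clusters = kmeans.labels_.reshape(rows, cols)
```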

In order to make the previous model more accurate, we went back and performed a second unsupervised classification. This time 20 classes were created, with a convergence threshold of 0.92 instead of 0.95. These 20 classes were then reclassified into the five previously mentioned classes in the same manner described above. Once everything was reclassified, the Recode tool was used to compile all 20 classes into the five needed for a land use/land cover map (Figure 1 displays this change, and a small recode sketch follows it). Once this was complete, a land use/land cover map was created in ArcMap.

Figure 1: The image's attribute table before and after the classes were recoded. In the before picture there are 20 classes and 5 colors; in the after picture there are only 5 classes (with their respective colors).
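The recode step itself amounts to a lookup table mapping the 20 spectral clusters to the 5 land use/land cover classes; the particular cluster-to-class assignments in this sketch are invented.

```python
import numpy as np

# Index = cluster id (0-19), value = LULC class
# (1 Water, 2 Forest, 3 Agriculture, 4 Urban/Built-up, 5 Bare Soil).
lut = np.array([1, 1, 2, 2, 2, 3, 3, 3, 3, 4,
                4, 4, 5, 5, 3, 2, 4, 3, 5, 1])

clusters = np.random.randint(0, 20, size=(200, 200))  # stand-in raster
lulc = lut[clusters]                                  # recoded 5-class map
```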

Results
Figure 2 displays the results from the first unsupervised classification. Because there were only 10 classes, the image is overgeneralized. This creates overlap between some of the features in the image, which is easily seen in the northwestern region, where there are several Urban/Built-up areas. Most of these areas are actually agriculture, but they were falsely classified. Agriculture also seems to swallow the Bare Soil and Forest classes.

Figure 2: The Land Use/Land Cover image after reclassifying an unsupervised classification of 10 classes.

Figure 3 displays the results of the second unsupervised classification. Because there were 20 classes, the image is more accurate, with less generalized classification. You can see more Bare Soil and Forest classified in this map, though the areas to the northwest are still classified as Urban/Built-up; the spectral signatures are too similar for the classifier to pick up the difference.

Figure 3: The Land Use/Land Cover map after reclassifying an unsupervised classification of 20 classes.

Thursday, October 2, 2014

Lab 3: Radiometric and Atmospheric Correction

Background and Goal
The main goal of this lab is to learn how to correct remotely sensed images by accounting for atmospheric interference. Several methods were used throughout the lab: Empirical Line Calibration (ELC), Dark Object Subtraction (DOS), and multidate image normalization. ELC utilizes a spectral library to compare the spectral signature of an object in the image to the object's actual signature, and corrects the difference. DOS utilizes algorithms that take sensor gain, offset, solar irradiance, solar zenith angle, atmospheric scattering, and path radiance into account. These first two methods are absolute atmospheric correction, while multidate image normalization is relative atmospheric correction; it is used when comparing two images of the same location from different dates. It utilizes radiometric ground control points to build regression equations, which are then used to correct one image to match the other.

Methods
The first step in ELC is to collect several different spectral signatures from the image. This was done using ERDAS’ Spectral Analysis tool. In order for this method to be effective, it was necessary to select points from areas in the image with different albedos. Spectral samples of asphalt, forest, grass, aluminum roofs, and water were selected from the image. Each of these samples was then paired with its respective spectral signature from the ASTER spectral library. The paired spectral signatures were then used to create a regression equation for each band of the image, and these equations were used to correct the image.
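The per-band regression at the heart of ELC can be sketched as a least-squares line relating image values to library reflectance; the paired sample values below are invented.

```python
import numpy as np

image_vals = np.array([34., 58., 121., 180., 45.])       # from the image
library_vals = np.array([0.05, 0.12, 0.31, 0.55, 0.08])  # ASTER library

# Fit reflectance = gain * DN + offset for this band.
gain, offset = np.polyfit(image_vals, library_vals, deg=1)

band = np.random.randint(0, 255, size=(100, 100)).astype(float)
corrected = gain * band + offset   # repeated for every band of the image
```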

Dark object subtraction is conducted in two steps. First, the satellite image is converted to at-satellite spectral radiance using the formula in Figure 1. Then this spectral radiance is converted into true surface reflectance with the formula in Figure 2. All the data needed for the first formula is found in the image’s metadata file. To execute this first step, a model was created in ERDAS’ Model Maker tool: for each band an input raster, a function, and an output raster were created, and the model was run with the first formula as the function and the bands of the original image as the input rasters. The second formula involves many more factors that need to be calculated. D² is the square of the distance between the Earth and the Sun; the distance was determined by calculating the Julian date of the day the image was taken and looking it up in a provided table that gives the Earth-Sun distance for every day of the year. Lλ is the spectral radiance that was created with the first formula. Lλhaze is the estimated path radiance of the image, determined by examining the histogram for each band and locating the point on the X-axis where the band’s histogram begins. TAUv and TAUz estimate the optical thickness of the atmosphere when the image was collected; TAUv remained constant at 1 because the sensor is at nadir (pointed straight down at the target), while the TAUz values were given in a table. ESUNλ is the solar irradiance, determined by finding the respective band in a table for the sensor we were using. θs is the sun zenith angle, which is determined by subtracting the sun’s elevation angle (found in the metadata) from 90 degrees. All of this was then put into a model in much the same way as the previous step, and the model was run to produce the corrected image; a sketch of both steps follows the formula figures.

Figure 1: The formula used in the first step of DOS atmospheric correction.

Figure 2: The formula used in the second step of DOS atmospheric correction.
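Assuming the standard formulation shown in the figures, both DOS steps can be sketched for a single band as follows; every calibration constant here is a placeholder, not a value from the lab's actual metadata or tables.

```python
import numpy as np

dn = np.random.randint(1, 255, size=(100, 100)).astype(float)

# Step 1: DN -> at-satellite radiance, using the sensor calibration
# (LMIN, LMAX, QCALMIN, QCALMAX come from the metadata file).
lmin, lmax, qcalmin, qcalmax = -1.5, 193.0, 1.0, 255.0
L = (lmax - lmin) / (qcalmax - qcalmin) * (dn - qcalmin) + lmin

# Step 2: radiance -> surface reflectance.
L_haze = 2.4          # path radiance read off the band histogram
d2 = 1.0124 ** 2      # Earth-Sun distance squared, from the Julian date table
esun = 1826.0         # solar irradiance for this band and sensor
theta_s = np.radians(90.0 - 63.2)   # 90 deg minus the sun elevation angle
tau_v, tau_z = 1.0, 0.70            # sensor at nadir; table value for TAUz

R = (np.pi * (L - L_haze) * d2) / (tau_v * esun * np.cos(theta_s) * tau_z)
```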

For multidate image normalization, an image of Chicago from 2009 was corrected to match an image from 2000. First, 15 radiometric ground control points were matched between the two images, taken from water bodies and urban areas. These points were collected by opening two viewers in ERDAS and matching each point with a point in the same location in the second image; Figure 3 shows the final result of this step. The data was then viewed in a table, and for each point the mean brightness value from every band was collected and recorded in an Excel table (Figure 4 shows this data). From these tables, a regression equation was created for each band by plotting the paired means from the two years in a scatter plot. Models were then built in Model Maker to normalize the 2009 image, with each regression equation used as the function in the model: y is the output image and x is the input image. The result is a normalized image which can then be compared with the image from 2000. A sketch of one band's regression fit follows the figures.

Figure 3: The two viewers with each paired radiometric ground control point in place.

Figure 4: The tables of the mean brightness value for each radiometric ground control point of both images.
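Building and applying one band's normalization equation can be sketched as below; the paired mean brightness values are invented stand-ins for the ones recorded in Figure 4.

```python
import numpy as np

means_2000 = np.array([45., 62., 88., 120., 30.])   # reference image (y)
means_2009 = np.array([52., 70., 99., 131., 38.])   # image to normalize (x)

# Least-squares fit y = a*x + b from the scatter plot of paired means.
a, b = np.polyfit(means_2009, means_2000, deg=1)

band_2009 = np.random.randint(0, 255, size=(100, 100)).astype(float)
normalized = a * band_2009 + b   # repeated per band, as in the models
```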

Results
For the ELC correction, the output image failed to build pyramid layers. Because of this, the image could not be viewed at its full extent, though it could still be examined when zoomed in. Shown in Figure 5 are the original image and the result. There is little visible difference between the two, but the spectral profiles show that the final image is slightly better normalized to those of the spectral library.

Figure 5: The original image is on the left, and the ELC corrected image is on the right. There is little visual difference between the two images.

The DOS correction was much more successful. Figure 6 shows the original and the result. The final image has much richer colors and a higher contrast. It is easier to see differences between the different vegetation types, and there is clearer definition in the urban area. The original image looks washed out and hazy in comparison.

Figure 6: The original image is on the left, and the DOS corrected image is on the right. This method had a much better result than the ELC method.

The results of the multidate image normalization are hard to interpret. Figure 7 shows the image from 2000 on the left, with the original 2009 image in the upper right and the normalized 2009 image in the lower right. Visually, the original image looks more similar to the 2000 image. The normalized 2009 image has much more vivid colors, and higher contrast.

Figure 7: The image from 2000 is on the left, with the original 2009 image in the upper right and the normalized 2009 image in the lower right. The normalized image looks much better than the original, but it doesn't appear to have similar characteristics to the image from 2000.