Thursday, December 11, 2014

Lab 12: Hyperspectral Remote Sensing

Background and Goal
The objective of this lab is to gain experience with the processing of hyperspectral remotely sensed data. Because hyperspectral images have many bands at specific wavelengths, it is common for several bands to be corrupted by atmospheric influences or sensor error. These noisy bands must be detected and removed before processing. We then learned how to detect target features in the hyperspectral image.

Methods
For this lab we used Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data of a geologic field site in Nevada. The image consisted of 255 bands. Anomaly detection was used to compare results between an image that was not preprocessed and an image that had its bad bands removed. Anomaly detection identifies pixels whose spectral signatures deviate significantly from the rest of the image. First, we ran anomaly detection on the AVIRIS image without removing any bands. Then, we used the Bad Band Selection Tool to identify the bad bands within the image. Figure 1 shows this tool, with the bad bands highlighted. A total of 20 bands were removed. The two outputs of the anomaly detection function were then compared for differences.

Figure 1: The bad band selection tool, with the bad bands highlighted in red.
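Outside of ERDAS, the same kind of bad-band removal and anomaly screening can be prototyped in a few lines of Python. The sketch below assumes a hyperspectral cube loaded as a NumPy array (the file name and band indices are placeholders) and flags anomalous pixels with the classic RX detector (Mahalanobis distance from the scene's mean spectrum); it is an illustration of the idea, not the algorithm ERDAS uses internally.

```python
import numpy as np

def rx_anomaly_scores(cube):
    """RX detector: Mahalanobis distance of each pixel spectrum
    from the global mean spectrum. cube shape: (rows, cols, bands)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)        # pseudo-inverse guards against a singular covariance
    diff = X - mu
    scores = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return scores.reshape(rows, cols)

# Hypothetical inputs: a (rows, cols, 255) reflectance cube and a list of noisy bands.
cube = np.load("aviris_cube.npy")            # placeholder file name
bad_bands = [0, 1, 107, 108, 109, 153, 154]  # illustrative indices only
keep = [b for b in range(cube.shape[2]) if b not in bad_bands]

scores_all = rx_anomaly_scores(cube)
scores_clean = rx_anomaly_scores(cube[:, :, keep])

# Threshold the scores (e.g., top 1%) to produce an anomaly mask for comparison.
mask_clean = scores_clean > np.percentile(scores_clean, 99)
```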

Next, we explored the use of target detection. This process creates a mask that highlights instances of a given spectral signature (the target). For this lab, the target was the mineral buddingtonite. First, we used a simple target detection method with a spectral signature collected from a classified image. This creates a mask in much the same way the anomaly detection did, highlighting areas with the matching spectral signature. Next, we used a spectral signature from the USGS spectral library and analyzed the two outputs for differences.
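One simple way to reproduce the target detection step outside the software is the spectral angle mapper: compute the angle between each pixel spectrum and the target (buddingtonite) spectrum, and keep pixels whose angle falls below a threshold. The sketch below assumes the band-trimmed cube from the previous snippet and a library spectrum resampled to the same bands; the file names and threshold are placeholders.

```python
import numpy as np

def spectral_angle(cube, target):
    """Spectral angle (radians) between each pixel spectrum and a target spectrum.
    cube: (rows, cols, bands), target: (bands,)."""
    X = cube.reshape(-1, cube.shape[2]).astype(np.float64)
    num = X @ target
    denom = np.linalg.norm(X, axis=1) * np.linalg.norm(target)
    angles = np.arccos(np.clip(num / denom, -1.0, 1.0))
    return angles.reshape(cube.shape[:2])

# Hypothetical inputs: trimmed cube and a USGS library spectrum resampled to its bands.
cube_clean = np.load("aviris_cube_trimmed.npy")          # placeholder
buddingtonite = np.load("buddingtonite_resampled.npy")   # placeholder

angles = spectral_angle(cube_clean, buddingtonite)
target_mask = angles < 0.10   # threshold in radians; tuned per scene
```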

Results
Figures 2 and 3 show the anomaly masks for the image without bands removed and the image with bad bands removed, respectively. There is not much visible difference between them, but using the swipe tool in the spectral analysis workstation, a difference could be seen: after removing the bad bands, the anomalies were highlighted in the same areas, but they were slightly larger.

Figure 2: The anomaly mask of the image without the bad bands removed. 

Figure 3: The anomaly mask of the image with the bad bands removed. The anomalies are in the same spot, though they are slightly larger.


Figure 4 shows the results of the target detection overlay. There was an error with the simple target detection, which produced only a grey box for the target mask. The target detection with the USGS spectral library signature worked fine. I suspect that the simple target detection would have shown more spread and peppering, since its spectral signature was based on a classification image.

Figure 4: The results of the overlay comparing the simple target detection and the spectral library target detection. There was an error with the spectral signature in the simple target detection, leaving only a grey box.

Sources
Erdas Imagine. (2010). [AVIRIS hyperspectral image]. Hexagon Geospatial. Obtained from Erdas Imagine 2010.



Tuesday, December 2, 2014

Lab 11: Lidar Remote Sensing

Background and Goal
Lidar is a rapidly expanding field in the remote sensing world, and it has shown significant growth in recent years because of the detailed three-dimensional information it provides. The main goal of this lab is to gain experience with Lidar processing, as well as knowledge of Lidar data structure. Specifically, we learn how to derive the different surface and terrain models and how to create derivative images from the point cloud. These images include first-return hillshading (which gives a 3D-looking, imitation aerial image), ground-return hillshading (which reveals the elevation of the landscape itself), and intensity imagery (a high-contrast grayscale image, much like the Landsat panchromatic band).

Methods
The first step in Lidar processing is to create an LAS dataset (point cloud) and add the LAS tiles that cover the study area. This was done within ArcMap by creating a LAS dataset and adding the LAS files to it. Because Lidar data is so large, the study area is composed of several LAS tiles. These files were then examined to determine whether they matched the area of interest. This was done by looking at the Z values (elevations) of the points; since the minimum and maximum Z values matched the terrain of Eau Claire, the data were determined to be usable. The dataset was then assigned the correct XY and Z coordinate systems, identified from the metadata. Our data used the NAD 1983 HARN Wisconsin CRS Eau Claire projection for the XY coordinates, in feet, and the NAVD 1988 vertical datum for the Z coordinates, also in feet. We then used a previously projected shapefile of Eau Claire County to confirm that the dataset was properly georeferenced.
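The same Z-range sanity check can be scripted. Below is a minimal sketch using the laspy library, assuming the tiles sit in a local folder (the path is a placeholder); if the reported minimum and maximum elevations match the known elevation range of Eau Claire County in the expected vertical units, the tiles are plausible for the area of interest.

```python
from pathlib import Path
import numpy as np
import laspy  # pip install laspy

# Hypothetical folder of LAS tiles covering the study area.
tile_folder = Path("las_tiles")

for tile in sorted(tile_folder.glob("*.las")):
    las = laspy.read(tile)
    z = np.asarray(las.z)  # scaled elevations in the file's vertical units
    print(f"{tile.name}: {len(z)} points, Z min {z.min():.1f}, Z max {z.max():.1f}")
    # Compare these ranges against the county's known elevation range
    # (in feet, NAVD 1988) to confirm the tiles cover the right area.
```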

Using the LAS Dataset Tool, we were then able to apply different filters to the dataset in order to further analyze the Lidar points. It was possible to examine the different classes that were assigned to each point (Ground, Building, Water, etc.), as well as elevation differences, slope ratios, and contour lines.

In order to create a Digital Surface Model (DSM), we used the LAS Dataset to Raster conversion tool. It was necessary to set the sampling value (cell size) no finer than the nominal pulse spacing; if the cell size were smaller than the point spacing, many cells would contain no returns and the resulting raster would be unreliable. This model used first-return points (the first reflection of each pulse) to reveal the topmost surfaces of objects in the study area. The process created a grayscale raster, which was then converted into a hillshade image to give a 3D effect.

The production of a Digital Terrain Model (DTM) was executed in much the same way. This time, a filter was applied to use only the Lidar points classified as Ground. This effectively removes buildings and vegetation from the surface, leaving only the bare earth. The process created a grayscale raster, which was then converted to a hillshade image to give a 3D effect. DTMs are very useful because they clearly reveal the elevation patterns of the study area.
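Conceptually, both the DSM and DTM steps amount to binning points into a grid (first returns for the DSM, ground-classified returns for the DTM) and shading the result. The sketch below is a simplified stand-in for the LAS Dataset to Raster tool, assuming NumPy arrays of x, y, z, return number, and classification (ground is class 2 in the LAS specification); the cell size and sun angles are illustrative.

```python
import numpy as np

def grid_max(x, y, z, cell):
    """Bin points into a regular grid, keeping the highest Z per cell."""
    col = ((x - x.min()) / cell).astype(int)
    row = ((y.max() - y) / cell).astype(int)
    grid = np.full((row.max() + 1, col.max() + 1), np.nan)
    np.fmax.at(grid, (row, col), z)  # fmax treats NaN as "empty", so the max survives
    return grid

def hillshade(dem, cell, azimuth=315.0, altitude=45.0):
    """Standard slope/aspect hillshade (simplified; NaN cells propagate)."""
    az, alt = np.radians(azimuth), np.radians(altitude)
    dzdy, dzdx = np.gradient(dem, cell)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0, 1)

# Hypothetical point arrays pulled from the LAS tiles (e.g., via laspy):
# dsm = grid_max(x[rn == 1], y[rn == 1], z[rn == 1], cell=5.0)   # first returns
# dtm = grid_max(x[cls == 2], y[cls == 2], z[cls == 2], cell=5.0)  # ground returns
# dsm_shade, dtm_shade = hillshade(dsm, 5.0), hillshade(dtm, 5.0)
```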

Lastly, an intensity image was created. This image shows the strength of the Lidar returns: areas with higher return intensity appear lighter, while areas with lower return intensity appear darker. The image was created using the same process as the DSM and DTM, but with the cells populated from point intensity rather than elevation.

Results
Figure 1 shows the hillshade DSM image, Figure 2 shows the hillshade DTM image, and Figure 3 shows the Intensity image. Note the effects that water has on the images in all three figures. In the DSM and DTM, the water looks strange because of the lack of point density in those areas.

Figure 1: The DSM image. This image was created using first return points, so it is as if we are actually viewing the landscape from above.

Figure 2: The DTM image with hillshading applied. This image highlights the raw elevation differences throughout the study area, with little obstruction from vegetation and buildings.

Figure 3: The Intensity image. Urban surfaces are lighter because more pulses were reflected and thus returned. Vegetation and water are darker because more pulses were absorbed, so fewer returned.


Sources
Eau Claire County. (2013). [Lidar point cloud data]. Eau Claire County. Obtained from Eau Claire County.

Thursday, November 20, 2014

Lab 10: Object-based Classification

Background and Goal
In this lab, we learn how to utilize object-based classification in eCognition. This method of classification is state-of-the-art, and uses both spectral and spatial information to classify the land surface features of an image. The first of three steps in this process is to segment an image into different homogenous clusters (referred to as objects). This is then followed by selecting training samples to be used in a nearest neighbor classifier, an algorithm which is then used with the samples to create a classification output.

Methods
As stated previously, the first step in object-based classification is to segment the image into objects. This is done by running a false color infrared image through a multiresolution segmentation algorithm in eCognition’s ‘Process Tree’ function. For this classification, a scale parameter of 10 was used; this determines how detailed or broad the segments are. Executing this process splits the features of the image into separate objects. Figure 1 shows the objects that were generated for the Eau Claire area.

Figure 1: An example of the segmentation that was created to differentiate between the different 'objects' of the image.

It is then necessary to create classes for the object-based classification. In the ‘Class Hierarchy’ window of eCognition, five classes were inserted: Forest, Agriculture, Urban/built-up, Water, and Green vegetation/shrub. Each class was assigned a different color. After this, the nearest neighbor classifier was applied to the classes, with the mean pixel value selected as the objects’ layer value; this serves as the classifying algorithm for the process. Samples are then selected in much the same way as in a supervised classification: four objects in each class were selected and used as samples for the nearest neighbor classification. A new process was created in the process tree and the classification algorithm assigned to it. Executing this process classified the image. It was then necessary to manually edit the image to correct objects that had been falsely classified. The results were exported as an ERDAS Imagine image file for further analysis in a more familiar program.
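A rough open-source analogue of this workflow can be sketched in Python: segment the image into objects, compute each object's mean band values, train a 1-nearest-neighbor classifier on a few labeled objects, and predict the rest. The snippet below uses scikit-image's SLIC segmentation as a stand-in for eCognition's multiresolution segmentation (it is not the same algorithm) and assumes a multiband array plus a handful of hand-labeled segment IDs; all names and numbers are illustrative.

```python
import numpy as np
from skimage.segmentation import slic          # pip install scikit-image
from scipy import ndimage
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical false-color infrared image as (rows, cols, bands), scaled 0-1.
img = np.load("eau_claire_cir.npy")

# Segment into objects (older scikit-image versions use multichannel=True
# instead of channel_axis).
segments = slic(img, n_segments=2000, compactness=10, channel_axis=-1)

# Mean band value per object = the "layer value" used by the classifier.
ids = np.unique(segments)
features = np.column_stack([
    ndimage.mean(img[:, :, b], labels=segments, index=ids)
    for b in range(img.shape[2])
])

# Hand-labeled sample objects (segment id -> class); the lab used four per class.
samples = {15: "Water", 230: "Forest", 480: "Agriculture",
           760: "Urban/built-up", 1030: "Green vegetation/shrub"}  # illustrative IDs
train_idx = [np.where(ids == sid)[0][0] for sid in samples]

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(features[train_idx], list(samples.values()))
object_classes = clf.predict(features)         # one label per object
```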

Results
Figure 2 shows the result of the classification. This classification process was fairly simple to use, and produced beautiful results. Urban/built-up is still slightly under predicted, but overall the image looks quite accurate.
Figure 2: The final result of the object-based classification technique.

Sources
Earth Resources Observation and Science Center. (2000). [Landsat image of the Eau Claire and Chippewa Counties]. United States Geological Survey. Obtained from http://glovis.usgs.gov/




Thursday, November 13, 2014

Lab 9: Advanced Classifiers II

Background and Goal
In this lab, we learn about two more advanced classification algorithms. These algorithms produce classification maps that are much more accurate than those that are created with simple unsupervised and supervised classifiers. Specifically, this lab explores the use of an expert system classification (using ancillary data), and the development of an artificial neural network (ANN) classification. Both of these functions use very robust algorithms to create complex and highly accurate classification maps.

Methods
The use of expert systems is applied to increase the accuracy of previously classified images using ancillary data. In this lab, we used a classified image of the Eau Claire, Altoona, and Chippewa Falls area (Figure 1). The map had the classes Water, Forest, Agriculture, Urban/built up, and Green vegetation. In this classification map, the Agriculture and Green vegetation classes were over predicted. To use the expert system to correct these errors, it is first necessary to construct a knowledge base. Knowledge bases are built from hypotheses, rules, and variables. Hypotheses are the target LULC classes, rules are the functions used to evaluate each hypothesis, and variables are the inputs (the previously classified image and the ancillary data). For this lab, we created eight hypotheses: one for water, one for forest, one for residential, one for other urban, two for green vegetation, and two for agriculture. There are more hypotheses than classes because there must be a hypothesis for each correction. In the exercise, we broke the urban/built up class into ‘Residential’ and ‘Other Urban’, corrected green vegetation areas that had been predicted as agriculture, and corrected agriculture areas that had been predicted as green vegetation. The rules used bitwise expressions to evaluate the hypotheses from the previously classified image and the ancillary data. Figure 2 shows the complete knowledge base. Once the knowledge base was complete, the classification was run, producing a classification image with the eight hypotheses. These were then recoded into the six final classes (merging Green Vegetation with Green Vegetation 2 and Agriculture with Agriculture 2) to complete the classification.

Figure 1: The original classified image of the Eau Claire, Altoona, and Chippewa Falls area.

Figure 2: The complete knowledge base. The hypotheses are on the left (green boxes), with corresponding rules on the right (blue boxes). There is a counter argument for each classifying argument.
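The knowledge-base logic boils down to conditional rules that combine the existing classification with ancillary layers. As a rough illustration (not ERDAS Knowledge Engineer syntax), the sketch below reassigns pixels with NumPy boolean expressions, assuming integer class codes and a hypothetical ancillary raster; every code value and threshold here is made up for the example.

```python
import numpy as np

# Hypothetical class codes in the original classification raster.
WATER, FOREST, AGRICULTURE, URBAN, GREEN_VEG = 1, 2, 3, 4, 5
# New output codes for the corrected map.
RESIDENTIAL, OTHER_URBAN = 6, 7

classified = np.load("classified_2000.npy")     # placeholder input rasters
ancillary = np.load("ancillary_layer.npy")      # e.g., a density or zoning surrogate

corrected = classified.copy()

# Rule 1: split Urban/built up into Residential vs. Other Urban using the ancillary data.
corrected[(classified == URBAN) & (ancillary < 50)] = RESIDENTIAL
corrected[(classified == URBAN) & (ancillary >= 50)] = OTHER_URBAN

# Rule 2: Agriculture pixels that the ancillary layer marks as vegetated open space
# become Green vegetation; Rule 3 handles the reverse correction.
corrected[(classified == AGRICULTURE) & (ancillary == 0)] = GREEN_VEG
corrected[(classified == GREEN_VEG) & (ancillary > 100)] = AGRICULTURE
```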

ANN classification simulates the workings of the human brain by ‘learning’ the patterns between remotely sensed images and training data. It passes information through input nodes, hidden layers, and output nodes, adjusting the network according to the training rate, training momentum, and training threshold until it converges on an answer. In this lab, we used high resolution imagery of the University of Northern Iowa. To conduct the ANN classification, it was first necessary to create training samples by digitizing Regions of Interest (ROIs). These ROIs represent the classes that the image will be classified into; for this classification, they were Rooftops, Pavement, and Grass. Figure 3 shows the reflective image with the ROIs highlighted. These ROIs were then used as the training data in the ANN classification, with a threshold of 0.95, a training rate of 0.18, and a training momentum of 0.7.

Figure 3: The image of UNI's campus overlaid with ROIs of grass (green), rooftop (red), and pavement (blue).
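Outside ENVI, a small multilayer perceptron gives the same flavor of classification. The sketch below uses scikit-learn's MLPClassifier, mapping the lab's training rate and momentum onto the analogous learning_rate_init and momentum parameters (the correspondence is approximate, and the hidden-layer size and file names are assumptions); the training pixels come from the ROI polygons.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical arrays: image bands as (rows, cols, bands) and an ROI raster
# where 0 = unlabeled, 1 = rooftop, 2 = pavement, 3 = grass.
img = np.load("uni_quickbird.npy").astype(np.float64)
roi = np.load("uni_rois.npy")

X_all = img.reshape(-1, img.shape[2])
y_all = roi.ravel()
train = y_all > 0                                  # only ROI pixels train the network

clf = MLPClassifier(hidden_layer_sizes=(12,),      # one hidden layer (size assumed)
                    solver="sgd",
                    learning_rate_init=0.18,       # analogous to the lab's training rate
                    momentum=0.7,                  # analogous to the training momentum
                    max_iter=500,
                    random_state=0)
clf.fit(X_all[train], y_all[train])

classified = clf.predict(X_all).reshape(img.shape[:2])
```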

Results
The expert system classification produced a much more accurate image than the previous classified image. The image corrected the over prediction of agriculture and green vegetation, as well as creating two classes for the urban/built up class. Figure 4 shows the result of the expert system classification.

Figure 4: The result of the expert system classification method. It is much more accurate than the previous image.

The ANN classification was surprisingly easy to use and produced an easily readable classification image. Figure 5 shows the results. It is easy to tell where the roads and the grass are. However, the classifier labeled the trees as rooftops (because of their shadows), and some rooftops were classified as pavement. Though the image isn't terribly accurate, it is surprisingly easy to distinguish between features, considering how little work is required of the analyst.

Figure 5: The classification image of the ANN classification method. Green areas are grass, blue areas are pavement, and red areas are rooftops.

Sources
Earth Resources Observation and Science Center. (2000). [Landsat image of the Eau Claire and Chippewa Counties]. United States Geological Survey. Provided by Cyril Wilson.

Department of Geography. (2003). [Quickbird High Resolution image of the University of Northern Iowa campus]. University of Northern Iowa. Provided by Cyril Wilson.

Thursday, November 6, 2014

Lab 8: Advanced Classifiers I

Background and Goal
The goal of this lab is to learn two advanced classification algorithms. These classifiers are much more robust and produce classified images that are more accurate than those produced by the unsupervised and supervised classifiers used in previous labs. The two algorithms learned in this lab are spectral linear unmixing and fuzzy classification. Spectral linear unmixing uses measurements of ‘pure’ pixels (known as endmembers) to classify images. Fuzzy classification accounts for mixed pixels: because of the sensor’s spatial resolution, some pixels contain a combination of several classes, so membership grades are used to determine which class is most strongly represented within each pixel.

Methods
To conduct a spectral linear unmixing classification, it was first necessary to transform the image into a Principal Components (PC) image. This transformation compacts the image’s information, so that most of it is contained within the first two or three PC bands. This function was done in ENVI. For our image, most of the information was contained in bands 1 and 2 of the PC image, with some in band 3. To collect endmembers from the PC image, we created scatterplots of the informational bands. First, a scatterplot was generated with band 1 on the x-axis and band 2 on the y-axis, which produced a roughly triangular point cloud. The corners of this triangle are the ‘pure’ pixels (endmembers). These points were selected and turned into classes. The first scatterplot provided the endmembers for water, agriculture, and bare soil (Figure 1 shows this scatterplot). However, to complete the classification, we also needed endmembers for the urban class, so we created a second scatterplot of bands 3 and 4 to collect them (also shown in Figure 1). The endmembers were also highlighted on the reference image to confirm that the correct endmembers had been selected (Figure 2). All of these endmembers were then exported into a region of interest (ROI) file and used in the Linear Spectral Unmixing function in ENVI, producing fractional abundance maps for each endmember.

Figure 1: The scatterplots used to collect endmembers. The colors correspond to the colors in the reference map in Figure 2.

Figure 2: The reference map for collecting endmembers. The areas highlighted with green are the bare soil endmembers, yellow are the agriculture endmembers, blue are the water endmembers, and purple are the urban endmembers.
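Mathematically, linear unmixing models each pixel spectrum as a weighted sum of the endmember spectra and solves for the weights (the fractional abundances). A minimal unconstrained least-squares version is sketched below with NumPy; ENVI additionally offers constraints such as sum-to-one, which this sketch omits, and the file names are placeholders.

```python
import numpy as np

# Hypothetical inputs: image as (rows, cols, bands) and an endmember matrix
# with one spectrum per column: (bands, n_endmembers) for water, agriculture,
# bare soil, and urban.
img = np.load("ec_image.npy").astype(np.float64)
endmembers = np.load("endmember_spectra.npy")

rows, cols, bands = img.shape
pixels = img.reshape(-1, bands).T                # (bands, n_pixels)

# Solve endmembers @ fractions = pixel spectra for every pixel at once.
fractions, *_ = np.linalg.lstsq(endmembers, pixels, rcond=None)
fractions = fractions.T.reshape(rows, cols, endmembers.shape[1])

urban_fraction_map = fractions[:, :, 3]          # one fractional map per endmember
```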

The processing for fuzzy classification was all executed within ERDAS. The first step is to select signatures from the input image. These signatures must come from areas with a mixture of land cover classes as well as from homogenous land cover. The AOIs for the signature samples had to contain between 150 and 300 pixels. Four samples were collected for each of the water, forest, and bare soil classes, and six samples for each of the agriculture and urban/built up classes. The samples for each class were then merged to create one aggregated signature per class. These signatures were used to run a fuzzy classification with ERDAS’ supervised classification function. This first step creates five layers of classified images, ranking the most probable classes for each pixel. A fuzzy convolution algorithm was then used in ERDAS to turn these layers into a single classified image.
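The idea behind the fuzzy step is that each pixel receives a graded membership in every class rather than a single hard label. The sketch below illustrates that idea with simple inverse-distance memberships to the merged class mean signatures, followed by picking the strongest class; it is only a conceptual stand-in for ERDAS' fuzzy classification and fuzzy convolution, and the inputs are assumed arrays.

```python
import numpy as np

# Hypothetical inputs: image (rows, cols, bands) and mean signatures per class
# as a dict of class name -> (bands,) array built from the merged samples.
img = np.load("ec_image.npy").astype(np.float64)
signatures = dict(np.load("class_signatures.npz"))    # e.g., keys: water, forest, ...

names = list(signatures)
means = np.stack([signatures[n] for n in names])       # (n_classes, bands)

pixels = img.reshape(-1, img.shape[2])
# Euclidean distance from every pixel to every class mean.
dist = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)

# Inverse-distance membership grades, normalized so each pixel's grades sum to 1.
grades = 1.0 / (dist + 1e-9)
grades /= grades.sum(axis=1, keepdims=True)

best = grades.argmax(axis=1).reshape(img.shape[:2])    # hardened single-class map
```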

Results
Figures 3 through 6 show the results of the linear spectral unmixing function. The bare soil fractional map is quite accurate: it highlights the bare area surrounding the airport very well, as well as the empty crop fields to the east and the west. The fractional map for agriculture is a little less accurate; it seems to highlight vegetation in general rather than just agricultural areas. The water fractional map is surprisingly inconsistent. Water is usually easy to classify, and the water itself was mapped well, but the map also assigned high fractions to areas that were not water (notably the area around the airport). The urban fractional map worked reasonably well: it highlights the urban areas, but it also brightens some non-urban areas that should have remained dark.

Figure 3: The Bare Soil fractional map.

Figure 4: The Agriculture fractional map.

Figure 5: The Water fractional map.

Figure 6: The Urban fractional map.

Figure 7 shows the final result of the fuzzy classification. This classification worked much better than the supervised classification in the previous lab, though the urban and agriculture classes were still over predicted.

Figure 7: The final result of the fuzzy classification.
Sources
Earth Resources Observation and Science Center. (2000). [Landsat image of the Eau Claire and Chippewa Counties]. United States Geological Survey. Provided by Cyril Wilson.



Thursday, October 30, 2014

Lab 7: Digital Change Detection

Background and Goal
The goal of this lab is to develop the knowledge and skills necessary to compute changes that occur over time using land use/land cover images. Digital change detection is important because it allows environmental and socioeconomic transitions to be monitored through time. In this lab we learn several techniques, from quick qualitative analysis to statistical analysis and a model that displays specific from-to changes between classes over time.

Methods
To do a quick qualitative change detection, we used the Write Function Memory Insertion technique. The study area was the counties surrounding Eau Claire. For this change detection, we used band 3 from a 2011 image and two copies of band 4 from a 1991 image. These layers were combined with the Layer Stacking function. The stacked image was opened, and the layers were arranged so that the 2011 band 3 image was viewed through the red color gun while the 1991 band 4 copies were viewed through the green and blue color guns. Because the two dates are displayed through different color guns, any areas whose brightness changed between 1991 and 2011 stand out as color shifts.
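The layer-stack-and-color-gun step can be reproduced with rasterio by writing the 2011 band into the red channel and the 1991 band into green and blue. The sketch below assumes two co-registered single-band GeoTIFFs on the same grid; the file names are placeholders.

```python
import numpy as np
import rasterio  # pip install rasterio

# Hypothetical co-registered single-band inputs.
with rasterio.open("eau_claire_2011_b3.tif") as src:
    b3_2011 = src.read(1)
    profile = src.profile
with rasterio.open("eau_claire_1991_b4.tif") as src:
    b4_1991 = src.read(1)

# Write Function Memory insertion: red = 2011 band 3, green = blue = 1991 band 4,
# so brightness differences between the dates show up as color shifts.
profile.update(count=3)
with rasterio.open("wfm_composite.tif", "w", **profile) as dst:
    dst.write(np.stack([b3_2011, b4_1991, b4_1991]))
```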

Next we learned post-classification comparison change detection. For this exercise, we used classified images from 2001 and 2006 of the Milwaukee Metropolitan Statistical Area (MSA). This change detection technique quantifies change and is used to create a model that shows specific class-to-class changes between the two years. To quantify the land change, we examined the histogram values for each class in both images. The histogram values for all classes in both images were recorded in an Excel spreadsheet and converted to hectares. The histogram shows how many pixels fall within each class. For this sensor the spatial resolution is 30 meters, so one pixel covers 900 square meters. With this we converted the histogram values to square meters, and then to hectares by multiplying the square meters by 0.0001. Figure 1 shows the class area data for both years. A table was then created to calculate the percent change in area for each class between 2001 and 2006.

Figure 1: The histogram data and conversion process for the 2001 and 2006 images.
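The spreadsheet arithmetic is simple enough to script: pixel counts times 900 square meters per pixel, times 0.0001 to get hectares, then percent change between the two dates. A small sketch with made-up pixel counts (not the lab's actual histogram values):

```python
# Illustrative pixel counts per class.
counts_2001 = {"Urban": 1_200_000, "Agriculture": 3_400_000, "Forest": 2_100_000,
               "Wetlands": 600_000, "Bare Soil": 150_000}
counts_2006 = {"Urban": 1_350_000, "Agriculture": 3_250_000, "Forest": 2_050_000,
               "Wetlands": 596_000, "Bare Soil": 185_000}

PIXEL_AREA_M2 = 30 * 30          # 30 m Landsat pixels -> 900 square meters
M2_TO_HA = 0.0001

for cls in counts_2001:
    ha_2001 = counts_2001[cls] * PIXEL_AREA_M2 * M2_TO_HA
    ha_2006 = counts_2006[cls] * PIXEL_AREA_M2 * M2_TO_HA
    pct = (ha_2006 - ha_2001) / ha_2001 * 100
    print(f"{cls}: {ha_2001:,.0f} ha -> {ha_2006:,.0f} ha ({pct:+.1f}%)")
```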

To calculate specific from-to change between classes, we built a fairly sophisticated model in ERDAS’ Model Maker. This model consisted of two input rasters, five pairs of functions, five pairs of temporary rasters, five more functions, and finally five output rasters, connected as shown in Figure 2. The input rasters are the classified images from 2001 and 2006. For this exercise we wanted to reveal change from Agriculture to Urban, Wetlands to Urban, Forest to Urban, Wetland to Agriculture, and Agriculture to Bare Soil. These classes were extracted into the temporary rasters (in that order) using an EITHER IF OR conditional function, which assigns a 1 if a pixel belongs to the desired class and a 0 if it does not, creating a binary mask of that class. In the next function, these binary masks are combined with a Bitwise AND function: areas present in both masks are kept, while the rest are discarded (creating another binary mask). This effectively reveals the specific areas that changed from one desired class to the other between the two time periods.

Figure 2: The complex model that was used for the from-to digital change detection.
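The Model Maker logic maps directly onto boolean raster algebra: a pixel is flagged when it belongs to the "from" class in 2001 and the "to" class in 2006. A NumPy sketch of the same idea, with assumed class codes and file names:

```python
import numpy as np

# Hypothetical class codes shared by both classified rasters.
AGRICULTURE, WETLANDS, FOREST, URBAN, BARE_SOIL = 2, 3, 4, 5, 6

lc_2001 = np.load("milwaukee_2001.npy")
lc_2006 = np.load("milwaukee_2006.npy")

# Each entry mirrors one branch of the from-to model:
# binary "from" mask AND binary "to" mask -> binary change mask.
changes = {
    "Agriculture_to_Urban":    (lc_2001 == AGRICULTURE) & (lc_2006 == URBAN),
    "Wetlands_to_Urban":       (lc_2001 == WETLANDS)    & (lc_2006 == URBAN),
    "Forest_to_Urban":         (lc_2001 == FOREST)      & (lc_2006 == URBAN),
    "Wetlands_to_Agriculture": (lc_2001 == WETLANDS)    & (lc_2006 == AGRICULTURE),
    "Agriculture_to_BareSoil": (lc_2001 == AGRICULTURE) & (lc_2006 == BARE_SOIL),
}

for name, mask in changes.items():
    print(f"{name}: {mask.sum()} pixels changed")
```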


Results
Figure 3 shows the final result of the Write Function Memory insertion. This technique is quick and easy, though it doesn’t create very concrete or specific results; all you can do with the image is visually examine it to try to reveal areas of change. Figure 4 shows the final table for the quantitative analysis of the Milwaukee MSA. According to this table, Bare Soil showed the greatest change, with a 23.6% increase over the five-year period. Figure 5 shows the final map of the from-to digital change detection method. From this model it is easy to see where change occurred in the study area: Waukesha County went through the most change, while Milwaukee County went through the least.

Figure 3: The result of the Write Function Memory digital change detection technique. Areas that are a light pink color show change between 1991 and 2011, while the red and white areas show no change.

Figure 4: The results of the quantitative analysis of change between 2001 and 2006 in the Milwaukee MSA. Bare Soil changed the most with an increase by 23.6%, while Wetlands changed the least with a decrease of 0.7%.

Figure 5: The results of the from-to digital change detection technique. Waukesha County saw the most change, while Milwaukee County saw the least.


Sources
Earth Resources Observation and Science Center. (1991). [Landsat image of counties surrounding Eau Claire]. United States Geological Survey. Provided by Cyril Wilson.

Earth Resources Observation and Science Center. (2011). [Landsat image of counties surrounding Eau Claire]. United States Geological Survey. Provided by Cyril Wilson.

Fry, J., Xian, G., Jin, S., Dewitz, J., Homer, C., Yang, L., Barnes, C., Herold, N., and Wickham, J., 2011. Completion of the 2006 National Land Cover Database for the Conterminous United States, PE&RS, Vol. 77(9):858-864.

Homer, C., Dewitz, J., Fry, J., Coan, M., Hossain, N., Larson, C., Herold, N., McKerrow, A., VanDriel, J.N., and Wickham, J. 2007. Completion of the 2001 National Land Cover Database for the Conterminous United States. Photogrammetric Engineering and Remote Sensing, Vol. 73, No. 4, pp 337-341.





Tuesday, October 28, 2014

Lab 6: Accuracy Assessment

Background and Goal
In this lab, we learned how to evaluate the accuracy of classification maps. This knowledge is important because it is a necessary post-processing exercise that highlights the strengths and weaknesses of a classification map. The results of accuracy assessments provide an easily read quality assessment and rating of the map as a whole, as well as each class within the map.

Methods
Accuracy assessments were completed for both the unsupervised classification map and the supervised classification map discussed in Labs 4 and 5, respectively. The process is identical for both assessments. The first step is to generate random ground reference testing samples. These samples are either collected through fieldwork with a GPS unit or from a high resolution image; in this lab, we used a high resolution image as the reference. The testing samples were generated using ERDAS’ Accuracy Assessment tool, with the classification map as the image to be assessed and the high resolution image as the reference (the random points appear on the reference image, and the analyst assigns each one a reference class to compare against the classification map). Normally the total number of testing samples should be at least 50 per class (a minimum of 250 for 5 classes), but for the sake of time we used only 125 samples for our 5 classes. We used a stratified random distribution to ensure a quality sample and set a minimum of 15 samples per class, so that classes of smaller area are still factored into the accuracy assessment. Figure 1 displays the random testing samples on the high resolution image. Once the random testing samples were generated, each sample was located and assigned a reference class. These reference classes were then compared to the classes on the classification map in the accuracy assessment, producing a matrix and an accuracy assessment report.

Figure 1: All of the test samples that were used in one of the Accuracy Assessments. The points were randomly generated to reduce bias, and there are 125 points total.
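The accuracy report itself can be reproduced from the paired reference and map labels. The sketch below builds the error matrix and computes overall accuracy, the Kappa statistic, and per-class producer's and user's accuracies with NumPy; the two label arrays are assumed to come from the 125 assessment points (random values stand in here).

```python
import numpy as np

def accuracy_report(reference, classified, n_classes):
    """Error matrix with reference classes as columns and map classes as rows."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for ref, cls in zip(reference, classified):
        m[cls, ref] += 1

    total = m.sum()
    overall = np.trace(m) / total

    # Kappa: observed agreement vs. agreement expected by chance.
    chance = (m.sum(axis=0) * m.sum(axis=1)).sum() / total**2
    kappa = (overall - chance) / (1 - chance)

    producers = np.diag(m) / m.sum(axis=0)   # per reference class (omission side)
    users = np.diag(m) / m.sum(axis=1)       # per mapped class (commission side)
    return m, overall, kappa, producers, users

# Hypothetical label arrays (class indices 0-4) for the 125 sample points.
reference = np.random.randint(0, 5, 125)
classified = np.random.randint(0, 5, 125)
matrix, oa, kappa, pa, ua = accuracy_report(reference, classified, 5)
print(f"Overall accuracy {oa:.2%}, Kappa {kappa:.2f}")
```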


Results
Figure 2 displays the accuracy assessment report of the unsupervised classification map, while Figure 3 displays the accuracy assessment report of the supervised classification map. Each report gives the overall accuracy, Kappa statistic, producer’s accuracy, and user’s accuracy. The rows in the matrix list the classes assigned on the classification map, while the columns list the classes recorded at the reference points; from this matrix the producer’s and user’s accuracies are derived. Overall accuracy is the proportion of correctly classified pixels based on the reference sample. The Kappa statistic measures the difference between the observed agreement between the two maps and the agreement that could be attained by chance (that is, how much of the apparent accuracy could be due to chance); a value below 0.4 indicates poor agreement, while a value above 0.8 indicates strong agreement. Producer’s accuracy for a class is the proportion of reference samples of that class that were classified correctly (its complement is omission error). User’s accuracy for a class is the proportion of pixels labeled as that class on the map that actually belong to it (its complement is commission error). Overall, the unsupervised classification map is more accurate, though neither map is accurate enough for practical use.

Figure 2: The accuracy assessment report for the unsupervised classification map. The map has a low overall accuracy, a fairly low Kappa statistic, and low producer’s and user’s accuracies (though it is slightly more accurate than the supervised classification map).


Figure 3: The accuracy assessment report for the supervised classification map. The map has a very low overall accuracy, and a much lower Kappa Statistic. Except for the case with Water, the producer's and user's accuracy are both awfully low. This map was not accurate whatsoever.


Sources
Earth Resources Observation and Science Center. (2000). [Landsat image used to create a classification map]. United States Geological Survey. Provided by Cyril Wilson.

National Agriculture Imagery Program. (2005). [High resolution image used for reference in accuracy assessment]. United States Department of Agriculture. Provided by Cyril Wilson.



Thursday, October 16, 2014

Lab 5: Pixel-Based Supervised Classification

Background and Goal
In this lab, we expand on the classification process. Instead of doing an unsupervised classification like we did last week, we did a pixel-based supervised classification. This process uses training samples from a training image to assist (or supervise) the classification process.

Methods
The first step in a supervised classification is to collect training samples for the desired classes. For this exercise, we collected at least 12 samples for Water, 11 for Forest, 9 for Agriculture, 11 for Urban/Built-Up, and 7 for Bare Soil. The study area was again Eau Claire and Chippewa Counties. To collect training samples, we used Google Earth to confirm the land cover at each location shown in the ERDAS viewer. We then used the drawing tool to create an area of interest within that class and imported its signature into ERDAS’ Signature Editor. This was done for each class until a total of 50 signatures had been collected (keeping in mind the minimums for each class). Figure 1 shows the complete table of signatures that were collected.

Figure 1: The complete table of training samples.

It was then necessary to evaluate the quality of our training samples. First, they were visually examined by comparing the spectral signatures of the samples within each class. Any sample that did not follow the spectral signature of the class it was supposed to represent was discarded and a new sample collected. Once all of the spectral signatures looked as they should, a signature separability test was performed to examine the statistical quality of the samples. This function identifies the four bands with the best average separability between classes and reports a separability score; the score must be above 1900 for the signatures to be considered well separated. Once the training samples were confirmed to be of good quality, the signatures collected for each class were merged to create one signature per class. These signatures were then used to perform the supervised classification with the Supervised Classification tool in ERDAS, using a Maximum Likelihood classifier to yield the best results. Figure 2 shows the separability results, and Figure 3 shows the merged signatures.

Figure 2: My results of the Separability test. The best average score was 1974, and the four bands with best average separability were bands 1, 2, 3, and 4.
Figure 3: The merged spectral signatures to be used in the supervised classification.
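Under the hood, a maximum likelihood classifier fits a multivariate Gaussian (mean vector and covariance matrix) to each class's merged signature and assigns every pixel to the class with the highest likelihood. A compact sketch with NumPy and SciPy, assuming training pixel arrays per class (file names are placeholders):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical training pixels: dict of class name -> (n_samples, bands) array
# extracted from the signature AOIs.
training = dict(np.load("training_pixels.npz"))
img = np.load("ec_image.npy").astype(np.float64)        # (rows, cols, bands)

pixels = img.reshape(-1, img.shape[2])
names = list(training)

# Log-likelihood of every pixel under each class's Gaussian model.
loglik = np.column_stack([
    multivariate_normal(mean=training[n].mean(axis=0),
                        cov=np.cov(training[n], rowvar=False),
                        allow_singular=True).logpdf(pixels)
    for n in names
])

classified = loglik.argmax(axis=1).reshape(img.shape[:2])  # index into `names`
```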

Results
Overall, this classification technique yielded poor results compared to the unsupervised classification performed last week. The Water class was not fully represented, and the Urban/Built-Up class was even more expansive than in the unsupervised classification. With more time to collect additional signature samples, this technique would probably produce better results. Figure 4 compares the unsupervised and supervised results, and Figure 5 is a detailed map of the supervised results.

Figure 4: The unsupervised results are on the left, and the supervised results are on the right. In the unsupervised image, Forest, Water and Urban/Built-Up are much better classified.
Figure 5: A complete map of the final result of supervised classification.

Thursday, October 9, 2014

Lab 4: Unsupervised Classification

Background and Goal
The goal of this lab is to learn how to identify and classify different physical and manmade features in a remotely sensed image using an unsupervised classification algorithm. Specifically, this lab helped create an understanding of the input and execution requirements for unsupervised classification. In addition, we learned how to recode the different spectral clusters into a useful land use/land cover classification scheme.

Methods
To start, we ran a basic unsupervised classification using the ISODATA algorithm. In ERDAS, we used the default settings in the Unsupervised Classification tool, setting the minimum and maximum number of classes to 10 and increasing the iterations to 250 (to ensure that the convergence threshold is met before the iterations run out). This algorithm organizes the pixels into 10 classes according to their spectral signatures. After the model was complete, Google Earth was used to interpret the resulting clusters and assign them to five classes: Water, Forest, Agriculture, Urban/Built-up, and Bare Soil. By highlighting each of the 10 clusters and comparing the highlighted areas with the Google Earth viewer, each cluster was reclassified into one of the five classes mentioned above. This part of the lab served as an introduction to unsupervised classification.
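As a rough Python analogue, k-means clustering (ISODATA is essentially k-means with extra cluster splitting and merging rules) can group the pixels into ten spectral clusters, which are then manually relabeled into the five land cover classes just as in the lab. The file names and relabeling table below are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

img = np.load("ec_landsat.npy").astype(np.float64)   # hypothetical (rows, cols, bands)
pixels = img.reshape(-1, img.shape[2])

# Ten spectral clusters; max_iter is generous, like the lab's 250 iterations.
km = KMeans(n_clusters=10, max_iter=250, n_init=10, random_state=0)
clusters = km.fit_predict(pixels).reshape(img.shape[:2])

# Manual recode table built by comparing each cluster against Google Earth
# (these assignments are placeholders, not the lab's actual interpretation).
recode = {0: "Water", 1: "Forest", 2: "Forest", 3: "Agriculture", 4: "Agriculture",
          5: "Agriculture", 6: "Urban/Built-up", 7: "Urban/Built-up",
          8: "Bare Soil", 9: "Agriculture"}
lulc = np.vectorize(recode.get)(clusters)
```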

In order to make the previous model more accurate, we went back and did a second unsupervised classification. This time 20 classes were created, with a Convergence Threshold of 0.92 instead of 0.95. These 20 classes were then reclassified into the five previously mentioned classes in the same manner described above. Once everything was reclassified, the Recode tool was used to compile all 20 of the classes into the five needed for a Land Use/Land Cover map (Figure 1 displays this change). Once this was complete, a map of the Land Use/Land Cover was created by using ArcMap.

Figure 1: The image's attribute table before and after the classes were recoded. In the before picture there are 20 classes and 5 colors; in the after picture there are only 5 classes (with their respective colors).

Results
Figure 2 displays the results of the first unsupervised classification. Because there were only 10 classes, the image is overgeneralized, which creates confusion between some of the features in the image. This is easily seen in the northwestern region of the image, where there are several Urban/Built-up patches; most of these areas are actually agriculture, but they were falsely classified. Agriculture also seems to swallow the Bare Soil and Forest classes.

Figure 2: The Land Use/Land Cover image after reclassifying an unsupervised classification of 10 classes.

Figure 3 displays the results of the second unsupervised classification. Because there were 20 classes, the image is more accurate, with less generalized classification. You can see more Bare Soil and Forest classified in this map, though the areas to the northwest are still classified as Urban/Built-up. The spectral signatures are too similar for the classification to pick up the difference.

Figure 3: The Land Use/Land Cover map after reclassifying an unsupervised classification of 20 classes.

Thursday, October 2, 2014

Lab 3: Radiometric and Atmospheric Correction

Background and Goal
The main goal of this lab is to learn how to correct remotely sensed images by accounting for atmospheric interference. Several methods were used throughout the lab: Empirical Line Calibration (ELC), Dark Object Subtraction (DOS), and multidate image normalization. ELC uses a spectral library to compare the spectral signature of an object in the image to the object's actual signature and corrects the difference. DOS uses algorithms that take sensor gain and offset, solar irradiance, solar zenith angle, atmospheric scattering, and path radiance into account. These first two methods are absolute atmospheric corrections, while multidate image normalization is a relative atmospheric correction. The latter is used when comparing two images of the same location from different dates: radiometric ground control points are used to build regression equations, which are then used to normalize one image to match the other.

Methods
The first step in ELC is to collect several spectral signatures from the image. This was done using ERDAS’ Spectral Analysis tool. For this method to be effective, it was necessary to select points from areas in the image with different albedos. Spectral samples of asphalt, forest, grass, aluminum roofs, and water were selected from the image. Each of these samples was then paired with its respective spectral signature from the ASTER spectral library, and the paired signatures were used to build regression equations for each band of the image. These equations were used to correct the image.

Dark object subtraction is conducted in two steps. First, the satellite image is converted to at-satellite spectral radiance using the formula in Figure 1. Then this spectral radiance is converted into surface reflectance with the formula in Figure 2. All the data needed for the first formula are found in the image’s metadata file. To execute this step, a model was created in ERDAS’ Model Maker, with an input raster, a function, and an output raster for each band; the model was run with the first formula as the function and the bands of the original image as the inputs. The second formula has many more factors that need to be determined. D² is the square of the Earth-Sun distance; it was found by calculating the Julian date of the image and looking the distance up in a provided table that lists the Earth-Sun distance for every day of the year. Lλ is the spectral radiance produced by the first formula. Lλhaze is the estimated path radiance of the image, determined by examining the histogram of each band and locating the point on the x-axis where the histogram begins. TAUv and TAUz estimate the optical thickness of the atmosphere when the image was collected; TAUv was held constant at 1 because the sensor views at nadir (pointed straight down at the target), and the TAUz values were given in a table. ESUNλ is the exoatmospheric solar irradiance for each band, found in a table for the sensor we were using. θs is the sun zenith angle, determined by subtracting the sun’s elevation angle (found in the metadata) from 90 degrees. All of this was then put into a model in much the same way as the previous step, and running the model produced the corrected image.

Figure 1: The formula used in the first step of DOS atmospheric correction.

Figure 2: The formula used in the second step of DOS atmospheric correction.
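For reference, the two DOS steps can be written out directly. The sketch below applies the standard gain/bias radiance conversion and a dark-object reflectance formula per band with NumPy; every coefficient value shown is a placeholder for the numbers read from the metadata and lookup tables described above, and the exact formula in the lab's Figure 2 may differ slightly in form.

```python
import numpy as np

def dn_to_radiance(dn, grescale, brescale):
    """Step 1: convert digital numbers to at-satellite spectral radiance."""
    return grescale * dn + brescale

def dos_reflectance(radiance, lhaze, esun, d, theta_s_deg, tau_z, tau_v=1.0):
    """Step 2: dark-object-subtracted surface reflectance (Chavez-style DOS form)."""
    theta_s = np.radians(theta_s_deg)
    return (np.pi * d**2 * (radiance - lhaze)) / (esun * np.cos(theta_s) * tau_z * tau_v)

# Placeholder coefficients for one band (real values come from the metadata,
# the Earth-Sun distance table, the TAUz table, and the ESUN table).
dn_band1 = np.load("band1_dn.npy").astype(np.float64)
L = dn_to_radiance(dn_band1, grescale=0.7757, brescale=-6.98)
rho = dos_reflectance(L, lhaze=9.1, esun=1957.0, d=0.9983,
                      theta_s_deg=90 - 56.4, tau_z=0.70)
rho = np.clip(rho, 0, 1)   # clamp numerical noise outside the valid reflectance range
```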

For multidate image normalization, an image of Chicago from 2009 was corrected to match an image from 2000. First, 15 radiometric ground control points were matched between the two images; points were taken from water bodies and urban areas. These points were collected by using two viewers in ERDAS and matching each point with its counterpart in the same location in the second image. Figure 3 shows the final result of this step. The point data were then viewed in a table, and for each point the mean brightness value in every band was recorded in an Excel table (Figure 4). From these tables, a regression equation was created for each band by plotting the means from the two years against each other in a scatterplot. Models were then created in Model Maker to normalize the 2009 image, using the regression equations as the functions, with y being the output image and x being the input image. The result is a normalized 2009 image that can be compared directly with the image from 2000.

Figure 3: The two viewers with each paired radiometric ground control point in place.

Figure 4: The tables of the mean brightness value for each radiometric ground control point of both images.
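The per-band regression can also be fit and applied in a few lines: fit y = a·x + b from the ground control point means (2009 means as x, 2000 means as y), then apply that equation to every pixel of the 2009 band. The sketch below assumes arrays of point means and band rasters; the file names are placeholders.

```python
import numpy as np

# Hypothetical mean brightness values at the 15 radiometric ground control points
# for one band: x from the 2009 image, y from the 2000 image.
means_2009 = np.load("gcp_means_2009_band3.npy")   # shape (15,)
means_2000 = np.load("gcp_means_2000_band3.npy")

# Least-squares line y = a*x + b, mirroring the Excel scatterplot regression.
a, b = np.polyfit(means_2009, means_2000, deg=1)

band3_2009 = np.load("chicago_2009_band3.npy").astype(np.float64)
band3_normalized = a * band3_2009 + b              # 2009 band rescaled to match 2000
```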

Results
For the ELC correction, the output image failed to build pyramid layers. Because of this, the image couldn’t be viewed at its full extent, though it could still be examined when zoomed in. Figure 5 shows the original image and the result. There is little visible difference between the two, but looking at the spectral profiles shows that the corrected image is slightly closer to the spectral library signatures.

Figure 5: The original image is on the left, and the ELC corrected image is on the right. There is little visual difference between the two images.

The DOS correction was much more successful. Figure 6 shows the original and the result. The final image has much richer colors and a higher contrast. It is easier to see differences between the different vegetation types, and there is clearer definition in the urban area. The original image looks washed out and hazy in comparison.

Figure 6: The original image is on the left, and the DOS corrected image is on the right. This method had a much better result than the ELC method.

The results of the multidate image normalization are hard to interpret. Figure 7 shows the image from 2000 on the left, with the original 2009 image in the upper right and the normalized 2009 image in the lower right. Visually, the original image looks more similar to the 2000 image. The normalized 2009 image has much more vivid colors, and higher contrast.

Figure 7: The image from 2000 is on the left, with the original 2009 image in the upper right and the normalized 2009 image in the lower right. The normalized image looks much better than the original, but it doesn't appear to have similar characteristics to the image from 2000.

Thursday, September 18, 2014

Lab 2: Acquiring Surface Temperature from Thermal Images


Background and Goal
The main goal of this lab is to learn and understand how to acquire surface temperature information from remotely sensed thermal-band images. This is necessary because thermal sensors record radiant heat, not kinetic heat. Kinetic heat is the true temperature of an object, while radiant heat is the energy that radiates from the object; radiant energy is measured in watts, while kinetic heat is measured in the familiar units of Fahrenheit, Celsius, or Kelvin. In remote sensing software, a thermal image displays the radiant energy of the scanned objects, and the following process converts these images so that they display kinetic heat, or surface temperature. The final goal of this lab is to create a surface temperature map of Eau Claire and Chippewa Counties.

Methods
The conversion of thermal data from radiant heat to kinetic heat involves three equations. The first equation (Figure 1) converts the digital numbers (DN) of the image to at-satellite radiance, producing an image of spectral radiance (Lλ). Grescale is the rescaled gain of the image, and Brescale is the rescaled bias. Grescale is calculated with the second equation (Figure 2), where LMAX is the spectral at-sensor radiance scaled to QCALMAX and LMIN is the spectral at-sensor radiance scaled to QCALMIN; QCALMIN is the minimum quantized calibrated pixel value corresponding to LMIN, and QCALMAX is the maximum quantized calibrated pixel value corresponding to LMAX. All of this information is found in the image’s metadata, and Brescale is equivalent to LMIN. The final equation (Figure 3) converts at-satellite radiance to a surface temperature in Kelvin (TB). Lλ is the radiance image created with the first equation, while K1 and K2 are calibration constants specific to each satellite.

Figure 1: The first equation in the process, revealing the spectral radiance of the image.
Figure 2: The second equation in the process, used to acquire the grescale (gain) of the image.
Figure 3: The third equation in the process, resulting in the final surface temperature image.
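The three equations can be strung together in a few lines once the metadata constants are known. The sketch below follows the standard Landsat conversion (Grescale from LMAX/LMIN and QCALMAX/QCALMIN, DN to radiance, then the inversion to brightness temperature); the constants shown are placeholders to be read from the band 10 metadata, not the lab's actual values.

```python
import numpy as np

def surface_temperature(dn, lmax, lmin, qcalmax, qcalmin, k1, k2):
    """DN -> at-satellite radiance -> brightness temperature in Kelvin."""
    grescale = (lmax - lmin) / (qcalmax - qcalmin)   # equation 2 (rescaled gain)
    brescale = lmin                                  # rescaled bias
    radiance = grescale * dn + brescale              # equation 1
    return k2 / np.log(k1 / radiance + 1.0)          # equation 3

# Placeholder constants; the real values come from the Landsat 8 band 10 metadata.
dn = np.load("band10_dn.npy").astype(np.float64)
temp_kelvin = surface_temperature(dn, lmax=22.0018, lmin=0.1003,
                                  qcalmax=65535, qcalmin=1,
                                  k1=774.8853, k2=1321.0789)
temp_fahrenheit = (temp_kelvin - 273.15) * 9 / 5 + 32
```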

For this exercise, we used data from the Landsat 8 satellite, using band 10 (thermal infrared). The raw thermal image can be seen in Figure 4. First, we examined the metadata to find the LMAX, LMIN, QCALMAX, and QCALMIN values; for Landsat 8, the calibration constants are in the metadata as well. The values were put into an Excel table to solve equation two and obtain Grescale (Figure 5). From there, the Model Maker in ERDAS Imagine was used to process the remaining equations. The model (Figure 6) consists of three raster objects and two functions. The first raster object is the raw thermal image (the DN values used in equation one). This raster is passed to the first function (Figure 7) to create the second raster (Lλ from equations one and three); this is a temporary file, used only to complete the final equation in the second function of the model (Figure 8). That function creates the output raster, which is our surface temperature image.

Figure 4: The raw thermal infrared image.
Figure 5: The execution of equation two to acquire the Grescale value. The equation can be seen in the function bar above the Excel cells.

Figure 6: The ERDAS Imagine model maker. 

Figure 7: The execution of equation one in the model maker.
Figure 8: The execution of equation three in the model maker.
Results
Figure 9 displays the surface temperature image, after it was turned into a completed map in ArcMap. It is very easy to see the temperature differences between water, concrete, and vegetation. Within ArcMap, it is possible to determine the exact kinetic temperature of each individual pixel using the identify tool.



Sources
Earth Resources Observation and Science Center. [Landsat 8 image]. United States Geological Survey.