contributed by Peg Gronemeyer
Object-based Image Analysis (OBIA)
Digital images are composed of pixels that record the amount of radiation (i.e., light) reflected in a part of the electromagnetic spectrum. Pixels are generally not visible except at extremely close zoom levels, where they usually appear to the human eye as a series of squares. The photographs below show an area of rangeland in the southwestern U.S. The left photo is shown at a very close zoom level where individual pixels are visible. The right photo is the same area (red box) at a more realistic view, showing that the pixels are really parts of shrubs and patches of grass.
Object-based Image Analysis
Object-based image analysis (OBIA), a technique used to analyze digital imagery, was developed relatively recently compared to traditional pixel-based image analysis (Burnett and Blaschke 2003). While pixel-based image analysis uses the information in each individual pixel, object-based image analysis uses information from sets of similar pixels called objects or image objects. More specifically, image objects are groups of pixels that are similar to one another based on spectral properties (i.e., color), size, shape, and texture, as well as context from the neighborhood surrounding the pixels.
Note: The examples below are drawn from Definiens eCognition®, v. 8. However, many other programs also provide object-based image analysis; see the “Similar Methods” section.
Steps of OBIA
To obtain useful information from an image, the segmentation process splits an image into unclassified “object primitives” that form the basis for the image objects and the rest of the image analysis. Segmentations, and the resulting characteristics of object primitives and eventual image objects, are based on shape, size, color, and pixel topology controlled through parameters set by the user. The values of the parameters define how much influence spectral and spatial characteristics of the image layers will have in defining the shape and size of the image objects. The user modifies the settings depending on the objective, as well as image quality, bands available, and image resolution.
Pixels (left) are grouped into image objects (right) through a segmentation process. In this “false-color” image (live vegetation shows up as red), the red outline indicates an individual shrub.
As a general rule, ‘good’ image objects should be as large as possible, but small enough to show contours of interest and to serve as building blocks for objects of interest not yet identified. If the objective is to classify large shrubs, each object should contain only one shrub (or one group of shrubs). If a single shrub is made up of many small objects, the objects are too small.
The “best” settings for segmentation parameters vary widely, and are usually determined through a combination of trial and error, and experience. Settings that work well for one image may not work at all for another, even if the images are similar.
Color and shape parameters affect how objects are created during a segmentation. The higher the value of the color or shape criterion, the more the resulting objects are optimized for spectral or spatial homogeneity, respectively. Within the shape criterion, the user can also adjust the degree of smoothness (of object borders) and compactness of the objects.
The color and shape parameters balance each other: if color has a high value (high influence on segmentation), shape must have a low value (less influence). If the color and shape parameters are equal, then each will have roughly equal influence on the segmentation outcome.
The value of the scale parameter affects image segmentation by determining the size of image objects. If the scale value is high, the variability allowed within each object is high and image objects are relatively large. Conversely, small scale values allow less variability within each segment, creating relatively smaller segments.
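The effect of the scale parameter can be sketched with an open-source algorithm. eCognition’s multiresolution segmentation is proprietary, so this hypothetical example instead uses scikit-image’s Felzenszwalb segmentation, whose `scale` parameter plays an analogous role: larger values allow more variability within each segment and therefore produce fewer, larger objects. The synthetic image and parameter values are invented for illustration.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

rng = np.random.default_rng(0)
# synthetic grayscale "aerial" image: bright, noisy soil with two darker shrub patches
img = rng.normal(0.8, 0.05, (100, 100))
img[20:40, 20:40] = rng.normal(0.3, 0.05, (20, 20))
img[60:80, 50:70] = rng.normal(0.3, 0.05, (20, 20))
img = np.clip(img, 0.0, 1.0)

# small scale -> little variability allowed per object -> many small segments
fine = felzenszwalb(img, scale=10, sigma=0.5, min_size=5)
# large scale -> more variability allowed per object -> fewer, larger segments
coarse = felzenszwalb(img, scale=500, sigma=0.5, min_size=5)

n_fine = len(np.unique(fine))
n_coarse = len(np.unique(coarse))
print(n_fine, n_coarse)  # the coarse segmentation has (at most) as many segments as the fine one
```

As in eCognition, there is no single correct `scale` value; the appropriate setting depends on the size of the objects of interest relative to the pixel resolution.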
Example of image segmentation
The aerial photos (3 cm resolution) below were acquired in 2008 and show a shrubland in the southwestern U.S. Most of the dark green vegetation is a common shrub, creosotebush (Larrea tridentata). The pale brown color is soil with some sparse vegetation or litter. A large section of an arroyo shows as bright white; soil in the arroyo has little or no vegetation. On the right is the same image after a segmentation. While the color parameter was given more weight, the shape parameter was of some use because the shrubs are relatively compact. Note that most of the shrubs are individual objects (e.g., green outline). A large section of the arroyo is also a single object (red outline). The segmentation created meaningful objects that carry spectral and spatial information for image analysis.
Image Object Hierarchy
In OBIA, all image objects are part of the image object hierarchy, which may consist of many different levels, but always in a hierarchical manner. Each image object level is a virtual copy of the image, holding information about particular parts of the image. Therefore all objects are linked to neighboring objects on the same level, superobjects on higher (coarser scale) levels, and to subobjects on lower (finer scale) levels. Note that while it is possible to have many object levels, it is not necessary, and the higher the number of image object levels, the more complicated the classification.
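The linked structure described above can be sketched as a simple data structure. This is a hypothetical illustration (the class and attribute names are invented, not eCognition’s API): each object holds references to its superobject on the coarser level, its subobjects on the finer level, and its neighbors on the same level.

```python
# minimal sketch of an image-object hierarchy (illustrative names, not a real OBIA API)
class ImageObject:
    def __init__(self, name, level):
        self.name, self.level = name, level
        self.superobject = None   # link to coarser level
        self.subobjects = []      # links to finer level
        self.neighbors = []       # links on the same level

    def add_subobject(self, child):
        # a subobject always knows its superobject, and vice versa
        child.superobject = self
        self.subobjects.append(child)

landscape = ImageObject("shrubland patch", level=2)  # coarse level
shrub = ImageObject("shrub", level=1)                # finer level
soil = ImageObject("bare soil", level=1)
landscape.add_subobject(shrub)
landscape.add_subobject(soil)
shrub.neighbors.append(soil)

print(shrub.superobject.name)  # the shrub's superobject is the shrubland patch
```

Class-level features such as “relation to superobjects” (see below) rely on exactly these links being maintained for every object at every level.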
The figure below is taken from the Definiens Developer 7, User Guide, p. 26, showing the links between objects on the same and on different levels. Thick blue lines show links between the example “image object” (orange box with the black border) on the same level (neighbors), and at multiple levels (super or subobjects)
Classification
After an image has been segmented into appropriate image objects, the image is classified by assigning each object to a class based on features and criteria set by the user.
The definition of a ‘feature’ varies widely. For these purposes, a feature in OBIA (which differs from a feature in GIS) is an algorithm that measures, in relative or absolute terms, various characteristics (shape, size, color, texture, context) of image objects. The efficacy of different features varies widely, again depending on objectives; on object size, color, texture, and shape properties; and on location within the object hierarchy.
Features usually define the upper and lower limits of a range of measured characteristics of image objects. Image objects whose values fall within the defined limits are assigned to a specific class; image objects outside of the feature range are assigned to a different class (or left unclassified). Features can be applied to image objects, an entire scene, or a class.
The following is a list (not exhaustive) of examples of commonly used features:
- Color: mean or standard deviation of each band, mean brightness, band ratios
- Size: area, length to width ratio, relative border length
- Shape: roundness, asymmetry, rectangular fit
- Texture: smoothness, local homogeneity
- Class level: relation to neighbors, relation to subobjects and superobjects
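As a rough sketch of how such features are computed, the following hypothetical example uses a single brightness band and SciPy’s connected-component labeling in place of a true multiresolution segmentation; the pixel values and the 0.5 threshold are invented for illustration. For each object it derives a size feature (area), a shape feature (length-to-width ratio), and a color feature (mean brightness).

```python
import numpy as np
from scipy import ndimage

# toy brightness band: two dark "shrub" objects on bright soil (values are assumptions)
img = np.full((50, 50), 0.9)
img[5:15, 5:20] = 0.2    # elongated shrub, 10 x 15 pixels
img[30:42, 30:42] = 0.3  # compact shrub, 12 x 12 pixels

labels, n = ndimage.label(img < 0.5)  # group dark pixels into objects

feats = []
for i, sl in enumerate(ndimage.find_objects(labels), start=1):
    mask = labels[sl] == i
    area = int(mask.sum())                             # size: area in pixels
    h, w = mask.shape
    ltw = max(h, w) / min(h, w)                        # shape: length-to-width ratio
    brightness = float(ndimage.mean(img, labels, i))   # color: mean brightness
    feats.append((area, round(ltw, 2), round(brightness, 2)))

print(n, feats)
```

A classifier would then compare these per-object values against user-defined ranges, rather than operating on the raw pixels.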
Two common classification methods (of many) are briefly described below. As with the segmentation process, there is no “best” method or combination of methods. The most appropriate method depends on objectives, image characteristics, and a priori knowledge, as well as the experience and preference of the user.
Nearest neighbor (NN)
- User chooses sample image objects for each class
- Samples are usually based on a priori knowledge of the plant community, and should represent the range of characteristics within a single class
- Software finds objects similar to the samples, then assigns those objects to the proper class
- Classification improves through iterative steps
- Appropriate for describing variation in fine resolution images
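A minimal sketch of the nearest-neighbor idea, assuming each image object has already been reduced to a feature vector (here brightness and mean NIR, with invented sample values): the object is assigned the class of its closest sample object in feature space.

```python
import numpy as np

# hypothetical user-chosen samples per class in (brightness, mean NIR) feature space
samples = {
    "shrub": np.array([[0.25, 0.80], [0.30, 0.75]]),
    "bare ground": np.array([[0.85, 0.30], [0.80, 0.35]]),
}

def nearest_neighbor(obj):
    # assign the class whose closest sample is nearest to this object
    return min(samples, key=lambda c: np.linalg.norm(samples[c] - obj, axis=1).min())

cls = nearest_neighbor(np.array([0.28, 0.70]))
print(cls)  # this dark, high-NIR object falls nearest the shrub samples
```

In practice the classification is iterative: misclassified objects are added as new samples for the correct class, and the classification is rerun.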
Membership functions
- User chooses features that have different value thresholds for different classes
- The software separates image objects into classes using the feature threshold identified by the user (see example below)
- Results are more objective than NN, and easy to edit
- Useful if the classes are easily separated using one or a few features
- Appropriate when there is little a priori knowledge about the particular vegetation community in the image
Examples of Membership Function Classification
The best way to understand a classification is to work through a simple example:
The disappearance of native grasslands in the American southwest is a focus of a great deal of research. These grasslands are often replaced by a patchy network of shrubs and bare ground. The magnitude of the increase (over time) in bare ground is one (of many possible) clues to the rate of declining grasslands. Image classification is one way of estimating these changes.
Beginning with the segmented aerial photo above, the brightness feature is used to classify the image into ‘parent’ classes, vegetation and bare ground, and their corresponding ‘child’ classes, which inherit the parent class description (see the class hierarchy, which is created by the user).
In a classification using thresholds, the approximate cutoff value for a chosen feature is determined for the class in question. In this example, using the brightness feature, the approximate cutoff between the two parent classes can be defined – note the dark vegetation and much lighter bare ground. Image objects with brightness values below the threshold are assigned to the ‘vegetation’ class. Objects with brightness values above (or equal to) the threshold are assigned to ‘bare ground’.
To separate shrubs from other types of vegetation (i.e., ‘not shrub’), the mean of the near-infrared (NIR) band is used. To separate bare soil from sparse cover, the ratio of the blue band is used. For each feature, a threshold (cutoff) value is found that separates the child classes. See figure below.
The figure on the left shows the image classified to the top two parent classes, vegetation (green) and bare ground (yellow). The figure on the right is the image classified into all four child classes. Note arroyo (highlighted in red) and shrub (in bright green) for reference.
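The parent/child threshold logic above can be sketched as follows. This is a hypothetical illustration: the per-object feature values, the cutoff values, and the direction of each cutoff (e.g., that high blue ratio indicates bare soil) are all invented for the example, not taken from the actual classification.

```python
import numpy as np

# hypothetical per-object feature table: columns = brightness, mean NIR, blue-band ratio
objects = np.array([
    [0.25, 0.80, 0.20],  # dark, high NIR
    [0.30, 0.40, 0.25],  # dark, low NIR
    [0.85, 0.30, 0.45],  # bright, high blue ratio
    [0.70, 0.35, 0.20],  # bright, low blue ratio
])

# assumed cutoff values for each feature
BRIGHTNESS_CUTOFF, NIR_CUTOFF, BLUE_CUTOFF = 0.5, 0.6, 0.35

def classify(obj):
    brightness, nir, blue_ratio = obj
    if brightness < BRIGHTNESS_CUTOFF:
        # parent class: vegetation; NIR separates the child classes
        return "shrub" if nir >= NIR_CUTOFF else "not shrub"
    # parent class: bare ground; blue ratio separates the child classes
    return "bare soil" if blue_ratio >= BLUE_CUTOFF else "sparse cover"

classes = [classify(o) for o in objects]
print(classes)
```

Note how each child class inherits its parent’s brightness test: an object only reaches the NIR cutoff if it has already passed the vegetation threshold.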
There are many different image classification methods, e.g., supervised, unsupervised, or subpixel classification. OBIA is (usually) considered a type of supervised classification because knowledge of the user is part of the input for the resulting classification. Also see image analysis software and http://www.ioer.de/segmentation-evaluation/results.html.
Advantages of OBIA
The spatial relationship information contained in image objects allows for more than one level of analysis. This is critical because image analysis at the landscape scale requires multiple, related levels of segmentation, or scale levels. In pixel-based image analysis, the pixel is assumed to cover an area meaningful at the landscape scale, although this is often not the case. The objects in OBIA provide complex information at various scales (through multiple segmentations with different parameter settings), and thus OBIA is better suited to landscape-scale analyses.
Objects can be classified using their spatial relationships with adjacent or nearby objects. For example, some prickly pear species of cactus require a ‘nurse plant’, often a shrub, in order to germinate, grow, and survive, and thus are commonly found together. The presence of cactus objects could be used to help classify the nurse plant species by using “adjacent to” or “distance to” features.
OBIA is able to filter out meaningless information and assimilate other pieces of information into a single object. This is analogous to how the human eye filters information that is then translated by the brain into an image that makes sense. For example, the pixels in an image are filtered and grouped to reveal a pattern, like that of an orchard or tree plantation.
OBIA provides more meaningful information than pixel-based image analysis by allowing for less well-defined edges or borders between different classes. On maps, divisions between different types of vegetation, for example where a shrubland meets a grassland, are generally represented by a single line. In nature, no such abrupt change occurs. Instead the area where the shrubland meets the open grassland is a transition area, called an ecotone, containing characteristic species of each community, and sometimes species unique to the ecotone itself.
OBIA allows for this area of transition by using fuzzy logic. That is, objects that occur within the ecotone belong to, and are thus considered members of, both the shrubland and grassland classes. The membership value of an object in a class varies from 0.0 (no membership) to 1.0 (complete membership, and thus no ambiguity). An object in an ecotone might have 80% membership in the shrubland class and 20% membership in the grassland class. This is a more realistic approach than one in which objects belong strictly to one class or another, but not both.
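The fuzzy membership idea can be sketched with a simple linear membership function. The shrub-cover value and the ramp endpoints below are invented for illustration, chosen so the object reproduces the 80%/20% split described above.

```python
def fuzzy_membership(x, low, high):
    """Linear membership: 0 below low, 1 above high, a linear ramp in between."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

# assumed shrub-cover fraction for an object in the ecotone
shrub_cover = 0.66
m_shrub = fuzzy_membership(shrub_cover, 0.1, 0.8)  # membership in shrubland
m_grass = 1.0 - m_shrub                            # complementary grassland membership
print(round(m_shrub, 2), round(m_grass, 2))        # prints 0.8 0.2
```

A crisp classifier would force this object into one class; the fuzzy values preserve the ambiguity of the ecotone until a final decision is needed.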
Output of OBIA is usually a classified image, which often then becomes part of a map used, for example, to illustrate different vegetation types in an area. The segmentation itself can be an output, and is often imported into a GIS as a raster (e.g., image file) or a polygon vector layer (e.g., shapefile) to summarize and statistically analyze data. Another possible output of OBIA is an accuracy assessment, such as an error matrix indicating the classification quality and the amount of uncertainty associated with each class.
Successful Rangeland Uses
- Geneletti, D. and B.G.H. Gorte. 2003. A method for object-oriented land cover classification combining Landsat TM data and aerial photographs. Int. J. Remote Sensing 24(6): 1273-1286. Limitations of image analysis due to spatial resolution can be overcome by integrating imagery of different resolutions. Methods included sequential segmentation and classification of Landsat TM using maximum likelihood, and region-based segmentation of fine-resolution, black-and-white orthophotos. Using OBIA, a land cover map was created and tested for accuracy.
- Giada, S., T. de Groeve, D. Ehrlich, and P. Soille. 2003. Information extraction from very high resolution satellite imagery over Lukole refugee camp, Tanzania. International Journal of Remote Sensing 24(22): 4251-4266. Automatic image analysis of IKONOS imagery for rapid identification of refugee tents and their spatial extent. Multi-resolution segmentation and mathematical morphology analysis provided the best results using object-oriented classifiers.
- Karl, J.W. in press. Spatial predictions of cover attributes of rangeland ecosystems using regression kriging and remote sensing. Rangeland Ecology and Management. Used a geostatistical prediction technique – regression kriging – with OBIA of Landsat imagery to predict the distribution of cheatgrass, shrub, and bare ground cover.
- Laliberte, A.S. and A. Rango. 2009. Texture and scale in object-based analysis of subdecimeter resolution unmanned aerial vehicle (UAV) imagery. IEEE Transactions on Geoscience and Remote Sensing 47(3): 1-10. OBIA was used to find the most suitable texture measures and optimal scale for differentiating vegetation on rangelands using UAV imagery.
- Laliberte A.S., E.L. Fredrickson, and A. Rango. 2007. Combining decision trees with hierarchical object-oriented image analysis for mapping arid rangelands. Photogrammetric Engineering and Remote Sensing 73(2): 197-207. Found that using OBIA to combine multiresolution segmentation and decision tree analysis resulted in the identification of the most suitable scale and input variables to classify vegetation at various scales. Many features chosen are only available in OBIA, and not in pixel-based analysis.
- Laliberte A.S., A. Rango, and K.M. Havstad. Object-oriented image analysis for mapping shrub encroachment from 1937 to 2003 in southern New Mexico. Remote Sensing of Environment 93(1-2): 198-210. OBIA was used to monitor the change in vegetation over time, specifically shrub encroachment into native grasslands in the American southwest. Included advantages of OBIA over pixel-based methods.
References
- Blaschke, T. 2010. Object based image analysis for remote sensing. ISPRS Journal of Photogrammetry and Remote Sensing 65(1): 2-16.
- Burnett, C. and T. Blaschke. 2003. A multi-scale segmentation/object relationship modeling methodology for landscape analysis. Ecological Modeling 168: 233-249.
- Cracknell, A.P. 1998. Synergy in remote sensing – what’s in a pixel? Int. J. Remote Sensing, 19: 2025-2047.
- Definiens. 2007. Definiens Developer 7 User Guide. Munich, Germany.
- Hay, G.J. and G. Castilla. 2006. Object Based Image Analysis, Strengths, weaknesses, opportunities, and threats (SWOTs). From OBIA 2006 International Archives of Photogrammetry, Remote sensing, and Spatial Information Sciences.
- Jyothi, B.N., G.R. Babu, and I.V.M. Krishna. 2008. Object oriented and multi-scale image analysis: strengths, weaknesses, opportunities, and threats – a review. J. of Computer Science 4(9): 706-712.
- Lillesand, T.M. and R.W. Kiefer, 2004. Remote sensing and image interpretation, 5th ed. John Wiley and Sons Inc, New Jersey.
- Yeaton, R.I. 1978. A cyclical relationship between Larrea tridentata and Opuntia leptocaulis in the northern Chihuahuan desert. J. of Ecology, 66:651-656.
Disadvantages of OBIA
OBIA software is expensive and computationally intensive, requiring substantial processing power and large amounts of available memory. It also has a somewhat steep learning curve.
Also see Hay, G.J. and G. Castilla. 2006. Object Based Image Analysis: strengths, weaknesses, opportunities, and threats (SWOTs). OBIA 2006, International Archives of Photogrammetry, Remote Sensing, and Spatial Information Sciences.
Similar Methods
Some of the more popular OBIA software packages are:
- Definiens eCognition
- ENVI Feature Extraction Module
- Feature Analyst (ArcGIS and ERDAS)
See http://www.ioer.de/segmentation-evaluation/results.html for a more thorough list and comparison of OBIA/segmentation software.
- http://wiki.ucalgary.ca/page/OBIA – G. Hay’s OBIA wiki
Who Is Using This Method?
- Austrian Academy of Sciences, Austria
- Canadian Wildlife Service
- Centre of Geoinformatics, Salzburg University, Austria
- Foothills Facility for Remote Sensing and GIScience, University of Calgary, Alberta, Canada
- NASA Jet Propulsion Laboratory
- Universitatea Babeș-Bolyai, Cluj-Napoca, Romania
- University of Arkansas, Fayetteville, Arkansas
- USDA/ARS Jornada Experimental Range
- USGS Arkansas Cooperative Fish and Wildlife Research Unit
- Also see http://homepages.ucalgary.ca/~gjhay/geobia/Aug18/GEOBIA%20Proceedings_Linked.pdf for many more examples