
The need for multi-scale representation of image data

An inherent property of real-world objects is that they only exist as meaningful entities over certain ranges of scale. A simple example is the concept of a branch of a tree, which makes sense only at a scale from, say, a few centimeters to at most a few meters. It is meaningless to discuss the tree concept at the nanometer or kilometer level. At those scales, it is more relevant to talk about the molecules that form the leaves of the tree, and the forest in which the tree grows, respectively. This fact, that objects in the world appear in different ways depending on the scale of observation, has important implications if one aims at describing them. It shows that the notion of scale is of utmost importance. This general need is well understood in cartography, where maps are produced at different degrees of abstraction. Similarly in physics, phenomena are modelled at several scales, ranging from particle physics and quantum mechanics at fine scales, through solid mechanics and thermodynamics dealing with everyday phenomena, to astronomy and relativity theory at scales much larger than those we usually deal with. Notably, the form of description may depend strongly upon the scales at which the world is modelled, and this is in clear contrast to certain idealized mathematical concepts, such as 'point' and 'line', which are independent of the scale of observation.

Specifically, the need for multi-scale representation arises when designing methods for automatically analysing and deriving information from real-world measurements. To be able to extract any information from image data, one obviously has to interact with the data using certain operators. The type of information that can be obtained is largely determined by the relationship between the size of the actual structures in the data and the size (resolution) of the operators (probes). Some of the most fundamental problems in image processing concern what operators to use, where to apply them, and how large they should be. If these problems are not appropriately addressed, the task of interpreting the operator responses can be very hard.
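As a rough illustration of this point, the following minimal sketch (not part of the original text) probes a one-dimensional signal with Gaussian smoothing kernels of different widths; the particular signal, noise level, and standard deviations are hypothetical choices made only for the example. A small probe still responds to fine-scale ripple and noise, while a large probe responds mainly to the coarse structure, so the "right" operator size depends on which structures one is after.

import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Sampled 1-D Gaussian kernel, normalized to unit sum."""
    if radius is None:
        radius = int(4 * sigma)          # truncate at about 4 standard deviations
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

# Test signal: a coarse bump plus a fine-scale ripple plus noise (hypothetical).
t = np.linspace(0.0, 1.0, 512)
signal = np.exp(-((t - 0.5) ** 2) / 0.02) + 0.2 * np.sin(80 * np.pi * t)
signal += 0.05 * np.random.default_rng(0).standard_normal(t.size)

for sigma in (1.0, 4.0, 16.0):           # probe sizes, in samples
    smoothed = np.convolve(signal, gaussian_kernel(sigma), mode="same")
    # The variance of what the probe removes indicates how much fine-scale
    # structure is invisible to an operator of this size.
    print(f"sigma={sigma:5.1f}  suppressed fine-scale energy={np.var(signal - smoothed):.4f}")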

In certain controlled situations, appropriate scales for analysis may be known a priori. For example, a desirable quality of a physicist is the intuitive ability to select proper scales for modelling a given situation. Under other circumstances, however, it may not be at all obvious how to determine the proper scales in advance. One such example is a vision system with the task of analysing unknown scenes. Besides the inherent multi-scale properties of real-world objects (which, in general, are unknown), such a system has to face the problems that the perspective mapping gives rise to size variations, that noise is introduced in the image formation process, and that the available data are two-dimensional data sets reflecting only indirect properties of a three-dimensional world. To cope with these problems, an essential tool is a formal theory for describing image structures at different scales.



Tony Lindeberg