Scale-Space Theory in Computer Vision
Foreword by Jan Koenderink

The problem of scale pervades both the natural sciences and the visual arts. The earliest scientific discussions concentrate on visual perception (much like today!) and occur in Euclid's (c. 300 B.C.) Optics and Lucretius' (c. 100–55 B.C.) On the Nature of the Universe. A very clear account in the spirit of modern "scale-space theory" is presented by Boscovich (in 1758), with wide-ranging applications to mathematics, physics and geography. Early applications occur in the cartographic problem of "generalization", the central idea being that a map, in order to be useful, has to be a "generalized" (coarse-grained) representation of the actual terrain (Miller and Voskuil 1964). Broadening the scope calls for progressive summarizing. Very much the same problem occurs in the (realistic) artistic rendering of scenes. Artistic generalization has been analyzed in surprising detail by John Ruskin (in his Modern Painters), who even describes some of the more intricate generic "scale-space singularities" in detail: whereas the ancients considered only the merging of blobs under blurring, Ruskin discusses the case where a blob splits off another one when the resolution is decreased, a case that has given rise to confusion even in the modern literature.

It is indeed clear that any physical observation of some extended quantity, such as mass density or surface irradiance, presupposes a scale-space setting due to the inherent graininess of nature on the small scale and its capricious articulation on the large scale. What the "right scale" is does indeed depend on the problem, i.e., whether one needs to see the forest, the trees or the leaves. (Of course this list could be extended indefinitely towards the microscopic as well as the macroscopic domains, as has been done in the popular film Powers of Ten (Morrison and Morrison 1984).) The physicist almost invariably manages to pick the right scale for the problem at hand intuitively.
However, in many modern applications the "right scale" need not be obvious at all, and one really needs a principled mathematical analysis of the scale problem. In applications such as vision, the front-end system has to process the radiance function blindly (since no meaning resides in the photons as such), and the problem of finding the right scale becomes especially acute. This is true for biological and artificial vision systems alike. Here a principled theory is mandatory and can a priori be expected to yield important insights and lead to mechanistic models. Modern scale-space theory has indeed led to an increased understanding of low-level operations and to novel handles on ways to design algorithms for problems in machine vision. In this book the author presents a commendably lucid outline of the theory of scale-space, the structure of low-level operations in a scale-space setting, and algorithmic schemes to use these structures so as to solve important problems in computer vision. The subjects range from the mathematical underpinnings, through issues of implementation (discrete scale-space structures), to more open-ended algorithmic methods for computer vision problems. The latter methods seem to me to point the way to a range of potentially very important applications. This approach will certainly turn out to be part of the foundations of the theory and practice of machine vision. It was about time for somebody to write a monograph on the subject of scale-space structure and scale-space based methods, and the author has no doubt performed an excellent service to many in the field of both artificial and biological vision.