A notion of gauge coordinates which has been adopted in the
computer vision community is to express image descriptors
in terms of local directional derivatives defined from
certain preferred coordinate systems.
At any image point, introduce a local (u, v)-system
such that the v-direction is parallel to the gradient direction,
and introduce directional derivative operators
along these directions by
\[
  \partial_u = \sin\alpha \, \partial_x - \cos\alpha \, \partial_y,
  \qquad
  \partial_v = \cos\alpha \, \partial_x + \sin\alpha \, \partial_y,
\]
where \(\cos\alpha = L_x / \sqrt{L_x^2 + L_y^2}\) and
\(\sin\alpha = L_y / \sqrt{L_x^2 + L_y^2}\)
are defined from the components \(L_x\) and \(L_y\) of the image gradient.
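A minimal sketch of how such gauge derivatives can be computed from Gaussian derivatives, using SciPy's `gaussian_filter` (the function name `gauge_derivatives` and the regularizing constant `eps` are illustrative choices, not from the source):

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def gauge_derivatives(image, sigma):
    """First-order Gaussian derivatives and the local (u, v) gauge frame.

    The v-direction is aligned with the image gradient, so the
    first-order directional derivative L_v equals the gradient magnitude.
    """
    # order=(row, col): differentiate along x (columns) and y (rows)
    Lx = gaussian_filter(image, sigma, order=(0, 1))
    Ly = gaussian_filter(image, sigma, order=(1, 0))
    Lv = np.hypot(Lx, Ly)          # gradient magnitude = L_v
    eps = 1e-12                    # avoid division by zero in flat regions
    cos_a = Lx / (Lv + eps)        # cos(alpha), x-component of v-direction
    sin_a = Ly / (Lv + eps)        # sin(alpha), y-component of v-direction
    return Lx, Ly, Lv, cos_a, sin_a
```

On a linear ramp, `Lv` recovers the slope and `(cos_a, sin_a)` points along the ramp direction, which gives a quick sanity check of the sign and axis conventions.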
Then, we can define an edge point as a point at which
the gradient magnitude assumes a local maximum in the gradient direction,
and restate this edge definition as
\[
  L_{vv} = 0,
  \qquad
  L_{vvv} < 0,
\]
where \(L_{vv}\) and \(L_{vvv}\) denote the second- and third-order
directional derivatives in the v-direction.
After expansion to Cartesian coordinates and multiplication by
suitable powers of the gradient magnitude \(L_v\),
this edge definition assumes the form
\[
  \tilde{L}_{vv} = L_v^2 \, L_{vv}
  = L_x^2 L_{xx} + 2 L_x L_y L_{xy} + L_y^2 L_{yy} = 0,
\]
\[
  \tilde{L}_{vvv} = L_v^3 \, L_{vvv}
  = L_x^3 L_{xxx} + 3 L_x^2 L_y L_{xxy}
    + 3 L_x L_y^2 L_{xyy} + L_y^3 L_{yyy} < 0.
\]
Interpolating for zero-crossings of \(\tilde{L}_{vv}\) within
the sign-constraints of \(\tilde{L}_{vvv} < 0\) gives a
straightforward method for sub-pixel edge detection.
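The zero-crossing scheme can be sketched as follows, assuming Gaussian derivatives computed with SciPy's `gaussian_filter`. This is a pixel-level approximation: it marks grid cells where the second-order indicator changes sign under the third-order sign constraint, but omits the sub-pixel interpolation of the zero-crossing position. The names `edge_strength` and `edge_mask` are illustrative, not from the source:

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def edge_strength(image, sigma):
    """Second- and third-order edge indicators in Cartesian form.

    Returns (Lvv_t, Lvvv_t), the directional derivatives in the gradient
    direction multiplied by powers of the gradient magnitude:
    Lvv_t = L_v^2 L_vv and Lvvv_t = L_v^3 L_vvv.
    """
    d = lambda ny, nx: gaussian_filter(image, sigma, order=(ny, nx))
    Lx, Ly = d(0, 1), d(1, 0)
    Lxx, Lxy, Lyy = d(0, 2), d(1, 1), d(2, 0)
    Lxxx, Lxxy, Lxyy, Lyyy = d(0, 3), d(1, 2), d(2, 1), d(3, 0)
    Lvv_t = Lx**2 * Lxx + 2 * Lx * Ly * Lxy + Ly**2 * Lyy
    Lvvv_t = (Lx**3 * Lxxx + 3 * Lx**2 * Ly * Lxxy
              + 3 * Lx * Ly**2 * Lxyy + Ly**3 * Lyyy)
    return Lvv_t, Lvvv_t


def edge_mask(image, sigma):
    """Boolean edge mask: zero-crossings of Lvv_t where Lvvv_t < 0."""
    Lvv_t, Lvvv_t = edge_strength(image, sigma)
    # A sign change against the right or lower neighbour marks
    # a zero-crossing of Lvv_t within that grid cell.
    zc = np.zeros(image.shape, dtype=bool)
    zc[:, :-1] |= np.signbit(Lvv_t[:, :-1]) != np.signbit(Lvv_t[:, 1:])
    zc[:-1, :] |= np.signbit(Lvv_t[:-1, :]) != np.signbit(Lvv_t[1:, :])
    return zc & (Lvvv_t < 0)
```

For a vertical step edge, `Lvv_t` changes sign at the step while `Lvvv_t` is negative there, so the mask fires on the columns straddling the discontinuity. A sub-pixel version would linearly interpolate the zero-crossing position between the two pixels of each flagged cell.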
Figure 5(a) shows the result of applying
this edge detector to an image of an arm at scale levels
t = 1.0, 16.0 and 256.0.
Observe how qualitatively different types of edge curves are extracted
at the different scales.
A characteristic behaviour is that most of the sharp edge structures
corresponding to object boundaries
give rise to edge curves at both fine and coarse scales.
Moreover, the number of spurious edges due to noise is
much larger at fine scales,
whereas the localization of the edges can be poor at coarse scales.
Notably, the shadow of the arm can only be extracted as
a connected curve at a coarse scale.
This example constitutes one illustration of the need for including
image operators at coarse scales when extracting general classes of
image structures from real-world data.
Figure 5: Edges and bright ridges detected at scale levels t = 1.0, 16.0
and 256.0, respectively.