Automated Matching Using Modified Iterated Hough Transform
The most visible phenomenon characterizing technical developments across research fields is the flux of enormous amounts of data from digital sensors. In the last two decades, the main research emphasis has been on sensor development and modeling. Now that the sensors are operational, research is shifting toward the optimal manipulation of available data to produce the information needed for various decision-making activities. Proper alignment (co-registration) of the available datasets has been one of the major problems facing users of such data. The majority of available co-registration tools require manual and sometimes tedious processing to identify common primitives in the involved datasets, while automated approaches to this problem have proven to be non-robust and problem specific.
Our research group has developed a new approach for robust and automated co-registration of datasets that might have been captured by different sensors and/or by the same sensor at different epochs. The robustness of the developed methodology is attributed to the fact that it considers the rigorous mathematical relationship among conjugate primitives in the involved datasets, a relationship that is rarely considered in current matching algorithms. The developed approach, a Modified Iterated Hough Transform, is based on Hough transform principles (a minimal sketch of the underlying voting idea follows the list below) and has been successfully used in the following applications:
- Automatic relative orientation of large-scale imagery over urban areas (Figure 1).
- Automatic matching and three-dimensional reconstruction of free-form linear features in stereo-imagery (Figure 2).
- Automatic single photo resection using control linear features (Figure 3).
- Automatic registration of multi-source imagery with varying geometric and radiometric properties (Figure 4).
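
To make the Hough-based voting idea concrete, here is a minimal sketch, not the group's implementation: it assumes the simplest possible transformation, a pure 2-D translation between two point sets, whereas the actual methodology handles richer orientation parameters and other primitives. Every candidate pairing votes for the transformation it implies in a discretized accumulator; correct matches reinforce one cell while spurious pairings scatter, so the accumulator peak simultaneously yields the parameters and the set of consistent matches. All function and variable names are illustrative.

```python
import numpy as np

def hough_match(left_pts, right_pts, cell=1.0):
    """Vote for a 2-D translation with every candidate pairing, then
    keep the pairings consistent with the accumulator peak."""
    votes = {}
    for i, p in enumerate(left_pts):
        for j, q in enumerate(right_pts):
            # Each hypothesized match (p, q) implies one translation;
            # quantize it into an accumulator cell and cast a vote.
            dx, dy = q - p
            key = (int(round(dx / cell)), int(round(dy / cell)))
            votes.setdefault(key, []).append((i, j))
    # Correct matches pile up in one cell; random pairings scatter
    # their votes over many cells, so the peak is the estimate.
    peak = max(votes, key=lambda k: len(votes[k]))
    return (peak[0] * cell, peak[1] * cell), votes[peak]

# Toy usage: five points shifted by (3, -2), plus one outlier per set.
rng = np.random.default_rng(0)
left = rng.uniform(0.0, 50.0, (5, 2))
right = left + np.array([3.0, -2.0])
left = np.vstack([left, [90.0, 90.0]])    # unmatched point
right = np.vstack([right, [10.0, 80.0]])  # unmatched point
shift, matches = hough_match(left, right)
print(shift)    # -> (3.0, -2.0)
print(matches)  # -> the five true pairings (0,0) ... (4,4)
```

In an iterated scheme such as the one named in the title, the parameters recovered from the peak would presumably be fed back to re-evaluate the candidate matches, with the vote-and-estimate cycle repeated until the solution converges.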
Figure 1: Input interest points on left (a) and right (b) images, and the matched points in left (c) and right (d) images
Figure 2: Left (a) and right (b) images containing matched linear features and the reconstructed 3-D linear feature (c)
Figure 3: Single photo resection using control linear features (a) while establishing the correspondence with image space features (b)
Figure 4: Registration of multi-source imagery with varying geometric and radiometric properties