Passive ranging using image intensity and contrast measurements

Zarko P. Barbaric, Boban P. Bondzulic, Srdjan T. Mitrovic

This is a postprint of an article which appeared as:
Barbaric, Z. P., Bondzulic, B. P., & Mitrovic, S. T.: Passive Ranging Using Image Intensity and Contrast Measurements, Electronics Letters, 2012, 48(18), pp. 1122-1123, DOI: 10.1049/el.2012.0632

Abstract

Proposed are two passive ranging approaches using a single camera, based on intensity and contrast measurements. According to the Beer-Lambert law, and given a known atmospheric extinction coefficient and initial range, the range to a moving object is estimated. Applied to a real infrared surveillance sequence, these approaches give better results than methods which use image size measurements. Relative errors of 3.2% for the contrast and 2.3% for the intensity feature are obtained. The results indicate that range estimation is possible to the fidelity required for object tracking.

1 Introduction

Passive ranging is of special interest for a wide range of applications, such as video surveillance and security, air traffic control, speed control, obstacle detection, homing missile guidance and weapon fire control. The most common techniques for passively estimating the range to an object employ optical flow and/or triangulation [1]. The methods given in [2, 3] exploit size changes of an object across the video sequence, as inferred by processing video frames, to compute distance. However, measuring object size in thermal imagery is difficult, since object edges are not clearly defined [4], and the size measurement depends on the image processing used for object extraction. Furthermore, systems relying on triangulation require two or more sensors [1].

In this Letter, two passive ranging methods using intensity and contrast measurements from a single sensor are proposed. No prior knowledge about the sensor, or about the size, shape or any other feature of the object, is assumed. The proposed methods allow accurate distance estimation even where multiple sensor views or active measurements are not available.

2 Theory

The intensity or grey level $I$ in the image is a function of the scene radiance, attenuated by transmission through the atmosphere, and of the characteristics of the image sensor. In its simplest form this relationship becomes:

$I = G L \tau = G L \exp(-\sigma D)$  (1)

where $G$ is the sensor transfer function, $L$ is the scene radiance, $\sigma$ is the atmospheric extinction coefficient, $D$ is the optical path length through the atmosphere (the distance), and the transmittance of the atmosphere $\tau$ is described by the Beer-Lambert law.
The image contrast is given by:
$C = \frac{I_T - I_B}{I_B}$  (2)

where $I_T$ is the average grey level of the object (target) and $I_B$ is the average grey level of the background. The image contrast is the scene contrast reduced by the transmittance of the atmosphere:
$C = K\,C_{SC} \exp(-\sigma D)$  (3)

where $K$ is the sensor contribution, and $C_{SC}$ is the scene contrast, given by:

$C_{SC} = \frac{L_T - L_B}{L_B}$  (4)

where $L_T$ is the radiance of the target and $L_B$ is the radiance of its background.
Assuming that the scene radiance $L$ and the sensor transfer function $G$ remain constant between two successive frames, the range follows from (1):

$D_1^I = D_0^I + \frac{1}{\sigma} \ln\frac{I_0}{I_1}$  (5)

where $D_0^I$ and $D_1^I$ are the object-to-sensor distances (ranges), and $I_0$ and $I_1$ are the average grey levels of the object in two successive frames. Similarly, supposing constant contrast in the scene, the range can be derived from (3):

$D_1^C = D_0^C + \frac{1}{\sigma} \ln\frac{C_0}{C_1}$  (6)

where $D_0^C$ and $D_1^C$ are the object-to-sensor distances, and $C_0$ and $C_1$ are the target contrasts in two successive frames.
Passive range estimates from (5) and (6) are based on measurements of intensity and contrast. To evaluate them, reliable estimates of the extinction coefficient $\sigma$ and the initial range $D_0$ are needed. The extinction coefficient $\sigma$ may be estimated from the optical visibility at 0.55 μm by programs such as LOWTRAN or MODTRAN, or using known algorithms [5]. The initial range to the object $D_0$ can be measured by a laser rangefinder and/or obtained by other passive ranging methods.
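For concreteness, the range updates (5) and (6) amount to a few lines of code. The following Python sketch is illustrative only, not the authors' implementation; the numerical values of the extinction coefficient, initial range, grey levels and contrasts are assumptions purely for demonstration:

import math

def range_from_intensity(d_prev, i_prev, i_curr, sigma):
    # Range update from (5): D1 = D0 + (1/sigma) * ln(I0/I1).
    # d_prev: previous range [m]; i_prev, i_curr: mean object grey levels
    # in two successive frames; sigma: extinction coefficient [1/m].
    return d_prev + math.log(i_prev / i_curr) / sigma

def range_from_contrast(d_prev, c_prev, c_curr, sigma):
    # Range update from (6): D1 = D0 + (1/sigma) * ln(C0/C1).
    return d_prev + math.log(c_prev / c_curr) / sigma

# Illustrative values (assumptions, not measurements from this Letter):
# an approaching object appears brighter (I1 > I0), so the range decreases.
sigma = 0.12e-3   # extinction coefficient: 0.12 km^-1 expressed in m^-1
d0 = 5000.0       # initial range, e.g. from a laser rangefinder [m]
print(range_from_intensity(d0, 140.0, 150.0, sigma))   # ~4425 m
print(range_from_contrast(d0, 0.45, 0.50, sigma))      # ~4122 m

Note that both updates can be chained frame to frame, so any error in $\sigma$ accumulates along the sequence; this is why a reliable estimate of $\sigma$ is emphasised above.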

3 Experiment

The proposed passive ranging methods are tested on an infrared image sequence of an airborne object. The sequence was generated by the SkyTrack system, which is used for tracking air targets. This system uses two cameras for distance measurement; here we use the infrared image sequence from one of the cameras, with the distance given by the system for each frame. Fig. 1 shows the first and the last frames of the sequence, with bounding boxes around the aircraft.
Fig. 1. The first (left) and the last (right) frames of the real sequence.
The proposed passive ranging processing flow is depicted in Fig. 2. It consists of the following steps: detection (finding the object in the current frame), segmentation (extracting it from the background), feature extraction (obtaining an object and/or background description) and, finally, combining the features with the known atmospheric extinction coefficient and initial range into the target distance estimate.
Fig. 2. Passive ranging processing.

Detection and segmentation (Fig. 2) are performed using Tsai's method [6], providing the object region $A_n$. The mean object grey level $I_n$ and the object contrast $C_n$ in Fig. 2 are calculated from the object in $A_n$. The range estimate $D_n$ in Fig. 2 is calculated using three different approaches: the proposed intensity (5) and contrast (6) methods, and the image size method as in [3]:
$D_n = D_0 \sqrt{\frac{A_0}{A_n}}$  (7)
where $A_0$ is the object area, $I_0$ the mean object grey level and $D_0$ the distance in the initial frame. The final value of $\sigma$ is the median of the per-frame values $\sigma_n$ over the test sequence $n = 1, 2, \ldots, N$, where $N = 16$ is the test sequence length. The feedback loop in Fig. 2 indicates that the object template is used in a correlation algorithm for tracking [7]. In the initialisation frame, the moving object is manually detected and localised.
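The per-frame processing in Fig. 2 can be sketched in code. The sketch below is an illustrative reconstruction under stated assumptions, not the authors' implementation: segmentation is abstracted behind a boolean mask (the Letter uses Tsai's moment-preserving thresholding [6] and correlation-based tracking [7]), the background is taken as the non-object pixels of the analysed window (the Letter does not specify the background region), and each $\sigma_n$ is assumed to be obtained by inverting (5) with the system-provided ranges before taking the median:

import numpy as np

def object_features(frame, mask):
    # Mean object grey level I_n and contrast C_n from (2).
    # frame: 2-D grey-level image; mask: boolean object region A_n
    # delivered by a segmentation step (e.g. Tsai's method [6]).
    i_t = frame[mask].mean()        # target mean grey level
    i_b = frame[~mask].mean()       # background mean grey level (assumption:
                                    # all non-object pixels of the window)
    return i_t, (i_t - i_b) / i_b   # I_n, C_n

def estimate_sigma(grey_levels, true_ranges):
    # Median of per-frame extinction estimates over the N-frame test
    # sequence. Inverting (1) for two frames gives
    # sigma_n = ln(I_0 / I_n) / (D_n - D_0).
    i0, d0 = grey_levels[0], true_ranges[0]
    sigmas = [np.log(i0 / i_n) / (d_n - d0)
              for i_n, d_n in zip(grey_levels[1:], true_ranges[1:])]
    return float(np.median(sigmas))

With $\sigma$ fixed this way and $D_0$ known, each new frame only requires the segmented object mask: extract $I_n$ and $C_n$ with object_features, then update the range via (5) or (6).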

4 Analysis of results

Fig. 3 shows the estimated target distances obtained using the proposed contrast and intensity methods, as well as the area-based method. The estimates provided by the proposed methods quickly stabilise close to the ground truth provided by the SkyTrack system over the first 300 frames. In the last 50 frames, however, saturation of the sensor intensity as the object moves closer causes the estimates to diverge from the true distance. Significant background changes during these frames additionally affect the contrast measure.
Fig. 3. True and estimated distances.

Fig. 3 also shows that the distance estimated from image size differs significantly from the true distance over almost the whole sequence. Only at the end of the sequence does its deviation from the true distance fall below that of the proposed methods.
Over the relevant (i.e. non-saturated) first 300 frames, the proposed methods achieve relative errors of 2.3% for the intensity and 3.2% for the contrast approach.
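For completeness, the error figure can be computed as below; the exact metric is our assumption (mean absolute relative deviation from the SkyTrack ground truth), since the Letter does not spell out its definition:

import numpy as np

def relative_ranging_error(d_est, d_true):
    # Mean absolute relative ranging error in percent, evaluated
    # e.g. over the first 300 (non-saturated) frames.
    d_est, d_true = np.asarray(d_est, float), np.asarray(d_true, float)
    return 100.0 * np.mean(np.abs(d_est - d_true) / d_true)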

5 Conclusion

We have proposed intensity and contrast methods for passive ranging using a single camera. The results indicate that the proposed approaches are suitable for object tracking, since they provide a relative ranging error of less than 3.5% at large distances. The results obtained and analysed in this Letter also suggest a hybrid approach in which image size, intensity and contrast features are all taken into account. Future work will include more complex object motion profiles and the incorporation of a Kalman tracker.

References

  1. Barniv, Y.: 'Error analysis of combined optical-flow and stereo passive ranging', IEEE Transactions on Aerospace and Electronic Systems, 1992, 28(4), pp. 978-989
  2. Rao, R., and Lee, S.: 'A video processing approach for distance estimation', Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Toulouse, France, May 2006, Vol. III, pp. 1192-1195
  3. Raju, C., Zabuawala, S., Krishna, S., and Yadegar, J.: 'A hybrid system for information fusion with application to passive ranging', Proc. Int. Conf. on Image Processing, Computer Vision and Pattern Recognition, Las Vegas, NV, USA, June 2007, pp. 402-406
  4. Diao, W.-H., Mao, X., Chang, L., and Jiang, L.: 'Operating distance evaluation method for infrared imaging system under complicated backgrounds', Electronics Letters, 2009, 45(25), pp. 1309-1310
  5. Dikic, G.D., and Djurovic, Z.M.: 'Unbiased estimation of atmosphere attenuation coefficient', Electrical Engineering, 2007, 89, pp. 343-347
  6. Tsai, W.-H.: 'Moment-preserving thresholding: a new approach', Computer Vision, Graphics, and Image Processing, 1985, 29(3), pp. 377-393
  7. Downey, G.A.: 'Electro-optical tracking considerations II', Proc. SPIE: Acquisition, Tracking, and Pointing XVII, Orlando, FL, USA, April 2003, Vol. 5082, pp. 139-153, DOI: 10.1117/12.487943