Fusing Depth from Defocus and Stereo with Coded Apertures

Yuichi Takeda, Shinsaku Hiura, Kosuke Sato; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 209-216

Abstract


In this paper we propose a novel depth measurement method that fuses depth from defocus (DFD) and stereo. One problem of the passive stereo method is the difficulty of finding correct correspondences between images when an object has a repetitive pattern or edges parallel to the epipolar line. On the other hand, the accuracy of the DFD method is inherently limited by the effective diameter of the lens. Therefore, we propose fusing the stereo method and DFD by assigning different focus distances to the left and right cameras of a stereo pair with coded apertures. The two types of depth cues, defocus and disparity, are naturally integrated via the magnification and phase shift of a single point spread function (PSF) per camera. We prove the proportional relationship between the diameter of the defocus blur and the disparity, which makes calibration easy. We also demonstrate, through simulation and real experiments, the outstanding performance of our method, which combines the advantages of both depth cues.
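The proportionality claimed above can be seen from the thin-lens model: both the defocus blur diameter and the stereo disparity are affine functions of inverse depth 1/Z, so one is a linear function of the other. The sketch below (not the authors' code; all parameter values are hypothetical examples) illustrates this relationship numerically.

```python
# Hedged sketch of the thin-lens relationship behind the paper's claim:
# defocus blur diameter and stereo disparity are both affine in 1/Z,
# hence linearly related to each other. Parameter values are made up.

def blur_diameter(Z, aperture=0.02, sensor_dist=0.051, focus_dist=1.0):
    """Defocus blur diameter on the sensor for a point at depth Z (m).

    With the lens focused at focus_dist, the thin-lens equation gives
    c(Z) = aperture * sensor_dist * |1/focus_dist - 1/Z|,
    which is linear in inverse depth (with a kink at Z = focus_dist
    where the absolute value changes sign).
    """
    return aperture * sensor_dist * abs(1.0 / focus_dist - 1.0 / Z)

def disparity(Z, baseline=0.1, focal_len=0.05):
    """Stereo disparity on the sensor (m) for depth Z:
    d(Z) = baseline * focal_len / Z, also linear in inverse depth."""
    return baseline * focal_len / Z

# On one side of the focus distance, plotting blur against disparity
# yields a straight line -- the proportional relationship that makes
# calibration of the fused DFD + stereo system easy.
depths = [2.0, 4.0, 8.0]  # all beyond the 1 m focus distance
pairs = [(disparity(Z), blur_diameter(Z)) for Z in depths]
```

Restricting the depths to one side of the focus distance avoids the kink introduced by the absolute value, so the blur/disparity relation is exactly linear over that range.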

Related Material


[bibtex]
@InProceedings{Takeda_2013_CVPR,
author = {Takeda, Yuichi and Hiura, Shinsaku and Sato, Kosuke},
title = {Fusing Depth from Defocus and Stereo with Coded Apertures},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2013}
}