Depth Estimation Through a
Generative Model of Light Field Synthesis
Abstract
Light field photography captures rich structural
information that may facilitate a number of traditional
image processing and computer vision tasks. A crucial
ingredient in such endeavors is accurate depth recovery.
We present a novel framework for recovering a high-quality,
continuous depth map from light field data. To this end, we
propose a generative model of the light field that is fully
parametrized by its corresponding depth map. The model allows for the
integration of powerful regularization techniques such as
a non-local means prior, facilitating accurate depth map
estimation. Comparisons with previous methods show that
we are able to recover faithful depth maps with much
finer details. In a number of challenging real-world
examples we demonstrate both the effectiveness and
robustness of our approach.
Overview of our Method
[Figure: overview of the method. Top row: depth estimation along x, s; coherence map; depth estimation along y, t; thresholded, combined. Bottom row: center sub-aperture image; smooth propagation; result without NLM; final result with NLM.]
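The generative model synthesizes sub-aperture views of the light field from a single depth map: under a Lambertian assumption, a pixel in the center view reappears in a neighboring view shifted by a disparity determined by its depth and the angular offset of that view. The sketch below illustrates this core idea; the function name `synthesize_view`, the nearest-neighbor backward warping, and the use of a precomputed disparity map are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def synthesize_view(center, disparity, du, dv):
    """Warp the center sub-aperture image to the view at angular
    offset (du, dv). Each target pixel samples the center view at a
    location shifted by its disparity (backward warping with
    nearest-neighbor sampling, for simplicity)."""
    h, w = disparity.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Source coordinates in the center view; clamp to image bounds.
    src_x = np.clip(np.round(xs + du * disparity).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + dv * disparity).astype(int), 0, h - 1)
    return center[src_y, src_x]
```

In a framework like this, depth estimation amounts to inverting the model: find the depth (disparity) map whose synthesized views best match the recorded light field, with a regularizer such as a non-local means prior on the depth map.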