Light Field Super-Resolution
Ruprecht-Karls-Universität Heidelberg

Spatial and Angular Super-Resolution in a 4D Light Field


Publications



Refereed Articles and Book Chapters

Variational Light Field Analysis for Disparity Estimation and Super-Resolution
S. Wanner, B. Goldluecke
In IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013 (to appear). [bib] [pdf]


Refereed Conference Papers

Spatial and Angular Variational Super-Resolution of 4D Light Fields
S. Wanner, B. Goldluecke
In European Conference on Computer Vision (ECCV), 2012. [bib] [pdf] [poster]





Contributions

  • Simultaneous spatial and angular super-resolution for 4D light fields
    \(\rightarrow\) such data can be obtained, e.g., from recent plenoptic cameras

  • Subpixel-accurate correspondence information is required
    \(\rightarrow\) provided by our depth map algorithm tailored to Lumigraphs

  • A single convex inverse problem is solved to arrive at the result
    \(\rightarrow\) the first time view synthesis has been formulated in this way



Figure: a 4D light field or Lumigraph.



Image Formation Model

Input:

  • Images \(v_i:\Omega_i \to \mathbb{R}\) of a scene with depth maps \(d_i:\Omega_i \to \mathbb{R}\) and camera projections \(\pi_i:\mathbb{R}^3 \to \Omega_i\).

  • Each pixel integrates intensities from a collection of rays; the corresponding point spread function (PSF) is modelled by a blur kernel \(b\).

Output:
  • Synthesized view \(u:\Gamma \to \mathbb{R}\) of the light field from a novel view point, represented by a camera projection \(\pi: \mathbb{R}^3 \to \Gamma\), where \(\Gamma\) is the image plane of the novel view.

  • This projection, together with the scene geometry, induces a transfer map \(\tau_i:\Omega_i\to\Gamma\) from each input view to the novel view, as well as a binary visibility mask \(m_i\) marking unoccluded points.
Figure: not all points \(x \in \Omega_i\) are visible in \(\Gamma\) due to occlusion, which is described by the binary mask \(m_i\) on \(\Omega_i\). In the illustration, \(m_i(x) = 1\), while \(m_i(x') = 0\).
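
To make the geometry concrete, here is a minimal NumPy sketch of how a transfer map could be assembled from depth and the two camera projections. It is not the authors' implementation: unproject_i and project are hypothetical pinhole helpers, and the actual method derives the correspondences from the depth map algorithm mentioned above.

    import numpy as np

    def transfer_map(d_i, unproject_i, project):
        # tau_i(x) = pi( pi_i^{-1}(x, d_i(x)) ): lift every pixel x of view i
        # to 3D via its depth, then project into the novel view's plane Gamma.
        H, W = d_i.shape
        ys, xs = np.mgrid[0:H, 0:W]
        X = unproject_i(xs, ys, d_i)   # hypothetical: 3D points, shape (3, H, W)
        return project(X)              # coordinates in Gamma, shape (2, H, W)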



Super-resolved View Synthesis as a Variational Inverse Problem

In the ideal, noise-free case, the image formation model implies that the super-resolved novel view \(u\) is related to each input view by \(b*(u\circ\tau_i)=v_i\). With real-world input, this relation is never satisfied exactly, so we instead minimize the variational energy corresponding to a MAP estimate,
\begin{equation} \label{eq:energy} E(u) = \sigma^2 \int_\Gamma \|Du\| + \sum_{i=1}^n \underbrace{\frac{1}{2}\int_{\Omega_i} m_i \left( b*(u\circ\tau_i) - v_i \right)^2 \,dx}_{=:\, E^i_{\text{data}}(u)}, \end{equation}
where the total variation of \(u\) acts as a regularizer. This model is convex and can therefore be minimized globally and efficiently.
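
The relation \(b*(u\circ\tau_i)=v_i\) and the masked residual translate directly into code. A sketch continuing the one above, assuming \(\tau_i\) is stored as a coordinate array and using SciPy for warping and blurring; the helper names are ours:

    from scipy.ndimage import map_coordinates, convolve

    def form_view(u, tau_i, b):
        # b * (u o tau_i): sample u at tau_i (bilinear), then blur with kernel b.
        warped = map_coordinates(u, tau_i, order=1, mode='nearest')
        return convolve(warped, b, mode='nearest')

    def data_residual(u, tau_i, b, v_i, m_i):
        # Masked residual m_i * (b*(u o tau_i) - v_i) entering the data term.
        return m_i * (form_view(u, tau_i, b) - v_i)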

Figure: illustration of the terms in the super-resolution energy. Shown are the ground truth depth map for a single input view, the resulting forward and backward warps, and the visibility mask \(m_i\). White pixels in the mask mark points in \(\Omega_i\) which are also visible in \(\Gamma\).
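
The energy itself is then a direct transcription of the formula. Another sketch continuing the code above, with binary masks assumed and the total variation approximated by forward differences:

    def tv(u):
        # Isotropic total variation of u via forward differences.
        dx = np.diff(u, axis=1, append=u[:, -1:])
        dy = np.diff(u, axis=0, append=u[-1:, :])
        return np.sqrt(dx**2 + dy**2).sum()

    def energy(u, views, sigma2):
        # E(u) = sigma^2 * TV(u) + sum_i 1/2 * int m_i (b*(u o tau_i) - v_i)^2,
        # where views is a list of (v_i, tau_i, m_i, b) tuples.
        e = sigma2 * tv(u)
        for v_i, tau_i, m_i, b in views:
            r = data_residual(u, tau_i, b, v_i, m_i)  # already masked; m_i binary
            e += 0.5 * (r ** 2).sum()
        return e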



Discretization and Optimization

In order to find a minimizer, we require the functional derivative of the inverse problem above. It is well known in principle, but made slightly more complicated by the different domains of the integrals. Transforming all integrals to the domain \(\Gamma\), we obtain
\begin{equation} \label{eq:energy_gradient_transformed} dE^i_{\text{data}}(u) = \left( \tilde m_i \; \bar b * \left( b * (u \circ \tau_i) - v_i \right) \right) \circ \beta_i \end{equation}
with \(\tilde m_i := m_i \,\| \det(D\tau_i) \|^{-1}\), where \(\bar b\) denotes the mirrored blur kernel and \(\beta_i\) the backward warp from \(\Gamma\) to \(\Omega_i\). We can then apply standard convex optimization techniques to minimize the energy.
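
Continuing the sketch, the transformed derivative can be read off the formula almost verbatim; beta_i and m_tilde_i are assumed to be precomputed (hypothetical names), and \(\bar b\) is realized by mirroring the kernel:

    def data_gradient(u, v_i, tau_i, beta_i, b, m_tilde_i):
        # dE_data^i(u) = ( m~_i * (bbar * (b*(u o tau_i) - v_i)) ) o beta_i
        r = form_view(u, tau_i, b) - v_i                            # residual on Omega_i
        g = m_tilde_i * convolve(r, b[::-1, ::-1], mode='nearest')  # bbar * r, weighted by m~_i
        return map_coordinates(g, beta_i, order=1, mode='nearest')  # pull back to Gamma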

Figure: super-resolution algorithm for minimization of the energy. The method is a specialization of FISTA, where the inner loop computes the proximal map for the total variation using the Bermudez-Moreno algorithm. The operator \(\Pi_{\sigma^2 \mathbb{E}}\) denotes a point-wise projection onto the ball of radius \(\sigma^2\).
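
For concreteness, a generic FISTA loop of this shape is sketched below; the TV proximal map prox_tv is left as a black box (e.g. a few Bermudez-Moreno iterations), and L is a Lipschitz constant for the gradient of the data term:

    def fista(u0, grad, prox_tv, L, n_iter=100):
        # Accelerated forward-backward splitting (FISTA).
        u, y, t = u0.copy(), u0.copy(), 1.0
        for _ in range(n_iter):
            u_next = prox_tv(y - grad(y) / L)   # gradient step + TV proximal map
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = u_next + ((t - 1.0) / t_next) * (u_next - u)  # momentum step
            u, t = u_next, t_next
        return u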



Results on Synthetic Light Fields

Figure: results on synthetic light fields.



Results on Light Fields from a Plenoptic Camera

Figure: results on light fields from a plenoptic camera.



Super-resolved epipolar plane image
Figures: \(5\times 5\) input views; super-resolved to \(9\times 9\); super-resolved to \(17\times 17\).



Reconstruction quality
Method                                 Scene Demo   Scene Motor
Original resolution                         36.91         35.36
\(3\times 3\) super-resolution              30.82         31.72
\(3\times 3\) bilinear interpolation        23.89         22.84

PSNR in dB for the synthesized center view at original resolution, with \(3\times 3\) super-resolution, and with \(3\times 3\) bilinear interpolation. The proposed super-resolution framework yields significantly better results than bilinear interpolation.
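
For reference, PSNR follows its usual definition; a minimal NumPy version, with peak the maximum possible intensity (e.g. 255 for 8-bit images):

    def psnr(ref, est, peak=255.0):
        # Peak signal-to-noise ratio in dB between reference and estimate.
        mse = np.mean((ref - est) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)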



Depth maps
Figure: depth maps for the scenes Demo and Motor.



Last update: 28.05.2013, 22:32