High Dynamic Range Imaging

CS 180: Computer Vision and Computational Photography, Fall 2024

Rebecca Feng and Austin Zhu


In this project, we recover the full dynamic range of real-world scenes, which exceeds what a camera can capture in a single exposure. By combining multiple exposures of the same scene, we reconstruct a high dynamic range (HDR) radiance map and tone-map it for display. This project follows the algorithms outlined in Debevec and Malik 1997 and Durand and Dorsey 2002.

An example of the different exposures is shown below for the St. Louis Arch:

1/25 second exposure.
1/4 second exposure.
3 second exposure.
17 second exposure.

Recovering a Radiance Map from a Collection of Images

First, we want to construct an HDR radiance map from several LDR exposures. The image pixel values \(Z_{ij}\) (pixel \(i\), exposure \(j\)) are a function of the scene radiance \(E_i\) and exposure duration \(\Delta t_j\): \(Z_{ij} = f(E_i \Delta t_j)\). We ultimately want to solve for log radiance values, so we define \(g = \ln f^{-1}\), which gives the relation \(g(Z_{ij}) = \ln E_i + \ln \Delta t_j\).

Solving for g

In order to solve for \(g\), Debevec finds the values of \(g(z)\) and \(\ln E_i\) that minimize the following quadratic objective function: $$\mathcal{O} = \sum_{i=1}^N\sum_{j=1}^P \{w(Z_{ij})[g(Z_{ij}) - \ln E_i - \ln\Delta t_j]\}^2 + \lambda \sum_{z=Z_{min}+1}^{Z_{max}-1}[w(z)g''(z)]^2$$ noting that: \(w(\cdot)\) is a hat weighting function that downweights pixel values near the extremes of the sensor's range, $$w(z) = \begin{cases} z - Z_{min} & z \le \tfrac{1}{2}(Z_{min}+Z_{max}) \\ Z_{max} - z & z > \tfrac{1}{2}(Z_{min}+Z_{max}); \end{cases}$$ the second term penalizes curvature of \(g\) (weighted by \(\lambda\)) to keep it smooth; and the constraint \(g(128) = 0\) fixes the otherwise-arbitrary offset. This minimization reduces to solving an overdetermined linear system of equations, which we solve using the MATLAB pseudocode provided by Debevec.
Debevec pseudocode for solving for g.
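For reference, here is a sketch of that pseudocode ported to Python with NumPy, using `np.linalg.lstsq` in place of MATLAB's backslash operator (the default \(\lambda\) and the 256-level range are assumptions for 8-bit images):

```python
import numpy as np

def w(z, z_min=0, z_max=255):
    """Hat weighting function from Debevec & Malik: favors mid-range pixel values."""
    z = np.asarray(z, dtype=np.float64)
    mid = (z_min + z_max) / 2
    return np.where(z <= mid, z - z_min, z_max - z)

def gsolve(Z, log_dt, lam=10.0, n=256):
    """Solve for the log inverse response g and log radiances.

    Z: (N, P) integer array of pixel values (N sampled pixels, P exposures).
    log_dt: (P,) log exposure times.
    Returns g (n,) and lnE (N,).
    """
    N, P = Z.shape
    A = np.zeros((N * P + n - 1, n + N))
    b = np.zeros(N * P + n - 1)

    # Data-fitting rows: w(Z_ij) * [g(Z_ij) - lnE_i] = w(Z_ij) * ln(dt_j).
    k = 0
    for i in range(N):
        for j in range(P):
            wij = w(Z[i, j])
            A[k, Z[i, j]] = wij
            A[k, n + i] = -wij
            b[k] = wij * log_dt[j]
            k += 1

    # Fix the curve's offset by requiring g(128) = 0.
    A[k, n // 2] = 1
    k += 1

    # Smoothness rows: lam * w(z) * g''(z), with g'' ~ g(z-1) - 2 g(z) + g(z+1).
    for z in range(1, n - 1):
        wz = w(z)
        A[k, z - 1] = lam * wz
        A[k, z] = -2 * lam * wz
        A[k, z + 1] = lam * wz
        k += 1

    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:n], x[n:]
```

In practice, only a small sample of pixels (a few hundred) is needed to constrain \(g\), since the system has \(256 + N\) unknowns.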
Running our code on the arch photos, we obtain the following functions of g for our RGB channels:
g function curves for the arch photos.

Recovering Our Scene Radiances

Now that we have our \(g\) functions, we can find the log radiances by rearranging terms: \(\ln E_i = g(Z_{ij}) - \ln \Delta t_j\). Taking our weighting function \(w(\cdot)\) into account again and averaging over all exposures, we get Debevec's equation: $$\ln E_i = \frac{\sum_{j=1}^P w(Z_{ij})(g(Z_{ij})-\ln\Delta t_j)}{\sum_{j=1}^P w(Z_{ij})}$$ Then we simply exponentiate to construct our scene radiances. The results of applying this algorithm to the arch photos are shown below:
Radiance maps for the arch photos.
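The weighted average above vectorizes cleanly over a whole image stack; a minimal sketch for one color channel (the small epsilon guard against fully clipped pixels is our own addition):

```python
import numpy as np

def recover_radiance(images, g, log_dt):
    """Per-pixel weighted average of g(Z) - ln(dt), following Debevec's equation.

    images: (P, H, W) uint8 stack of one color channel across P exposures.
    g: (256,) log inverse response for that channel.
    log_dt: (P,) log exposure times.
    Returns E: (H, W) linear radiance map.
    """
    Z = np.asarray(images).astype(np.int64)
    wz = np.minimum(Z, 255 - Z).astype(np.float64)   # hat weighting w(z)
    num = np.sum(wz * (g[Z] - log_dt[:, None, None]), axis=0)
    den = np.sum(wz, axis=0)
    lnE = num / np.maximum(den, 1e-8)                # guard: all exposures clipped
    return np.exp(lnE)
```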

Converting This Radiance Map into a Display Image

Now that we have our radiance maps, we need to properly display our images using the results. This involves applying tone-mapping transforms to our radiance values, followed by stretching the intensity values to fill the whole [0, 255] range.

For our project, we consider the following three transformations: a global scale (linearly rescaling the radiance map), a global simple operator (mapping each radiance value \(E\) to \(E/(1+E)\)), and a local operator based on bilateral filtering (Durand and Dorsey 2002). Applying these transformations to our arch photos, we obtain the following results:
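The two global operators are one-liners; a minimal sketch (the stretch-to-uint8 helper and its zero-range guard are our own conventions):

```python
import numpy as np

def stretch(x):
    """Linearly stretch intensities to fill the whole [0, 255] range."""
    x = x - x.min()
    return (255.0 * x / np.maximum(x.max(), 1e-8)).astype(np.uint8)

def tonemap_global_scale(E):
    """Global scale: linearly rescale the radiance map for display."""
    return stretch(E)

def tonemap_global_simple(E):
    """Global simple operator: compress dynamic range with E / (1 + E)."""
    return stretch(E / (1.0 + E))
```

The \(E/(1+E)\) map compresses large radiances toward 1 while leaving small ones nearly linear, which is why it recovers detail in bright regions that the plain global scale blows out.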

More test images!!

Below we show the results for the rest of the provided test images:

Bonsai tree:
Chapel:
Garage:
Garden:
House:
Mug:
Window:
Additionally, we show the bilateral filtering decomposition into the detail and large-scale structure layers for the chapel images below:
Bilateral decomposition for chapel.
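The decomposition works by bilateral-filtering the log intensity to get the large-scale (base) layer, compressing only that layer, and adding the untouched detail layer back. Below is a brute-force sketch in that spirit; the filter parameters, target contrast, and gamma are illustrative assumptions, and a real implementation would use a fast bilateral approximation rather than this O(HW·r²) loop:

```python
import numpy as np

def bilateral(log_I, sigma_s=4.0, sigma_r=0.4, radius=8):
    """Brute-force bilateral filter (fine for small images, slow for large ones)."""
    H, W = log_I.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad = np.pad(log_I, radius, mode='edge')
    out = np.zeros_like(log_I)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - log_I[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out

def durand_tonemap(E_rgb, target_contrast=5.0):
    """Durand-style local tone mapping: compress the base layer, keep the detail."""
    I = E_rgb.mean(axis=2) + 1e-8            # intensity
    chroma = E_rgb / I[..., None]            # per-channel color ratios
    log_I = np.log(I)
    base = bilateral(log_I)                  # large-scale structure
    detail = log_I - base                    # detail layer, preserved as-is
    scale = np.log(target_contrast) / max(base.max() - base.min(), 1e-8)
    log_out = base * scale + detail - base.max() * scale
    out = np.exp(log_out)[..., None] * chroma
    return np.clip(out, 0.0, 1.0) ** (1 / 2.2)   # gamma for display
```

Subtracting `base.max() * scale` anchors the brightest base value at 1 before gamma, so highlights are mapped to white rather than clipped arbitrarily.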

Bells & Whistles

For our bells and whistles, we apply our algorithms to our own images. Below are the images we used:
1/10 second exposure.
1/5 second exposure.
1 second exposure.
Using these different exposures, we obtain the following HDR images: