I've recently been experimenting with a technique known as HDR photography (HDR stands for High Dynamic Range).
Here's the background: not long after I arrived in Kemsing, I took a photograph of the interior of the church to use on some printed materials. I used an ordinary digital camera and took a single photograph. The result is below:
Not a bad photo. (For those interested in church architecture, Kemsing is apparently a classic example of the earlier style of the Victorian architect Sir Ninian Comper.)
Here's the problem. Whilst the shadows cast by the pews create an interesting photograph, there are some very dark areas of the church where you cannot see the colours and details because they appear almost black, and there are some very light areas that are bleached out, so the details are lost there too.
Such is the nature of a camera sensor; shooting with old-fashioned 35mm film does not solve the problem. There is only a certain range of brightness that a digital sensor or a colour film can record. (Black and white film has a higher range, albeit still a limited one, which is why black and white photography creates slightly more lifelike images.) The human eye and brain, by contrast, can decode a much wider range of bright and dark scenes. (In very bright sunlight, or in very dark scenes, the pupil contracts or dilates, and the optical sensors known as rods and cones shift responsibility between themselves. For a normal scene with a range of brightness, the eye can take it all in at once.)
One answer to this is HDR photography. The idea is that you shoot several exposures of the same image, then digitally combine them. By underexposing one or more shots, you capture the detail in the bright areas; by overexposing, you capture the detail in the dark areas. When you combine them, you create an image that preserves the detail everywhere.
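For the technically curious, the combining step can be sketched in a few lines of code. This is a deliberately minimal version of "exposure fusion" (the approach described by Mertens et al., and the kind of thing tools like Enfuse do with far more sophistication, pyramids and all): each pixel in each exposure is weighted by how close it sits to mid-grey, so that blown-out and near-black pixels contribute little, and the exposures are then averaged with those weights. The function name and the tiny one-pixel "scene" are purely illustrative.

```python
import numpy as np

def exposure_fusion(images, sigma=0.2):
    """Blend differently exposed images of the same scene.

    Each pixel is weighted by its 'well-exposedness': values near
    mid-grey (0.5) get a high weight, while near-black or near-white
    pixels get a low one (a Gaussian bump centred on 0.5).
    Images are arrays of brightness values in the range 0..1.
    """
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])
    # Gaussian weight: highest for pixels near 0.5, near zero at 0 and 1.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0)           # normalise so weights sum to 1
    return (weights * stack).sum(axis=0)     # per-pixel weighted average

# Three fake "exposures" of a one-pixel scene: too dark, about right, too bright.
dark   = np.array([[0.05]])
middle = np.array([[0.50]])
bright = np.array([[0.95]])

fused = exposure_fusion([dark, middle, bright])
```

The key point is that the decision is made pixel by pixel: in a dark corner of the church, only the overexposed frame holds usable detail, so it dominates the average there; on a bright window sill, the underexposed frame dominates instead.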
So, we take three pictures of Kemsing church: one too dark, one in the middle, and one that is too bright. Thus:
Then we combine them, and the result looks like this:
As a photograph: lovely! Notice how, in contrast to the photograph I took previously (above), the window sills and the gold-coloured picture behind the table at the east end are no longer bleached out, and you can now see the texture and colour of the wooden pews because they are no longer black.
The interesting question is then to ask whether I have digitally manipulated the photograph so that it is no longer a true representation of what the camera saw. I would argue that I have not. Yes, I have used photo editing software to create this photograph. But I have not brushed areas of the photo, added features, removed features, or anything like that. All I have done is combine several pure photographs in such a way that the photo you get is a more accurate representation of what my eye saw. I would argue that this is therefore a better photograph than one based on a single exposure.
Comments
That is very effective. How did you combine the three shots?
HDR Software
I use a PC, and I have Photoshop Elements on it. Photoshop Elements 8 contains a feature called "Photomerge Exposure" which does precisely this.
Sadly, I only have version 7, and I wasn't going to upgrade just for that. But there is a free piece of software called Enfuse GUI which merges photographs with varied exposures. That is the only thing it does, although you can change a fair few settings if you want to fine-tune the process. It then saves the resulting image as a .TIFF file, which means you lose no detail through compression.