Another effort in my midwinter photography exploration was an exercise in computational photography. I’ve already talked about the Z-7 evaluation and film transfer. Today I’ll talk a bit about focus stacking, which Nikon calls “Focus Shift”.
This is a technique that Vincent Versace presented in his Welcome to Oz book, which is now out of print and selling used at a premium. The original technique was to make multiple captures of a scene (camera on tripod) in which focus, exposure, aperture, and/or shutter speed were varied, then combine those captures into a single image. It was neat to put two different planes of focus together into a single image using the usual masking techniques in Photoshop. If you were clever and sensitive enough to make it believable, the final image represented reality in a way that satisfied perceptual expectations, but was well beyond a straight capture in camera. I never went on to the additional step of image harvesting, where multiple views or angles are combined, just because it seemed like photomontage to me, but Vincent has pulled it off quite well.
In these latest Nikon cameras, the D850 and Z series, the autofocus system has a function that will step through a range of focus, capturing up to 300 images at a selectable step size. This is no different in spirit from automated exposure bracketing, which has been a DSLR feature for many years and is used for High Dynamic Range (HDR) photographs. It’s just auto-adjusting focus instead of exposure.
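To see why the step size matters, here’s a conceptual sketch, nothing to do with Nikon’s firmware, using the standard thin-lens depth-of-field formulas. The lens, aperture, circle of confusion, and distances are illustrative assumptions; the point is that each focus step’s zone of acceptable sharpness needs to overlap the next one, or the merged stack will have soft bands.

```python
# Conceptual sketch: check that consecutive focus steps have overlapping
# zones of acceptable sharpness. Uses standard thin-lens DoF formulas.
# Lens (50mm), aperture (f8), CoC (0.03mm), and distances are assumptions.

def dof_limits(s_mm, f_mm=50.0, N=8.0, c_mm=0.03):
    """Near/far limits of acceptable focus for a subject at s_mm."""
    H = f_mm**2 / (N * c_mm) + f_mm                      # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return near, far

# Walk focus outward from 500 mm in 30 mm increments, the way Focus Shift
# steps the focus ring, and report whether each slice overlaps the last.
prev_far = None
for i in range(10):
    s = 500.0 + i * 30.0
    near, far = dof_limits(s)
    note = "" if prev_far is None else f", overlaps previous: {near <= prev_far}"
    print(f"step {i}: focus {s:.0f} mm, sharp {near:.0f}-{far:.0f} mm{note}")
    prev_far = far
```

With these example numbers each step carries roughly 40 mm of sharp depth, so a 30 mm step overlaps comfortably; a larger step would start leaving gaps.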
In the B&H Video, 21st Century Composition Theory (Day 2): ExDR Extending the Dynamic Range of Focus and Bokeh (the quality of blur) and How to Shoot For It</em> Vincent suggests using all 300 available images and a small step size since memory card space is free. Helicon Focus is used to combine the images using an edge detection algorithm. Again, it’s easy to combine stacks with different settings, like f5.6 or f8 for optimal image sharpness with f1.8 for best background blur (bokeh).
I set this little grouping up quickly, since I was out to test the technique rather than make a finished photograph. The result is actually three renderings blended: an f8 stack, plus an f2.8 stack converted from RAW twice, once at a neutral white balance and once at a cool white balance for the shadows, to enhance depth through color perception.
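The white balance blend itself is simple in principle. Here’s a rough sketch of the idea done outside Photoshop, just for illustration: a shadow-weighted luminosity mask pulls the cool rendering into the dark areas of the neutral rendering. The file names and the mask’s shaping are assumptions, not the exact recipe I used.

```python
# Rough sketch of a shadow-weighted white balance blend. File names and
# the mask gamma are assumptions, illustrating the idea only.
import cv2
import numpy as np

neutral = cv2.imread("f2p8_stack_neutral.tif").astype(np.float32)  # hypothetical
cool = cv2.imread("f2p8_stack_cool.tif").astype(np.float32)        # hypothetical

# Luminance of the neutral render scaled 0..1, inverted so shadows weigh most.
lum = cv2.cvtColor(neutral, cv2.COLOR_BGR2GRAY) / 255.0
shadow_mask = (1.0 - lum) ** 2.0                       # gamma > 1 limits it to deep shadows
shadow_mask = cv2.GaussianBlur(shadow_mask, (31, 31), 0)  # soften the transitions

blended = neutral * (1 - shadow_mask[..., None]) + cool * shadow_mask[..., None]
cv2.imwrite("blended.tif", blended.astype(np.uint8))
```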
As an experiment, the result is interesting, though not a great image artistically for me. I got a hyper-realism I wasn’t expecting, with objects really popping in the image. If you look closely, you can see that the plane of focus is not natural, with focus extending back at the end of the plastic cup and forward into the shadows to the side.
It’s just one experiment, and I expect I’ll try more, since I’m often frustrated when making images by depth of field that is either too deep or too shallow. Focus stacking lets me extend sharp focus anywhere in the frame, which is better than the alternative technique I often use: shooting sharp everywhere and selectively adding artificial lens blur where I want it, to direct the eye through the image.
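That alternative is worth a sketch too. In simplified form it’s just blending a blurred copy of the image back through a hand-painted mask; here a Gaussian blur stands in for Photoshop’s Lens Blur filter, and the file names are placeholders.

```python
# Simplified "shoot sharp, blur selectively" sketch: blend a blurred copy
# through a hand-drawn mask. Gaussian blur stands in for a true lens blur;
# file names are placeholders.
import cv2
import numpy as np

sharp = cv2.imread("sharp_everywhere.tif").astype(np.float32)  # hypothetical
mask = cv2.imread("blur_mask.png", cv2.IMREAD_GRAYSCALE)       # white = blur here
mask = (mask / 255.0)[..., None]

blurred = cv2.GaussianBlur(sharp, (0, 0), sigmaX=8)  # stand-in for real lens blur
result = sharp * (1 - mask) + blurred * mask
cv2.imwrite("directed_focus.tif", result.astype(np.uint8))
```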
All of these techniques are good examples of computational photography with large sensor cameras. The smartphone makers have really embraced these techniques, seamlessly combining multiple exposure values and views from multiple lenses. As is appropriate for this more deliberate style of image making, I’m using these techniques in controlled ways, with special purpose software like Helicon Focus and Adobe Photoshop to align and blend the frames. I think we’ll see more automation like Focus Shift to come, capturing multiple versions of an image to be combined, either in camera or in post processing, to create synthetic images.