Perceptual Choices

Deciding Without Thinking

The original premise of this site is that it is possible to actually make better decisions. That’s why I called it “On Deciding . . . Better” to begin with. After all, we are the agents responsible for our actions, so who else is responsible for making better choices? I’ve written about the false promises made by Decision Theory, which asserts that if choices are made more rational, decisions will be more successful. The problem isn’t the mathematical basis of Decision Theory; it’s the difficulty of implementing it when people actually need to make decisions in the real world. There are valuable habits and tools in the Decision Theory literature, but it’s clear to me that when our brains are engaged in rapidly making decisions, we generally are not aware of the decision process. If we’re not aware of the decisions, then those deliberative executive function mechanisms can’t be brought online while they are being made.

Perceptual Decision Making

This is the Kanizsa Triangle, created by the Italian psychologist Gaetano Kanizsa in 1955. I like it as an example because it is so simple and yet so stable. The brain creates the contours of the second white triangle. When I first heard this kind of illusion being called a “perceptual choice”, I rejected the notion. After all, a choice is an act of will, of mental agency.

Yet calling this “a perceptual choice” makes a lot of sense from a brain mechanism point of view. A simple set of shapes and lines is projected onto the retina and relayed back through the visual cortex and the surrounding cortical areas. That part of the brain makes an unshakable choice to interpret the center of the figure as containing a triangle. Similarly, when I see the face of my son, a different area of cortex decides who it is. Further, circuits are activated with all kinds of associated information, some brought up into consciousness, others not, but ready to be used when needed.

Are We Responsible for Perceptual Choice?

If perceptual choice is like most other choices, like choosing a breakfast cereal or a spouse, it seems I’m advocating abandoning a lot of perceived responsibility for one’s own actions. It seems that we walk through the world mostly unaware of how perceptions are constructed and don’t have access to why we act the way we do. Actions are based on perceptions that were chosen without awareness in the first place. And it goes without saying that we have no responsibility for the perceptions and actions of everyone around us. Their brains, wired mostly in the same way as ours, chose how to perceive our words and our acts.

It seems to me that to make better decisions there have to be rather deep changes in those perceptual brain processes. Any decision tools have to become deeply embedded in how our brains work; any rules to guide how we perceive, choose, or act must lie as deep habits in those automatically functioning circuits of the brain. Some, like the Kanizsa Triangle, are in the very structure of the brain and can’t be changed. Others are strongly influenced by experience and deepened by practice.

On Finding Style

I’ve been enjoying A.B. Watson’s website and his strong points of view on photographic vision. This essay at the Leica Camera Blog puts it very strongly, emphasizing consistency in process, camera, lens and workflow. Watson’s work has a very strong style in what he chooses to present on the website and on Instagram.

It’s interesting that Watson works as a professional photographer doing product shots, fashion shots and who knows what else. He doesn’t really talk about it, and the web is pretty much free of his identifiable professional work. It’s the inverse of many professional photographers who promote their commercial work and its identifiable style, with some personal projects thrown in.

I’m fortunate never to have had to support myself with photography. It’s simply a creative outlet for me. I learned a long time ago to simply point my camera at anything that looked visually interesting. Over the years I’ve developed a habit of seeing pattern and structure in what I think of as “small landscapes”, like the curb and bag in the image here. My photography developed in tandem with my career in science, which early on featured a lot of light microscopy. So I was shooting through the microscope to document my observations for publication and was fortunate to have a darkroom at my disposal right through the advent of usable DSLRs.

My work as a photo-microscopist clearly pushed my style toward seeking sharpness in my subjects and valuing image quality very highly. I’ve learned to use out-of-focus areas in my images, but most of them show a sharpness and detail that keep them a very real reflection of the visual reality that presented itself to the camera.

My Flickr Photostream dates back to 2005 and provides a visual history of my work. For a long time I was really taken by color, producing images that to my eye now look oversaturated and overcooked. It was sometime around 2010 that I returned to my monochrome image roots and became more cinematic in my choice of lighting and rendering. I know that Vincent Versace’s Welcome to Oz was published a bit before that, but I remember seeing it in the bookstore and thinking it looked crazy complicated. So it probably was around 2010 that I worked my way through the chapters one by one, followed by Oz to Kansas in 2012.

Many times over the years I’ve worried that this photographic vision was too limited. You can see my little ventures into street photography and scenics on Flickr. I prove I can do it, but I feel less affinity with the images. They’re me pretending to be those other photographers. The essence of personal vision is following that deep connection to the images as expression. I may not really have much to say, but the images are mine.

Travels with the Nikon Z7

Last month I did some travel for work and decided to bring along the new Nikon Z7 instead of my Leica M10. The M10 usually comes along on any kind of trip where the focus is work and not photography. If I get an afternoon free to walk the city I’m in, the M10 with the 50mm lens comes out of the bottom of my travel backpack and I wander. These excursions provide most of the travel images I’ve published over the years. My standing joke is that no matter where I travel in the world, I come back with the same images of cracked walls, asphalt and alleys. I actually have conventional travel shots which I’ve posted from time to time, but most of them are iPhone images and more likely to end up quickly sent to Instagram, where I am of course @jjvornov.

I bought the Z7 to replace my D850. The D850 was my photographic expedition camera, as it was much more flexible with wide to telephoto lenses, for example shooting landscapes from a tripod. I could get shots with the Nikon 14mm zoom that are impossible with the 50mm lens and a handheld Leica. The Z7 with the 24-70mm f4 zoom is about the same weight as the M10, so I thought it might work as a more flexible travel camera with wider angle, a bit longer reach and optical stabilization in a lightweight package, certainly better than the D850 with the 24-120 f4 zoom that I’ve used over the last two years.

In a few hours walking around San Francisco, I captured a few nice images and got to know the camera better. These cameras are complicated, and I use a very small fraction of their capabilities. In truth, the buttons and menus get in the way, even as I learn which settings need to be changed when. And of course there’s the risk of changing a setting at one point, forgetting to change it back, and getting unexpected behavior from the camera.

The 24-70 lens is good as a travel zoom, but it’s not as impressive as the 50mm f1.8 that I used on earlier outings with the camera. Renderings are a little flat compared to the 24-120 f4 F-mount lens used on the D850 as a midrange zoom, though that whole kit was way bigger and way heavier. I’m hoping the wide angle zoom that’s coming soon will prove to be an outstanding lens that I can use in combination with the really nice 50mm. I’ll note that I find the RAW conversions by Nikon’s own Capture NX-D to be better in detail and contrast than those by Capture One, which serves as my cataloging software these days.

So for my city walks, I’ll be sticking to the Leica M10.

Mental Causation

A few years ago I read George Soros’ small book The Soros Lectures: At the Central European University, in which he describes how he came to conceptualize reflexivity in markets: the idea that there is a feedback loop between what people think and market reality, which in turn affects what people think. It’s mental causation, but of course it’s just a manifestation of the way brains interact socially through language; we affect each other with consequences in the real world.

In copying over notes from last year’s Hobonichi, I found a note on a similar idea of inducing negative opinions. When you merely bring up a topic with negative connotations for others, they are compelled to fill in the blanks. When I say “It’s like comparing apples and …”, you can’t help but think oranges. The word rises unbidden to mind, caused by my speech. It’s a powerful effect to have on another person because it’s reflexive and automatic. So by my mentioning a name and situation, your negative feelings, already in place, are reflexively activated, causing you to think about those negative attitudes. Your brain does it, but I’ve directly caused it by my actions.

Just a thought about how powerful we are with words alone.

Nikon Focus Stacking and Computational Photography

Another effort in my midwinter photography exploration was an exercise in computational photography. I’ve already talked about the Z7 evaluation and film transfer. Today I’ll talk a bit about focus stacking, which Nikon calls “Focus Shift”.

This is a technique that Vincent Versace presented in his Welcome to Oz book, which is now out of print and selling at a premium used. The original technique was to use multiple captures of a scene (camera on tripod) in which focus, exposure, aperture, and/or shutter speed were varied. These captures are then combined into a single image. It was used to put two different planes of focus together into a single image using the usual masking techniques in Photoshop. If you’re clever and sensitive enough to make it believable, the final image represented reality in a way that satisfied perceptual expectations but was well beyond a straight capture in camera. I never went to the additional step of image harvesting, where multiple views or angles are combined, just because it seemed like photomontage to me, but Vincent has pulled it off quite well.

In these latest Nikon cameras, the D850 and Z series, the autofocus system has a function that steps through a range of focus, capturing up to 300 images at a variable step size. This is no different from automated exposure bracketing, which has been a DSLR feature for many years and is used for High Dynamic Range (HDR) photographs. It’s just auto-adjusting focus instead of exposure.

In the B&H video 21st Century Composition Theory (Day 2): ExDR, Extending the Dynamic Range of Focus and Bokeh (the quality of blur) and How to Shoot For It, Vincent suggests using all 300 available images and a small step size, since memory card space is free. Helicon Focus is used to combine the images using an edge detection algorithm. Again, it’s easy to combine stacks with different settings, like an f5.6 or f8 stack for optimal image sharpness with an f1.8 stack for best background blur (bokeh).
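As a rough illustration of what such a merge does under the hood, here is a minimal Python sketch of depth-of-field stacking: for each pixel, keep the frame with the most local detail. This is not Helicon Focus’s actual algorithm, just the basic idea, and it assumes a hypothetical folder of already-aligned, same-size captures.

# Minimal focus-stacking sketch: pick, per pixel, the frame with the most
# local sharpness (Laplacian magnitude). Assumes the frames are already aligned.
import glob
import cv2
import numpy as np

frames = [cv2.imread(p) for p in sorted(glob.glob("stack/*.jpg"))]  # hypothetical folder

def sharpness(img):
    # Sharpness map: absolute Laplacian of the grayscale image, blurred
    # slightly so the per-pixel choice is less noisy.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
    return cv2.GaussianBlur(lap, (9, 9), 0)

maps = np.stack([sharpness(f) for f in frames])   # shape (N, H, W)
best = np.argmax(maps, axis=0)                    # index of sharpest frame per pixel

stack = np.stack(frames)                          # shape (N, H, W, 3)
h, w = best.shape
result = stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]

cv2.imwrite("stacked.jpg", result)

Dedicated tools add alignment, halo suppression and smarter blending at depth transitions, which is why the purpose-built software is worth using; the sketch only shows the core selection step.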

I set this little grouping up really quickly since I was out to test the technique. It’s actually three images blended: an f8 stack plus an f2.8 stack converted from RAW twice, once at a neutral white balance and once at a cool white balance for shadow and depth enhancement via color perception.

As an experiment, the result is interesting, though not such a great image artistically for me. I got a hyper-realism that I wasn’t really expecting, with objects really popping in the image. If you look closely you can see that the plane of focus is not natural, with focus extending back to the end of the plastic cup and forward into the shadows to the side.

It’s just one experiment and I expect I’ll try more, since I’m often frustrated in making images by having focus that is either too deep or too narrow. This allows extension of sharp focus to anywhere in the frame, better than the alternative technique I often use of shooting sharp everywhere and selectively adding artificial lens blur where I want it, to direct the eye in the image.

All of these techniques are good examples of computational photography with large sensor cameras. The smartphone makers have really embraced these techniques, seamlessly combining multiple exposure values and views from multiple lenses. As is appropriate for this more deliberate style of image making, I’m using these techniques in controlled ways, with special purpose software like Helicon Focus and Adobe Photoshop to align and blend the captures. I think we’ll see more automation like Focus Shift, capturing multiple versions of an image to be combined either in camera or in post-processing to create synthetic images.

Film Transfer With the Nikon ES-2

Yesterday, I wrote about how the Z7 might be such an all-around success that it could replace the digital Leica rangefinders. It may replace my aging Minolta film scanner as well. During my winter break photography project, looking for new ideas and techniques, I ran across two presentations (here and here) by Vincent Versace on the Nikon ES-2 Film Digitizing Adapter. By the way, watch both. The first is a clean, more formal presentation; the second is a more typical Versace philosophy-of-photography course about digital, film and the meaning of life.

I got the ES-2 from B&H Photo in NY and a 60mm Nikkor Micro from KEH in Atlanta. I recommend both companies for honesty, customer service and quality products. So last night I quickly digitized a few images just to try it out. It turns out it’s been at least two years since I’ve shot any film, so my eye is attuned to the current quality of digital black and white. But this wasn’t a test of film; it was a tryout of digital transfer versus film scanning.

Using the ES-2 reminds me of a time long ago when we made slide duplicates. The only way to get a copy of a 35mm slide used for lectures or scientific presentations was to bring it to the lab for copying. A simple method was this kind of adapter, where you took a photo of the slide on slide film. This is the same idea, just taking a photo of the negative and moving from analog to digital.

The camera gets set to base ISO, f8, autofocus on the emulsion side. The post-processing is simple: just invert the curve in any processing program and the negative is converted. Vincent uses Picture Control in camera or Capture NX-D.
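If you’d rather do the inversion outside of Nikon’s tools, here’s a minimal sketch in Python using numpy and Pillow, assuming the capture has been exported as a grayscale TIFF; the filenames are hypothetical.

# Invert a digitized B&W negative: clear film base goes dark, dense highlights go light.
import numpy as np
from PIL import Image

neg = np.asarray(Image.open("negative.tif").convert("L"), dtype=np.float32)

# Flip the tone curve.
pos = 255.0 - neg

# Stretch to the full tonal range so the positive isn't flat.
pos = (pos - pos.min()) / (pos.max() - pos.min() + 1e-6) * 255.0

Image.fromarray(pos.astype(np.uint8)).save("positive.tif")

Color negatives need an extra step to remove the orange mask, and real conversions benefit from per-channel curves, but for black and white a simple inversion and contrast stretch gets you most of the way there.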

Two very obvious conclusions:
1. It’s way faster to capture an image with a DSLR than with a slide scanner. The scanner is clearly an older technology, capturing point by point what can now be done all at once.
2. The improved resolution doesn’t really matter much. It’s a photograph of grain, which at 100% clearly shows how much more the D850’s 45 MP sensor can resolve compared to Tri-X. So compared to the scanner, the grain is crisper with the DSLR capture, but the image really isn’t any different. The dynamic range of the D850 was wider than every negative I tried, but I could see how some negatives might benefit from a two-exposure combination.

The process definitely makes shooting film more attractive. I’ll probably bring a film camera on an upcoming trip to capture some film-appropriate images. It’s more than nostalgia, since film provides a rendering that is even more distinct from digital than it was a few years ago. Digital imaging in the same format is now into large-format quality territory with the improvements we’ve seen in sensors and lenses. So for landscape and documentary work, digital seems far and away the best medium. But for that gritty, you-are-there sense, 35mm film still provides its quick sketch of the fall of light and sense of movement.

Can the Nikon Z7 Replace my Leica M10?

On New Years Day, we visited the Renwick Gallery in Washington DC. This is a smaller museum in the Smithsonian focusing on American Art and Craft. It’s a joy when a museum encourages photography.

A welcome sight

So I took full advantage of the low light capability of the Z7 to collaborate with the artists on view to create some of my own art based on their art.

The Z7 works well as a travel camera. With the 50mm f1.8 attached, it weighs just as much as my Leica M10 with the Summicron APO 50mm f2.0 ASPH. It occupies more space, but it’s no more attention grabbing than the Leica. I don’t have the Nikkor 24-70mm f4 zoom, but extended that lens does look a bit more like a professional piece of gear. Shooting mode is generally aperture priority, minimum shutter speed 1/125 second and Auto ISO as high as it’ll go. Both cameras are much better at setting exposure than I am, and I can blend processed RAW conversions to bring down highlights and lighten blocked-up shadows as long as the image isn’t blown out.

Interestingly, I’ve found over the last few years that using live view is more acceptable in public than looking through a viewfinder. Maybe it’s just that we’re used to seeing smartphone photography and viewfinder-based cameras seem more intrusive. Maybe it’s that you can see the photographer’s face, so the camera attracts less attention. With live view on either the Nikon Z7 or the Leica M10, it’s possible to take photos while looking at the scene and only glancing at the subject.

The Z7 adds several features that aid unobtrusive shooting. The LCD on the rear tilts, so the camera can be low on a table or at the waist, out of the line of sight. For people, autofocus with face recognition keeps subjects sharp without careful framing. Silent shooting on the Z7 provides a completely silent experience, again avoiding the attention that comes with that characteristic shutter sound. The Z7 also has a higher pixel count and sensor stabilization. There are other advanced features of course, but they’re not in use for this kind of travel photography.

So the question arises as to whether I could sell off all of my other cameras (M10, Monochrom, D850) and just use the Z7 exclusively. I’ll need more data on that, but I think I’ll be selling the D850, as I don’t like its weight and bulk, and my Nikon glass will work with the Z7, so the D850 is redundant. Next, I’ll need to try a Leica M to Nikon Z adaptor to see how Leica glass works with the Z7. But I’ll need to look critically at my image library and do some camera rotation to decide whether the compact rangefinder has real advantages over the technologically advanced Nikon Z7.

Film? It’s not going anywhere and in fact I plan to try some digital negative transfers with Nikon’s new ES-2 adaptor on the Z7.

When Blogs Were Journals

Looking back on my history of writing on the internet, I came across this nice personal history written by another early EditThisPage user, Frank McPherson who wrote Notes From The Cave.

I don’t think it was the change to titles that did in blogging; it was the move to writing articles rather than journaling, a larger conceptual shift. Looking back through those early sites, they were mostly frequently posted links, comments and quick thoughts. And indeed, this is a space now occupied by Twitter and other social media. Social connection in the early days of blogging was easy since the world was small. Twitter and Facebook provided scale for both personal and private networking, so it’s natural that the infinity of small island blogs like this one faded away.

Over the years, this site has been found because of long form reviews or observations that rank in Google searches. Any other readers are long-time net friends and family. The photos I feature on most pages are decorative; they can’t be found by search engines. I have Flickr for my photographic social network, a place that seems to be recovering from Yahoo’s neglect at this point.

19 Years of Deciding . . . Better

As Hal Rager at Blivet points out, we’ve been blogging for 19 years now since Dave Winer’s EditThisPage. I can’t say it’s been continuous over that time, but it’s been an ongoing project. I always get a kick out of reading the first page I wrote here: Imagination as Simulation

Over the years, I’ve started and stopped work on a long form version of ODB, which I guess most of the world would call a “book”, but the scope of the project has always proved to be overwhelming. The work goes on behind the scenes, with lots more reading on brain mechanisms of deciding, as you can see looking back over the 10 posts I made in 2018.

I think I can see the outlines of a workable synthesis, but the way the brain works seems to be very, very alien compared to what we perceive. It’s not surprising, since neurons and their networks aren’t accessible to conscious awareness, so they work very differently from the way we would guess or from any analogy to mechanical devices. It’s been clear for a long time, given optical illusions, that most of awareness occurs automatically. In a way it looks like Sherrington was right, way back in the 1920s, that it’s all down to reflexes that act to govern the body via analogues of the external environment. I think we now understand that awareness is built up from differences between the expected state and the contents of sensory input.

And Scully the cat’s brain mechanisms seem to be very much like our own, minus the symbolic environment we create through culture, since we are the social animals born with the mechanisms in place to use language.

And yes, it’s a bit scary to realize that our perceptions and actions aren’t based on any kind of rational engine, but instead on the brain models we’ve developed over a lifetime of sensory experience. We don’t choose what we see and we don’t choose how we react. Yet I think this points to relatively simple approaches to deciding better, mostly by putting ourselves in better, more informative environments that nurture our best selves. And by taking time to use imagination as simulation to provide more options for better decisions. Coming full circle.