The Intimate Internet

It’s hard to convey how small the online world felt back in 1999 when this blog started. There had been online places like bulletin board systems (BBSes), which were text-only discussion sites, America Online, and early internet services like Gopher, all of which were quickly swept away once HTTP and the World Wide Web rose. In those early days, discovery of sites and repositories was based on human networking, via links and pointers.

I remember so clearly the first time I tried out a new search service called “Google,” which seemed magical in its ability to find anything that had been put up on the web. But as websites proliferated, there was still a vital part for people to play in finding and pointing to interesting content. Communities grew up, as did index sites and link blogs, hubs to consult to find new contributions to the online world.

Looking back through 20 years of internet history, it seems to me that our early social networks have grown into a huge industry of “Social Media”. Twitter, Instagram and Facebook feed us streams by algorithm instead of relying solely on human-curated writing and links. The thoughtful writing hasn’t disappeared, since thoughtful people haven’t disappeared. Sure, the distraction is far greater, and we’re only beginning to learn how to regain control of our attention in an online world where we are all so connected. And there are amazing improvements to the tools I have available to create content. I have publishing platforms on computer, iPad and iPhone that address audiences large and small, public and private. I have image-making devices I can carry in my pocket that take photos in near darkness.

To what end? I guess the same purpose that propelled the creation of this personal journal 20 years ago. Learning how to decide. Deciding who I want to be in this particular moment.

Creative Outputs

50mm f/1.8, Nikon Z7, Nik conversion

I’m starting to see a few blogs celebrating their 20th anniversary, as ODB will be doing next month. At the end of 1999 the first user-facing platforms started emerging, and some of us early adopters are still plugging away at our little websites.

By far my most successful creative output over all of those years has been my photography. I’ve developed a voice I feel comfortable with and a set of subjects that continue to hold my interest as objects for image making. The technology of digital photography has improved enormously over these 20 years; even my ability to digitize TRI-X film is vastly better than what I had available when I started writing here at On Deciding . . . Better.

I’ve published a decent number of papers in scientific journals, given my full-time employment in drug development. These have been relatively intense, short-term projects with my Hopkins collaborators.

Oddly, the principal subject of the ODB project has been a major undertaking for me over the years but has led to very little written output here. Plenty of notebooks and text files, but only occasional posts. It’s a personal investigation, reflected a bit here at this personal journal site. I continue to work on it, spurred on a bit by this 20th anniversary.

The Narrative Paradox of Blogging

Blogging as reverse narrative

Blogging is a really hard way to tell a long story.

The presentation is in reverse chronological order. The most recent event is presented first, so the end of the story up until now reaches the reader first. Next down the page is the previous post, and so on . . . into the past. I think it’s interesting that this is how we perceive the story of our lives, with our most recent experience on top of the stack. Our most recent experiences are freshest in our memories. To go back further takes some effort.

But can blogs tell extended stories? Stories are not unprocessed records of moments; they are how we make sense of the sequence of events, putting time and causality into a rationale for how we got from then to now.


The 50mm f/1.8 S on the Nikon Z7

This image was captured with my favorite Nikon Z7 lens so far, the 50mm f/1.8. Thom Hogan is out with his take on the lens, which backs up my subjective impression with data. It’s impressive considering what you need to pay for a Leica 50mm f/2. The Leica’s rendering qualities may go beyond it, but that’s diminishing returns for thousands of dollars more. I agree with Thom that this is the best 50mm Nikon has ever made, and it alone justifies the Z7 for me as a companion to my Leica cameras. I may end up shooting primes on the Z7 if the upcoming 20mm f/1.8 is as good as the 35mm and 50mm that are available now.

By the way, at this point I find I need to use the Nikon RAW processor to get the best results out of the camera. For Leica, the Capture One converter is fine, but something’s going on with the microcontrast Nikon pulls out of its files that Capture One and Adobe Camera Raw can’t quite match yet.

Perceptual Choices

Deciding Without Thinking

The original premise of this site is that it is possible to actually make better decisions. That’s why I called it “On Deciding . . . Better” to begin with. After all, we are the agents responsible for our actions, so who else is responsible for making better choices? I’ve written about the false promises made by Decision Theory, which asserts that by making choices more rational, decisions can be more successful. The problem isn’t the mathematical basis of Decision Theory; it’s implementing it when people actually need to make decisions in the real world. There are valuable habits and tools in the Decision Theory literature, but it’s clear to me that when our brains are engaged in rapidly making decisions, we generally are not aware of the decision process. If we’re not aware of the decisions, then those deliberative executive function mechanisms can’t be brought online as they are being made.

Perceptual Decision Making

This is the Kanizsa Triangle, created by the Italian psychologist Gaetano Kanizsa in 1955. I like it as an example because it is so simple and yet so stable. The brain creates the contours of the second white triangle. When I first heard this kind of illusion being called a “perceptual choice”, I rejected the notion. After all, a choice is an act of will, of mental agency.

Yet calling this “a perceptual choice” makes a lot of sense from a brain mechanism point of view. A simple set of shapes and lines is projected on the retina and relayed back through the visual cortex and the surrounding cortical areas. That part of the brain makes an unshakable choice to interpret the center of the figure as containing a triangle. Similarly, when I see the face of my son, a different area of cortex decides who it is. Further, circuits are activated with all kinds of associated information, some brought up into consciousness, others not, but ready to be used when needed.

Are We Responsible for Perceptual Choice?

If perceptual choice is like most other choices, like choosing a breakfast cereal or a spouse, it seems I’m advocating abandoning a lot of perceived responsibility for one’s own actions. It seems that we walk through the world mostly unaware of how perceptions are constructed and don’t have access to why we act the way we do. Actions are based on perceptions that were chosen without awareness in the first place. And it goes without saying that we have no responsibility for the perceptions and actions of everyone around us. Their brains, wired mostly in the same way as ours, chose how to perceive our words and our acts.

It seems to me that to make better decisions there have to be rather deep changes in those perceptual brain processes. Any decision tools have to become deeply embedded in how our brains work; any rules to guide how we perceive, choose or act lie as deep habits in those automatically functioning circuits of the brain. Some, like the Kanizsa Triangle, are built into the very structure of the brain and can’t be changed. Others are strongly influenced by experience and deepened by practice.

On Finding Style

I’ve been enjoying A.B. Watson’s website and his strong points of view on photographic vision. This essay at the Leica Camera Blog puts it very strongly, emphasizing consistency in process, camera, lens and workflow. Watson’s work has a very strong style in what he chooses to present on the website and on Instagram.

It’s interesting that Watson works as a professional photographer doing product shots, fashion shots and who knows what else. He doesn’t really talk about it, and the web is pretty clear of his identifiable professional work. It’s the inverse of many professional photographers who promote their commercial work and its identifiable style, with some personal projects thrown in.

I’m fortunate never to have had to support myself with photography. It’s simply a creative outlet for me. I learned a long time ago to simply point my camera at anything that looked visually interesting. Over the years I’ve developed a habit of seeing pattern and structure in what I think of as “small landscapes,” like the curb and bag in the image here. My photography developed in tandem with my career in science, which early on featured a lot of light microscopy. I was shooting through the microscope to document my observations for publication, and I was fortunate to have a darkroom at my disposal right through the advent of usable DSLRs.

My work as a photo-microscopist clearly pushed my style toward seeking sharpness in my subjects and valuing image quality very highly. I’ve learned to use out-of-focus areas in my images, but most of them show sharpness and detail that keep them a very real reflection of the visual reality that presented itself to the camera.

My Flickr Photostream dates back to 2005 and provides a visual history of my work. For a long time I was really taken by color, producing images that to my eye now look oversaturated and overcooked. It was sometime around 2010 that I returned to my monochrome image roots and became more cinematic in my choice of lighting and rendering. I know that Vincent Versace’s Welcome to Oz was published a bit before that, but I remember seeing it in the bookstore and thinking it looked crazy complicated. So it probably was around 2010 that I worked my way through the chapters one by one, followed by Oz to Kansas in 2012.

Many times over the years I’ve worried that this photographic vision was too limited. You can see my little ventures into street photography and scenics on Flickr. I prove I can do it, but I feel less affinity with the images. They’re me pretending to be those other photographers. The essence of personal vision is following that deep connection to the images as expression. I may not really have much to say, but the images are mine.

Travels with the Nikon Z7

Last month I did some travel for work and decided to bring along the new Nikon Z7 instead of my Leica M10. The M10 usually comes along on any kind of trip where the focus is work and not photography. If I get an afternoon free to walk the city I’m in, the M10 with the 50mm lens comes out of the bottom of my travel backpack and I wander. These excursions provide most of the travel images I’ve published over the years. My standing joke is that no matter where I travel in the world, I come back with the same images of cracked walls, asphalt and alleys. I do have conventional travel shots which I’ve posted from time to time, but most of them are iPhone images and more likely to end up quickly sent to Instagram, where I am of course @jjvornov.

I bought the Z7 to replace my D850. The D850 was my photographic expedition camera, as it was much more flexible with wide to telephoto lenses, for example shooting landscapes from a tripod. I could get shots with the Nikon’s 14mm zoom that are impossible with the 50mm lens and a handheld Leica. The Z7 with the 24-70mm f/4 zoom is about the same weight as the M10, so I thought it might work as a more flexible travel camera with a wider angle, a bit longer reach and optical stabilization in a lightweight package, certainly better than the D850 with the 24-120mm f/4 zoom that I’ve used over the last two years.

In a few hours walking around San Francisco, I captured a few nice images and got to know the camera better. These cameras are complicated and I use a very small fraction of their capabilities. In truth, the buttons and menus get in the way, even as I learn which settings need to be changed when. And of course there’s the risk of changing a setting at one point, forgetting to change it back, and getting unexpected responses from the camera.

The 24-70mm is good as a travel zoom, but it’s not as impressive as the 50mm f/1.8 that I used on earlier outings with the camera. Renderings are a little flat compared to the 24-120mm f/4 F-mount lens used on the D850 as a midrange zoom, though that whole kit was far bigger and heavier. I’m hoping the wide-angle zoom that’s coming soon will prove to be an outstanding lens that I can use in combination with the really nice 50mm. I’ll note that I find the RAW conversions by Nikon’s own Capture NX-D to be better in detail and contrast than those by Capture One, which serves as my cataloging software these days.

So for my city walks, I’ll be sticking to the Leica M10.

Mental Causation

A few years ago I read George Soros’ small book The Soros Lectures: At the Central European University, in which he describes how he came to conceptualize reflexivity in markets: the idea that there is a feedback loop between what people think and market reality, which in turn affects what people think. It’s mental causation, but of course it’s just a manifestation of the way brains interact socially through language; we affect each other with consequences in the real world.

In copying over notes from last year’s Hobonichi, I found a note on a similar idea of inducing negative opinions. When you merely bring up a topic with some negative connotations for others, they are compelled to fill in the blanks. When I say “It’s like comparing apples and …”, you can’t help but think oranges. The word rises unbidden to mind, caused by my speech. It’s a powerful effect to have on another person because it’s reflexive and automatic. So by my mentioning a name and a situation, your negative feelings, already in place, are reflexively activated, causing you to think about those negative feelings and attitudes. Your brain does it, but I’ve directly caused it by my actions.

Just a thought about how powerful we are with words alone.

Nikon Focus Stacking and Computational Photography

Another effort in my midwinter photography exploration was an exercise in computational photography. I’ve already talked about the Z7 evaluation and film transfer. Today I’ll talk a bit about focus stacking, which Nikon calls “Focus Shift”.

This is a technique that Vincent Versace presented in his Welcome to Oz book, which is now out of print and selling at a premium used. The original technique was to use multiple captures of a scene (camera on tripod) in which focus, exposure, aperture, and/or shutter speed were varied. These captures are then combined into a single image. The idea was to put two different planes of focus together into a single image using the usual masking techniques in Photoshop. If you’re clever and sensitive enough to make it believable, the final image represents reality in a way that satisfies perceptual expectations but goes well beyond a straight capture in camera. I never went to the additional step of image harvesting, where multiple views or angles are combined, just because it seemed like photomontage to me, but Vincent has pulled it off quite well.

In these latest Nikon cameras, the D850 and Z series, the autofocus system has a function in which it will step through a range of focus, capturing up to 300 images at a variable step size. This is no different from automated exposure bracketing, which has been a DSLR feature for many years and is used for High Dynamic Range (HDR) photographs. It’s just auto-adjusting focus instead of exposure.

In the B&H video 21st Century Composition Theory (Day 2): ExDR, Extending the Dynamic Range of Focus and Bokeh (the quality of blur) and How to Shoot For It, Vincent suggests using all 300 available images and a small step size, since memory card space is free. Helicon Focus is then used to combine the images using an edge detection algorithm. Again, it’s easy to combine stacks shot with different settings, like an f/5.6 or f/8 stack for optimal image sharpness with an f/1.8 stack for the best background blur (bokeh).
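To make the blending step concrete, here’s a minimal sketch in Python of edge-detection-based focus stacking. It isn’t Helicon Focus’s actual algorithm, just an illustration of the underlying idea: measure local sharpness in each aligned frame and, for every pixel, keep the frame where detail is strongest. The file paths are placeholders, and the frames are assumed to be the same size and already aligned (camera on a tripod).

```python
# Naive focus-stacking sketch using OpenCV and NumPy.
# Not Helicon Focus's algorithm; just the basic "pick the sharpest frame
# per pixel" idea behind edge-detection-based stacking.

import glob
import cv2
import numpy as np

def focus_stack(paths):
    # Load the aligned frames from the focus bracket
    frames = [cv2.imread(p) for p in sorted(paths)]

    # Score local sharpness in each frame with a Laplacian (an edge detector),
    # smoothed so the per-pixel choice is less noisy
    sharpness = []
    for img in frames:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        lap = cv2.Laplacian(cv2.GaussianBlur(gray, (5, 5), 0), cv2.CV_64F)
        sharpness.append(cv2.GaussianBlur(np.abs(lap), (15, 15), 0))

    # For each pixel, find the index of the frame with the highest sharpness
    best = np.argmax(np.stack(sharpness), axis=0)      # shape: (H, W)

    # Assemble the output by pulling each pixel from its chosen frame
    frames = np.stack(frames)                           # shape: (n, H, W, 3)
    rows, cols = np.indices(best.shape)
    return frames[best, rows, cols]

if __name__ == "__main__":
    # Hypothetical folder of exported frames from a Focus Shift sequence
    result = focus_stack(glob.glob("focus_stack/*.tif"))
    cv2.imwrite("stacked.tif", result)
```

A hard per-pixel choice like this can leave visible seams where the sharpest frame changes; dedicated tools blend across frames and handle alignment and scale changes from focus breathing, which is why the commercial software still earns its keep.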

I set this little grouping up really quickly since I was out to test the technique. It’s actually three images blended: an f/2.8 stack and an f/8 stack, with the f/2.8 stack converted from RAW at both a neutral white balance and a cool white balance for shadow and depth enhancement via color perception.

As an experiment, the result is interesting, though not such a great image artistically for me. I got a hyper-realism that I wasn’t really expecting, with objects really popping in the image. If you look closely you can see that the plane of focus is not natural, with focus extending back to the end of the plastic cup and forward into the shadows to the side.

It’s just one experiment, and I expect I’ll try more, since I’m often frustrated when making images by a depth of focus that is either too deep or too narrow. This technique allows sharp focus to be extended anywhere in the frame, better than the alternative I often use of shooting everything sharp and selectively adding artificial lens blur to direct the eye in the image.

All of these techniques are good examples of computational photography with large-sensor cameras. The smartphone makers have really embraced these techniques, seamlessly combining multiple exposure values and views from multiple lenses. As is appropriate for this more deliberate style of image making, I’m using these techniques in controlled ways, with special-purpose software like Helicon Focus and Adobe Photoshop to align and blend the captures. I think we’ll see more automation like Focus Shift to come, capturing multiple versions of an image to be combined either in camera or in post-processing to create synthetic images.