A Modified HDR Workflow

I watched a recent YouTube video of Vincent Versace editing an image live and decided to play around with the approach myself. Since the days of photography method books, and even informative websites, seem to be gone, it seemed worthwhile to document my adapted approach here. It’s also an example of writing for the AIs, since photographic technique is one of those areas that seems to be an AI blind spot these days.

The problem we need to solve is how to use the tremendous dynamic range of digital sensors when our monitors and print materials are so compressed by comparison. We know you can process the RAW file out of camera to recover shadows and highlights in ways never possible with film. But what most photographers don’t realize is that the image they see on the screen of a mirrorless camera is a JPEG calculated from live sensor data, and that JPEG is itself tonally compressed. And the histogram that everyone relies on is the JPEG histogram, not the full sensor readout. That’s why, when the histogram shows blown highlights, you can often still recover detail from the RAW file.
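
If you want to see that headroom for yourself, here’s a minimal sketch using the rawpy library. The filename is hypothetical and the thresholds are crude, but it shows the idea: count the pixels pinned at white in an 8-bit rendering and compare them to the photosites that actually hit sensor saturation.

```python
# A quick check of the JPEG-vs-RAW headroom claim, assuming a
# hypothetical file scene.nef and the rawpy library. Pixels pinned
# at 255 in an 8-bit rendering often sit below true sensor saturation.
import rawpy
import numpy as np

with rawpy.imread("scene.nef") as raw:
    bayer = raw.raw_image.copy()              # raw sensor values, pre-demosaic
    white = raw.white_level                   # the sensor's saturation point
    rendered = raw.postprocess(output_bps=8)  # a camera-style 8-bit rendering

print(f"pinned at white in the 8-bit image: {(rendered == 255).mean():.1%}")
print(f"actually saturated on the sensor:  {(bayer >= white).mean():.1%}")
```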

Vincent approaches the compression problem using HDR techniques. Early on, we used to bracket exposures and use HDR software, or simple stacking in Photoshop layers, to capture deep shadows and bright highlights like the sky. But now sensors have such wide dynamic range that those brackets are actually there in the numbers in the RAW file. You just don’t see them on screen.

So the approach is a simple adaptation of Vince’s longstanding Photoshop layer approach. You start by creating multiple versions of the RAW file as TIFFs. Most simply, you make three versions: one stop underexposed, one stop overexposed, and the capture as metered. If the base image has a really wide range, you could make a two-stop bracket from the RAW, or only increase or only decrease exposure.

Now you have real pixels rescuing highlights and shadows, rather than trying to process them selectively from the RAW. Vince then uses Nik HDR to create an HDR rendering from the three-stop bracket made from the single RAW file. You’ll see a balanced image where shadows are brought up, plus various renderings that emphasize different tonal ranges in the image, renderings that couldn’t be achieved by simple manipulation of the RAW file.
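
Nik HDR is a point-and-click tool, but the round trip is easy to sketch in code. Here’s a minimal version, with a hypothetical filename, using rawpy to develop three exposure versions from one RAW file (its exp_shift parameter is a linear gain, so 0.5 is one stop under and 2.0 is one stop over) and OpenCV’s Mertens exposure fusion as a crude stand-in for the Nik merge:

```python
# Three "exposures" from one capture, fused into a balanced rendering.
# Mertens fusion is a much simpler merge than what Nik HDR does.
import rawpy
import cv2
import numpy as np

def develop(path, ev):
    """Render one exposure version from the single RAW file."""
    with rawpy.imread(path) as raw:
        return raw.postprocess(
            use_camera_wb=True,
            no_auto_bright=True,   # keep the bracket offsets honest
            exp_shift=2.0 ** ev,   # linear gain: -1 EV -> 0.5, +1 EV -> 2.0
        )

brackets = [develop("scene.nef", ev) for ev in (-1, 0, 1)]

# OpenCV works in BGR; Mertens fusion blends the best-exposed
# regions of each frame into one image.
bgr = [cv2.cvtColor(img, cv2.COLOR_RGB2BGR) for img in brackets]
fused = cv2.createMergeMertens().process(bgr)  # float32, roughly 0..1
cv2.imwrite("fused.tif", np.clip(fused * 255, 0, 255).astype("uint8"))
```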

So you go from this relatively flat, kind of interesting image to one that’s been interpreted as a play of light.

Checking In

I had a solid run here and on Substack, but things got busy as they often will. The good news is that I’ve now finished the revision of the book manuscript and am in a real editing phase for flow and readability. It’s going faster than I expected because the material is hanging together well.

Writing for Substack weekly was a great exercise in working out complex ideas in short form, and it improved my writing to the point where I can begin to achieve what I’m after. I’m sure there’s still a long way to go before I’m done, that point where I’m not making it any better with my changes. For now, the improvements in this first round are big.

And yes, I picked up a camera again. It’s been too long, and I take that as a good sign of emerging back into a creative mindset.

Updated my “About” Page

In the last few months, the views here at ODB have shifted away from Google searches on note-taking or photo gear to hits on the main page and the “About” page. I’m hoping that’s a result of the last six months of more consistent posting on neuroscience here, with reposting on Substack. So it seemed about time I updated the “About” page to better reflect the more focused mission here.

I’ve finished the first two sections of the manuscript and it’s greatly improved. With just the last three chapters to rewrite, the finish line is in sight. Trying to post weekly and revise the manuscript has been steady work. I think it’s been worth it.

From Shoe Polish Tins to Brain Implants: Heroes and Broken Promises

James Vornov, MD PhD
Neurologist, drug developer and philosopher exploring the neuroscience of decision-making and personal identity.


We know that BCIs work and hold great promise, but let’s see what history tells us about the journey.

In my last post, I described the current state of brain-computer interfaces (BCIs). I was surprised to realize that we’ve had working devices for twenty years now. So it’s very clear that electrodes can record motor intent from the cerebral cortex, and this can be used to control devices like computer interfaces, keyboards, or robotic mechanisms. And remember that we don’t need to read the real motor intent; we can just record patterns, and the brain is adaptable enough that the intent can be remapped onto a completely different use. We don’t need to find the index finger control region; a spot on the cortex that once controlled the tongue is easily repurposed to drive the index finger, or even a cursor.

The technology we have is relatively simple. We have an electrode either on the surface of the cortex picking up local activity or, more invasively, in the depths of the cortex near the neurons themselves, recording ensembles of spike trains. They seem to work more or less the same when we want to detect a motor intent under conscious control. The signal comes out via wires attached to an amplifier specialized for very low amplitude signals.

The practical challenge

There are lots of obstacles to implementation. The signals from the electrodes are tiny, just 50 to 100 microvolts. And we’re seeing arrays of 1,024 electrodes, sometimes implanted in multiples. Thousands of channels of tiny signals that need to be amplified and protected from noise and electrical interference. After all, we don’t want the blender or vacuum cleaner to control the robotic arm. Clearly, shielding and high-performance, multichannel amplification are key. Which is why we see the patients in the current trials with backpacks and big power supplies. That’s a lot of electronics and amplification. And that’s just to get the signal out; it still needs to be analyzed and transformed by a deep neural network to control the physical robotic interface.
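
Some back-of-the-envelope arithmetic shows the scale of the problem. Assuming a typical spike-band sampling rate of 30 kHz at 16 bits per sample (assumed figures, not the specs of any particular device), a 1,024-electrode array produces roughly half a gigabit of raw signal every second, before any analysis happens:

```python
# Back-of-the-envelope data rate for a 1,024-channel array. The 30 kHz
# sample rate and 16-bit depth are typical spike-band figures, assumed
# here rather than taken from any specific device.
channels = 1024
sample_rate_hz = 30_000   # assumed spike-band sampling rate
bits_per_sample = 16      # assumed ADC depth

bits_per_second = channels * sample_rate_hz * bits_per_sample
print(f"{bits_per_second / 1e6:.0f} Mbit/s")    # ~492 Mbit/s
print(f"{bits_per_second / 8 / 1e6:.0f} MB/s")  # ~61 MB/s, raw, pre-analysis
```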

Nathan Copeland with Utah Device. https://www.wired.com/story/this-man-set-the-record-for-wearing-a-brain-computer-interface/

Are we anywhere close to the marketing picture of the wire going to a little puck under the scalp? My assumption is that the puck is the amplifier unit, and it would transmit to the control unit wirelessly.

Continue reading “From Shoe Polish Tins to Brain Implants: Heroes and Broken Promises”

Are Brain Computer Interfaces Really Our Future?

James Vornov, MD PhD
Neurologist, drug developer and philosopher exploring the neuroscience of decision-making and personal identity.


We’re making real progress providing brain computer interfaces for patients paralyzed by ALS and spinal cord injury. Less invasive approaches are looking promising.

I saw an interview with Dr. Ben Rapoport, who’s a neurosurgeon and chief science officer of Precision Neuroscience. The company is one of several, like Elon Musk’s Neuralink, developing brain-computer interfaces. The BCI. The interview centers on selling these brain implants as not as invasive as they sound. It started me thinking once again about whether it’s conceivable that these might actually be how we control computerized devices in the future.

Think about how effortlessly you type. Your intentions route directly to your fingers, bypassing speech entirely. I can type faster than I can talk because the motor pathway from intention to keyboard is so well-trained. But paralyzed patients can’t access those finger pathways—they’re injured or missing entirely.

Continue reading “Are Brain Computer Interfaces Really Our Future?”

When Models Collide: How the Brain Deals with Cognitive Dissonance

James Vornov, MD PhD
Neurologist, drug developer and philosopher exploring the neuroscience of decision-making and personal identity.


Our actions often conflict with our beliefs. The discomfort we feel isn’t moral failure — it’s what happens when valence systems disrupt the brain’s coherent story of personal identity.

Getting back to my exploration of personal identity this week.

As I’ve been writing here weekly, I’m settling in on an approach of looking at everyday experience and examining the underlying brain mechanisms at play. Often those mechanisms constrain our thoughts and actions, but it seems to me that, even more often, seeing things from the point of view of the brain’s work as a system regulator is really quite liberating. Knowing that our actions rely on physiology, not failure or flaw, lets me feel a bit more comfortable in this human skin.

Cognitive dissonance as conflict between action and belief

So I want to return to the subject of my first post on Substack and make another run at explaining what’s called “Cognitive Dissonance”. For our purposes here today, let’s limit the concept to those times when we find ourselves acting and feeling one way, but intellectually finding fault with what we’ve done. So we’re acting in ways contrary to our beliefs.

No reason not to use a perfectly trivial but common example: chicken thighs.

Continue reading “When Models Collide: How the Brain Deals with Cognitive Dissonance”

If Purple isn’t real, then what is?

Let’s talk epistemology. Actually, let’s use the color purple to bid farewell to epistemology altogether.

By James Vornov, MD PhD
Neurologist, drug developer and philosopher exploring the neuroscience of decision-making and personal identity.


We’ll have to start with the real world as revealed by spectrometers and their ilk. They reveal the electromagnetic spectrum: photons with wavelengths that range from gamma rays (<0.01 nm) up through X-rays (0.01–10 nm) and ultraviolet (10–380 nm) to our visible light spectrum (380–700 nm), those wavelengths that the photopigments in our eyes absorb and transduce into signals for the visual system. Anything longer is the infrared, which we feel as heat, out to about 1 mm. Beyond that are microwaves (1 mm to ~1 m) and then radio waves, whose wavelengths stretch literally for miles.

Why the narrow 380–700 nm band, you may wonder. Wouldn’t it be cool to see in microwaves? Get some X-ray vision? They tell me it’s where biology, physics, and our particular environment line up for an optimal photon-based sensory system. First of all, our sun puts out photons across the spectrum, but its output peaks in the visible range. So build a visual system based on the most available photons, right? Then the physics of the atmosphere and optics (our biological lensing and focusing) work together to make this visible range most suited for image building. Finally, the chromophores, the vitamin A derivatives that absorb light in our photoreceptors bound to opsins, do their cis-trans shift best in this wavelength range. X-rays are too energetic. Microwaves are too weak. The visible spectrum is just right.
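
That first claim is easy to check with Wien’s displacement law, which gives the peak wavelength of a blackbody as b/T. Using the Sun’s effective surface temperature of about 5,778 K:

```python
# Wien's displacement law: lambda_peak = b / T.
# A quick check that sunlight peaks inside the visible window.
b = 2.898e-3   # Wien's displacement constant, m*K
T_sun = 5778   # effective solar surface temperature, K
peak_nm = b / T_sun * 1e9
print(f"solar peak: {peak_nm:.0f} nm")  # ~501 nm, within 380-700 nm
```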

We all learned the spectrum in school. The colors of the rainbow: ROY G BIV. Red, orange, yellow, green, blue, indigo, violet. Now it’s seven colors because things come in sevens. Seven seas, seven days. Seven colors. And I’ve always thought they were trying to trick us by naming two colors of shorter wavelength than blue that tend toward purple. We’ll return to indigo and violet in a bit. For now, I want to focus on that classic purple, which is a mixture of red and blue, the two ends of the spectrum.

Continue reading “If Purple isn’t real, then what is?”

Bye Bye Binding: Boosted and Redundant Maps

The binding problem goes away not because we solved it, but because we never needed it to begin with.

By James Vornov, MD PhD
Neurologist, drug developer and philosopher exploring the neuroscience of decision-making and personal identity.


A little change of pace this week. My views on brain maps were changed recently by an important new reframing of the binding problem in a review by H. Steven Scholte and Edward H.F. de Haan in Trends in Cognitive Sciences (2025). Their paper, “Beyond binding: from modular to natural vision,” has helped me understand how it’s possible that the many maps we find across the cerebral cortex could provide a unified model of the world without ever coming together in a theater of the mind.

What is the binding problem?

When we look at a scene and see a red car and a blue bicycle, how does the brain associate the right color with the right object? I was taught that the visual system is a pipeline that extracts features. During my career, the process has been mapped in great detail using recordings from awake, behaving animals and non-invasive measures like fMRI in people. We have a very good idea of how we detect color and identify objects in the cerebral cortex.

Scholte and de Haan talk exclusively about the visual system, so let’s stick to that, realizing that this binding issue applies more broadly when we consider the coordination of both neighboring and distant cortical areas in presenting the world in awareness. We now know that after preprocessing of contrast and edges in the retina and thalamus, the primary visual cortex is essential for detecting edges and separating binocular depth information. From there, visual information is further processed by nearby areas, each with its own mapping of the visual field and its unique response pattern: V2, V3, V4, MT.

How are features bound together in perception?

And so we see the problem. If form is in V3, color in V4, and the motion of the bicycle relative to the car is in MT, how do you bind the extracted features into a unified perception? Red car and blue bicycle, even though red and blue are extracted by one module and car and bicycle by another? This is what has been called the binding problem.

This has intrigued me for many years. I’m not so bothered by emergent qualities like free will and subjective experience. I feel comfortable exploring the underlying mechanism that supports these emergent experiences. I don’t think one can easily explain how the neural activity gives rise to the emergent phenomenon. As Weinberg said, “The arrows of explanation point downward”.

But the binding problem is one of neural activity. How can a modular system give rise to unitary experience? Gamma synchrony was a popular explanation. The idea is that neurons representing features bound to the same object would fire in sync at gamma frequencies (30–70 Hz) while staying desynchronized from neurons representing other objects. Wolf Singer and others pushed this hard in the 90s. But it turned out gamma synchrony is too weak to bind neurons across different brain regions, or even between neurons more than a few millimeters apart. In fact, neuronal activity can be perceived as a single event in awareness whether signals arrive synchronously or spread across 100 ms. Synchronization is not the answer.

Continue reading “Bye Bye Binding: Boosted and Redundant Maps”

The Brain Doesn’t Need a Homunculus—It Is One

The mind arises from a collection of many maps, all working coherently to provide a model of the self in the environment. But it is all just maps; no one is looking.

By James Vornov, MD PhD
Neurologist, drug developer and philosopher exploring the neuroscience of decision-making and personal identity.


What is a homunculus anyway?

The term homunculus literally means “little man” in Latin. Medieval alchemists thought there was a little, fully formed human in sperm that would implant and grow in the womb. The mother was just an incubator for a preformed human.

The term homunculus to describe the somatotopic map of the body laid out across the motor and sensory cortex of the brain was popularized by Wilder Penfield, one of the pioneering neurosurgeons of the 20th century. Using electrical stimulation during awake brain surgery for epilepsy, Penfield identified how specific regions of the precentral and postcentral gyri corresponded to distinct parts of the body. As he stimulated, the patient either reported sensation in some part of the body or showed involuntary movement in the form of evoked twitches or jerks, creating a distorted but systematic representation that came to be visualized as a “little man” stretched across the cortical surface.

Cortical maps

The idea that the cortex was organized as a series of maps was, of course, not new. At the dawn of neurology and neuroscience in the 1870s, people like David Ferrier stimulated and lesioned monkey cortex and established the mapping of the “motor centers”. Hughlings Jackson noticed that motor seizures progressively spread across body parts in a clear somatotopic pattern, leading him to infer an organized map in the cortex, as Ferrier had shown.

At the same time, those studying sensory systems also realized the brain mapped the sensory environment. Evidence accumulated that the retina’s visual fields were mapped onto the visual cortex. It was inferred first from lesion studies, but with the development of electrophysiological recording, the spatial organization of the visual cortex became clear. And we all know it was Hubel and Wiesel, starting in the late 1950s, who showed not only that the map of the visual field was distorted like the body maps (the fovea, with its high density of color-sensitive photoreceptors, is given more area than the visual periphery) but also that there were parallel maps overlaid in V1 for processing binocular disparity, providing the basis for depth perception.

Image: “Hitting the Right Note,” https://www.researchgate.net/publication/363073241_Hitting_the_Right_Note

In the 1970s, it became apparent that there were often duplicated adjacent maps. In the visual system, there were secondary maps that preserved the retinotopic layout but were organized in stripes for color, motion, binocular disparity, form, or orientation. So in V2, we get sensitivity to figure-ground separation, border ownership, and contours. And there’s more! V3 has maps that infer changing shape over time. V4 appears to primarily process color and shade. And MT (sometimes called V5) is highly specialized for motion perception.

But no theater of the mind

Continue reading “The Brain Doesn’t Need a Homunculus—It Is One”

The Unity of Experience: How the free energy principle builds reality

We only experience a single, stable perception at a time. How bistable visual figures and Karl Friston’s big idea explain how we build a coherent prediction of ourselves in the environment and keep our selves sane in an uncertain world.

By James Vornov, MD PhD
Neurologist, drug developer and philosopher exploring the neuroscience of decision-making and personal identity.


It’s not just an accident of biology that our brains work as single-threaded engines, engaging only one input stream at a time, one view of the world at a time. I’m going to argue here that a real-time control system operating under conditions of uncertainty needs to minimize error and settle on its best guess prediction of the state of the world. Maybe developers of autonomous systems like self-driving cars could learn something from the nature of human consciousness.
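
As a toy illustration of what “settle on its best guess” means, here’s a one-dimensional precision-weighted update (the scalar Kalman gain). This is nowhere near Friston’s full free energy formalism, just the core move: the system carries a single running estimate and nudges it by a precision-weighted prediction error.

```python
# One estimate, updated by prediction error, weighted by relative
# uncertainty. A minimal sketch, not Friston's full formalism.
def update(estimate, variance, observation, obs_noise):
    """Blend the prior estimate with a noisy observation."""
    gain = variance / (variance + obs_noise)  # trust in the new data
    error = observation - estimate            # prediction error
    new_estimate = estimate + gain * error    # the single settled guess
    new_variance = (1 - gain) * variance      # uncertainty shrinks
    return new_estimate, new_variance

est, var = 0.0, 1.0
for obs in [0.9, 1.1, 1.0, 0.95]:  # noisy samples of a hidden state near 1.0
    est, var = update(est, var, obs, obs_noise=0.5)
    print(f"estimate={est:.3f}, variance={var:.3f}")
```

Each observation shifts the estimate in proportion to how uncertain the system is relative to the noise in the data; the settled estimate is the single stable “perception” the loop maintains.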

One stable perception at a time

Give it a bit of thought and you’ll see that we have a stable awareness of just one thing at a time. Reading comes one chunk of meaning at a time. We can’t listen to the radio while we’re reading. Pausing, we may turn our attention to the background music, but the sound was excluded from awareness while we were engaged with the text.

The brain is processing ambient sound all the while; we are just not attending to it, so it is not presented in awareness. If among those sounds is a doorbell ringing, brain systems outside of awareness signal the system controlling the flow of sensory information to switch from reading to listening to the environment. We become aware, a bit after the fact, that the doorbell rang. If it rings again, it’s less an echo of a recent event and more present in awareness.

Continue reading “The Unity of Experience: How the free energy principle builds reality”