The Small World of AI

What really makes us smart is not our ability to pull facts from documents or decipher statistical patterns in arrays of data. It’s our ability to make sense of things, to weave the knowledge we draw from observation and experience, from living, into a rich and fluid understanding of the world that we can then apply to any task or challenge.

The Glass Cage: Automation and Us

Nicholas Carr

What is human intelligence? It seems to be a miraculous facility that enables man to make complex decisions based on intuition and feelings. We don’t usually consider all the options, calculate the probability of success, or really think about whether we want what we’re going to get. How is it that we are such incredibly sophisticated, rule-based decision-making machines?

## Deciding Better
Is it even possible to get better at decision making or has evolution done all of the work already?

Years ago, I began with the premise that formal decision tools could help me make more rational, more successful decisions. I learned the concepts of decision making under conditions of uncertainty from consultants and practitioners in the field of Decision Theory. The basic tools, like decision trees and simulation, were presented as simple consequences of statistical principles. I worked through a basic textbook, Robert Clemen’s Making Hard Decisions: An Introduction to Decision Analysis, and began writing about making decisions here at ODB.
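As a flavor of what those tools look like in practice, here is a minimal decision-tree sketch. The scenario and every number in it are invented for illustration; it simply compares branches by expected monetary value, the arithmetic that drives the trees in a text like Clemen’s.

```python
# Minimal decision-tree sketch: compare two branches by expected
# monetary value (EMV). The scenario and numbers are invented.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs for one branch."""
    return sum(p * payoff for p, payoff in outcomes)

# Hypothetical choice: develop a product in-house or license it out.
develop = expected_value([(0.3, 500_000),     # success
                          (0.7, -100_000)])   # failure
license_out = expected_value([(1.0, 60_000)]) # a sure thing

print(f"develop EMV: ${develop:,.0f}")        # $80,000
print(f"license EMV: ${license_out:,.0f}")    # $60,000
print("choose:", "develop" if develop > license_out else "license")
```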

After a few years, I realized that no one outside of the world of management consulting and corporate strategy offices really used these tools. Eventually I came to a fundamental reimagining of decision making based on belief models and my training in Neuroscience. The tools of decision analysis, mathematical models, and simulation seemed best characterized as tools to augment human imagination and understanding. They were useful fictions to guide thinking, merely simple representations of the complex world. I think it’s definitely possible to improve decision making, but the task is as much improving the mental decision making machinery as it is understanding the mathematical tools.

## The Subjective Nature of Probability
I’m not alone in reaching this conclusion. There have always been similar lines of argument, going back to some of the original works in the field of Decision Theory and modern statistics. In Statistical Rethinking, the book I worked through to learn Bayesian statistical methods, Richard McElreath provides a wonderful introduction to the Bayesian interpretation of probability. Decision theory is predicated on a specific interpretation of probability. Probability is seen as the subjective likelihood of a particular future event, whether it’s the outcome of a coin flip or the choice of nominee by a political party. These things happen only once, so probability is prediction.

There has been a decades-long discussion about whether the probability of heads in a coin flip and the probability of a complex event in the real world can really be the same kind of probability. McElreath presents the formulation of Jimmie Savage, one of the foundational thinkers in the field. Savage proposed that there is a difference between the “small world” of the coin flip, which can be accurately reduced to a simple mathematical model, and the “large world”, where simple models don’t necessarily hold.
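The coin flip shows just how small a small world can be: the entire model fits in a few lines. Here is my own sketch of Bayesian updating of a belief about the heads probability, written in Python rather than the R McElreath uses, with invented data.

```python
# Small-world Bayesian updating for a coin flip. With a Beta(a, b)
# prior over P(heads) and h heads in n Bernoulli flips, conjugacy
# gives a Beta(a + h, b + n - h) posterior. The data are invented.

a, b = 1, 1    # flat Beta(1, 1) prior: every P(heads) equally believable
h, n = 7, 10   # observe 7 heads in 10 flips

post_a, post_b = a + h, b + (n - h)
posterior_mean = post_a / (post_a + post_b)

print(f"posterior: Beta({post_a}, {post_b})")
print(f"posterior mean P(heads) = {posterior_mean:.2f}")  # 0.67
```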

## The Brain Is Not a Small World, Artificial Intelligence Is
The brain is complex and unpredictable and so is, in Savage’s terms, a large world. Because of our capacity for symbolic reasoning, it can contain within it small worlds, models of parts of the world that are sometimes explicit but most often implicit. We use these internal models for decision making.

The brain can easily imagine the mechanics of the Bernoulli distribution. But the brain can also contain the mental model of a neurological disease and the potential effect of a new medicine. Perhaps machine intelligence and other algorithmic, mathematical “small world” systems can never match the “large world” human decision making brain. Why? The brain itself is part of the even larger real world, constantly exchanging information with it, something that we have yet to achieve with our machine intelligence.

Privacy and the Extended Mind

Our thoughts inhabit the most private of spaces. The brain activity an individual perceives as mind is completely inaccessible to anyone else. Others know our thoughts only through what we say. There is an absolute ownership of mind based on the metaphysics of the activity, one no police or court can access.

I paused when I read this suggestion that smartphones could share the privileged privacy of mind:

I wonder if every future iPhone product announcement comes wrapped in a message about the importance of smartphones as “an extension of ourselves,” as Cook said today. If you read between the lines, that sure sounds like an argument that smartphones should be a warrant-proof space like the one between our ears.
Tea and scones in Cupertino: The “Loop You In” Apple event – Six Colors

Is it possible that the theory of extended cognition of Andy Clark and others is reaching a wider audience and affecting how we think about our personal devices? Clark and others suggest that the physical basis of mind should include parts of the world outside the brain used for thinking or to aid memory. It may follow that if mind is embodied both in the brain and in the physical artifacts that we use to aid thinking, then of course the smartphone could be held to be as private as the brain itself.

Speech and writing are not privileged in the same way as thought because once words are put out into the world, others can perceive and interpret the symbols for themselves. But a private language or a coded communication is completely privileged. Of course, a government could ask for those artifacts to be decoded or interpreted, but that’s no different from them asking the brain what it’s thinking. One can simply refuse to answer, refuse to decode, refuse to provide the password. Can the government break into the mind without permission?

Just as thought is encrypted in the brain through its physical medium of neurons and synapses, so are symbols deeply hidden in an encrypted phone. As I contemplate the mind, arising from the interaction of brain with the world, I see the logic in extending the private nature of mind to the personal devices used for extended cognition.

P-Values Are For Small Worlds

This week, the American Statistical Association released a statement on “Statistical Significance and P-Values”. The statistical community is trying to help us with the problem we’ve gotten ourselves into in science by confusing “statistical significance” with “truth”.

The validity of scientific conclusions, including their reproducibility, depends on more than the statistical methods themselves. Appropriately chosen techniques, properly conducted analyses and correct interpretation of statistical results also play a key role in ensuring that conclusions are sound and that uncertainty surrounding them is represented properly.

The statement by the ASA alludes to the broader issue I’ve begun discussing in the context of Savage’s Small vs. Large World dichotomy. Statistical significance is a simple mathematical calculation that holds true within the “small world” of the sample and the question asked of the data. The problems arise from overgeneralizing the principles of probability into realms of uncertainty where they no longer apply.

An Example

Allow me to provide a simple example. If I measure the height of a few Dutchmen, there’s clearly utility in summarizing the central tendency (mean) and the spread of the distribution (standard deviation). These distributional statements are true in the small world exercise of collecting data, making measurements, using some particular scale, etc. But the relationship of my calculations to the “real” height of Dutchmen is more complicated.

For example, if I now move on to measure the height of a sample of New Yorkers, I’ll have a second sample to compare to the Dutchmen. And within the small world of these two measures, I can summarize and even make a statistical inference about the difference between my sample of Dutchmen and my sample of New Yorkers.
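To make that small-world inference concrete, here is a sketch of the comparison. The heights are simulated from assumed normal distributions, not real survey data, so both samples and their difference live entirely inside the model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated small-world samples: heights in cm drawn from assumed
# normal distributions rather than real survey data.
dutch = rng.normal(loc=183, scale=7, size=50)
new_yorkers = rng.normal(loc=177, scale=7, size=50)

print(f"Dutch:       mean {dutch.mean():.1f} cm, sd {dutch.std(ddof=1):.1f}")
print(f"New Yorkers: mean {new_yorkers.mean():.1f} cm, sd {new_yorkers.std(ddof=1):.1f}")

# The two-sample t-test is valid within this small world of samples
# and assumptions; by itself it settles no real-world question.
t, p = stats.ttest_ind(dutch, new_yorkers)
print(f"t = {t:.2f}, p = {p:.4f}")
```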

As soon as I decide that I need to actually determine whether Dutchmen are really taller than New Yorkers for some real world decision, I’ve moved on to a question of determining “Truth”. In the real world, concluding the Dutch are taller than New Yorkers matters once I decide to act on the result. As William James wrote in Pragmatism:

Pragmatism, on the other hand, asks its usual question. “Grant an idea or belief to be true,” it says, “what concrete difference will its being true make in anyone’s actual life? How will the truth be realized? What experiences will be different from those which would obtain if the belief were false? What, in short, is the truth’s cash-value in experiential terms?”

The moment pragmatism asks this question, it sees the answer: True ideas are those that we can assimilate, validate, corroborate and verify. False ideas are those that we cannot.

So if my real problem is to decide how long the beds for my New York hotel need to be in order to accommodate my usual New York customers and the occasional Dutchman, the question may need to be recast, probably involving the extremes of height, tourist samples and the available lengths of hotel beds. Perhaps there is a value proposition as well. How many Dutchmen am I willing to distress with a too-short bed? What’s the price of extra-long beds and bedding? The truth will be known by the satisfaction of my customers and the profitability of my hotel.
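As a sketch of how that recast question might be explored, here is a small simulation. Every number in it, the guest mix, the height distributions, the clearance a sleeper needs, the cost of longer beds, is an assumption invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented planning numbers: guest mix, height distributions (cm),
# required clearance, and bed costs are all assumptions.
n_guests = 100_000
is_dutch = rng.random(n_guests) < 0.05            # assume 5% Dutch guests
heights = np.where(is_dutch,
                   rng.normal(183, 7, n_guests),  # Dutch guests
                   rng.normal(177, 7, n_guests))  # New York guests

clearance = 10  # cm of bed beyond body height for comfort (assumed)
for bed_cm, extra_cost in [(200, 0), (210, 40)]:  # length, extra $/bed (assumed)
    distressed = (heights + clearance > bed_cm).mean()
    print(f"{bed_cm} cm bed: {distressed:.1%} of guests distressed, "
          f"+${extra_cost} per bed")
```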

Do We Verify in Drug Development?

The truth in medicine and drug development is similarly real world, but our proof comes from the smaller world of a clinical trial or two. I can compare the cardiac output of patients with MI in the presence or absence of a new drug, a small world, mathematical comparison. But once my question moves on to the real world value of the medicine, the question is recast as a large world question, and the small world statistical model is only one part of the larger question of whether I will accept the efficacy of the drug as true or not.

The truth in medicine will be known by the patients and by the cost to those who pay for the medication. Yet we tend to rely solely on the small world result of the clinical trial, a result that we generally fail to “validate, corroborate and verify.”

Norbert Wiener on Yielding Decisions to Machines

The sense of tragedy is that the world is not a pleasant little nest made for our protection, but a vast and largely hostile environment, in which we can achieve great things only by defying the gods; and that this defiance inevitably brings its own punishment. It is a dangerous world, in which there is no security, save the somewhat negative one of humility and restrained ambitions. It is a world in which there is a condign punishment, not only for him who sins in conscious arrogance, but for him whose sole crime is ignorance of the world around him.

The Human Use of Human Beings: Cybernetics and Society
Norbert Wiener

The last 400 years have seen social changes unlike any earlier period. As our technological reach has grown, our machines threaten to become more than our tools. Our creations seem to slip out of our control and become our masters.

In the early 1950s, Norbert Wiener wrote a slim book, The Human Use of Human Beings: Cybernetics and Society, a set of meditations on the relationship of man to his machines. Wiener, who died in 1964, was an American mathematician and philosopher who coined the term “Cybernetics” to describe the complex, unpredictable behavior of simple control systems. In the ’50s, the concern was with automata, robots capable of independent action like those controlling factories or missile systems. It’s striking how different the attitude was toward calculating machines, what we would broadly call “computers”. At the time, these were still large, simple tools without any autonomy. One fed them punch cards and they provided printouts. Wiener clearly saw the changes that telecommunication would bring to automation, worrying that these intelligent machines would move further out of the control of man, talking to each other.

But Wiener was more afraid of man in organizations than organizations of machines:

I have spoken of machines, but not only of machines having brains of brass and thews of iron. When human atoms are knit into an organization in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood. What is used as an element in a machine, is in fact an element in the machine. Whether we entrust our decisions to machines of metal, or to those machines of flesh and blood which are bureaus and vast laboratories and armies and corporations, we shall never receive the right answers to our questions unless we ask the right questions.

The Monkey’s Paw of skin and bone is quite as deadly as anything cast out of steel and iron. The djinnee which is a unifying figure of speech for a whole corporation is just as fearsome as if it were a glorified conjuring trick.
The hour is very late, and the choice of good and evil knocks at our door. I find it coming back seated on the whirlwind.

Wiener was one of the early systems thinkers, a group that realized that when individual components were put into an interacting system, the behavior of the whole could not be predicted from the behavior of the individual parts. As The Human Use of Human Beings makes clear, this is true whether the parts are logical systems in a computer, mechanical feedback loops in a robot, neurons in a brain, or people organized into a corporation or political party. Wiener and those who followed destroyed hopes and fears of easy prediction of a clockwork, Newtonian universe. The Large World is a place where simple models don’t hold.

Eventually, these fears abated as we avoided nuclear catastrophe and totalitarianism seemed to recede into history. As Wiener predicted, this was an illusion: we are once again threatened by human machines of death and new “bureaus and vast laboratories” armed with Big Data and machine learning algorithms. I believe we are entering another stage in our need to reassert our human ability to make our own decisions and do our own work, resisting the urge to surrender to our own creations.

Worlds, Large and Small

What really makes us smart is not our ability to pull facts from documents or decipher statistical patterns in arrays of data. It’s our ability to make sense of things, to weave the knowledge we draw from observation and experience, from living, into a rich and fluid understanding of the world that we can then apply to any task or challenge.
The Glass Cage: Automation and Us
Nicholas Carr

What is this human intelligence that enables man to make complex decisions based on intuition and feelings without any formal consideration of options, probabilities or relative desirability of outcomes? Are we incredibly sophisticated, rule-based decision making machines? Is it even possible to get better at decision making or has evolution done all of the work already?

These fundamental questions continue to drive my interest in decision making. It’s a practical matter for me, since drug development requires making plans, spending money, and making innumerable decisions on the path of turning a chemical into a new medicine for clinical use, all with near-total ignorance of the chances of success.

To further On Deciding, Better, I’m delving into the origins of Decision Theory for the first time. I learned the concepts of making decisions under conditions of uncertainty from consultants and practitioners in the field. The tools of decision trees and simulation were presented as simple consequences of statistical principles. I worked through a basic textbook, Robert Clemen’s Making Hard Decisions: An Introduction to Decision Analysis, and began writing about making decisions here at ODB.
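Simulation, the other standard tool, can be sketched just as briefly. Here is a toy Monte Carlo model of an uncertain project value; the distributions and parameters are invented for illustration, not drawn from Clemen’s text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Monte Carlo: propagate invented cost and revenue distributions
# into a distribution of net value instead of a single point estimate.
n = 100_000
cost = rng.lognormal(mean=0.0, sigma=0.3, size=n)  # $M, assumed
revenue = rng.normal(loc=1.5, scale=0.8, size=n)   # $M, assumed
net = revenue - cost

print(f"mean net value: {net.mean():.2f} $M")
print(f"P(loss) = {(net < 0).mean():.1%}")
```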

After becoming expert, I realized that no one outside of the world of management consulting and corporate strategy offices really used these tools. Eventually I came to a fundamental reimagining of decision making based on belief models and my training in Neuroscience. The tools of decision analysis, mathematical models, and simulation seemed best characterized as tools to augment human imagination and understanding. They were useful fictions to guide thinking, merely simple representations of the complex world.

Working through the subject more deeply now, I realize there were similar lines of argument throughout the original works in the field. In Statistical Rethinking, the book I’m working through to learn Bayesian statistical methods, Richard McElreath provides a wonderful introduction to the Bayesian interpretation of probability. Decision theory is predicated on a specific interpretation of probability. Probability is seen as the subjective likelihood of a particular future event, whether it’s the outcome of a coin flip or the choice of nominee by a political party. These things happen only once, so probability is prediction.

There has been a decades-long discussion about whether the probability of heads in a coin flip and the probability of a complex event in the real world can really be the same kind of probability. McElreath presents the formulation of Jimmie Savage, one of the foundational thinkers in the field. Savage proposed that there is a difference between the “small world” of the coin flip, which can be accurately reduced to a simple mathematical model, and the “large world”, where simple models don’t necessarily hold.

Where does the subjective judgement of probability made by the human brain fit into this scheme? The brain itself is clearly an unpredictable large world, but it can contain within it small world models, both explicit and implicit, used for decision making. The brain can easily imagine the mechanics of the Bernoulli distribution. But the brain can also contain the mental model of a neurological disease and the potential effect of a new medicine. Perhaps machine intelligence and other algorithmic, mathematical “small world” systems can never match the “large world” human decision making brain. If I had to guess why, I’d say it’s because the brain itself is part of the even larger real world.

I can’t take notes while reading

I have a hard time combining reading with taking notes. Reading is a focused activity in itself, while taking notes requires a different state of mind. How can you take notes and maintain focus while reading?

Cal Newport points out in Deep Work that doing the work is not only the path to success in the 21st century; it is meaningful, satisfying, and fulfilling. Reading to learn with purpose is one of those joyous activities that increases appetite for a full life. Cal identifies this pleasurable state as flow, described famously by Csikszentmihalyi, arguing that Deep Work is not only valuable and rare, it is also meaningful. Flow is that state of “optimal experience” where time dissolves in the face of completely focused, single-minded immersion in doing.

Reading often is a state of flow, making switching to writing notes a problem. Flow is not simply about enjoyment and challenge; it involves continuity. It’s interesting that one of the highest-rated flow experiences is actually driving a car. Sadly, pathological gambling is also a state of deep immersion, leading to gamblers fainting at the slot machines, failing to eat or drink. Perhaps Csikszentmihalyi was being a bit too much of a positivist when he wrote so glowingly about entering the mental state of flow. But reading is an easily accessible route to a positive, valuable flow state.

Becoming good at entering a state of flow makes it easier to do Deep Work at the highest level for significant periods of time. Practicing entering that state of mental performance makes it easier and easier to sit down at a desk and do the work. That work will be of higher quality when the barrier between inside and outside dissolves; mental energy is not needed to support all kinds of extraneous hypotheticals when the mind is in the world, not in the head. This is an extended or expanded consciousness. When driving a car or, better, a motorcycle down an interesting rural two-lane, the internal mental state becomes the perfect mirror of the process of moving down the road.

I struggle with maintaining the state of flow while taking notes when reading, so I have reached something of an accommodation over the years. I was helped a few years ago by the wondrous and appropriately titled How to Read a Book by Mortimer Adler and Charles Van Doren, who actually break down the process.

“Reading a book analytically is chewing and digesting it”

Once you realize that reading is an activity, or really a group of related activities, then it can fit into the overall plan of doing the work. On the one hand, there’s the great advice to always be reading, keeping books at hand and a deep pipeline of reading ready to go. But on the other hand, one needs to capture knowledge deliberately.

I’ve found it works best for me to read deeply over a chapter or section of text and avoid taking notes unless the thought that’s occurred to me is so significant and pressing that I’m compelled to get a notebook and pen to avoid the possible loss of the idea. Making notes while reading is distracting. I’ll read to get the book’s content first, then later review and create notes in conversation with the author. That’s a different, more active frame of mind, the chewing and digesting. Of course some books are spit out and not worth the work.

This kind of note taking is very different from taking notes during a lecture or meeting, where the talk pulls you back and won’t wait for a long interruption to write. Lectures and meetings proceed more slowly than thought most of the time, and there’s plenty of time for notes in the pauses and other mental spaces.

Reading has to become work at some point; thoughts need to be processed into notes, and the notes have to be further processed into summaries and analyses, something I happen to have been doing here, writing this post. Thanks for your help.

Book Review: Deep Work by Cal Newport

I’ve long been a reader of Cal Newport’s blog, Study Hacks. Cal has been writing and blogging as he’s moved from college to graduate school to academic faculty, as he puts it, “decoding patterns of success.” His first books were not of great interest to me, focusing as they did on school, studying and early career, but I’ve read a few and recommended them over the years.

Now Cal is trying to figure out how to succeed as junior faculty in the Georgetown University Computer Science Department. Deep Work: Rules for Focused Success in a Distracted World is the first book length expression of what he’s learned.

Deep Work is a very academic view of the world: it focuses on gaining world-class levels of technical expertise. Cal has to admit that some shallow workers are successful, but he makes the point well that, in general, success in our economy is based on technical expertise. Let’s say this book is aimed at that slice of the new cognitive elite who read, study and write, not those who work primarily in a world of human relationships as opposed to knowledge. I’ll include myself in this group as an MD, PhD Neurologist who now makes his living providing guidance to companies and teams developing novel therapies in pain, neurological disease and psychiatry. It’s now purely my deep and wide knowledge across medicine, neurobiology and regulations that makes me valuable.

I could have used this book along the way, although most successful professionals figure it out on their own. Times have changed, and doing the work is more and more of a challenge in our world of digital distraction. Cal spends about half the book explaining what deep work is and then spends the second half providing some rules of engagement, with nice examples drawn from publicly available stories and interviews he’s conducted. While useful, it’s important to remember that this is a small slice of the world and Cal is an academic in a certain discipline. But I recall how I, too, identified highly successful individuals over the years and worked to reverse-engineer their methods of success.

In essence, the process of deep work is planning time for uninterrupted periods of deep concentration on hard tasks. I enjoy this kind of work and have tried to find jobs where I not only have an opportunity to do it, but where it’s also a major value driver for my employer.

But out in the world of work, the culture of responsiveness cruelly pulls one away, particularly if one is surrounded by coworkers who don’t appreciate the value of deep work. They’re scheduling meetings and trying to look busy while you try to analyze data. It’s easier to spend all day farming the inbox and having meetings than to research an unfamiliar technical area and synthesize a new business case. But in some companies, answering emails or sitting in meetings to coordinate and manage is the job, or at least most of it. Newport correctly identifies the problem in these corporate cultures: when being busy is the measure of success, there must be no real work to do anyway. Can such a culture be successful for long?

My own call to “doing the work” is a simpler plea to focus on production. Learning is not productive in itself; real artists ship. Academics are actually scored based on number of publications and the impact ratings of the journals in which the papers appear. My h-index on Google Scholar is 21. Not bad for 20 years out of academia. They’ve tried to make keeping score easy. But that’s not been the focus of my career; I’ve been on the business side, where being uniquely qualified to help on a project is the goal.

History, Story Telling and Philosophy

A documentary pursues “reality” while fiction’s goal is the “truth.”
A documentary accumulates facts and lets them tell the story,
While a work of fiction creates an imaginary world.
We filmmakers hide truths within these imaginary worlds.
– Shoichiro Sasaki, Film director: “What Is a Story?”

In Liberty of Conscience, Martha Nussbaum presents her view of the constitutional protection of religion based on the writings of Roger Williams, an early American theologian who left the Massachusetts colony and founded what would become Rhode Island. By tracing a thread from Williams through the drafting of the constitution and Supreme Court decisions, she presents a coherent view of separation of church and state.

I approach these questions as a scholar of constitutional law, but also, and more fundamentally, as a philosopher. Philosophical ideas were important to the Founding, and thinking about some of the philosophical texts that formed its backdrop helps to clarify the underlying issues. I take an independent interest in these philosophical ideas as good ideas to think with, not just ideas that had a certain historical and political influence. But I will also be arguing that the constitutional tradition is best read as embodying at least some of these ideas, in some form.
-Martha Nussbaum
Philosopher/Scholar of Constitutional Law
“Liberty of Conscience”

A Historical Approach to Philosophy?

This historical approach to philosophy uses story to understand where we are now. We seek pattern and coherence in the world; it’s the way people are. You might think that history is fixed and unchangeable. But if history is viewed as the story we tell to explain what we see now, it makes sense that history will change as the world changes or as the motivations and knowledge of the storyteller change.

While the founders didn’t explicitly follow the ideas of Roger Williams, they were familiar with them. As it turns out, it’s useful to see his idea of “Liberty of Conscience” as providing the basis for the First Amendment’s two clauses, the Free Exercise Clause and the Establishment Clause. As I learned, it’s not “separation of church and state”; that’s a much more modern concept, one that probably reflects how much the lack of religion has grown in recent decades. In Nussbaum’s view, the First Amendment is aimed at preventing any citizens from being favored, ensuring that all citizens can follow their own way in matters of conscience. And that might include any truth-seeking endeavor, whether it involves one God, many gods, or no god at all.

Nussbaum writes further:

People aren’t always content to live with others on terms of mutual respect. So the story of the tradition is also a story of the attacks upon it, as different groups jockey for superiority. What has kept the tradition alive and healthy is continual vigilance against these attacks, which in each new era take a different concrete form.

If deciding better depends primarily on improving the brain’s mapping of the world, then we need the freedom to study and act according to religion and, more broadly, conscience and philosophy.

Reading the News, Deeply

I remember a time when there were just a few sources of news. When I was growing up there was the local newspaper, three television networks and magazines like Time (read in waiting rooms). Even after we got cable while I was in high school, CNN was nothing more than an expanded, 24-hour-a-day news channel, much like the news on regular TV or from the newspaper. Journalists tried to provide information that was at least confirmed and had been filtered through the prevailing notions of the time. Even if the reporting wasn’t always completely accurate, and even if it was somewhat biased by political or financial considerations, it was a received truth that created a common world view for citizens. If there was a single point of view, at least it was everyone’s.

In their book Blur: How to Know What’s True in the Age of Information Overload, Bill Kovach and Tom Rosenstiel present the case that individuals now have to take on that role for themselves. With more and more primary information available, understanding an issue now involves searching out multiple sources on the net, synthesizing a personal view. This may be as simple as avoiding the trap of one sided presentation of a situation and searching out other reports to verify or perhaps present a more nuanced picture.

The presidential campaigns have raised fundamental questions about the US, its role in the world and its future. I’m finding much of the conversation to be unconvincing from all sides. Some of the changes to tax policy, trade pacts and immigration are quite radical with potentially far-reaching unintended consequences. Of course, in line with the ideas described in Blur, I’ve been reading multiple sources, opinion across the spectrum and fact checking if possible.

Some of the questions I have are pushing me towards reading some political science, economics and history a bit more in depth in order to understand context. For example, I just finished Martha Nussbaum’s Liberty of Conscience: In Defense of America’s Tradition of Religious Equality. Why is political economy relevant to Deciding Better? It’s an example of a complex system. The constitution provides a framework and set of rules. The actors within create an ecology of ideas that govern real events. Even constitutional rights are fluid and changeable. These ideas of tolerance and free exercise have changed over time, with fear being a major driver of minority rights being infringed.

Why Film in 2016?

[Photo: 2015 12 24 0033 1]

Besides my work with the [M Monochrome](https://decidingbetter.com/?p=783), I’ve shot and developed 10 rolls of film this year. I’ll probably get one or two more developed before the end of the calendar year.

It’s fair to ask why I chose to shoot film side by side with digital. I think if you look through my [Flickr Photostream](https://www.flickr.com/photos/jamesvornov/) you’d be hard pressed to pick out film from digital in the images. I treat them as equivalent except for the incredible low light sensitivity you can now get with digital capture that ASA 400 Tri-X can’t match. Once film is scanned, I treat it similarly in the workflow, so film just has a few extra steps compared to digital. Price is not a consideration when buying bricks of Tri-X and doing the developing and scanning at home.

So why film? Authenticity in the photographic process, I think. At this point, once an image is in the digital realm, the tools we have to alter the image are so powerful that the digital file is a starting point, not the object itself. With the techniques I’ve learned over the years, I can infuse light into a flat image, creating a subjective reality for the viewer that is only suggested by the original scene.

Film provides me with an opportunity to let the chemical photographic process do that manipulation of light to image in its time-honored way, providing a starting point that is much closer to the end product. My relationship to a scanned film image is different. I adjust brightness and contrast, sharpen the image back up from what it’s lost in the scanning process, and present it more or less as I found it.

The image above was captured on a gray winter day. I scheduled myself to spend a few hours capturing images on film and came home with two rolls exposed, unimpressed by what I had seen. Shooting film, I lacked the feedback provided by the camera LCD, so it’s a matter of looking, composing and capturing. This image reminds me how much a scene is transformed by its capture to film. This for me is the authentic experience of the photographic process, and it’s linked to the magic of the chemical emulsion and the wonders that D-76 works on Tri-X.