We’re already in an augmented reality

My cycling and fitness activities are enabled by a set of technologies that were not widely available a few years ago: online coaching through internet-enabled analysis of power files, and streaming fitness sessions monitored by a wearable that measures movement and heart rate.

Twenty years ago we started using heart rate monitors widely. Before that, there was just subjective effort. With a few chips we could see inside ourselves and augment the experience with another, physiological dimension. Now my bike has a power meter, so I can see my actual output and compare it to the physiological effect and the subjective effort. My experience of riding and my ability to train are augmented. The winner of the Tour de France last year was only 23. Cycling pros of previous generations ascribe the rise of younger champions to their use of these technologies to skip the years of learning and slow builds that earlier riders had to go through.

The key is that you need to use the technology to achieve real personal goals, not just enrich tech platforms.

One concern is that a machine would not need to be more intelligent than humans in all things to pose a serious risk. As one AI researcher put it: “It’s something that’s unfolding now. If you look at social media and the algorithms that choose what people read and watch, they have a huge amount of control over our cognitive input.”

Do we want to blindly give away our data to others and allow their algorithms to manipulate us, or to seek real experiences augmented by technology? Do we live in the real, augmented world or in their “metaverse”?

I’m hoping that Apple’s augmented or virtual reality device will be more along the lines of a way to enhance the experience of real or imagined worlds and not a way to enslave us in their artificial environment.

The Augmented Environment

One of my conclusions from studying how the brain interprets the world and how people actually make decisions is that the single most important decision we make is the choice of our environment, by which I mean both our physical and semantic environments. Who are the people we surround ourselves with? What are the books and websites we read? The key is that our beliefs strongly condition how the world occurs to us. We can’t decide in the moment how to react to the statement of a politician or writer.

In statistics, we call these beliefs “priors”: they determine the probabilities we assign to events. We update those priors with new information all the time, so the information we’re exposed to on an ongoing basis determines perception.
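
As a rough illustration (a toy example of my own, not anything from these posts beyond Bayes’ rule itself), here is how a prior belief shifts as the same kind of evidence arrives again and again:

    # Toy Bayesian update: how a prior belief shifts with repeated evidence.
    # All of the probabilities here are invented for illustration.

    def update(prior, likelihood_if_true, likelihood_if_false):
        """Return the posterior probability of a claim via Bayes' rule."""
        numerator = prior * likelihood_if_true
        denominator = numerator + (1 - prior) * likelihood_if_false
        return numerator / denominator

    belief = 0.30  # prior belief that a claim about the world is true

    # Each news item we see is twice as likely if the claim is true (0.8 vs 0.4).
    for _ in range(3):
        belief = update(belief, 0.8, 0.4)

    print(round(belief, 2))  # after three such items, belief has risen to about 0.77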

At the brain level, we can see this in the most basic forms of perception, like how we see ambiguous figures. See, for example, “Long-term priors influence visual perception through recruitment of long-range feedback”:

A computational model incorporating hierarchical-predictive-coding and attractor-network elements reproduced both behavioral and neural findings. In this model, a bias at the top level propagates down the hierarchy, and a prediction error signal propagates up.
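
A minimal sketch of that idea (my own toy loop, not the model from the paper; all values are invented): a top-level belief sends a prediction down, the lower level sends the prediction error back up, and a long-term prior bias keeps tugging the belief toward itself.

    # Toy two-level predictive coding loop; all numbers invented for illustration.
    sensory_input = 1.0   # what the lower level actually receives
    prior_bias = 0.2      # long-term prior at the top of the hierarchy
    belief = prior_bias   # current top-level interpretation
    LEARNING_RATE = 0.3   # how strongly prediction errors update the belief
    PRIOR_WEIGHT = 0.1    # how strongly the long-term prior pulls the belief back

    for step in range(10):
        prediction = belief                              # prediction propagates down
        error = sensory_input - prediction               # prediction error propagates up
        belief += LEARNING_RATE * error                  # belief moves toward the input
        belief += PRIOR_WEIGHT * (prior_bias - belief)   # bias at the top keeps its pull
        print(step, round(belief, 3))

    # The belief settles around 0.78: mostly driven by the input, but biased by the prior.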

It’s reasonable to think this kind of biased perception extends to how we perceive what people say to us or their motives. If you believe you live in a violent environment or that some classes of people are inherently violent, your priors will influence your interpretation of the words and actions of everyone around you. No choice in the matter, because belief comes from experience, and experience largely comes from environment.

The trouble is that in our augmented reality, we don’t experience much of the world at all. We read reports of the world and interpretations of events. We experience that overlay as part and parcel of the real world, even though it’s just an interpretation layered on top of the pure sensory event.

So choose your friends and your information sources carefully.

First Draft of On Deciding Better Manuscript Complete

I surprised myself this morning by writing what seemed to be the final words in my first draft of the book I call “On Deciding . . . Better: The New Ecology of Mind and Brain”.

This version of the book started with an outline in Google Docs on May 19, 2020 and took about a year to turn into a very rough first draft. As I’ve talked about before, the process of creating what Scribe Writing calls a “vomit draft” got me unstuck from all of my previous attempts.

The manuscript as it stands is pretty close to how I outlined it, which may not be surprising given that this is a project I’ve been mulling over for a long time. More than 20 years, if you start counting when I started this blog in late 1999. The central ideas in the book were mostly present in the earliest posts I made here, but I’ve made some progress since and wanted to create something coherent from all of the work.

I’ve struggled with the problem of expressing the ideas contained in the book as short posts in a blog. It’s clear that a manuscript provides the space to build the concepts that can guide a reader through this journey out of simple Enlightenment rationality. So I tried to document the journey I’ve made from the rational theories of decision theory to the ways we really make decisions, which are grounded in ecology and evolution.

The book is a bit over 60,000 words, which they tell me comes out to a 300 or so page published book. Now comes editing, which I’m guessing will both remove a good bit of text and identify some gaps that need more material, so net-net this seems about right.

For the edit, I hope I can focus on coherence and finding those gaps so that I can have a manuscript I trust someone could read and follow the general flow and arguments I’m making. It’s only at that point that I can imagine anyone else reading it. I’d like to get it into some final form eventually and move on to another long-term project to explore this ever-fascinating area of what it means to be human. It will be fun to get back into an exploratory mode rather than the grind of 250 to 500 words a day aimed at filling in an outline of what I’ve learned.

I’m not sure what I’ll do with the ODB book once it’s done. At this point I don’t have much interest in all of the book promotion activity that goes along with publishing these days, and I doubt a major publisher would be interested in this niche of a neuroscientific-philosophical book on decision making. On the other hand, I would like it to reach as large an audience as possible, so I guess that means releasing it on some platform or other and doing some promotion.

So on to editing!

Why You Need to Publish Your Notebook

Note taking seems to be having one of its many turns as an internet topic. The world of analog journaling is flourishing, propelled by (mostly) Japanese stationery brands like Midori with its Traveler’s Notebook and the Hobonichi Techo, but also some stalwart European and British brands like Rhodia and even Filofax. On the digital side, there’s a new crop of backlinking, networked note-taking apps that join the still relevant categories of GTD task managers, plain text approaches, wikis and hypertext (like my favorite, Eastgate’s Tinderbox), and integrated note systems from Evernote to Ulysses to Markdown editors to the latest like Craft. And then there’s handwriting on iPad in apps like Notability and GoodNotes. I almost forgot the emerging ePaper device category, which has tempted me a few times ((But I know that fountain pen and paper just work better for me than a stylus on iPad, and probably better than one of these cool, single-purpose devices.)).

I ask myself, “With all this note taking, where is the content being shared?”

Lots of notes, lots of systems, lots of discussion. Not so much content coming back out of those notes, outside of work product like academic publications and corporate reports, and the commercial content universe of the internet, which has replaced magazines and newspapers but tends toward the slick and superficial.

I miss the individual voices of the early internet and it seems that all of this note taking should provide some path for sharing what we’re all learning outside of commercial sites.

An Appeal To Contribute

I often see mention of how Google/YouTube has turned the web into a closed system by controlling search and advertising. For a long time, it’s been noticeable how many searches return no real results on the first or even the second page. I get advertising, shopping, and how-to YouTube videos.

If I see real content, it’s often a link to Reddit, a company support forum or, sometimes, a post on Medium. In fact, I’ve subscribed to Medium because search results take me there so often. Since most of the web writing going on is on commercial sites selling courses or subscription content, even when I get sent to a content page it’s often a teaser for some course or software. I rarely get linked to one of the sites I follow through RSS feeds via NetNewsWire. In fact, I’ll often add a word or two to the search to make sure that I get an answer from Thom Hogan on photography, plus others who may have commented on his opinion.

Curiously, huge swaths of the internet are generally cut out of these searches entirely: Twitter, Instagram, Facebook. And email subscription newsletters. And Substack. We’re in a world of walled gardens and click farms.

Sadly, this isn’t entirely the fault of the machine learning algorithms that now rule the world and decide how to rank the results on a Google search page. There’s definitely less for the search engines to point to, and less interchange between websites that leads to links and interest.

Where’s the useful content then?

Now I’ll admit my own guilt here. I lost the drive to create useful content here when that early blogging community dissipated. I do learn things I ought to be sharing. There’s a steady stream of traffic here, so I could contribute more. I read, take notes, come to conclusions. My notebooks are full, my blog is empty.

An example? Is it worth the cost to buy a 16GB M1 Mac, or should I get the quickly available 8GB memory model? Once I found a few examples of limitations, it seemed clear that for my use, processing big RAW files in Photoshop with Nik filters, the 16GB upgrade was going to be worth it. And it is. I don’t have a suite of test results, but I did see how fast the Apple Silicon Photoshop beta processed Gaussian filters to create the now infamous Orton effect ((If you haven’t heard, the Orton effect is that glowy landscape look, taking over from the overdone HDR workflow (which is not at all natural looking!) as the way to get likes on Flickr. Previously, I participated in the oversaturated landscape movement, to my everlasting shame.)).

Another one? Should I switch to editing this website with the WordPress Gutenberg editor or keep my existing MarsEdit staging workflow? It looks like it makes sense to do only early drafts in MarsEdit to take advantage of creating links and pulling in media from Photos, but then publishing is best out of the new WordPress interface. And it looks like there’s no reason to move off of WordPress to any other system.

Workflows. How to use tools. Hacks, tricks, and lessons learned in the course of exploring and implementing. All interesting reads and worth sharing.

If I needed any evidence of the value of sharing useful lessons, I need look no further than the content that gets read here. By far the most viewed page is How Bill Gates Takes Notes, a short piece about his use of the Cornell note-taking system during meetings. I wrote that page because I found it interesting and there was nowhere else on the web where it was well documented. I also have some photography pages and note-taking workflow pages that get a hit or two a day, mostly from Google, based on questions asked about how to do something or what something is, bypassing all the shopping and YouTube content.

3 Keys To Writing More

During the pandemic I seem to have figured out a few things. Like how to actually write a book manuscript. The method is simple really, just hard to sustain. But know that most everyone has a book or two in them already.

Last year, I mentioned I was renewing my effort to do some longer form writing on what I’ve learned over the last 20 years. It’s turned out to be a busy year for my work in drug development, even with travel curtailed by the COVID-19 pandemic. I’ve found myself working from home for most of the year, but with a pretty packed day of calls and the need to get out work product. How was I going to push the writing project forward?

Fortunately, about this time last year I found Tucker Max’s website ((Check out the web site Scribe Writing. Download the free ebook. Follow the directions. In a few months, you’ll have a first draft of a manuscript.)). His company provides services to authors like you and me who have books in them but perhaps no real ambition for a career in writing. Books can boost careers, publicize businesses and influence public discourse. I don’t really have any of these motives. I’m just interested in sharing what I’ve learned from what I think is a unique perspective as a neuroscientist working in business environments.

Tucker provides a lot of free content on the site besides selling courses and services to writers. I extracted three key insights:

  1. Use the tools at hand to get words into the computer. Word, Google Docs, Notes, anything. I know I have way too many tools. So I picked Ulysses because it’s plain text, semi-structured, and syncs across Mac and iOS.
  2. Writers write. A book of 100 to 200 pages is 20 to 40 thousand words. Write 250 words a day, every day for at least 30 minutes a day. 60 minutes a day is reasonable and 120 is optimal. Since my schedule is different every day given project meetings, fitness schedule and work product due dates, I simply decided that my first 30 minutes free at my desk would be dedicated to getting at least 250 words out. Sometimes I’ve gotten 500 or more if I had an hour, but I just put fingers on keyboard and got the ideas down.
  3. The first draft is a marathon to get the words out. A lousy first draft. A vomit draft. At 250 words a day, 100 days of work equals a short book. What I have is nothing I’d want to share with anyone, but I think that after the first editing round I could serialize the chapters on the website here. Ulysses tells me I’ve gotten 30,348 words down and I’m about halfway through the outline. So half of a 200-page book done.

I found that Tucker will actually answer questions, just as he promises on his website. One of my biggest obstacles was how much material I already had at hand: 20 years of blog posting and at least two previous attempts at turning the material into a book. I wanted to edit it into a book to save the time of writing.

So I asked Tucker how to deal with the mass of material I’ve collected over all this time. Several manuscripts’ worth of volume, really. He suggested I use it as reference, but start with a fresh outline and begin writing anew.

Starting over turned out to be exactly the right approach. His assertion that people like me have a book in our heads already is absolutely right. And I’ve done this many times before, as it turns out. I’ve often written research papers, review articles and book chapters by referring to references, but then doing the careful citation and fact checking during editing, once the ideas and flow were down on the page. The principle is that the author has the book inside already; it just needs to get out of their head and into that linear computer file called a book manuscript.

As I’ve been writing, I’ve gotten a clearer idea where this is all going, so I spend some other time diving into some of the newest insights into brain mechanisms for decision making as well as my guidepost books on self improvement and making better decisions.

I hope to share the effort at some point, but given that I have no ambition to be an author, the whole exercise has been a personal project to clarify some of the ideas I’ve had over the years about the relationship between decision making, brain function and the philosophy of ethics.

Our Limited Capacity to Decide

With over 30,000 words done on my manuscript and about halfway through plowing through the outline, the thesis of the book has become ever more clear. Here’s the essential question about decision making from the point of view of neuroscience:

The ability of our brain’s executive function to make decisions is limited not only by the model it creates out of experience, but by the decisions made by brain systems that are inaccessible to awareness or executive control. We can then ask: How do you make better decisions when agency is so limited?

This morning I drafted a few paragraphs about eating that seemed to encapsulate much of the argument and seemed worth sharing here. I think it provides a little idea of the style and approach I’m taking in a longer form.

Recognize this? Your visual system will make a perceptual decision.

Embodied cognition

We’re on the subject of what has been called “the embodied mind” ((The Stanford Encyclopedia of Philosophy makes the historical attribution of embodied cognition to a short list of authors including George Lakoff and Andy Clark. Their books have been important influences on me in formulating the abstract world of thought as metaphor in the brain’s model of the real world.)). The brain is the body’s regulator of behavior of all kinds, including not only attention and voluntary movement but every regulatory system in the body that keeps us alive and healthy. There’s no line between “mind” and “body”, so our experience has to include input from those systems, and our behavior has to be adjusted to take care of them appropriately without involvement of the executive network in the cerebral cortex.

For example, while I can’t control my heart rate directly, the feeling of my heart pounding is an important aspect of the world. My internal model of the world needs to account for whether it’s pounding because I’m sprinting in a friendly competition on a group ride, or because I’m finishing a high stakes race, or because I’m angry with and/or afraid of the driver of the car that just sideswiped me and is continuing to threaten me ((Sadly, confrontations with motorists are all too common out there on the bike.)). There’s context to heart rate that’s important in the bigger world model, beyond regulation of the cardiovascular system.

The exquisite control of appetite

And so too with appetite. When I was in medical school, they had just introduced lectures on nutrition into the curriculum. There was one fact I took away from the lectures that I think about very frequently. Now you have to understand that the two families that owned the Coca-Cola Company (the Candlers and then, in 1919, the Woodruffs) have been major benefactors of Emory University, where I got my MD, PhD training. ((In fact my father, who had been a Pepsi drinker all his life, switched to Coke after I was accepted to the Medical Scientist Training Program there to show his gratitude. It was a program with full scholarship and stipend, after all.))

A 12 oz can of Coke has 140 calories. If you decided to add a can of Coke a day to your diet, perhaps with lunch at the campus cafeteria, that would be 365 cans of Coke, or 51,100 extra calories a year. Over a decade, more than half a million additional calories from that one can of Coke. We know that there are 3,500 calories in a pound of body fat, so that half million extra calories would add 146 extra pounds. Drink that can of Coke for a few decades and you’d be hundreds of pounds overweight. Looked at another way, the average caloric intake for a man in the US is 2,500 calories per day. The can of Coke that caused so much theoretical havoc is only 5.6% of daily caloric intake. For most of us, with relatively stable body weight year by year, that means that our average daily intake of calories is regulated down to single digit percentage points! Not only that, but the great difficulty we find with dieting, and the empirical data showing that diets don’t work for most, demonstrate that consciously trying to regulate caloric intake is almost impossible over the long term.
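
Here is that back-of-the-envelope arithmetic spelled out (a minimal sketch using only the numbers quoted above):

    # Back-of-the-envelope arithmetic from the paragraph above.
    CAN_CALORIES = 140             # calories in a 12 oz can of Coke
    CALORIES_PER_POUND_FAT = 3500  # calories in a pound of body fat
    DAILY_INTAKE = 2500            # average daily intake for a man in the US

    per_year = CAN_CALORIES * 365                             # 51,100 extra calories a year
    per_decade = per_year * 10                                # 511,000 extra calories a decade
    pounds_per_decade = per_decade / CALORIES_PER_POUND_FAT   # about 146 pounds
    share_of_diet = CAN_CALORIES / DAILY_INTAKE               # about 5.6% of daily calories

    print(per_year, per_decade, round(pounds_per_decade), f"{share_of_diet:.1%}")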

Some of the ability to maintain stable body weight is due to cellular metabolic control regulating basal metabolic rate, the burning of calories at rest. But most of it is behavioral: how active we are, whether we choose to sleep, sit or move around, and of course what and how much we eat. We are no more in control of eating than we are of respiratory rate or blood pressure.

A complex set of signals exchanged between the body’s fat stores, the gastrointestinal tract, the endocrine system and the brain allows hormones and levels of blood nutrients (sugar, amino acids, fats) to trigger food acquisition and consumption. We’re really good at knowing how much to eat. We’re fully in charge, yet not aware of the expertise we hold implicitly, and not in any kind of long-term control of it. Within a few percentage points, every day, year after year, despite the best efforts of our executive network to influence body weight.

Perceptual Choices

Deciding Without Thinking

The original premise of this site is that it is possible to actually make better decisions. That’s why I called it “On Deciding . . . Better” to begin with. After all, we are the agents responsible for our actions, so who else is responsible for making better choices? I’ve written about the false promises made by Decision Theory, which asserts that if choices can be made more rational, the decisions will be more successful. The problem isn’t the mathematical basis of Decision Theory; it’s implementing it when people actually need to make decisions in the real world. There are valuable habits and tools in the Decision Theory literature, but it’s clear to me that when our brains are engaged in rapidly making decisions, we generally are not aware of the decision process. If we’re not aware of the decisions, then those deliberative executive function mechanisms can’t be brought online as they are being made.

Perceptual Decision Making

This is the Kanizsa Triangle, created by the Italian psychologist Gaetano Kanizsa in 1955. I like it as an example because it is so simple and yet so stable. The brain creates the contours of the second white triangle. When I first heard this kind of illusion being called a “perceptual choice”, I rejected the notion. After all, a choice is an act of will, of mental agency.

Yet calling this “a perceptual choice” makes a lot of sense from a brain mechanism point of view. A simple set of shapes and lines is projected on the retina and relayed back through the visual cortex and the surrounding cortical areas. That part of the brain makes an unshakable choice to interpret the center of the figure as containing a triangle. Similarly, when I see the face of my son, a different area of cortex decides who it is. Further, circuits are activated with all kinds of associated information, some brought up into consciousness, others not, but ready to be used when needed.

Are We Responsible for Perceptual Choice?

If perceptual choice is like most other choices, like choosing a breakfast cereal or a spouse, it seems I’m advocating abandoning a lot of perceived responsibility for one’s own actions. It seems that we walk through the world mostly unaware of how perceptions are constructed and don’t have access to why we act the way we do. Actions are based on perceptions that were chosen without awareness in the first place. And it goes without saying that we have no responsibility for the perceptions and actions of those around us. Their brains, wired mostly in the same way as ours, chose how to perceive our words and our acts.

It seems to me that to make better decisions there have to be rather deep changes in those perceptual brain processes. Any decision tools have to become deeply embedded in how our brains work; any rules to guide how we perceive, choose or act must lie as deep habits in those automatically functioning circuits of the brain. Some, like the Kanizsa Triangle, are in the very structure of the brain and can’t be changed. Others are strongly influenced by experience and deepened by practice.

Lessons in Science and Culture

John Nerst at Everything Studies provides a long and thoughtful analysis of a discussion of a dangerous idea: A Deep Dive into the Harris-Klein Controversy. I think it’s worth a comment here as well.

As a neuroscientist and reader of all of these public personalities (Charles Murray, Sam Harris and Ezra Klein), I’ve followed the discussion of race and IQ over the years. We know that intelligence, like many other traits such as height or cardiovascular risk, is in part inherited and strongly influenced by environment. Professionally, I’m interested in the heritability of complex traits like psychiatric disorders and neurodegenerative diseases. The measured differences in IQ between groups fall squarely into this category of heritable traits where an effect can be measured, but the individual genes responsible have remained elusive.

I’m going to side with Ezra Klein, who in essence argues that there are scientific subjects where it is a social good to politely avoid discussion. One can learn about human population genetics, even with regard to cognitive neuroscience, without entering into an arena where the science is used for the purpose of perpetuating racial stereotypes and promoting racist agendas of prejudice. The data has a social context that cannot be ignored.

Sam Harris, on the other side of the argument, has taken on the mantle of defender of free scientific discourse. He takes the position that no legitimate scientific subject should be off limits for discussion based on social objections. His view seems to be that there is no negative value to free and open discussion of data. He was upset, as was I, at Murray’s treatment at Middlebury College and invited Murray onto his podcast. Sam was said by some to be promoting a racist agenda by promoting discussion of the heritability of IQ in the context of race.

In fact, Ezra Klein joined the conversation after his website Vox published a critique of the podcast portraying Harris as falling for Murray’s pseudoscience. But that’s nothing new really; Murray surfaces and his discussion of differences in IQ between populations is denounced.

As one who knows the science and has looked at the data, it bothers me, as it bothers Harris, that the data itself is attacked. Even if Murray’s reasons for looking at group differences are to further his social agenda, the data on group differences is not really surprising. Group differences for lots of complex inherited traits are to be expected, so why would intelligence be any different than height? And the genes responsible for complex traits are being explored, whether it’s height, body mass index or risk for neurodegenerative disease. Blue eyes or red hair, we have access to genomic and phenotypic data that is being analyzed. The question is whether looking at racial differences in IQ is itself racist.

I’ve surprised myself by siding with Klein in this case. His explanation of the background is here and his discussion after his conversation directly with Harris is here. Klein convincingly makes the argument that social context cannot be ignored in favor of some rationalist ideal of scientific discourse. Because we’re human, we bring our cultural suppositions to every discussion, every framing of every problem. Culture is fundamental to perception, so while data is indifferent to our thought, the interpretation of data can never be free of perceptual bias. Race, like every category we create with language, is a cultural construct. It happens to be loaded with evil, destructive context and thus is best avoided if possible, unless we’re discussing the legacy of slavery in the United States, which I think is Klein’s ultimate point.

Since these discussions are so loaded with historical and social baggage, they inevitably become social conversations, not scientific ones. Constructive social conversations are useful. Pointless defense of data is not; we should be talking about what can be done to overcome those social evils. No matter how much Sam would like us to be rational and data driven, people don’t operate that way. I see this flaw, incidentally, in his struggle with how to formulate his ethics. He argues with the simple truth that humans are born with basic ethics wired in, just like basic language ability is wired in. We then get a cultural overlay on that receptive wiring that dictates much of how we perceive the world.

Way back when, almost 20 years ago, I named this blog “On Deciding . . . Better” based on my belief that deciding better was possible, but not easy. In the 20 years that have passed I’ve learned just how hard it is to improve and how much real work it takes. Work by us as individuals and work by us in groups and as societies.

Lack of Authority

It's hard to believe how little we trust what we read in this age of the internet.

The US election of 2016 demonstrated just how profoundly our relationship to authority has changed. We're exposed to conflicting opinions from the online media. We hear facts followed by denial and statement of the opposite as true. Everyone lies, apparently. There's no way to make sense of this online world in the way one makes sense of a tree or a dog or a computer.

Perhaps relying on confirmation bias, we are forced to interpret events without resort to reasoned argument or weight of evidence. We have to fall back on what we already believe. You have to pick a side. Faced with a deafening roar of comments on Twitter, cable news and news websites, we shape what we hear to create a stable, consistent worldview.

Welcome to a world of narrative, where the truth is simply the story we believe. And pragmatically, it seems not to matter much. Believe what you will, since we mostly wield no power in the world.

So what am I to make of this nice new MacBook Pro that I'm using right now? Is it really evidence of Apple's incompetence or their desire to marginalize or milk the Mac during its dying days? Again, believe what you will, but I've got some work to do.

Mind In The Cloud

“Technology changes ‘how,’ not ‘what.’ Expands in space, compresses in time. The results are sometimes breathtaking.”

Notebooks as Extended Mind

In 1998, Andy Clark and David Chalmers made the radical suggestion that the mind might not stop at the borders of the brain. In their paper, The Extended Mind, they suggested that the activity of the brain that we experience as consciousness is dependent not only on brain but also on input from the rest of the world. Clark’s later book, Supersizing the Mind clarifies and expands on the idea. Taken to its logical conclusion, this extended mind hypothesis locates mind in the interactions between the brain and the external world. The physical basis of consciousness includes the brain, the body and nearby office products.

I mean to say that your mind is, in part, in your notebook. In the original paper, Clark and Chalmers use the hypothetical case of Otto. Otto has Alzheimer’s Disease and compensates for his memory deficit by carrying a notebook around with him at all times. They argue for externalism: that Otto’s new memories are in the notebook, not in his brain. The system that constitutes Otto’s mind, his cognitive activities, depends not only on his brain but on the notebook. If he were to lose the notebook, those memories would disappear just as if they had been removed from his brain by psychosurgery. It should make no difference whether memory is stored as physical traces in neuronal circuitry or as ink marks on paper, since the use is the same in the end.

The paper actually opens with more extreme cases like neural implants that blur completely whether information is coming from the brain or outside. We have brain mechanisms to separate what is internally generated and what is external. The point is that these external aids are extensions. In medical school I learned to use index cards and a pocket notebook reference, commonly referred to as one’s “peripheral brain”. Those of us who think well but remember poorly succeed only with these kinds of external knowledge systems.

In 1998, when The Extended Mind was published, we used mostly paper notebooks and computer screens. The Apple Newton launched in August 1993; the first Palm Pilot, which I think was the first ubiquitous pocket computing device, in March 1997.

The Organized Extended Mind

When David Allen published Getting Things Done in 2001, index cards and paper notebooks were rapidly being left behind as the world accelerated toward our current age of email and internet. I’ll always think of the Getting Things Done system as a PDA system, because the lists in my system lived on mobile devices. First it was the Palm, then the BlackBerry and, most recently, the iPhone. @Actions, @WaitingFor and @Projects were edited on the PC and synced to a device that needed a daily connection to the computer. I had a nice collection of reference files, particularly for travel, called “When in London”, “When in Paris”, etc.

My information flow moved to the PC as it became connected to the global network. Two communication functions, really: conversations and read/write publishing. Email and message boards provided two-way interaction that was generally one to one or among a small community. Wider publishing was to the web. Both of these migrated seamlessly to handheld devices that replicated the email apps or browser on the PC. Eventually the mobile device became combined with the phone. Even though capabilities have grown with faster data rates, touch interfaces, bigger screens and large amounts of solid state storage, the first iPhones and iPads showed their PDA roots as tethered PC devices in the way they backed up and synced information. That world is rapidly fading as the internet becomes a ubiquitous wireless connection.

Access to email and the internet through smartphones has served to further “expand space” and “compress time,” as Dave put it. I adopted a text file based approach so that I could switch at will between my iPhone, iPad and MacBook Air and have my external thoughts available. The synced plain text files seem transformational, yet feel like my old Palm set of lists.

The age of the cloud is one of information flakes. Much of what we know is now latent and external, requiring reference to a small device. Is it any wonder that our streets and cafes are filled with people peering down into a screen rather than out into the world?

It was a rapid transition. One that continues to evolve and that demands frequent reconsideration of the means and methods for constructing the extended mind.

A Mind Released

The Simplenote, Notational Velocity and Dropbox ecosystem was the enabling technology for me. Suddenly there was seamless syncing between the iPad or iPhone and the Mac. The rapid adoption of Dropbox as the de facto file system for iOS broke the game wide open, so that standard formats could be edited anywhere: Mac, Windows, iPhone, iPad, Unix shell. This was a stable, fast data store available whenever a network was available.

Editing data on a server is also not a new idea. Using shell accounts to edit text with vi or Emacs on a remote machine from anywhere is as old as computer networking. I started this website in late 1999 on Dave Winer’s Edit This Page service, where a text editor in the browser allowed simple website publishing for the first time.

Incremental searching of text files eliminates the need for databases or hierarchical structure. Text editors like Notational Velocity, nvAlt, Simplenote or Notesy make searching multiple files as effortless as brain recall from long-term memory. Just start typing associations or, for wider browsing, tags embedded in metadata, and large, unorganized collections become useful; just like the brain’s free recall of red objects or words that begin with the letter F. Incremental searching itself is not a new idea for text editors. What’s new is that we’re not seeing just a line of text, but rather multiline previews and instant access to each file found. Put together, incremental searching and ubiquitous access enable the extended mind across time and space.
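
As a rough sketch of what these apps are doing (my own toy illustration, not the actual implementation of any of them, and the notes folder path is made up), incremental search is just rerunning a filter over the whole collection as the query grows:

    # A toy incremental search over a folder of plain text notes.
    # "~/Notes" is a hypothetical path; the real apps add ranking, previews and live updating.
    from pathlib import Path

    NOTES_DIR = Path("~/Notes").expanduser()

    def search(query, max_preview=80):
        """Return (filename, preview) pairs for notes containing every query word."""
        words = query.lower().split()
        hits = []
        for note in sorted(NOTES_DIR.glob("*.txt")):
            text = note.read_text(errors="ignore")
            lowered = text.lower()
            if all(w in lowered for w in words):
                hits.append((note.name, text.strip().replace("\n", " ")[:max_preview]))
        return hits

    # Narrowing as you type: each keystroke just reruns the search with a longer query.
    for query in ["ort", "orton", "orton effect"]:
        print(query, "->", [name for name, _ in search(query)])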

What seems to have happened is that the data recorded as external memory has finally broken free from its home in notebooks or on the PC and is resident on the net, where it can be accessed by many devices. My pocket notebook and set of GTD text lists are now a set of text files in the cloud. Instantly usable across platforms, small text files have once again become the unit of knowledge. Instant access to personal notebook knowledge via sync and search.

Do ADHD Drugs Work Long Term?

“Illness is the doctor to whom we pay most heed; to kindness, to knowledge, we make promise only; pain we obey.”
― Marcel Proust

An essay on ADHD in the New York Times launched an interesting Twitter exchange with Steve Silberman and the medical blogger PalMD on how well we understand psychiatric disorders and treatment.

In the article, Dr. Sroufe concludes that since there is no evidence for long-term benefit of ADHD medications, their use should be abandoned. He is right that the evidence of efficacy is all short term; over the long term, no benefit has been shown. Of course, almost no one dealing with the issue on a day to day basis would agree. Parents, teachers and physicians all agree that these medications have a use in improving the lives of these children. Count me among those who believe it is highly probable that treatment over the course of months and years has utility, but that this is hard to prove.

As a problem in decision making, this is a good example of the difference between believing and knowing.

There is a difference between the practice of science and an absolutist approach to truth. In decision making, we must be practical. As William James said, “Truth is what works.” He believed that science was a pragmatic search for useful models of the world, including mind. Those who look for abstract, absolute truth in clinical research will be confused, misguided and, as often as not, wrong in their decisions. Truth is something that happens to a belief over time as evidence is accumulated, not something that is established by a single positive experiment.

Belief in the usefulness of therapy in medicine follows this model of accumulation of belief. The complexity and variability of human behavior demands a skeptical approach to evidence and a sifting through to discover what works.

Clinical trials for drugs that affect behavior are generally relatively small, short experiments that measure a change from baseline in some clinically meaningful variable. These trials are clinical pharmacology studies in the classic sense: studies in patients (clinical) of drug effect (pharmacology). No one is expecting cure or even modification of the disease. The benefit is short term symptom relief, so the trial examines short term symptom relief. In the case of a pain reliever, we ask whether patients’ self-reports of pain are decreased by therapy compared to before therapy. In ADHD, we ask whether a group of target behaviors is changed by treatment compared to baseline.

This approach of measuring change from baseline has a host of pitfalls that limit the generalizability of clinical trials to real life medicine. First, baseline measures are subject to large amounts of bias. One of the worst sources of bias in these trials is the patient’s and physician’s joint desire to have the patient meet the severity required to be enrolled. The investigator is under pressure to contribute patients to the trial. The patient hopes to gain access to some new therapy, either during the trial or during some subsequent opportunity. Both of these factors pressure patients to maximize the severity of their complaint at baseline. How do you get into a trial? Exaggerate your problem! Even without conscious or unconscious bias from patients, any trial will enroll patients who happen to be worse than their average state. When measured repeatedly over time, the scores will tend to drop: a classic regression to the mean. If you select more severe outliers, they will tend to look more average over time.
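
A minimal simulation of that enrollment effect (toy numbers of my own, not data from any actual trial): patients who happen to score above an entry cutoff at baseline drift back toward their own average at the next visit, with no treatment given at all.

    # Toy simulation of regression to the mean in trial enrollment; all numbers invented.
    import random

    random.seed(0)
    N = 10_000
    CUTOFF = 60  # severity score required to enroll

    def measure(true_score):
        """One noisy measurement of a patient's symptom severity."""
        return true_score + random.gauss(0, 10)

    # Each patient's true average severity, centered at 50.
    patients = [random.gauss(50, 10) for _ in range(N)]

    baselines, followups = [], []
    for true_score in patients:
        baseline = measure(true_score)
        if baseline >= CUTOFF:                     # enrollment selects the high scorers
            baselines.append(baseline)
            followups.append(measure(true_score))  # second visit, no treatment given

    print(sum(baselines) / len(baselines))   # roughly 68: enrolled patients look severe at entry
    print(sum(followups) / len(followups))   # roughly 59: the same patients look milder, untreated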

Second, diseases are not stable over time. Without any intervention, measures of a disease will be most highly correlated when measured with a short duration between assessments. The longer you wait to measure again, the lower the correlation. Accurately measuring a drug effect in a controlled trial depends on a high level of correlation. All else being equal, the longer one treats, the harder it will be to measure the effect of the drug. This is the major pitfall of depression trials. Episodes are usually limited in duration, so most patients will get better over time without treatment.

So perhaps it’s not surprising that it’s very hard to measure the effect of ADHD drugs after months or years in chronic therapy trials. These kids get better over time both from regression to the mean and from the natural history of the disease.

Another important issue in ADHD research is that these drugs have effects in healthy volunteers. As Dr. Sroufe points out, amphetamines help college students study for exams, no diagnosis of ADHD needed. This makes it easier to do pharmacology studies, but it means that diagnosis in those studies doesn’t really matter: the pharmacology is largely independent of any real pathological state. One could never study a cancer drug in someone without cancer, but this is not true of a cognition-enhancing drug. It’s most likely that kids with ADHD don’t have a single pathophysiology, but rather a combination of being at one end of a normal spectrum of behavior plus stress or a lack of coping mechanisms, which creates problems for them in the school environment where those behaviors are disruptive to their learning and that of others. The pharmacology of stimulants helps them all; after all, it helps even neurotypical college students and computer programmers.

Treatment response does not confirm diagnosis in ADHD as it does in some other neurological diseases like Parkinson’s Disease. While we’d like to call ADHD a disease, or at least an abnormal brain state, we have no routine way of assessing the current state of a child’s brain. We have even less ability to predict the state of the brain in the future. Thus diagnosis, in the real meaning of the word (“dia,” to separate, and “gnosis,” to know), is something we can’t do. We don’t know how to separate these kids from normal or into any useful categories. And we have no way of describing prognosis, predicting their course. So a trial that enrolls children on the basis of a behavior at a moment in time and tries to examine the effects of an intervention over the long term is probably doomed to failure. Many of those enrolled won’t need the intervention over time. Many of those who don’t get the intervention will seek other treatment methods over time.

With all of these methodological problems, we can’t accept the lack of positive trials as proof that the drugs are ineffective long term. We can’t even prove that powerful opioid pain relievers have long-term efficacy. In fact, it was not too long ago that we struggled with a lack of evidence that opioids were effective even over time periods as short as 12 weeks.

Our short term data in ADHD provides convincing evidence of the symptomatic effects of treatment. Instead of abandoning their use, we should be looking at better ways to collect long term data and test which long term treatment algorithms lead to the best outcomes. And we should be using our powerful tools to look at brain function to understand both the spectrum of ADHD behaviors and the actions of drugs in specific brain regions.