I think more than a few photographers have begun to realize that we finally have digital cameras with enough dynamic range to begin to replicate a more classic black and white look. Film, with its slow fade to pure white and pure black, captured tonality with ease. This D800 is about there. With a 50mm prime, it’s not much bigger than the film cameras I started with, although it is a good bit heavier.
Author: James Vornov
Oregon Ridge, MD
Having made a major life transition in moving to a new job, perhaps it’s not surprising that I’m feeling a need for some simple visual work. Vince Versace’s two new books have been waiting for me. My first goal will be to catch up on the updated technique in Welcome to Oz 2.0. So far, I find my new conversions a bit more naturalistic, a bit less dramatic.
Mind In The Cloud
“Technology changes ‘how’ not ‘what.’ Expands in space, compresses in time. The results are sometimes breathtaking.”
- David Rogers via Twitter
Notebooks as Extended Mind
In 1998, Andy Clark and David Chalmers made the radical suggestion that the mind might not stop at the borders of the brain. In their paper, The Extended Mind, they suggested that the activity of the brain that we experience as consciousness depends not only on the brain but also on input from the rest of the world. Clark’s later book, Supersizing the Mind, clarifies and expands on the idea. Taken to its logical conclusion, this extended mind hypothesis locates mind in the interactions between the brain and the external world. The physical basis of consciousness includes the brain, the body and nearby office products.
I mean to say that your mind is, in part, in your notebook. In the original paper, Clark and Chalmers use the hypothetical case of Otto. Otto has Alzheimer’s Disease and compensates for his memory deficit by carrying a notebook around with him at all times. They argue for externalism: that Otto’s new memories are in the notebook, not in his brain. The system that constitutes Otto’s mind, his cognitive activity, depends not only on his brain but also on the notebook. If he were to lose the notebook, those memories would disappear just as if they had been removed from his brain by psychosurgery. It should make no difference whether memory is stored as physical traces in neuronal circuitry or as ink marks on paper, since the use is the same in the end.
The paper actually opens with more extreme cases, like neural implants that completely blur whether information is coming from inside the brain or outside it. We have brain mechanisms to separate what is internally generated from what is external. The point is that these external aids are extensions. In medical school I learned to use index cards and a pocket notebook reference, commonly referred to as one’s “peripheral brain”. Those of us who think well but remember poorly succeed only with these kinds of external knowledge systems.
In 1998, when The Extended Mind was published, we used mostly paper notebooks and computer screens. The Apple Newton was launched in August 1993. The first Palm Pilot, which I think was the first ubiquitous pocket computing device, shipped in March 1997.
The Organized Extended Mind
When David Allen published Getting Things Done in 2001, index cards and paper notebooks were rapidly being left behind as the world accelerated toward our current age of email and internet. I’ll always think of the Getting Things Done system as a PDA system because the lists I created lived on mobile devices. First it was the Palm, then the BlackBerry and most recently, the iPhone. @Actions, @WaitingFor and @Projects were edited on the PC and synced to a device that needed a daily connection to the computer. I had a nice collection of reference files, particularly for travel, called “When in London”, “When in Paris”, etc.
My information flow moved to the PC as it became connected to the global network. There were really two communication functions: conversations and read/write publishing. Email and message boards provided two-way interaction that was generally one to one or among a small community. Wider publishing went to the web. Both of these migrated seamlessly to handheld devices that replicated the email apps or the browser on the PC. Eventually the mobile device merged with the phone. Even though capabilities have grown with faster data rates, touch interfaces, bigger screens and large amounts of solid state storage, the first iPhones and iPads showed their PDA roots as tethered PC devices in the way they backed up and synced information. That world is rapidly fading as the internet becomes a ubiquitous wireless connection.
Access to email and internet through smartphones has served to further “expand in space” and “compress in time,” as Dave put it. I adopted a text file based approach so that I could switch at will between my iPhone, iPad and MacBook Air and have my external thoughts available. The synced plain text files seem transformational, but the system feels like my old Palm set of lists.
The age of the cloud is one of information flakes. Much of what we know is now latent and external, requiring reference to a small device. Is it any wonder that our streets and cafes are filled with people peering down into a screen rather than out into the world?
It was a rapid transition. One that continues to evolve and that demands frequent reconsideration of the means and methods for constructing the extended mind.
A Mind Released
The SimpleNote, Notational Velocity and Dropbox ecosystem was the enabling technology for me. Suddenly there was seamless syncing between the iPad or iPhone and the Mac. The rapid adoption of Dropbox as the de facto file system for iOS broke the game wide open so that standard formats could be edited anywhere: Mac, Windows, iPhone, iPad, Unix shell. This was a stable, fast data store available whenever a network was available.
Editing data on a server is also not a new idea. Using a shell account to edit text with vi or Emacs on a remote machine from anywhere is as old as computer networking. I started this website in late 1999 on Dave Winer’s Edit This Page service, where a text editor in the browser allowed simple website publishing for the first time.
Incremental searching of text files eliminates the need for databases or hierarchical structure. Text editors like Notational Velocity, nvAlt, SimpleNote or Notesy make searching multiple files as effortless as brain recall from long term memory. Just start typing associations or, for wider browsing, tags embedded in metadata, and large unorganized collections become useful, just like the brain’s free recall of red objects or words that begin with the letter F. Incremental searching itself is not a new idea for text editors. What’s new is that we’re not seeing just a line of text, but rather multiline previews and instant access to each file found. Put incremental searching together with ubiquitous access, and the extended mind is enabled across time and space.
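The mechanism is simple enough to sketch. Here is a minimal, hypothetical version in Python of what an nvAlt-style search does under the hood; the `notes` folder and the query are my own stand-ins, and the real apps add live keystroke-by-keystroke filtering, previews and editing on top of this:

```python
import os

def search_notes(notes_dir, query):
    """Return (filename, preview) pairs for every plain text note
    containing the query, case-insensitive. Re-run on each keystroke
    and the candidate list narrows as the query grows."""
    results = []
    for name in sorted(os.listdir(notes_dir)):
        if not name.endswith(".txt"):
            continue
        with open(os.path.join(notes_dir, name), encoding="utf-8") as f:
            text = f.read()
        if query.lower() in text.lower():
            preview = " / ".join(text.splitlines()[:3])  # first lines as a preview
            results.append((name, preview))
    return results

# "e" -> "ex" -> "extended mind": each longer query returns fewer notes,
# the way free recall narrows as you add associations.
for name, preview in search_notes("notes", "extended mind"):
    print(f"{name}: {preview}")
```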
What seems to have happened is that the data recorded as external memory has finally broken free from its home in notebooks or on the PC and is resident on the net, where it can be accessed by many devices. My pocket notebook and set of GTD text lists are now a set of text files in the cloud. Instantly usable across platforms, small text files have once again become the unit of knowledge: instant access to personal notebook knowledge via sync and search.
Thoughts On The Raw Shark Texts
“Why do you speak to me of the stones? It is only the arch that matters to me.”
Polo answers: “Without stones there is no arch.”
-Italo Calvino
Invisible Cities (1972)
As Steven Hall’s The Raw Shark Texts begins, a man awakes without memory of his life or of his personal identity. He finds instructions to visit a therapist who will explain his situation. The therapist tells him he has a dissociative disorder: he’s blocked his memory because of a traumatic event. This is more or less a conventional explanation for global amnesia, a loss of the past and personal identity.
An alternative explanation soon presents itself. Mysterious letters and packages arrive at the apartment. These have somehow been sent by his previous self, the first Eric Sanderson. His past self warns him that he is pursued by a conceptual shark that seeks to consume what’s left of his mind and sense of self. The novel is the story of his quest to understand his situation and somehow defeat or evade the abstract menace that threatens his tenuous existence.
What to make of this abstract threat to mental and possibly physical life? In the opening scenes of the novel, Eric is completely uncertain whether he is crazy or really threatened by the abstract world. Which is real, the outside world or the interior world of mind? The reader is just as confused.
The philosopher recognizes this conflict as having roots going back to Rene Descartes. Descartes famously asserted that mind and the world “out there” are separate. As thinking beings, people live on the side of mind, entirely without access to physical reality. That is, if the real world can be said to exist at all when it is so far out of reach. The apparent entry of the abstract into the real world, the breach of mind into world, breaks the Cartesian divide wide open, creating the central suspense of the novel.
How can a neuroscientist approach a book like this? I come to the book with a certainty that the central conflict doesn’t exist. There is no abstract world of mind, only the physically based activity of the brain. It couldn’t be more clear that Eric’s conceptual shark and his loss of personal identity are occurring in his brain. Electrical patterns of neuronal discharge can’t leave the brain and pursue the body through the world. So I can’t bring myself to a suspension of disbelief and accept the premise that a conceptual shark might be real and a physical threat.
And yet once I settled into the book I was swept up in the quest and invested in the outcome. It was a great and clever read, full of suspense and action even though I knew the events were occurring in Eric’s brain with no external referent even if he perceived it to be so.
Some neuroscientists take the rejection of Cartesian dualism too far, denying that mind matters at all. After all, it makes no sense to say that mind can control anything, since only brain circuits can do anything. Mind is what brain does and so is not able to cause or create anything in a real sense, all metaphor aside. Subjective experience, in the most extreme version of this view, is of no importance. They would, I presume, find the entire book of no interest. Mental states are not something to be taken seriously. Yet we know that scientists and philosophers who espouse this view don’t act this way. When the subject turns to mental events, they often act like Cartesians, treating mind as existing in a separate world that somehow influences the brain. Like abstract sharks stalking vulnerable people.
A novel like The Raw Shark Texts actually helps demonstrate how important the content of thought really is. Mind matters because it is the only window we have onto reality. It is not reality itself. It is a simplified approximation of the world represented in brain maps that we experience with this marvelous subjective sense of sight, sound, smell and meaning everywhere.
The brain needs the sensory input to be anchored in reality. Without the input, the reproduction of the world continues on, just without as much relevance. Deprived of sensation to anchor it, the brain runs independently and disconnected. Sensory deprivation, for example, results in vivid hallucinations. Dreams are likely to be similar brain network activation patterns untethered to external input.
The fact is that I can close my eyes at will and bring sights and sounds to mind that exist only in memory. Imagination is a mechanism by which new reality can be constructed from bits and pieces of the known and remembered.
After all, the brain is an amazing realtime engine that uses a simplified model of the world to reconstruct a guessed at reality and predict outcomes based on imperfect knowledge. There is nothing more important than the battles that occur inside, the self-referential arguments to decide what is real and what is right.
The Raw Shark Texts puts this idea in a literary frame and dramatically portrays the struggle as a fantasy thriller. The genre we call magical realism gets much of its power from projecting the interior imagination out into a story that seems to be about the real world. It’s more dramatic to read about the clash of swords than the clash of ideas. It’s a bigger problem to have your life threatened than to have your ideological underpinnings or emotional stability attacked.
The life of mind is important. It’s our most important experience in life because it’s our only experience in life. What you perceive becomes what is. Your perception of me, my motivations, my character is what I am to you. Who I really am or what I really think is entirely inaccessible to you. Even as you read this you are assembling a model of me that may be wildly different from the view I hold of myself. Whose understanding of me is more accurate?
To awaken completely isolated from oneself and from one’s memories is a dangerous situation for Eric Sanderson. His journey to save himself is no less epic if it only happens in his mind. It makes my own daily interior struggle to be better seem more important, more noble.
It would be depressing to judge myself only based on my position in the world. Material success as measured by the bank account or the rank and esteem bestowed by others seem empty because they are external. Perceptions of the perceptions of others are mere shadows of shadows. It’s bad enough that we have to live in the illusion we’ve created for ourselves. What a greater pity to live that fragile illusion reflected from the shifting and conflicted illusions of others.
We know or at least imagine who we want to be. Success is best measured by the distance between who we are now and that goal of thinking, knowing and acting well.
Eric Sanderson awakes completely missing his memories, disconnected from what he has been and what he’s learned. He’s as far away from who he wants to be as possible. He doesn’t know who he was, who he is or who he wants to be. That conceptual shark has taken away most of his life and now threatens to wipe him out completely. I can’t think of anything more real than that.
Do ADHD Drugs Work Longterm?
“Illness is the doctor to whom we pay most heed; to kindness, to knowledge, we make promise only; pain we obey.”
― Marcel Proust
An essay on ADHD in the New York Times launched an interesting Twitter exchange with Steve Silberman and the medical blogger PalMD on how well we understand psychiatric disorders and their treatment.
In the article, Dr. Sroufe concludes that since there is no evidence supporting longterm use of ADHD medications, their use should be abandoned. He is right that the evidence of efficacy is all short term; over the long term, no benefit has been shown. Of course, almost no one dealing with the issue on a day to day basis would agree. Parents, teachers and physicians all agree that these medications can improve the lives of these children. Count me among those who believe it is highly probable that treatment over the course of months and years has utility, even though that utility is hard to prove.
As a problem in decision making, this is a good example of the difference between believing and knowing.
There is a difference between the practice of science and an absolutist approach to truth. In decision making, we must be practical. As William James said, “Truth is what works.” He believed that science was a pragmatic search for useful models of the world, including mind. Those who look for abstract, absolute truth in clinical research will be confused, misguided and, as often as not, wrong in their decisions. Truth is something that happens to a belief over time as evidence is accumulated, not something that is established by a single positive experiment.
Belief in the usefulness of therapy in medicine follows this model of accumulation of belief. The complexity and variability of human behavior demands a skeptical approach to evidence and a sifting through to discover what works.
Clinical trials for drugs to affect behavior are generally relatively small, short experiments that measure a change from baseline in some clinically meaningful variable. These trials are clinical pharmacology studies in the classic sense: studies in patients (clinical) of drug effect (pharmacology). No one is expecting cure or even modification of the disease. The benefit is short term symptom relief, so the trial examines short term symptom relief. In the case of a pain reliever, we ask whether patients’ self-reports of pain are decreased by therapy compared to before therapy. In ADHD, we ask whether a group of target behaviors is changed by treatment compared to baseline.
This approach of measuring change from baseline has a host of pitfalls that limit the generalizability of clinical trials to real life medicine. First, baseline measures are subject to large amounts of bias. One of the worst sources of bias in these trials is the patient’s and physician’s joint desire to have the patient meet the severity required for enrollment. The investigator is under pressure to contribute patients to the trial. The patient hopes to gain access to some new therapy, either during the trial or during some subsequent opportunity. Both of these factors pressure patients to maximize the severity of their complaint at baseline. How do you get into a trial? Exaggerate your problem! Even without conscious or unconscious bias from patients, any trial will enroll patients who happen to be worse than their average state. When measured repeatedly over time, their scores will tend to drop: a classic regression to the mean. If you select the more severe outliers, they will tend to look more average over time.
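A toy simulation, with entirely made-up numbers, shows the size of this effect. Each simulated patient has a stable trait severity plus visit-to-visit noise; enrolling only those above a cutoff guarantees that untreated follow-up scores fall:

```python
import random

random.seed(1)
TRAIT_SD, VISIT_SD, CUTOFF = 10, 10, 65  # hypothetical severity scale

baseline_scores, followup_scores = [], []
for _ in range(100_000):
    trait = random.gauss(50, TRAIT_SD)            # stable severity
    baseline = trait + random.gauss(0, VISIT_SD)  # noisy entry visit
    if baseline >= CUTOFF:                        # severity needed to enroll
        followup = trait + random.gauss(0, VISIT_SD)  # untreated revisit
        baseline_scores.append(baseline)
        followup_scores.append(followup)

mean = lambda xs: sum(xs) / len(xs)
print("baseline mean: ", round(mean(baseline_scores), 1))  # roughly 72
print("follow-up mean:", round(mean(followup_scores), 1))  # roughly 61
# Scores drop about ten points with no treatment at all, purely from
# selecting on a noisy measure: regression to the mean.
```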
Second, diseases are not stable over time. Without any intervention, measures of a disease will be most highly correlated when measured with a short interval between assessments. The longer you wait to measure again, the lower the correlation. Accurately measuring a drug effect in a controlled trial depends on a high level of correlation. All else being equal, the longer one treats, the harder it will be to measure the effect of the drug. This is the major pitfall of depression trials: episodes are usually limited in duration, so most patients will get better over time without treatment.
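The arithmetic behind that claim is worth a quick sketch. For a change-from-baseline trial, the variance of each patient’s change score is 2σ²(1 − ρ), where ρ is the test-retest correlation; a rough per-group sample size calculation (my own back-of-envelope, with assumed numbers, not figures from any actual ADHD trial) shows how fast the required trial grows as ρ decays:

```python
# Per-group n for a two-arm comparison of change scores at 80% power,
# two-sided alpha of 0.05. SIGMA and EFFECT are assumed for illustration.
Z_ALPHA, Z_BETA = 1.96, 0.84
SIGMA, EFFECT = 10.0, 5.0

for rho in (0.9, 0.7, 0.5, 0.3, 0.1):
    var_change = 2 * SIGMA**2 * (1 - rho)  # Var(follow-up - baseline)
    n = 2 * (Z_ALPHA + Z_BETA) ** 2 * var_change / EFFECT**2
    print(f"test-retest correlation {rho:.1f} -> about {n:.0f} per group")
# 0.9 -> ~13 per group; 0.1 -> ~113. Same drug, same true effect: the
# longer the interval between measurements, the bigger the trial needed.
```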
So perhaps it’s not surprising that it’s very hard to measure the effect of ADHD drugs after months or years in chronic therapy trials. These kids get better over time both from regression to the mean and from the natural history of the disease.
Another important issue in ADHD research is that these drugs have effects in healthy volunteers. As Dr. Sroufe points out, amphetamines help college students study for exams, no diagnosis of ADHD needed. This makes it easier to do pharmacology studies, but it means that diagnosis in those studies doesn’t really matter; the pharmacology is largely independent of any real pathological state. One could never study a cancer drug in someone without cancer, but this is not true of a cognitive enhancing drug. It’s most likely that kids with ADHD don’t have a single pathophysiology, but rather a combination of being at one end of a normal spectrum of behavior plus stress or a lack of coping mechanisms that creates problems for them in the school environment, where those behaviors are disruptive to their learning and that of others. The pharmacology of stimulants helps them all; after all, it helps even neurotypical college students and computer programmers.
Treatment response does not confirm diagnosis in ADHD as it does in some other neurological diseases like Parkinson’s Disease. While we’d like to call ADHD a disease or at least an abnormal brain state, we have no routine way of assessing the current state of a child’s brain. We have even less ability to predict the state of the brain in the future. Thus diagnosis, in the real meaning of the word (“dia,” to separate, and “gnosis,” to know), is something we can’t do. We don’t know how to separate these kids from normal or sort them into any useful categories. And we have no way of describing prognosis, of predicting their course. So a trial that enrolls children on the basis of a behavior at a moment in time and tries to examine the effects of an intervention over the long term is probably doomed to failure. Many of those enrolled won’t need the intervention over time. Many of those who don’t get the intervention will seek other treatment methods over time.
With all of these methodological problems, we can’t accept a lack of positive trials as proof that the drugs are ineffective long term. We can’t even prove that powerful opioid pain relievers have longterm efficacy. In fact, it was not too long ago that we struggled with a lack of evidence that opioids were effective even over time periods as short as 12 weeks.
Our short term data in ADHD provides convincing evidence of the symptomatic effects of treatment. Instead of abandoning their use, we should be looking at better ways to collect long term data and test which long term treatment algorithms lead to the best outcomes. And we should be using our powerful tools to look at brain function to understand both the spectrum of ADHD behaviors and the actions of drugs in specific brain regions.
The Brain is the Map of the Mind
I dwell in Possibility – Emily Dickinson
So What?
Why learn about the world with no immediate practical application?
I’ve said that there is value in writing to learn and photographing to see. What’s the value of learning? Of seeing?
Knowing how to bake bread or bake beans is clearly useful; there’s no question of practicality. Science, even at its most exploratory, seems useful as long as it promises more powerful manipulation of nature. Sometimes the practical possibility of science is obvious, as in understanding the role of an enzyme in energy metabolism in order to affect cellular function. Even when the connection is unclear, learning more seems to have potential value even if the present result is impractical. All information has value and there are no dead ends, only detours. Learning broadly is often necessary preparation for learning more narrowly and usefully.
Is philosophy of mind of any use? Is it as useful as neuroscience itself? Might thinking about the nature of the mind at least contribute to the usefulness of information about brain structure and function? Why explore the relationship between mind and brain? Why worry about the apparent contradictions between deterministic physical models and subjective free will?
If I can’t tell whether the world is real or an illusion, does it matter? Is the mind made of ectoplasm attached to the brain by a neural-spiritual interface? These questions have been around for centuries. Every year we learn more about the brain. Do we know any more about the mind? Is what we’ve learned potentially useful?
I’d like to convince you that understanding how mind is generated from brain is a useful way to improve brain function. For me, this is a fundamental reason why brain science is important. Learning about the brain should be a path to deciding better.
Learning From Experience, Teaching the Brain
In *The Consolations of Philosophy*, Alain de Botton writes, “In their different ways, art and philosophy help us, in Schopenhauer’s words, to turn pain into knowledge.” We know what art is and we know that art helps us learn to see. Philosophy, in the broadest meaning of love of knowledge, is a similar direct path from experience to knowledge.
To ignore the question, pretending that knowledge and brain are independent domains, is to miss an opportunity to understand what it means to “know” and therefore to improve the everyday use of knowledge.
So what have you learned personally from your years of mental experience? You’ve made good, profitable decisions about the world. You’ve made mistakes, of course. Better yet, how often have you thought that you were right, absolutely sure you were right, and later learned that the true state of the world was not at all what you thought?
The stock market serves as a wonderful lab for training the mind to decide better if approached mindfully. There’s profit in correctly identifying an undervalued stock that subsequently rises in value. On the other hand, buying into hype and choosing a company close to failure is exactly the kind of pain that Schopenhauer was referring to as leading to knowledge. What is it that has improved from years of experience in the market that we call “knowledge”? Is it mind that has better judgement now? Is it the brain that can now choose more accurately under conditions of uncertainty?
The Brain Makes Maps
We are beginning, just beginning I think, to understand how knowledge is stored and retrieved in the brain. The insights go back to the beginnings of brain physiology, when recordings of single neurons in awake, behaving animals first became possible. It was obvious from the start that you and I don’t perceive the world directly and whole, but broken down into very small elements. Our retinas are the light sensing neural arrays at the back of the eye. Like the individual pixels that make up the sensor array in a camera, each photoreceptor senses the light from a small part of the visual scene. The whole picture is represented, but it’s been deconstructed into a mosaic in which each element has been disconnected from every other element.
Somehow that array of light intensity is reconstructed into a sensory impression that we experience subjectively as seeing the world. What’s reconstructed is more than just a visual sensory impression; the seen world has meaning. It’s as if there are little call outs from the objects: blue book, time, moving fly making that buzzing noise, so annoying …
The way in which sensory input is organized into coherent perception remains one of the fundamental questions in neuroscience. In the visual system, the brain starts by abstracting local features like color, form and edges from the map of intensities sensed at the eye. These features are mapped from visual space into brain maps, creating a neural representation of features in the scene. At higher levels, features become maps of objects with meaning, like books and flies. The maps of words for these objects are separate, but can be called on when the fly or the book needs to be mentioned.
The brain is a set of maps, spatially organized, each representing different sensory streams or, on the action side, control of different parts of the body and their movement through space. To catch a fly requires the map of visual space containing the fly to be registered with the map of arm and hand movement. The connections and coordinating systems to do all of that are known. In fact, simpler versions of them can be studied in frogs, which need to project a sticky tongue out into visual space for fly catching. Lunch, in this case.
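To make “registering” two maps concrete, here is a toy illustration of my own, not anything from the frog literature: a point located in an eye-centered visual map is carried into a body-centered motor map by a fixed coordinate transform, which is the kind of computation the tongue-aiming circuitry must perform:

```python
import math

def visual_to_motor(x, y, angle_deg=15.0, dx=2.0, dy=-1.0):
    """Toy registration of two 2D maps: rotate the eye-centered frame
    into the body-centered frame, then translate for the offset between
    eye and body. All numbers are invented for illustration."""
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a) + dx,
            x * math.sin(a) + y * math.cos(a) + dy)

fly_in_visual_map = (3.0, 4.0)  # where the eye sees the fly
print("aim tongue at:", visual_to_motor(*fly_in_visual_map))
```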
It’s a small conceptual step to suggest that valuation of stock, reading another person’s motivation, and understanding calculus are all brain maps of various types. They are simplified representations that model aspects of the real world. The maps are not strictly spatial, but reflect our models of how the physical world is laid out and how it can be manipulated. As simple models, they are not perfectly accurate just as a geographical map is not the terrain itself but rather a useful representation for navigation.
The Mind Mapped
Learning is the act of making better brain maps. The more accurate the model of the world is in the brain, the better it will navigate the world itself. Misconceptions, inaccuracies and the unknown are all bad or missing parts of the map that will make decisions more prone to error. A fully accurate and comprehensive map isn’t ever possible. By their very nature, maps are restricted representations of the world. The world itself is too big and complex to deal with directly.
The exploration of the relationship between mind and brain is for me an effort to create a more accurate map of deciding. We feel like we are creatures split in two. Our ethereal minds seem to inhabit and be constrained by physical bodies. A more accurate map would show just the brain working away and our subjective mental experience as a view into what that brain is doing. It becomes easier to discard distinctions like “rational decision making” and “intuition” when the underlying brain structure and function is the map of mind.
The Connectedness of the Abstract
Yesterday I visited the Metropolitan Museum of Art in NYC to see Stieglitz and His Artists: Matisse to O’Keeffe, just to soak in this important period of the arts in the US. There was a small associated gallery with some of Stieglitz’s photography collection. The photographs brought to mind once again how much photography was competing with painting until technical advances in film and printing gave photography the kind of technical image perfection that we now think of as “photographic”. I’ve always yearned for the painterly results that the early techniques produced. Our technical abilities have now advanced to the point that Flickr has thousands of well composed, perfectly exposed images that would have been hard to achieve in the predigital era.
There was another small photography gallery at the Met which was more inspiring to me, After the Gold Rush: Contemporary Photographs from the Collection. The images all addressed culture and political power. What struck me was the choice of a test wall installation as the closer for the gallery. I’d never seen the artist’s work before, but I believe he is on to something in presenting multiple images in multiple sizes of multiple subjects that relate to each other in a way that a single image cannot.
I’ve begun printing images again and have been looking at how images relate when prints are placed in relation to each other. So far, it suggests why these images get created and how they are connected.
Happy Chanukah
Last night’s dinner: Duck Risotto with Duck Skin Gribenes. In the US, fried skins are usually called Cracklings. Gribenes is the Yiddish term of art.
On Chanukah, it’s traditional to eat fried foods like potato latkes. It’s a culinary nod to the miracle of the oil that’s celebrated.
The Future of the Library Is Service Not Place
Reading Mark Bernstein’s discussion of the future of books reminded me of my research methods in my academic days and a shocking recent discovery about the future of a library.
I never learned the classic index card source method. I would grab a new legal pad and scribble some information at the top of a page to link the notes to a source, generally a book. I’d then just write and write and write on the page, flipping to a fresh page as needed. It was a crazy mix of my own thoughts and information directly from the source. The raw material. Since I never had the patience to actually copy quotes or information, I only wrote enough to recapture and document the discussion going on between the source and myself on that notes page.
I remember now that I’d often read a book cover to cover with excitement and then take a second, leisurely stroll through it to have that conversation with the author. Other times it was a page flipping attack on a book to see whether a particular subject was to be found in its pages. Researching obscure topics led to a lot of attacks on a lot of books. Often I’d go from one end of a library shelf to the other, flipping systematically through every plausible source.
When it was time to write, I’d outline a section and decide which sources were relevant. Then I’d have the sources physically piled up in front of me, my notes to my left and a new pad for writing on the right. The process was to write, reference material by pulling the source and my notes, and then write some more. Of course, revisions included pulling all of the physical sources together again.
This method was perfectly suited to my poor memory, my distractibility and the library environment. I didn’t work in my dorm room or, later, my office. The library was where I read, researched and wrote. As a faculty member at Johns Hopkins, I loved setting aside the time to begin the preparation of a manuscript or grant so that I could spend those hours lost in the stacks of Welch Medical Library.
Once out of college and focused on neuroscience, the system worked just as well as long as the Xerox machines were working. I probably spent more time dragging bound journals down to the basement and flipping pages to copy than I did reading or writing. I developed a commitment never to cite research that I hadn’t read and, generally, didn’t have a physical copy of to refer to. In fact, hidden in a couple of the book chapters I wrote are indirect references to sources, such as “Jones (1936) as summarized in Smith (1994)”, used when I couldn’t get a copy of Jones to read directly or it was written in another language. Sadly, as abstract services became electronic, it became a very, very common practice for papers to cite research that the author had clearly never read critically, based solely on the research as inaccurately described in the paper’s abstract.
Imagine my surprise when I learned last month that the Welch Medical Library at Johns Hopkins was going to close the doors of its historic building. Of course, library services were not going away; it had been decided that a better use of resources would be to enhance electronic services and campus delivery of books rather than keep a building open for the 106 visitors a day who circulated around stacks of journals that were almost all available online anyway.
Checking the Welch Medical Library website today, I see that the plan has been scuttled and instead there will be a larger discussion about the library. Despite such rearguard actions, the future of the library is clear. Libraries will be a service, not a place.
Mark closed his discussion about the future of books this way, “If our books are not as good as they can be, we can make them better. If we know how to make them better, and we want to make them better, why would we not?”
I submit that the books (and journal articles) are only one part of this transition. The real change is in the environment of these books, migrating from physical form to distributed digital information. It’s chaotic now, with way too many devices and contexts for retrieving and collecting information. I find myself redeveloping my stack of sources, my legal pads of notes and my writing pad over and over again every six months, but now the sources are in a dozen forms that partially overlap, notes are in half a dozen systems (text files, Findings, Tinderbox), and writing happens in yet another half dozen programs.
We call it “workflow”, but that’s a symptom of the world in transition. If we’re smart, we’ll be choosing and developing tools that together create a more powerful version of the library carrel. And valuing scholarship, not just the ability to weave a few sources into an entertaining yarn.
Portfolio Review
Reclining
Florence, Italy
I was inspired by Zack Arias to look back on my photographic output of the last few years and try to put together a portfolio, perhaps as prints. I’ve been working on editing and rating photos lately to put together some photo books to give as gifts. That turns out to be a real motivator for me to bring photography more into my life.
I thought that I might cut the process short by just finding the photos in my Aperture library that were tagged by FlickrExport as having been uploaded. Flickr tells me that since I started uploading in late 2005, I’ve posted 628 images. Using the Archive Thumbnail View, I could survey my body of work on screen quickly and see the number of images for each year. It looks like this:
| Year | Flickr Images |
|---|---|
| 2006 | 112 |
| 2007 | 168 |
| 2008 | 182 |
| 2009 | 66 |
| 2010 | 49 |
| 2011 | 38 |
So in 2008, my Flickr posting peaked at about an image every other day. I was surprised by the pattern. Based on a review of image counts within Aperture, it looks like it has to do both with a decreased number of shots taken and processed and with a waning of interest in Flickr’s social side. A few years ago, I was active in several groups and anticipated the views and feedback.
I’m pretty confident that I can use the Flickr tags through 2008. But I’ll need to do a broader review of the last few years to identify promising images and process them.