Sunday Contemplation — Finding Wisdom — Daniel Dennett in “Darwin’s Dangerous Idea”

Daniel Dennett

“The Darwinian Revolution is both a scientific and a philosophical revolution, and neither revolution could have occurred without the other. As we shall see, it was the philosophical prejudices of the scientists, more than their lack of scientific evidence, that prevented them from seeing how the theory could actually work, but those philosophical prejudices that had to be overthrown were too deeply entrenched to be dislodged by mere philosophical brilliance. It took an irresistible parade of hard-won scientific facts to force thinkers to take seriously the weird new outlook that Darwin proposed…. If I were to give an award for the single best idea anyone has ever had, I’d give it to Darwin, ahead of Newton and Einstein and everyone else. In a single stroke, the idea of evolution by natural selection unifies the realm of life, meaning, and purpose with the realm of space and time, cause and effect, mechanism and physical law. But it is not just a wonderful scientific idea. It is a dangerous idea.”

Daniel Dennett (pictured above thanks to Wikipedia) is the Co-Director of the Center for Cognitive Studies and Austin B. Fletcher Professor of Philosophy at Tufts University.  He is also known as “Dawkins’ Bulldog” for his pointed criticism of what he viewed as unnecessary revisions to Darwinian theory by Stephen Jay Gould (himself a previous subject of this blog) and others.  In popular culture he has also been numbered among the “Four Horsemen” of the so-called “New Atheism”.  His intellectual and academic achievements are many, and his insights into evolution, social systems, cognition, consciousness, free will, philosophy, and artificial intelligence are extremely influential.

Back in 1995, when I was a newly minted Commander in the United States Navy, I happened across an intriguing book in a Jacksonville, Florida bookshop during a temporary duty assignment.  The book was entitled Darwin’s Dangerous Idea: Evolution and the Meanings of Life.  I opened it that afternoon on a gentle early spring Florida day and found myself astounded, my mind liberated, as if chains I had never noticed, but which had bound my thinking, had been broken and fallen away, so great was the influence of the philosophical articulation of this “dangerous idea”.

Here, for the first time, was a book that took what we currently know about the biological sciences and placed it within the context of other scientific domains–and did so in a highly organized, articulate, and readable manner.  The achievement of the book was not so much in deriving new knowledge, but in presenting an exposition of the known state of the science and tracing its significance and impact–no mean achievement given the complexity of the subject matter and the depth and breadth of knowledge being covered.  The subject matter, of course, is highly controversial only because it addresses subjects that engender the most fear: the facts of human origins, development, nature, biological interconnectedness, and the inevitability of mortality.

Dennett divides his thesis into three parts: the method of developing the theory and its empirical proofs, its impact on the biological sciences, and its impact on other disciplines, especially regarding consciousness, philosophy, sociology, and morality.  He introduces and develops several concepts, virtually all of which have since become cornerstones in human inquiry, and not only among the biological sciences.

Among these are the concepts of design space, of natural selection behaving as an algorithm, of Darwinism acting as a “universal acid” that transforms the worldview of everything it touches, and of skyhooks and cranes–skyhooks being the fallacious and magical ways of thinking, with no underlying empirical foundation, invoked to explain natural phenomena, and cranes the grounded, incremental mechanisms that actually do the explanatory lifting–along with the “just-so” stories that lean on the former.

The concept of design space has troubled many (though not most) evolutionary biologists and physicists, only because Dennett posits a philosophical position in lieu of a mathematical one.  This does not necessarily undermine his thesis, since one must usually begin with a description of a thesis before one can determine whether it can be disproven.  Furthermore, Dennett is a philosopher of the analytical school, and so the scope of his work is designed from that perspective.

But there are examples that approach an analogue of design space in physics–those that visualize space-time and general relativity, as at this site.  It is not a stretch to understand that our reality–the design space that the earth inhabits among the many alternative design spaces that may exist in relation to biological evolution–can eventually be mathematically formulated.  Given that our knowledge of comparative planetary and biological physics is still largely speculative and confined to cosmology, the analogy for now is sufficient and understandable.  It also recasts adaptation away from the popular (and erroneous) notion of “survival of the fittest”, since fitness is based on the ability to adapt to environmental pressures and to find niches that may exist in that environment.  As we trace the effects of climate change on species, we will be witnessing firsthand the brutal workings of design space.

Going hand-in-hand with design space is the concept that Darwinian evolution through the agent of natural selection is an algorithmic process.  This understanding becomes “universal acid” that, according to Dennett, “eats through just about every traditional concept and leaves in its wake a revolutionized world-view.”

One can understand the objection of philosophers and practitioners of metaphysics to this concept, which many of them have characterized as nihilistic.  This, of course, is argument from analogy–a fallacious form of rhetoric.  The objection to the book through these arguments, regardless of the speciousness of their basis, is premature and a charge to which Dennett effectively responds through his book Consciousness Explained.  It is in this volume that Dennett addresses the basis for the conscious self, “intentionality”, and the concept of free will (and its limitations)–what in the biological and complexity sciences is described as emergence.

What Dennett has done in describing the universal acid of Darwinian evolution is to describe a phenomenon: the explanatory reason for the rapid social change that we have witnessed and are witnessing, and the resulting reaction and backlash to it.  For example, the revolution engendered by the Human Genome Project has confirmed not only our species’ place in the web of life on Earth and our evolutionary place among primates, but also the interconnections that derive from the descent of the entire human species from common ancestors, exploding the concept of race and any claim to inherent superiority or inferiority for any cultural grouping of humans.

One can clearly see the threat this basic truth has to entrenched beliefs deriving from conservative philosophy, cultural tradition, metaphysics, religion, national borders, ethnic identity, and economic self-interest.

For it is apparent to me, given my reading not only of Dennett, but also of both popularizers and leading minds in the biological sciences, including Dawkins, Goodall, Margulis, Wilson, Watson, Venter, Crick, Sanger, and Gould; in physics from Hawking, Penrose, Weinberg, Guth, and Krauss; in mathematics from Wiles, Witten, and Diaconis; in astrophysics from Sandage, Sagan, and deGrasse Tyson; in climate science from Hansen and many others; and in the information sciences from Moore, Knuth, and Berners-Lee, that we are in the midst of another intellectual revolution.  This intellectual revolution far outstrips both the Renaissance and the Enlightenment as periods of human achievement and advancement, if only because of the widespread availability of education, literacy, healthcare, and technology, as well as human diversity, which both accelerates and expands many times over the impact of each increment in knowledge.

When one realizes that both of those earlier periods of scientific and intellectual advance engendered significant periods of social, political, and economic instability, upheaval, and conflict, the reasons for many of the conflicts in our own times become clear.  It was apparent to me then–and is even more apparent to me now–that there will be a great overturning of the institutional, legal, economic, social, political, and philosophic ideas and structures that now exist as a result.  We are already seeing the strains in many areas.  No doubt there are interests looking to see if they can capitalize on or exploit these new alignments.  But for those overarching power structures that exert control, conflict, backlash, and eventual resolution are inevitable.

In this way Fukuyama’s thesis in The End of History and the Last Man was wrong in the most basic sense: he misidentified ideologies as the driving force behind the future of human social organization.  What he missed in his social “science” (*) is the shift to the empirical sciences as the nexus of change.  The development of analytical philosophy (especially American Pragmatism) and of more scientifically-based modeling in the social sciences are only the start, but one can argue that these ideas have been more influential in clearly demonstrating that history, in Fukuyama’s definition, is not over.

Among the first shots over the bow from science into the social sciences have come works from such diverse writers as Jared Diamond (Guns, Germs, and Steel: The Fates of Human Societies (1997)) and Sam Harris (The Moral Landscape: How Science Can Determine Human Values (2010)).  The next wave will, no doubt, be more intense and drive further resistance and conflict.

The imperative of science informing our other institutions is amply demonstrated by two facts.

  1. On March 11, 2016 an asteroid large enough to extinguish a good part of all life on earth came within 19,900 miles of our planet’s center.  This was not as close, however, as the one that passed on February 25 (8,900 miles).  There is no invisible shield or Goldilocks Zone to magically protect us.  The evidence of previous life-ending collisions is more apparent with each new high resolution satellite image of our planet’s surface.  One day we will look up and see our end slowly but inevitably making its way toward us, unless we decide to take measures to prevent such a catastrophe.
  2. Despite the desire to deny that it’s happening, 2015 was the hottest year on record and 2016 thus far is surpassing it, providing further empirical evidence of the validity of Global Warming models.  In fact, the last four consecutive years all rank among the hottest years on record (2014 was the previous record holder).  The outlier was 2010, another previous high, which is hanging in at number 3 for now; 2013 is at number 4 and 2012 at number 8.  Note the general trend.  As Jared Diamond has convincingly demonstrated, the basis of conflict and societal collapse is usually rooted in population pressures exacerbated by resource scarcity.  We are just about at the point of no return, given the complexity of the systems involved, and can only mitigate the inevitable–but we must act now to do so.

What human civilization does not want is to be on the wrong side of history in dealing with these challenges.  Existing human power structures and interests would like to keep the scientific community within the box of technology–and no doubt there are still scientists who are comfortable staying within that box.

The fear about allowing science to move beyond the box of technology and general knowledge is of its misuse and misinterpretation, usually by non-scientists–witness the reprehensible meme of Social Darwinism (which is neither social nor Darwinian).**  This fear is oftentimes transmitted by people with a stake in controlling the agenda or in interpreting what science has determined.  Science’s contingent nature is also a point of fear.  While few major theories are ever completely overturned as new knowledge is uncovered, the very nature of revision and adjustment to theory is frightening to people who depend on, at least, the illusion of continuity and hard truths.  Finally, science puts us in our place within the universe.  If there are millions of planets that can harbor some kind of life, and a sub-set of those that have the design space to allow for some kind of intelligent life (as we understand that concept), are we really so special after all?

But not only within the universe.  Within societies, if all humans have developed from a common set of ancestors, then our basic humanity is a shared one.  If the health and sustainability of an ecology is based on its biodiversity, then the implication for human societies is likewise found in diversity of thought and culture, eschewing tribalism and extreme social stratification.  If the universe is deterministic with only probability determining ultimate cause and effect, then how truly free is free will?  And what does this say about the circumstances in which each of us finds him or herself?

The question now is whether we embrace our fears, manipulated by demagogues and oligarchs, or embrace the future, before the future overwhelms and extinguishes us–and to do so in a manner that is consistent with our humanity and ethical reasoning.

 

Note:  Full disclosure.  As a senior officer concerned with questions of AI, cognition, and complex adaptive systems, I opened a short correspondence with Dr. Dennett about those subjects.  I also addressed what I viewed as his unfair criticism (being Dawkins’ Bulldog) of punctuated equilibrium, spandrels, and other minor concepts advanced by Stephen Jay Gould, offering a way that Gould’s concepts were well within Darwinian Theory, as well as being both interesting and explanatory.  Given that less complex adaptive systems that can be observed do display punctuated periods of rapid development–and also continue to have the vestiges of previous adaptations that no longer have a purpose–it seemed to me that larger systems must also do so, the punctuation being on a different time-scale, and that any adaptation cannot be precise given that biological organisms are imprecise.  He was most accommodating and patient, and this writer learned quite a bit in our short exchange.  My only regret was not to continue the conversation.  I do agree with Dr. Dennett (and others) on their criticism of non-overlapping magisteria (NOMA), as is apparent in this post.

I Can’t Drive 55 — The New York Times and Moore’s Law

Yesterday the New York Times published an article about Moore’s Law.  The article is interesting in that John Markoff, the Times science writer, speculates that in about five years the computing industry will be “manipulating material as small as atoms” and may therefore hit a wall in what has become a back-of-the-envelope calculation of the multiplicative nature of computing complexity and power in the silicon age.

This article prompted a follow-on from Brian Feldman at NY Mag, noting that the Institute of Electrical and Electronics Engineers (IEEE) has anticipated a broader definition of the phenomenon of the accelerating rate of computing power to take into account quantum computing.  Note here that the definition used in this context is the literal one: the doubling of the number of transistors over time that can be placed on a microchip.  That is a correct summation of what Gordon Moore said, but it is not how Moore’s Law is viewed or applied within the tech industry.

Moore’s Law (which is really a rule of thumb or guideline rather than an ironclad law) has been used, instead, as an analogue to describe the geometric acceleration that has been seen in computing power over the last 50 years.  As Moore originally described the phenomenon, the doubling of transistors occurred every two years.  The period was later revised to about every 18 months or so, and is now often quoted at 12 months or less.  Furthermore, aside from increasing transistor counts, there are many other parallel strategies that engineers have applied to increase speed and performance.  When we combine the observation of Moore’s Law with other principles tied to the physical world, such as Landauer’s Principle and Information Theory, we begin to find a coherence in our observations that is truly tied to physics.  Thus quantum computing, to which the articles refer, sits on a continuum with Moore’s Law (and the observations of the other principles and theory noted above) rather than representing a break from these concepts.
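To make the arithmetic of the rule of thumb concrete, here is a minimal sketch (my own illustration, not drawn from either article) of how sensitive the projection is to the assumed doubling period:

```python
# Illustrative only: project the relative growth in transistor counts (or
# computing power) under different assumed doubling periods for Moore's Law.

def moores_law_growth(years, doubling_period_years):
    """Return the growth multiple after `years`, given a doubling period."""
    return 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    horizon = 10  # years out
    for period in (2.0, 1.5, 1.0):  # doubling every 24, 18, or 12 months
        multiple = moores_law_growth(horizon, period)
        print(f"Doubling every {period} years -> {multiple:,.0f}x in {horizon} years")
```

Over a ten-year horizon the same observation yields a 32-fold, roughly 100-fold, or 1,024-fold increase depending on which doubling period is assumed, which is why the choice of definition matters so much in these debates.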

Bottom line: computing, memory, and storage systems are becoming more powerful, faster, and expandable.

Thus, Moore’s Law in terms of computing power looks like this over time:

Moore's Law Chart

Furthermore, when we calculate the cost associated with erasing a bit of memory we begin to approach identifying the Demon* in defying the Second Law of Thermodynamics.

Moore's Law Cost Chart

Note, however, that the Second Law is not really being defied; it is just that we are constantly approaching zero, though never actually achieving it.  But the principle here is that the marginal cost associated with each additional bit of information becomes vanishingly small, to the point of not passing the “so what” test, at least in everyday life.  Though, of course, when we get to neural networks and strong AI such differences are very large indeed–akin to mathematics being somewhat accurate when we want to travel from, say, San Francisco to London, but requiring more rigor and fidelity when traveling from Kennedy Space Center to Gale Crater on Mars.
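For a rough sense of the physical floor being approached, Landauer’s Principle puts the minimum energy required to erase one bit at kT·ln 2.  The sketch below is my own back-of-the-envelope calculation of that limit at roughly room temperature, not a figure taken from the charts above:

```python
import math

BOLTZMANN_K = 1.380649e-23  # Boltzmann constant, joules per kelvin

def landauer_limit_joules(temperature_kelvin):
    """Minimum energy to erase one bit of information (Landauer's Principle)."""
    return BOLTZMANN_K * temperature_kelvin * math.log(2)

if __name__ == "__main__":
    room_temp = 300.0  # kelvin, roughly room temperature
    per_bit = landauer_limit_joules(room_temp)
    per_gigabyte = per_bit * 8e9  # a gigabyte is about 8 billion bits
    print(f"Landauer limit at {room_temp} K: {per_bit:.3e} J per bit")
    print(f"Erasing a gigabyte at the limit: {per_gigabyte:.3e} J")
```

At about 3 × 10⁻²¹ joules per bit, the theoretical cost of erasure is so far below everyday energy scales that, as noted above, it fails the “so what” test–until the bit counts become astronomical.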

The challenge, then, in computing is to be able to effectively harness such power.  Our current programming languages and operating environments are only scratching the surface of how to do this, and the joke in the industry is that the speed of software is inversely proportional to the advance in computing power provided by Moore’s Law.  The issue is that our brains, and thus the languages we harness to utilize computational power, are based in an analog understanding of the universe, while the machines we are harnessing are digital.  For now this mismatch can only produce bad software and robots, but given our drive into the brave new world of heuristics, it may lead us to Skynet and the AI apocalypse if we are not careful–making science fiction, once again, science fact.

Back in the present, however, what this means is that for at least the next decade we will see an acceleration in the ability to use more and larger sets of data.  The risk, which we seem to have to relearn as a new generation of techies lacking a well-rounded liberal arts education enters the market, is that the basic statistical and scientific rules in the conversion, interpretation, and application of intelligence and information can still be roundly abused and violated.  Bad management, bad decision making, bad leadership, bad mathematics, bad statisticians, specious logic, and plain old common human failings are just made worse, with greater impact on more people, given the misuse of that intelligence and information.

The watchman against these abuses, then, must be incorporated into the solutions that use this intelligence and information.  This is especially critical given the accelerated pace of computing power, and the greater interdependence of human and complex systems that this acceleration creates.

*Maxwell’s Demon

Note:  I’ve defaulted to the Wikipedia definitions of both Landauer’s Principle and Information Theory for the sake of simplicity.  I’ve referenced more detailed work on these concepts in previous posts and invite readers to seek those out in the archives of this blog.

The Revolution Will Not Be Televised — The Sustainability Manifesto for Projects

While doing stuff and living life (which seems to take me away from writing) there were a good many interesting things written on project management.  The very insightful Dave Gordon at his blog, The Practicing IT Project Manager, provides a useful weekly list of the latest contributions to the literature that are of note.  If you haven’t checked it out please do so–I recommend it highly.

While I was away Dave posted an interesting link on the concept of sustainability in project management.  Along those lines three PM professionals have proposed a Sustainability Manifesto for Projects.  As Dave points out in his own post on the topic, it rests on three basic principles:

  • Benefits realization over metrics limited to time, scope, and cost
  • Value for many over value of money
  • The long-term impact of our projects over their immediate results

These are worthy goals and no one needs to have me rain on their parade.  I would like to see these ethical principles, which is what they really are, incorporated into how we all conduct ourselves in business.  But then there is reality–the “is” over the “ought.”

For example, Dave and I have had some correspondence regarding the nature of the marketplace in which we operate through this blog.  Some time ago I wrote a series of posts here, here, and here providing an analysis of the markets in which we operate both in macroeconomic and microeconomic terms.

This came in response to one of my colleagues making the counterfactual assertion that we operate in a “free market” based on the concept of “private enterprise.”  Apparently, such just-so stories are lies we have to tell ourselves to make the hypocrisy of daily life bearable.  But, to bring the point home, in talking about the concept of sustainability, what concrete measures will the authors of the manifesto bring to the table to counter the financialization of American business that has occurred over the past 35 years?

For example, the news lately has been replete with stories of companies moving plants from the United States to Mexico.  This despite rising and record corporate profits during a period of stagnating median working class incomes.  Free trade and globalization have been cited as the cause, but this involves more hand waving and the invocation of mantras than analysis.  There have also been the predictable invocations of the Ayn Randian cult and the pseudoscience* of Social Darwinism.  Those on the opposite side of the debate characterize things as a morality play, with the public good versus greed being the main issue.  All of these explanations miss their mark, some more than others.

An article setting aside a few myths was recently published by Jonathan Rothwell at Brookings, which came to me via Mark Thoma’s blog: “Make elites compete: Why the 1% earn so much and what to do about it”.  Rothwell looks at the relative gains of the market over the last 40 years and finds that corporate profits, while doing well, have not been the driver of inequality that Robert Reich and other economists would have them be.  Looking at another myth, promulgated by Greg Mankiw, he finds that the reward of one’s labors is not related to any special intelligence or skill.  On the contrary, one’s entry into the 1% is actually related to the industry one chooses to enter, regardless of all other factors.  This disparity is known as a “pay premium”.  As expected, petroleum and coal products, financial instruments, financial institutions, and lawyers are at the top of the pay premium.  What is not, against all expectations of popular culture and popular economic writing, is the IT industry–hardware, software, etc.  Though they are the poster children of new technology, Bill Gates, Mark Zuckerberg, and others are the exception to the rule in an industry that is marked by a 90% failure rate.  Our most educated and talented people–those in science, engineering, the arts, and academia–are poorly paid, with negative pay premiums associated with their vocations.

The financialization of the economy is not a new or unnoticed phenomenon.  Kevin Phillips, in Wealth and Democracy, which was written in 2003, noted this trend.  There have been others.  What has not happened as a result is a national discussion on what to do about it, particularly in defining the term “sustainability”.

For those of us who have worked in the acquisition community, the practical impact of financialization and de-industrialization has made logistics challenging, to say the least.  As a young contract negotiator and Navy Contracting Officer, I was challenged to support the fleet whenever any kind of fabrication or production was involved, especially for non-stocked machined spares of any significant complexity or size.  Oftentimes my search would find that the company that had manufactured the items was out of business, its pieces sold off during Chapter 11, and most of the production work for those items that was still available was done seasonally out of country.  My “out” at the time–during the height of the Cold War–was to take the technical specs, which were paid for and therefore owned by the government, to one of the Navy industrial activities for fabrication and production.  The skillset for such work was still fairly widespread, supported by the quality control provided by a fairly well-unionized and trade-based workforce–especially among machinists and other skilled workers.

Given the new and unique ways judges and lawyers have applied privatized IP law to items financed by the public, such opportunities to support our public institutions and infrastructure, as I once was able to do, have been largely closed off.  Furthermore, the number of places to send such work, where it is possible at all, has also gotten vanishingly small.  Perhaps digital printing will be the savior for manufacturing that it is touted to be.  What it will not do is stitch back the social fabric that has been ripped apart in communities hollowed out by the loss of their economic base, which, when replaced, comes with lowered expectations and quality of life–and often shortened lives.

In the end, though, such “fixes” benefit a shrinking few individuals at the expense of the democratic enterprise.  Capitalism did not exist when the country was formed, despite the assertions of polemicists who link the economic system to our democratic government.  Smith did not write his pre-modern scientific tract until 1776, much of what it meant lay years off into the future, and its relevance, given what we’ve learned over the last 240 years about human nature and our world, is up for debate.  What was not part of such a discussion back then–and would not have been understood–was the concept of sustainability.  Sustainability in the study of healthy ecosystems usually involves the maintenance of great diversity and the flourishing of life that denotes health.  This is science.  Economics, despite Keynes and others, is still largely rooted in 18th and 19th century pseudoscience.

I know of no fix or commitment to a sustainability manifesto that includes global, environmental, and social sustainability that makes this possible, short of a major intellectual, social, or political movement willing to make a long-term commitment to incremental, achievable goals toward that ultimate end.  Otherwise it’s just the mental equivalent of camping out in Zuccotti Park.  The anger we note around us during this election year of 2016 (our year of discontent) is a natural human reaction to the end of an idea, one which has outlived its explanatory power and, therefore, its usefulness.  Which way shall we lurch?

The Sustainability Manifesto for Projects, then, is a modest proposal.  It may also simply be a sign of the times, albeit a rational one.  As such, it leaves open a lot of questions, and most of these questions cannot be addressed or determined by the people at whom it is targeted: project managers, who are usually simply employees of a larger enterprise.  People behave as they are treated–responding to the incentives and disincentives presented to them, which oftentimes are not completely apparent on the conscious level.  Thus, I’m not sure if this manifesto hits its mark, or even the right one.

*This term is often misunderstood by non-scientists.  Pseudoscience means non-science, just as alternative medicine means non-medicine.  If any of the various hypotheses of pseudoscience are found true, given proper vetting and methodology, that proposition would simply be called science.  Just as alternative methods of treatment, if found effective and consistent, given proper controls, would simply be called medicine.

For the Weekend: Music, Data, and Florence + The Machine

Saturdays–and some Sundays–have usually been set aside for music as an interlude from all things data, information technology, and my work in general.  Admittedly, blogging has suffered because of the demands of work and, you know, having a life, especially with family.  But flying back from a series of important meetings that will, no doubt, make up for the lack of blogging in the near future, I settled in finally to listen to Ms. Welch’s latest.

As a fan from the beginning, I was not impressed with the early singles released from her album, How Big, How Blue, How Beautiful.  My reaction to the title song can be summed up in a single syllable: “meh.”  Same for the song “What Kind of Man,” which, apparently grasping for some kind of significance, I viewed as inarticulate at best and largely muddled.  The message in this case, at least for me, didn’t save the medium.

So I kicked back on the plane after another 12-hour (or so) day, intent on not giving up on her artistry.  I listened to the album mostly with eyes closed, but with occasional forays into checking out the beautiful moonlit dome of the sky while traveling over the eastern seaboard, with the glittering lights of the houses and towns 35,000 feet below.  (A series of “Supermoon” events is happening.)  About four songs in I found myself taken in by what can only be described as another strong song cycle, one that possesses more subtlety and maturity than the bang-on pyrotechnics of Ceremonials.

The red-headed Celtic goddess can still drive a tune and a theme–having experienced one of her concerts in the desert of New Mexico under a cloudless night sky with the expanse of the Milky Way overhead, I can attest that her music can become both transcendent and almost spooky, especially as her acolytes dance and sway in the trance state it induces.  Thus, I have come to realize that releasing any of the songs from this album on their own is largely a mistake, because they cannot hold up as “singles” in the American tradition of Tin Pan Alley–nor even as prog rock.  Listening to the entire album from start to finish gives you the perspective you need to assess its artistic merit.

For me, her lyrics and themes hark back and forth across the dimension of human experience, tying them together and, thus, fusing time in the process, opening up pathways in the mind to an almost elemental suggestion of the essence of existence which is communicated through the beat and expanse of the music.

Therefore, rather than posting a sample from YouTube, which I usually do at this point, I instead strongly recommend that you give the album a listen.  It’ll keep the band in business making more beautiful music as well.

Before I am accused by some readers of going off the deep end in exhaustion or overstatement in describing the effect of Ms. Welch’s music on me, I would note that there is a scientific basis for it.  Many other writers and artists have noted the power of music, without the need for other stimuli, to have this same effect on them, as documented by the recently passed neuroscientist Oliver Sacks.

Proust used music to delve into his inner consciousness to inform his writings.  Tolstoy was so taken by music that he was careful about when and what he listened to, since when he immersed himself in it he felt himself taken to an altered mental state.  Clinical experience documents that many Parkinson’s and Tourette’s patients are affected–and sometimes coerced–by the power of music into involuntary states.  On the darker side of human experience, it is no coincidence that music is used by oppressive regimes and militaries to coerce, and sometimes manipulate, prisoners and captives.  On the positive side, in my own experience I was once able to come to a mathematical solution to a problem in one afternoon by immersing myself fully in John Coltrane’s “A Love Supreme.”

Aside from being an aural experience that stimulates neurobiological systems, underlying music is mathematics, and underlying the mathematics are digital packets of information.  We live in a digital world.  (And–yes–Madonna is a digital girl.)  No doubt the larger implications of this view are somewhat controversial (though compelling) in the scientific community, with the questions surrounding it falling under the discipline of digital physics.

But if we view music as information (which at many levels it is) and our minds as the decoders, then the images and states of consciousness that we enter are implicit in the message, with bias introduced by our conscious minds in attempting to provide both structure and coherence.  It is the same with any data.  We can listen to a single song, but find ourselves placing undue emphasis on just one small aspect of the whole, missing out on what is significant.

Our own digital systems approaches are often similar.  When we concentrate on a sliver of information we bias our perspectives.  We see this all the time in business systems and project management.  Sometimes you just have to listen to the whole album, or step up to bigger data.

Note:  The post has been edited from the original to correct grammatical errors and for clarity.

 

Sunday Contemplation — Finding Wisdom — A General Theory of Love

When I first wrote about the book A General Theory of Love, by Thomas Lewis, Fari Amini, and Richard Lannon, I said that it was an important book in the category of general psychology and human development.  While my comments for this post revisit some of my earlier observations, I think it is worthwhile to reprise and expand upon them.

Human psychology and social psychology have been rife with pseudo-scientific methods and explanations.  In many cases ideology and plain societal prejudice have also played a role.  In this work the authors effectively eviscerate the pre-scientific approach to understanding human behavior and mental health.  They posit that an understanding of the physical structure of the brain, and the relationship and interplay of the environment with it, is necessary to understanding the manifestation of behaviors found in our species.  In outlining the science of the brain’s structure, the authors effectively undermine the view that the human mind and our emotional lives are self-contained.

According to Thomas Lewis, “the book describes the nature of 3 fundamental neurophysiologic processes that create and govern love: limbic resonance, the wordless and nearly instantaneous emotional attunement that allows us to sense each other’s feeling states; limbic regulation, the modulation and control of our physiology by our relationships; and limbic revision, the manner in which relationships alter the very structure of our brains. those whom we love, as our book describes, change who we are, and who we can become.”

This concept is not without its own limitations.  In the book the authors discuss the concept of the triune brain, that is, the portions of the brain derived from our evolutionary ancestors: the reptilian complex, the limbic system (paleo-mammalian), and the neo-cortex (neo-mammalian).  This model is an effective generalization, but it has not been completely accepted in neuroscience as accurate.  Also, the identification of what constitutes the limbic system is a shifting science, as is the evolutionary theory of the brain.  But one would expect such contingency in a scientific field only now garnering results.  What this shows is that we have been amazingly ignorant of the most important part of our anatomy–the part that explains what we are, how our personalities and emotional lives are formed, and how those needs create the society in which we live.

Rather than our being individuals disconnected from those around us, what the book demonstrates is that the present state of psychiatry and neuroscience clearly shows we are indelibly connected to those around us.  This includes not only family, but also our environments (both pre- and post-natal) and society.  Given that we are in the midst of a new renaissance in the sciences, the ambition of a “general theory” is a bit premature.

But what the authors have done is provide a strong hypothesis that is proving itself out in experimental and evolutionary biology and neuroscience: that we are social animals, that we have a strong and essential need for love and support early in our development, that our relationships and environment mold the structures of the brain, that emotional regulation is important throughout our lives, and that we are connected to each other in both intuitive and overt ways that make us what we are individually and societally.

They also suggest, knowing the psychological needs of human flourishing, that the materialism and dispersion of modern society have contributed to the pathologies and neuroses we see today: anxiety, depression, and narcissism, among others.  That this understanding is not merely academic–that understanding and applying this knowledge in solving human problems is also existential–is the challenge of our own time.

Out of Winter Woodshedding — Thinking about Project Risk and passing the “So What?” test

“Woodshedding” is a slang term in music, particularly in relation to jazz, for the practice of playing an instrument outside of public performance in order to explore new musical insights without critical judgment.  This can be done with or without the participation of other musicians.  For example, much attention has recently been given to the release of Bob Dylan’s Basement Tapes.  It is unusual to bother recording such music, given its purpose of improvisation and exploration, and so few additional examples of “basement tapes” exist from other notable artists.

So for me the holiday is a sort of opportunity to do some woodshedding.  The next step is to vet such thoughts in informal media, such as this blog, which allow for the informal dialogue and exchange of information that the high standards of white papers and professional papers do not, and where thoughts are not yet fully formed and defensible.  My latest mental romps have been inspired by the movie about Alan Turing–The Imitation Game–and the British series The Bletchley Circle.  Thinking about one of the fathers of modern computing reminded me that the first use of the term “computer” referred to people.

As a matter of fact, though the terminology now refers to the digital devices that have insinuated themselves into every part of our lives, people continue to act as computers.  Despite fantastical fears surrounding AI taking our jobs and taking over the world, we are far from the singularity.  Our digital devices can only be programmed to go so far.  The so-called heuristics in computing today are still hard-wired functions, similar to replicating the methods used by a good con artist in “reading” the audience or the mark.  With the new technology for dealing with big data we have the ability to apply many of the methods originated by the people in the real-life Bletchley Park of the Second World War.  Still, even with refinements and advances in the math, these methods provide great external information regarding the patterns and probable actions of the objects of the data, but very little insight into the internal cause-and-effect that creates the data, which still requires human intervention, computation, empathy, and insight.

Thus, my latest woodshedding has involved thinking about project risk.  The reason for this is the emphasis recently on the use of simulated Monte Carlo analysis in project management, usually focused on the time-phased schedule.  Cost is also sometimes included in this discussion as a function of resources assigned to the time-phased plan, though the fatal error in this approach is to fail to understand that technical achievement and financial value analysis are separate functions that require a bit more computation.

It is useful to understand the original purpose of simulated Monte Carlo analysis.  Nobel physicist Murray Gell-Mann, while working at RAND Corporation (Research and No Development), applied the method with a team of other physicists (Jess Marcum and Keith Breuckner) to determine the probability of a number coming up from a set of seemingly random numbers.  For a full rendering of the theory and its proof, Gell-Mann provides a good overview in his book The Quark and the Jaguar.  The insight derived from Monte Carlo computation has been to show that systems in the universe often organize themselves into patterns.  Instead of some event being probable merely by chance, we find that, given all of the events that have occurred to date, there is some determinism which will yield regularities that can be tracked and predicted.  Thus, the use of simulated Monte Carlo analysis in our nether world of project management, which inhabits the void between microeconomics and business economics, provides us with some transient predictive probabilities, given the information stream at that particular time, of the risks that have manifested and are influencing the project.
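As a concrete illustration of what such an analysis looks like against a time-phased schedule, here is a minimal sketch using a hypothetical three-task serial schedule with triangular duration estimates (my own toy example, not drawn from any particular scheduling tool):

```python
import random

# Hypothetical serial schedule: (optimistic, most likely, pessimistic) durations in days.
TASKS = {
    "design": (10, 15, 25),
    "build": (20, 30, 50),
    "test": (5, 10, 20),
}

def simulate_total_duration():
    """One Monte Carlo trial: sample each task from a triangular distribution."""
    return sum(random.triangular(low, high, mode) for low, mode, high in TASKS.values())

def run_trials(n_trials=10_000, deadline=60):
    totals = sorted(simulate_total_duration() for _ in range(n_trials))
    p50 = totals[int(0.50 * n_trials)]
    p80 = totals[int(0.80 * n_trials)]
    on_time = sum(t <= deadline for t in totals) / n_trials
    print(f"P50 finish: {p50:.1f} days, P80 finish: {p80:.1f} days")
    print(f"Probability of finishing within {deadline} days: {on_time:.1%}")

if __name__ == "__main__":
    run_trials()
```

The simulation yields confidence levels against the baseline given the estimates fed into it at that point in time, which is exactly the kind of transient predictive probability described above–and nothing more.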

What the use of Monte Carlo and other such methods in identifying regularities does not do is determine cause-and-effect.  We attempt to bridge this deficiency with qualitative risk analysis, in which we articulate risk factors to be handled that are then tied to cost and schedule artifacts.  This is good as far as it goes.  But it seems that we have some of this backward.  Oftentimes, despite the application of these systems to project management, we still fail to overcome the risks inherent in the project, which then require a redefinition of project goals.  We often attribute these failures to personnel systems, and there is no shortage of consultants all too willing to sell the latest secret answer to project success.  Yet, despite years of such consulting methods applied to many of the same organizations, there is still a fairly consistent rate of failure in properly identifying cause-and-effect.

Cause-and-effect is the purpose of all of our metrics.  Only by properly “computing” cause-and-effect will we pass the “So What?” test.  Our first forays into this area involve modeling.  Given enough data we can model our systems and, when the real-time results of our experiments play out to approximate what actually happens, we know that our models are true.  Both economists and physicists (well, the best ones) use the modeling method.  This allows us to get the answer even without entirely understanding the question of the internal workings that lead to the final result.  As in Douglas Adams’ answer to the secret of life, the universe, and everything, where the answer is “42,” we can at least work backwards.  And oftentimes this is what we are left with, which explains the high rate of failure over time.

While I was pondering this reality I came across an article in Quanta magazine outlining the important new work of the MIT physicist Jeremy England, entitled “A New Physics Theory of Life.”  From the perspective of evolutionary biology, this pretty much shows that not only does the Second Law of Thermodynamics support the existence and evolution of life (which we’ve known as far back as Schrödinger), but it probably makes life inevitable under a host of conditions.  In relation to project management and risk, it was this passage that struck me most forcefully:

“Chris Jarzynski, now at the University of Maryland, and Gavin Crooks, now at Lawrence Berkeley National Laboratory. Jarzynski and Crooks showed that the entropy produced by a thermodynamic process, such as the cooling of a cup of coffee, corresponds to a simple ratio: the probability that the atoms will undergo that process divided by their probability of undergoing the reverse process (that is, spontaneously interacting in such a way that the coffee warms up). As entropy production increases, so does this ratio: A system’s behavior becomes more and more “irreversible.” The simple yet rigorous formula could in principle be applied to any thermodynamic process, no matter how fast or far from equilibrium. “Our understanding of far-from-equilibrium statistical mechanics greatly improved,” Grosberg said. England, who is trained in both biochemistry and physics, started his own lab at MIT two years ago and decided to apply the new knowledge of statistical physics to biology.”
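One compact way to write the relationship the passage describes–a schematic rendering of the fluctuation-theorem result, not a formula quoted from England’s paper–is:

```latex
% Entropy production expressed as microscopic irreversibility:
% the more entropy a process generates, the less probable its time-reverse.
\[
  \frac{P(\text{forward})}{P(\text{reverse})} = e^{\,\Delta S_{\mathrm{tot}} / k_B}
  \qquad\Longleftrightarrow\qquad
  \Delta S_{\mathrm{tot}} = k_B \,\ln\!\frac{P(\text{forward})}{P(\text{reverse})}
\]
```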

No project is a closed system (just as, on a larger level, the earth is not).  The level of entropy in the system will vary with the external inputs that change it: effort, resources, and technical expertise.  As I have written previously (and somewhat controversially), there is both chaos and determinism in our systems.  An individual or a system of individuals can adapt to the conditions in which they are placed, but only to a certain level.  The probability is non-zero that an individual or system of individuals can largely overcome the risks realized to date, but it is vanishingly small.  The chance that a peasant will become a president is of the same order.  The idea that it is possible, even if vanishingly so, keeps the class of peasants in line so that those born with privilege can continue to reassuringly pretend that their success is more than mathematics.

When we measure risk what we are measuring is the amount of entropy in the system that we need to handle, or overcome.  We do this by borrowing energy in the form of resources of some kind from other, external systems.  The conditions in which we operate may be ideal or less than ideal.

What England’s work, combined with his predecessors’, seems to suggest is that the Second Law almost makes life inevitable except where it is impossible.  For astrophysics this makes the entire Rare Earth hypothesis a non sequitur.  That is, wherever life can develop, it will develop.  The life that does develop is fit for its environment and continues to evolve as changes to the environment occur.  Thus, new forms of organization and structure are found in otherwise chaotic systems as a natural outgrowth of entropy.

Similarly, when we look at more cohesive and less complex systems, such as projects, what we find are systems that adapt and are fit for the environments in which they are conceived.  This insight is not new and has been observed for organizations using more mundane tools, such as Deming’s red bead experiment.  Scientifically, however, we now have insight into the means of determining what the limitations of success are, given the risk and entropy that has already been realized, weighed against the resources needed to bring the project within acceptable ranges of success.  This information goes beyond simply stating the problem and leaving the computation to the person, and thus passes the “So What?” test.

Finding Wisdom — Stephen Jay Gould in “The Mismeasure of Man”

Stephen Jay Gould

Perhaps no thinker in the modern scientific community from the late 1970s into the new century pushed the boundaries of interpretation and thought regarding evolutionary biology and paleontology more significantly than Stephen Jay Gould.  An eminent scholar himself–among evolutionary biologists his technical work Ontogeny and Phylogeny (1977) is considered one of the most significant works in the field, and he is regarded as among the most important historians of science in the late 20th century–he was the foremost popularizer of science of his generation (with the possible exception of Richard Dawkins and Carl Sagan), who used his position to advance scientific knowledge and critical thinking, and to attack pseudoscientific, racist, and magical thinking that misused and misrepresented scientific knowledge and methods.

His concepts of punctuated equilibrium, spandrels, and the Panglossian Paradigm pushed other evolutionary biologists in the field to rise to new heights in considering and defending their own applications of neo-Darwinian theory, prompting (sometimes heated) debate.  These ideas continue to be controversial in the evolutionary community with, it seems, most of the objections being based on the fear that they will be misused by non-scientists against evolution itself, and it is true that creationists and other pseudoscientists–aided and abetted by the scientifically illiterate popular press–misrepresented the so-called “Darwin Wars” as being more significant than they really were.  But many of his ideas were reconciled and resolved into a new synthesis within the science of evolution.  Thus, his insights, based as they were in the scientific method and within proven theory, epitomized the very subject that he popularized–that nothing is ever completely settled in science, that all areas of human understanding are open to inquiry and–perhaps–revision, even if slight.

Having established himself as a preeminent science historian, science popularizer, scholar in several fields, and occasional iconoclast, Gould focused his attention on an area that well into the late 20th century was rife with ideology, prejudice, and pseudoscience–the issue of human intelligence and its measurement.  As Darwin learned over a hundred years before, it is one thing to propose that natural selection is the agent of evolution; it is another to then demonstrate that the human species descended from other primate ancestors, and to trace the manner in which sexual selection plays a role in human evolution: for some well entrenched societal interests and specialists it is one step too far.  Gould’s work was attacked, but it has withstood these attacks and criticisms, and stands as a shining example of using critical thinking and analytical skills to strike down an artifact of popular culture and bad social science.

In The Mismeasure of Man, Gould begins his work by surveying the first scientific efforts at understanding human intelligence by researchers such as Louis Agassiz and Paul Broca, among others, who studied human capabilities through the now-defunct science of craniometry.  I was reminded, when I first picked up the book, of Carl Sagan’s collection of writings in the 1979 book Broca’s Brain, in which some of the same observations are made.  What Gould demonstrates is that the racial and sexual selection bias of the archetypes chosen by the researchers, in particular Samuel George Morton (1799-1851), provided them with the answers they wanted to find–their methodology was biased, and therefore invalid, from the start.  In particular, the use of differences in the skulls of Caucasians (of a particular portion of Europe), Black people (without differentiating ethnic or geographical origins), and Mongolians (Asian peoples without differentiation) to identify different human “species” lacked rigor and was biased in its definitions from the outset.

To be fair, a peer-reviewed paper challenged Gould’s assertion that Morton may have fudged his findings on cranial measurements–the researcher had used bird seed (or iron pellets, depending on the source) as the basis for measurement–and found, in a sample of some of the same skulls (combined with a survey from 1988), that Morton was largely accurate in his measures.  The research, however, was unable to undermine the remainder of Gould’s thesis while attempting to resurrect the integrity of Morton in light of his own, largely pre-scientific time.  I can understand the point made by Gould’s critics regarding Morton that it is not necessarily constructive to apply modern methodological standards–or imply dishonesty–to those early pioneers whose work has led to modern scientific understanding.  But as an historian I also understand that when reading Gibbon on the Roman Empire we learn a great deal about the prejudices of 18th century British society–perhaps more than we learn of the Romans.  Gibbon and Morton, as with most people, were not consciously aware of their own biases–or that they were biases.  This is the reason for modern research and methodological standards in academic fields–and why human understanding is always “revisionist”, to use a supposed pejorative that I heard used by one particularly ignorant individual several years ago.  Gibbon showed the way of approaching and writing about history.  His work would not pass editorial review today, but the reason why he is so valued is that he is right in many of his observations and theses.  The same cannot be said for Morton, who seemed motivated by the politics of justifying black slavery, which is why Gould treats him so roughly, particularly given that some of Morton’s ideas still find comfort in many places in our own time.  In light of subsequent research, especially the Human Genome Project, Gould proves out right, which, after all, is the measure that counts.

But that is just the appetizer.  Gould then takes on the basis of IQ (intelligence quotient), the g factor (the general intelligence factor), and the heritability of intelligence as it has been used to imply human determinism, especially when generalized among groups.  He traces the original application of IQ tests, developed by Alfred Binet and Theodore Simon, to the introduction of universal education in France and the need to identify children with learning disabilities and those who required remediation by grade and age group.  He then surveys how the psychologist Lewis Terman of Stanford modified the test and transformed its purpose in order to attempt to find an objective basis for determining human intelligence.  In critiquing this transformation Gould provides examples of the more obviously (to modern eyes) biased questions on the test, and then effectively destroys the statistical basis for the claim that the test’s correlations can determine any objective measure of g.  He demonstrates that the correlations used by the psychological profession to establish “g” are both statistically and logically questionable, and that they commit the logical fallacy of reification–that is, they take an abstract measure and imbue it with a significance it cannot possess, as if it were an actual physical entity or “thing.”
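To illustrate the reification point in miniature, here is a hypothetical sketch using synthetic data of my own construction (not Gould’s data or any real test results): any battery of positively correlated scores will yield a dominant first component that can be labeled “g”, but that label is a summary of the correlations, not evidence of a single physical entity in the brain:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "test battery": four scores that share one common influence
# plus independent noise -- purely illustrative, not real psychometric data.
n_people = 1_000
common = rng.normal(size=n_people)
scores = np.column_stack([
    0.6 * common + rng.normal(scale=0.8, size=n_people) for _ in range(4)
])

# Factor-analysis-style summary: eigen-decompose the correlation matrix.
corr = np.corrcoef(scores, rowvar=False)
eigenvalues, _ = np.linalg.eigh(corr)
share = eigenvalues.max() / eigenvalues.sum()

print(f"Variance 'explained' by the first component: {share:.0%}")
# A single dominant component appears here by construction of the correlations;
# calling it "g" and treating it as a thing in the head is the reification step.
```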

Gould demonstrates that the variability of the measurements within groups, the clustering of results within the tests that identify distinct aptitudes, and the variability of results across time for the same individuals given changes in circumstances of material condition, education, and emotional maturity, renders “g” an insignificant measure.  The coup de grace in the original edition–a trope still often pulled out as the last resort by defenders of human determinism and IQ–is in Gould’s analysis of the work of Cyril Burt, the oft-cited researcher of twin studies, who published fraudulent works that asserted that IQ was highly heritable and not affected by environment.  That we still hear endless pontificating on “nature vs. nurture” debates, and that Stanford-Binet and other tests are still used as a basis for determining a measure of “intelligence” owes more to societal bias, and the still pseudo-scientific methodologies of much of the psychological profession, than scientific and intellectual honesty.

The core of Gould’s critique is to effectively discredit the concept of biological determinism which he defines as “the abstraction of intelligence as a single entity, its location within the brain, its quantification as one number for each individual, and the use of these numbers to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups—races, classes, or sexes—are innately inferior and deserve their status.”

What Stephen Jay Gould demonstrates in The Mismeasure of Man most significantly then, I think, is that human beings–particularly those with wealth, power, and influence, or who are part of a societally favored group–demonstrate an overwhelming desire to differentiate themselves from others and will go to great lengths to do so to their own advantage.  This desire includes the misuse of science, whatever the cost to truth or integrity, in order to demonstrate that there is an organic or heritable basis for their favored position relative to others in society when, in reality, there are more complex–and perhaps more base and remedial–reasons.  Gould shows how public policy, educational focus, and discriminatory practices were influenced by the tests of immigrant and minority groups to deny them access to many of the benefits of the economic system and society.  Ideology was the driving factor in the application of these standardized tests, which served the purposes of societal and economic elites to convince disenfranchised groups that they deserved their inferior status.  The label of “science” provided these tainted judgments with just the right tinge of respectability that they needed to overcome skepticism and opposition.

A few years after the publication of Gould’s work a new example of the last phenomenon described above emerged with the publication of the notorious The Bell Curve (1994) by Richard Herrnstein and Charles Murray–the poster child of a tradition, harking back to Herbert Spencer’s Social Statics, of elites funding self-serving pseudo-science and–another word will not do–bullshit.  While Spencer could be forgiven his errors given his time and its scientific limitations, Herrnstein and Murray, who have little excuse, used often contradictory statistical methods, resting on weak correlations and weaker claims of causation, to make a race-based argument for biological determinism.  Once again Gould, in the 1996 revision of his original work, dealt with these fallacies directly, demonstrating in detail the methodological errors in their work and the overreach inherent in their enterprise–another sad example of bias misusing knowledge as the intellectual basis to oppress other people and, perhaps more egregiously, to avoid coming to terms with the disastrous effects that American society has had on one specific group of people because of the trivial difference of skin color.

With the yeoman work of Stephen Jay Gould to discredit pseudo-scientific ideas and the misuse of statistical methodology to pigeonhole and classify people–to misuse socio-biology and advance self-serving theories of human determinism–the world has been provided the example that even the best-financed and most entrenched elites cannot stop the advance of knowledge and information.  They will try–using ever more sophisticated methods of disinformation and advertising–but over time those efforts will be defeated.  It will happen because scientific projects like the Human Genome Project have already demonstrated that there is only one race–the human race–and that we are all tied together by common ancestors.  The advantages that we realize over each other at any point in time are ephemeral.  Our knowledge of variability in the human species acknowledges differences in the heritable characteristics of individuals, but that knowledge implies nothing about our relative worth to one another, nor is it a moral judgment rendered from higher authority that justifies derision, stigma, ridicule, discrimination, or reduced circumstances.  It will happen because in our new age information, once transmitted, cannot be retracted–it is out there forever.  There is much wisdom here.  It is up to each of us to recognize it, and to inform our actions as a result of it.

Sunday Contemplation — Finding Wisdom — Charles Darwin

Charles Darwin

“Ignorance more frequently begets confidence than does knowledge: it is those who know little, not those who know much, who so positively assert that this or that problem will never be solved by science.”

The human species owes a debt of gratitude to Charles Darwin that can never be adequately paid.  The young Darwin struggled against being categorized in a society and a time that very much needed to categorize everything and everyone.  His early education demonstrated his keen, inquisitive, and initially undisciplined mind–the last an aspect of his character that he himself noted and worked to overcome.

The grandson of two prominent British abolitionists, Erasmus Darwin and Josiah Wedgwood, he was born to an outwardly conventional but inwardly nurturing and intellectually stimulating family.  He was an avid amateur naturalist as a boy and studied to follow in his father’s footsteps as a physician.  He attended medical school, but his other interests caused him to neglect his studies.  Frustrated with what they viewed as his lack of prospects, his family enrolled him in divinity school to become an Anglican pastor.  Darwin studied little but found his passion in the then-craze of beetle collecting and was influenced by the Cambridge naturalists who pursued what was then known as natural theology–the proposition that the best way to know the deity was to understand its creation.  His main studies focused on what we now identify as botany and geology, as well as biology.

After receiving his degree Darwin proceeded to take literally the exhortation of Alexander von Humboldt to travel widely in order to gain new knowledge.  Upon the recommendation of his mentor at Cambridge, John Stevens Henslow, he was taken aboard the HMS Beagle’s South American surveying expedition as a self-financed naturalist.  This voyage was a transformative one for Darwin, and it is best to use his own words from his autobiography to describe the nature of that transformation.

“…Whilst on board the Beagle I was quite orthodox, and I remember being heartily laughed at by several of the officers… for quoting the Bible as an unanswerable authority on some point of morality… But I had gradually come by this time, i.e., 1836 to 1839, to see that the Old Testament from its manifestly false history of the world, with the Tower of Babel, the rainbow as a sign, &c., &c., and from its attributing to God the feelings of a revengeful tyrant, was no more to be trusted than the sacred books of the Hindoos, or the beliefs of any barbarian.

…By further reflecting that the clearest evidence would be requisite to make any sane man believe in the miracles by which Christianity is supported, (and that the more we know of the fixed laws of nature the more incredible do miracles become), that the men at that time were ignorant and credulous to a degree almost incomprehensible by us, that the Gospels cannot be proved to have been written simultaneously with the events, that they differ in many important details, far too important, as it seemed to me, to be admitted as the usual inaccuracies of eyewitnesses; by such reflections as these, which I give not as having the least novelty or value, but as they influenced me, I gradually came to disbelieve in Christianity as a divine revelation. The fact that many false religions have spread over large portions of the earth like wild-fire had some weight with me. Beautiful as is the morality of the New Testament, it can be hardly denied that its perfection depends in part on the interpretation which we now put on metaphors and allegories.

But I was very unwilling to give up my belief… Thus disbelief crept over me at a very slow rate, but was at last complete. The rate was so slow that I felt no distress, and have never since doubted even for a single second that my conclusion was correct. I can indeed hardly see how anyone ought to wish Christianity to be true; for if so the plain language of the text seems to show that the men who do not believe, and this would include my Father, Brother and almost all of my friends, will be everlastingly punished.

And this is a damnable doctrine.”

Having thrown off his preconceived beliefs, it is during the voyage of the Beagle that Charles Darwin became the modern scientist that we recognize today–the author of On the Origin of Species and The Descent of Man.  Much has been made of the theological nature of his education and how it influenced his thinking, with some arguing that the construction of his scientific hypotheses and theories is simply an extension of a type of belief–what today is called “scientism.”  But this is ignorance; the term has no substance except in the minds of those making the assertion.  It is only when Darwin freed himself from the shackles of his mind that he was able to perceive nature as it is, not as human society would have it.

It is obvious to us now as we read his narrative that he had not completely freed himself from the prejudices of his time.  But such is the nature of human advancement.  I was told early on as an historian that I would learn more about the prejudices of 18th-century Britain by reading Gibbon’s Decline and Fall of the Roman Empire than I would about the Roman Empire itself–and it turned out that my mentor was correct.

But, unlike Gibbon’s, Darwin’s influence transcends his time because of the discipline that he imposed on himself and his method.  After that seminal voyage it took years of study and the weight of evidence before Darwin felt confident enough to publish his findings–and then only under great pressure, since other scientists were coming to the same conclusions and threatened to precede him on his life’s work.  His theory is an elegant one, and the weight of that elegance is best conveyed by his own overview of it in the introduction to the Origin:

“As many more individuals of each species are born than can possibly survive; and as, consequently, there is a frequently recurring struggle for existence, it follows that any being, if it vary however slightly in any manner profitable to itself, under the complex and sometimes varying conditions of life, will have a better chance of surviving, and thus be naturally selected. From the strong principle of inheritance, any selected variety will tend to propagate its new and modified form.”

Darwin’s theory–supported by over a century and a half of observation and confirmation–is one of the key insights in our understanding of ourselves and our position in the universe.  This insight is the basis of all other wisdom; in my opinion, without it, there can be no human knowledge that rises to the level of meaningful wisdom.  For all of the knowledge that we have amassed since that time–in geology, astronomy, biology, physics, neuroscience, psychology–in virtually every area of learning–is informed by this one core insight into human existence and what we define as life on our planet.  To understand the evolution of species through the agency of natural selection one must understand the age of the universe and of the earth, the dynamics of geology, and the common origins and interconnection of all life.
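Darwin’s summary quoted above is, at bottom, an algorithmic claim: given heritable variation and differential survival, favorable variants spread.  A minimal sketch of my own (arbitrary, illustrative numbers; no biological realism intended) shows how little machinery the argument actually requires:

import math
import random

random.seed(0)

# Each individual carries one heritable trait value; variation is supplied by
# random noise at birth, and survival is more likely the larger the trait value
# ("profitable" variation, in Darwin's phrase).  All numbers are illustrative.
population = [random.gauss(0.0, 1.0) for _ in range(500)]

for generation in range(50):
    # Differential survival: probability of surviving rises with the trait.
    survivors = [t for t in population if random.random() < 1 / (1 + math.exp(-t))]
    # Inheritance with slight variation: offspring resemble their parents.
    population = [random.choice(survivors) + random.gauss(0.0, 0.1)
                  for _ in range(500)]

# The mean trait drifts steadily upward -- selection, not design.
print("mean trait after 50 generations:", round(sum(population) / len(population), 2))

Nothing in the sketch plans ahead or aims at a goal; the upward drift falls out of variation, inheritance, and differential survival alone, which is the whole force of the passage from the Origin.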

As such, its implications transcend science just as they transcended biology.  In 1995 the cognitive scientist and philosopher Daniel Dennett wrote his famous work summarizing the influence of Darwin’s theory on modern science and society in the late 20th century.  He gave the book the title Darwin’s Dangerous Idea.  As I sit here in the year 2014 it is apparent that this is still the case–not because what he observed was dangerous to know, but because it is an idea that undermines its opposite: the belief that the strong have a right to dominate the weak, that people can be categorized with some intrinsically superior and others inferior, and that economics and its handmaiden philosophy trump all other insights when it comes to human society and conduct.

Many evolutionary biologists and others in the sciences with whom I have corresponded have expressed bewilderment and frustration at the resistance, particularly in parts of the United States, to the essential wisdom in Darwinian observation.  It is, I think, because they do not see the historical and societal implications that are explained by their own theory.  The idea is dangerous not only because it transcends theological explanations of the universe and human existence, but also because it challenges the structure of social control and hierarchy upon which so many societies have been built in the modern era.  Though we understand our own biology as primates, our instinctual feelings of tribalism, kinship, and hierarchy are still too strong in many areas to fully liberate us from our self-imposed shackles.  Darwinian insight challenges the primacy of these feelings.

So dangerous was (and is) Darwin’s idea that Herbert Spencer published an alternative evolutionary theory–based on earlier, pre-scientific Lamarckian beliefs–in his Social Statics, a theory since misnamed Social Darwinism.  This competing theory, most recently given new clothes by politicians and followers of the writer Ayn Rand, is without scientific merit, socially abhorrent, ethically indefensible, and sociopathically cruel.  So old is this meme that Darwin himself challenged this twisting of evolutionary theory:

“It is not the strongest or the most intelligent who will survive but those who can best manage change.”

Regarding the societal implications of his theory he wrote in his work The Voyage of the Beagle:

“If the misery of the poor be caused not by the laws of nature, but by our institutions, great is our sin.”

Darwin at first avoided addressing the more controversial aspects of his theory, and it took him some time to decide to publish The Descent of Man.  The theory of sexual selection in that work alone stirred more than a little backlash.  As such, we see only glimpses of his view that understanding the nature of life would be a liberating force, not only in the sciences but in society at large.  But Darwin struggled with the questions of the “ought” as opposed to the “is” and, in the end, demurred.  It is only now that his descendants in the sciences have broached the topic once again, most significantly in the book The Moral Landscape, by the neuroscientist Sam Harris.

In the end, though, Darwin’s most significant contribution may be to the survival of our species.  We all share common origins, and the combined threats of global warming, nuclear proliferation, and other weapons of mass destruction threaten our very existence, not to mention the extra-planetary threats from asteroids and comets.  The insights of Darwin and his descendants in the sciences may very well prevent our own self-destructive tendencies and ignorance from causing our extinction from this tiny planet.

Sunday Web blogging on Tuesday — Finding Wisdom — Carl Sagan and Ann Druyan

Our televisions are alight with a new and updated version of the series Cosmos.  In the relatively short span of time since the airing of that original series, humankind’s knowledge of the universe has increased manyfold.  What has not advanced as quickly is our ability to use that knowledge in healthy and productive ways that advance human flourishing.  The world is careening between extremes–most consequentially, at the moment, with Russia in a Back to the Future Soviet Union moment.

CarlSagan_NASA

Carl Sagan was not only a popularizer of science, mainly in the realm of astronomy, but also a first-rate astronomer, astrophysicist, and cosmologist in his own right.  I first came upon him in 1967, as an eager 12-year-old with a sometimes overpowering hunger for scientific knowledge, especially in the areas of astronomy, geology, and biology.  The book that sparked my lifetime interest and occasional formal education in the sciences was Intelligent Life in the Universe, which he co-authored with I. S. Shklovskii.  What Dr. Sagan instilled in me from this one book was not to be afraid to ask questions–even those that on the surface may seem obvious or outlandish–and to imagine the possible alternatives elsewhere to the type of life found here on earth, given an extremely old and expansive universe that, despite the then-popular TV program Star Trek, would ensure that we would never be able to travel the stars to completely confirm our speculations, warp drive and all.  (At least, sadly, not in my lifetime.)  The subtext of his message to a voraciously curious 12-year-old was not to be afraid: that intellectual honesty and integrity are more important than societal acceptance of what count as proper questions and knowledge, and that sometimes asking those questions and then pursuing them will actually lead to real answers.

The writer Ann Druyan is also worthy of mention here because, probably more than anyone else, she contributed to making Carl Sagan the popularizer that he became.  One of three writers for the first Cosmos series, she later married Sagan and became his collaborator, helping him write several books on the scientific method and critical thinking.  Most prominent of the works that she helped bring to print is The Varieties of Scientific Experience, which consists of an edited version of a series of Gifford Lectures that Sagan gave in 1985.

The Gifford Lectures, established in the U.K. in 1888 and held at various Scottish universities, select prominent thinkers to promote the study of what was called “natural theology.”  Over the years the lectures have hosted some of the most prominent scientists and thinkers of their time, including such notables as Hannah Arendt, Freeman Dyson, William James, John Dewey, Albert Schweitzer, Niels Bohr, Arnold Toynbee, Iris Murdoch, J. B. S. Haldane, Werner Heisenberg, Roger Penrose, and many others.

“Natural theology” is a very old philosophical approach to theology.  It is the idea that, as opposed to “revealed” theology, the best way to understand the nature of the creator is through reason and experience.  In the 19th century it became the hope of many that the steady advance of scientific knowledge could be reconciled with theological belief.  Over time, especially in the lectures, it has become apparent that such a reconciliation is less and less likely, unless the various revealed theological definitions of “god” are changed as a result of our knowledge.

In choosing the title of the book, Ann Druyan meant to harken back to William James’ The Varieties of Religious Experience, based on his own Gifford Lectures given in 1901 and 1902 at Edinburgh.  To James, the psychological study of religion and the religious experience was an important aspect of understanding human nature.  Religion in his definition included “the feelings, acts, and experiences of individual men in their solitude, so far as they apprehend themselves to stand in relation to whatever they may consider the divine.”  James’ definition is thus more expansive than that of any particular set of religious beliefs or dogma.  In our own time we would call what he described “spirituality.”

At the time that he gave his lectures, not unlike our own, the world was divided between dogmatic religious interpretations of “god” and those who considered such beliefs to be a type of psychological defect.  James proposed a different path, positing that the act of faith and revelation–whatever its basis–was an artifact of human nature that warranted study.  He thus advocated a tolerant attitude toward these beliefs, regardless of the fact that their originators may have been unhinged in some way, given that oftentimes a positive effect resulted.  The danger, of course, is as George Santayana wrote: that taking James’ approach too far leads to a “tendency to disintegrate the idea of truth, to recommend belief without reason and to encourage superstition.”  I think this critique goes too far and misunderstands James’ American pragmatism.  To James, these beliefs were of utility only so far as they advanced a good, which he would define as the health of the individual and of society.

Thus we come to Sagan’s work.  Ann Druyan, in the introduction to her husband’s book, states: “My variation on James’s title is intended to convey that science opens the way to levels of consciousness that are otherwise inaccessible to us; that, contrary to our cultural bias, the only gratification that science denies to us is deception.”  The intent here is to extend and inform James’ work and to incorporate Santayana’s warning: that it is still possible to feel wonder and connectedness to creation while eschewing deception.  Among our contemporaries, the neuroscientist Sam Harris has followed this path of inquiry.  But, I think, Sagan’s lectures go farther in their intent, and it is the same message that he conveyed to me as a curious 12-year-old: that there are no taboo questions, that all aspects of human experience are open to inquiry.  James opened this same line of inquiry from an earlier foundation, in language that is obscure to us today: that it includes all forms of human expression.  The recent work of Daniel Dennett has also explored this territory.

Sagan opens his lectures with the following passage:

The word “religion” comes from the Latin for “binding together,” to connect that which has been sundered apart. It’s a very interesting concept. And in this sense of seeking the deepest interrelations among things that superficially appear to be sundered, the objectives of religion and science, I believe, are identical or very nearly so. But the question has to do with the reliability of the truths claimed by the two fields and the methods of approach.
By far the best way I know to engage the religious sensibility, the sense of awe, is to look up on a clear night. I believe that it is very difficult to know who we are until we understand where and when we are. I think everyone in every culture has felt a sense of awe and wonder looking at the sky. This is reflected throughout the world in both science and religion. Thomas Carlyle said that wonder is the basis of worship. And Albert Einstein said, “I maintain that the cosmic religious feeling is the strongest and noblest motive for scientific research.” So if both Carlyle and Einstein could agree on something, it has a modest possibility of even being right….

He then explores the fear that lies at the root of most of our hopes that there is something more than ourselves–our mortality:

All that we have seen is something of a vast and intricate and lovely universe. There is no particular theological conclusion that comes out of an exercise such as the one we have just gone through. What is more, when we understand something of the astronomical dynamics, the evolution of worlds, we recognize that worlds are born and worlds die, they have lifetimes just as humans do, and therefore that there is a great deal of suffering and death in the cosmos if there is a great deal of life….and perhaps even intelligence is a cosmic commonplace, then it must follow that there is massive destruction, obliteration of whole planets, that routinely occurs, frequently, throughout the universe. Well, that is a different view than the traditional Western sense of a deity carefully taking pains to promote the well-being of intelligent creatures. It’s a very different sort of conclusion that modern astronomy suggests. There is a passage from Tennyson that comes to mind: “I found Him in the shining of the stars, / I mark’d Him in the flowering of His fields.” So far pretty ordinary. “But,” Tennyson goes on, “in His ways with men I find Him not…. Why is all around us here / As if some lesser god had made the world, / but had not force to shape it as he would…?”

supermassive-black-hole-close-encounter-g2-cloud-7

 

Taking the reality of the universe into account, he then builds toward a new view of what constitutes spirituality, leading with the observations of Thomas Paine:

“From whence, then, could arise the solitary and strange conceit that the Almighty, who had millions of worlds equally dependent on his protection, should quit the care of all the rest, and come to die in our world because, they say, one man and one woman ate an apple? And, on the other hand, are we to suppose that every world in the boundless creation had an Eve, an apple, a serpent, and a redeemer?”
Paine is saying that we have a theology that is Earth-centered and involves a tiny piece of space, and when we step back, when we attain a broader cosmic perspective, some of it seems very small in scale. And in fact a general problem with much of Western theology in my view is that the God portrayed is too small. It is a god of a tiny world and not a god of a galaxy, much less of a universe…. If a Creator God exists, would He or She or It or whatever the appropriate pronoun is, prefer a kind of sodden blockhead who worships while understanding nothing? Or would He prefer His votaries to admire the real universe in all its intricacy? I would suggest that science is, at least in part, informed worship.

In the final lecture Sagan then explains clearly why there are no bad questions that seek understanding:

If Newton were restricted, in working through the theory of gravitation, to apples and forbidden to look at the motion of the Moon or the Earth, it is clear he would not have made much progress. It is precisely being able to look at the effects down here, look at the effects up there, comparing the two, which permits, encourages, the development of a broad and general theory. If we are stuck on one planet, if we know only this planet, then we are extremely limited in our understanding even of this planet. If we know only one kind of life, we are extremely limited in our understanding even of that kind of life. If we know only one kind of intelligence, we are extremely limited in knowing even that kind of intelligence. But seeking out our counterparts elsewhere, broadening our perspective, even if we do not find what we are looking for, gives us a framework in which to understand ourselves far better.
I think if we ever reach the point where we think we thoroughly understand who we are and where we came from, we will have failed. I think this search does not lead to a complacent satisfaction that we know the answer, not an arrogant sense that the answer is before us and we need do only one more experiment to find it out. It goes with a courageous intent to greet the universe as it really is, not to foist our emotional predispositions on it but to courageously accept what our explorations tell us.

 

 

Sunday Contemplation — Finding Wisdom — Werner Heisenberg

Modern education seems to be failing us, but we are at a loss as to why that is the case.  I would posit that it is because a large portion of the populace is ignorant of the most exciting discoveries and insights of the late 20th and early 21st centuries.  My Sunday contemplation has focused on literature that offers wisdom regarding human insight, but what of insights into our universe that point toward larger ones encompassing the human condition, our social structures, and our perceptions?

Werner Heisenberg was the father of modern quantum mechanics, whose conception of the origins of the universe and of the contingent nature of cause-and-effect at the level of quanta proved correct over Einstein’s unified theory.  This is the context of the oft-used Einstein quote that “God does not play dice with the universe.”  Einstein was wrong: the universe is not fully predictable; there is uncertainty in outcomes.  At our level of existence we measure this amount of “free will” by probabilities–outcomes based on the condition of the universe at any particular point in what our brains interpret as “time.”  This is a concept that is often misinterpreted by polemicists and others.  The universe and its processes, such as evolution, are not based on “randomness.”  The universe is largely deterministic, but with some variation in what can be predicted.
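To make the point about probability concrete, here is a minimal sketch of my own (a toy two-state system with made-up amplitudes, not anything from Heisenberg’s book): the Born rule fixes the probabilities of the outcomes completely from the state of the system, even though no individual outcome can be predicted.

import numpy as np

rng = np.random.default_rng(0)

# A toy two-state quantum system with amplitudes 0.6 and 0.8 (0.36 + 0.64 = 1).
# The Born rule says the *probabilities* of the two outcomes are fixed by the
# state, even though any single measurement outcome is unpredictable.
amplitudes = np.array([0.6, 0.8])
probabilities = np.abs(amplitudes) ** 2

# Simulated repeated measurements converge on exactly those probabilities.
outcomes = rng.choice([0, 1], size=100_000, p=probabilities)
print("frequency of outcome 1:", round(outcomes.mean(), 3))   # close to 0.64

The distribution of outcomes is completely determined; only the individual event is not–which is the sense in which the universe can be both lawful and unpredictable.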

werner heisenberg

What marks Professor Heisenberg for mention today is not only his insight into the technical aspects of the physical universe but also his understanding of how these discoveries inform the human condition.

The source of this wisdom is his book Physics and Philosophy.  It is a fairly slight tome, and a good book for the layman interested in a survey of the physical sciences, written by the man responsible for many of the 20th century’s most important discoveries at the point just prior to the next wave of discoveries that would confirm, strengthen, and advance them.  He writes on the history of quantum theory, how it has changed our view of the universe, and the older philosophical traditions that were either displaced or modified by it.  His exposition regarding other areas of our knowledge begins on page 60, speaking from the perspective of 1959, in which he speculates on what still needs to be proven in the other natural sciences and on the role of human language in understanding nature (bold for emphasis added by me).

“…(T)he structure of present-day physics the relation between physics and other branches of natural science may be discussed. The nearest neighbor to physics is chemistry. Actually through quantum theory these two sciences have come to a complete union. But a hundred years ago they were widely separated, their methods of research were quite different, and the concepts of chemistry had at that time no counterpart in physics….When the theory of heat had been developed by the middle of the last century scientists started to apply it to the chemical processes, and ever since then the scientific work in this field has been determined by the hope of reducing the laws of chemistry to the mechanics of the atoms. It should be emphasized, however, that this was not possible within the framework of Newtonian mechanics. In order to give a quantitative description of the laws of chemistry one had to formulate a much wider system of concepts for atomic physics. This was finally done in quantum theory, which has its roots just as much in chemistry as in atomic physics. Then it was easy to see that the laws of chemistry could not be reduced to Newtonian mechanics of atomic particles, since the chemical elements displayed in their behavior a degree of stability completely lacking in mechanical systems. But it was not until Bohr’s theory of the atom in 1913 that this point had been clearly understood. In the final result, one may say, the concepts of chemistry are in part complementary to the mechanical concepts. If we know that an atom is in its lowest stationary state that determines its chemical properties we cannot at the same time speak about the motion of the electrons in the atom.

The present relation between biology, on the one side, and physics and chemistry, on the other, may be very similar to that between chemistry and physics a hundred years ago. The methods of biology are different from those of physics and chemistry, and the typical biological concepts are of a more qualitative character than those of the exact sciences.  Concepts like life, organ, cell, function of an organ, perception have no counterpart in physics or chemistry. On the other hand, most of the progress made in biology during the past hundred years has been achieved through the application of chemistry and physics to the living organism, and the whole tendency of biology in our time is to explain biological phenomena on the basis of the known physical and chemical laws. Again the question arises, whether this hope is justified or not.

Just as in the case of chemistry, one learns from simple biological experience that the living organisms display a degree of stability which general complicated structures consisting of many different types of molecules could certainly not have on the basis of the physical and chemical laws alone. Therefore, something has to be added to the laws of physics and chemistry before the biological phenomena can be completely understood.

With regard to this question two distinctly different views have frequently been discussed in the biological literature. The one view refers to Darwin’s theory of evolution in its connection with modern genetics.  According to this theory, the only concept which has to be added to those of physics and chemistry in order to understand life is the concept of history. The enormous time interval of roughly four thousand million years that has elapsed since the formation of the earth has given nature the possibility of trying an almost unlimited variety of structures of groups of molecules.  Among these structures there have finally been some that could reduplicate themselves by using smaller groups from the surrounding matter, and such structures therefore could be created in great numbers.  Accidental changes in the structures provided a still larger variety of the existing structures.  Different structures had to compete for the material drawn from the surrounding matter and in this way, through the `survival of the fittest,’ the evolution of living organisms finally took place.  There can be no doubt that this theory contains a very large amount of truth, and many biologists claim that the addition of the concepts of history and evolution to the coherent set of concepts of physics and chemistry will be amply sufficient to account for all biological phenomena. One of the arguments frequently used in favor of this theory emphasizes that wherever the laws of physics and chemistry have been checked in living organisms they have always been found to be correct; there seems definitely to be no place at which some `vital force’ different from the forces in physics could enter….

    When one compares this order with older classifications that belong to earlier stages of natural science one sees that one has now divided the world not into different groups of objects but into different groups of connections.  In an earlier period of science one distinguished, for instance, as different groups minerals, plants, animals, men.  These objects were taken according to their group as of different natures, made of different materials, and determined in their behavior by different forces.  Now we know that it is always the same matter, the same various chemical compounds that may belong to any object, to minerals as well as animals or plants; also the forces that act between the different parts of matter are ultimately the same in every kind of object.  What can be distinguished is the kind of connection which is primarily important in a certain phenomenon. For instance, when we speak about the action of chemical forces we mean a kind of connection which is more complicated or in any case different from that expressed in Newtonian mechanics. The world thus appears as a complicated tissue of events, in which connections of different kinds alternate or overlap or combine and thereby determine the texture of the whole.

    When we represent a group of connections by a closed and coherent set of concepts, axioms, definitions and laws which in turn is represented by a mathematical scheme we have in fact isolated and idealized this group of connections with the purpose of clarification.  But even if complete clarity has been achieved in this way, it is not known how accurately the set of concepts describes reality.

     These idealizations may be called a part of the human language that has been formed from the interplay between the world and ourselves, a human response to the challenge of nature.  In this respect they may be compared to the different styles of art, say of architecture or music.  A style of art can also be defined by a set of formal rules which are applied to the material of this special art.  These rules can perhaps not be represented in a strict sense by a set of mathematical concepts and equations, but their fundamental elements are very closely related to the essential elements of mathematics.  Equality and inequality, repetition and symmetry, certain group structures play the fundamental role both in art and in mathematics.  Usually the work of several generations is needed to develop that formal system which later is called the style of the art, from its simple beginning to the wealth of elaborate forms which characterize its completion.  The interest of the artist is concentrated on this process of crystallization, where the material of the art takes, through his action, the various forms that are initiated by the first formal concepts of this style.  After the completion the interest must fade again, because the word `interest’ means: to be with something, to take part in a process of life, but this process has then come to an end.  Here again the question of how far the formal rules of the style represent that reality of life which is meant by the art cannot be decided from the formal rules.  Art is always an idealization; the ideal is different from reality — at least from the reality of the shadows, as Plato would have put it — but idealization is necessary for understanding.

    This comparison between the different sets of concepts in natural science with different styles of art may seem very far from the truth to those who consider the different styles of art as rather arbitrary products of the human mind. They would argue that in natural science these different sets of concepts represent objective reality, have been taught to us by nature, are therefore by no means arbitrary, and are a necessary consequence of our gradually increasing experimental knowledge of nature.  About these points most scientists would agree; but are the different styles of art an arbitrary product of the human mind?  Here again we must not be misled by the Cartesian partition.  The style arises out of the interplay between the world and ourselves, or more specifically between the spirit of the time and the artist.  The spirit of a time is probably a fact as objective as any fact in natural science, and this spirit brings out certain features of the world which are even-independent of time, are in this sense eternal.  The artist tries by his work to make these features understandable, and in this attempt he is led to the forms of the style in which he works. Therefore, the two processes, that of science and that of art, are not very different.  Both science and art form in the course of the centuries a human language by which we can speak about the more remote parts of reality, and the coherent sets of concepts as well as the different styles of art are different words or groups of words in this language….

Here is a truly beautiful mind, grounded not just in mathematics and scientific theory but informed by human experience.  In the rest of the work Heisenberg outlines the philosophical implications of modern physics for the history of human thought.  His conclusion speaks to our own time, 55 years from where he stood.  Though his primary concern was the conflict between the West and the Communist dictatorships–and the possible use of nuclear weapons, for which modern physics, he felt, bore a great deal of responsibility–he also foresaw a different type of conflict.  This was a coming conflict originating from those parts of society whose foundations relied on, to use his term, narrow doctrines of understanding–doctrines that would feel threatened as the discoveries of modern physics revealed new knowledge of the universe and humanity’s place in it.  His final note is hopeful, but what other choice did he have than to be hopeful?  The alternative is the extinction of the human species, and perhaps it is that–self-preservation–that will bring about, in the end, his final sentiment.

“…Finally, modern science penetrates into those large areas of our present world in which new doctrines were established only a few decades ago as foundations for new and powerful societies.  There modern science is confronted both with the content of the doctrines, which go back to European philosophical ideas of the nineteenth century (Hegel and Marx), and with the phenomenon of uncompromising belief.  Since modern physics must play a great role in these countries because of its practical applicability, it can scarcely be avoided that the narrowness of the doctrines is felt by those who have really understood modern physics and its philosophical meaning.  Therefore, at this point an interaction between science and the general trend of thought may take place.  Of course the influence of science should not be overrated; but it might be that the openness of modern science could make it easier even for larger groups of people to see that the doctrines are possibly not so important for the society as had been assumed before.  In this way the influence of modern science may favor an attitude of tolerance and thereby may prove valuable.

On the other hand, the phenomenon of uncompromising belief carries much more weight than some special philosophical notions of the nineteenth century.  We cannot close our eyes to the fact that the great majority of the people can scarcely have any well-founded judgment concerning the correctness of certain important general ideas or doctrines. Therefore, the word `belief’ can for this majority not mean `perceiving the truth of something’ but can only be understood as `taking this as the basis for life.’  One can easily understand that this second kind of belief is much firmer, is much more fixed than the first one, that it can persist even against immediate contradicting experience and can therefore not be shaken by added scientific knowledge.  The history of the past two decades has shown by many examples that this second kind of belief can sometimes be upheld to a point where it seems completely absurd, and that it then ends only with the death of the believer.  Science and history can teach us that this kind of belief may become a great danger for those who share it.  But such knowledge is of no avail, since one cannot see how it could be avoided, and therefore such belief has always belonged to the great forces in human history.  From the scientific tradition of the nineteenth century one would of course be inclined to hope that all belief should be based on a rational analysis of every argument, on careful deliberation; and that this other kind of belief, in which some real or apparent truth is simply taken as the basis for life, should not exist.  It is true that cautious deliberation based on purely rational arguments can save us from many errors and dangers, since it allows readjustment to new situations, and this may be a necessary condition for life.  But remembering our experience in modern physics it is easy to see that there must always be a fundamental complementarity between deliberation and decision.  In the practical decisions of life it will scarcely ever be possible to go through all the arguments in favor of or against one possible decision, and one will therefore always have to act on insufficient evidence.  The decision finally takes place by pushing away all the arguments – both those that have been understood and others that might come up through further deliberation – and by cutting off all further pondering.  The decision may be the result of deliberation, but it is at the same time complementary to deliberation; it excludes deliberation.  Even the most important decisions in life must always contain this inevitable element of irrationality.  The decision itself is necessary, since there must be something to rely upon, some principle to guide our actions.  Without such a firm stand our own actions would lose all force.  Therefore, it cannot be avoided that some real or apparent truth form the basis of life; and this fact should be acknowledged with regard to those groups of people whose basis is different from our own.

Coming now to a conclusion from all that has been said about modern science, one may perhaps state that modern physics is just one, but a very characteristic, part of a general historical process that tends toward a unification and a widening of our present world.  This process would in itself lead to a diminution of those cultural and political tensions that create the great danger of our time. But it is accompanied by another process which acts in the opposite direction. The fact that great masses of people become conscious of this process of unification leads to an instigation of all forces in the existing cultural communities that try to ensure for their traditional values the largest possible role in the final state of unification.  Thereby the tensions increase and the two competing processes are so closely linked with each other that every intensification of the unifying process — for instance, by means of new technical progress — intensifies also the struggle for influence in the final state, and thereby adds to the instability of the transient state.  Modern physics plays perhaps only a small role in this dangerous process of unification.  But it helps at two very decisive points to guide the development into a calmer kind of evolution.  First, it shows that the use of arms in the process would be disastrous and, second, through its openness for all kinds of concepts it raises the hope that in the final state of unification many different cultural traditions may live together and may combine different human endeavors into a new kind of balance between thought and deed, between activity and meditation.