Saturday, July 7, 2012

How Our Cognitive Shortcomings Drive Utopianism

Humanity's attempts at utopia have consistently failed.  Instead, hubris has been the defining feature of most human attempts at social engineering.  We have a marked tendency, as a species, to embark on grandiose projects driven not by rationality or data, but by ideology or magical thinking.  Often, these ideologies are masked by pseudo-scientific rationale or portrayed as necessary, albeit painful, steps towards a public good.  Some of these projects exist for cultural reasons: the ridiculous opulence of Dubai, for example.  Others are simply utopian thinking: Biosphere 2, or Soviet (and later Chinese) efforts at agricultural collectivism, spring to mind.  The question becomes: what drives this ideology?  Where are the rationality checks that should prevent our leaders from pressing the self-destruct button?  Two observed tendencies in cognitive science seem to offer some form of explanation.

The first of these tendencies is the one described by the cognitive psychologist Daniel Kahneman in his recent book, Thinking, Fast and Slow.  Kahneman won the Nobel Prize in Economics for demonstrating, in work conducted with his late collaborator Amos Tversky, that people do not make rational decisions under uncertainty, particularly in markets (rationality has long been regarded by many economists as a necessary condition for markets to function).  Kahneman argues that our thinking is governed by what he calls two systems.  System 1, our instinctual, immediate response, tends to predominate, while System 2, our deliberate, rational decision-making system, has to be cajoled or forced into action.  Though we delude ourselves into believing we are rational actors, we make most of our decisions on immediate, emotional, or intuitive grounds rather than through reasoning.  The ideas most likely to appeal to us are therefore those that carry a kind of intuitive logic.  We are very poor at thinking statistically, and so have a hard time making decisions in the face of data.  Ideas that appeal on emotional grounds but are difficult to verify with data thus have greater pull societally.

Kahneman's work also indicates why planning, and particularly social planning, is so difficult for us as a species.  It explains why Sir Peter Hall's wonderful history of planning policy, Cities of Tomorrow, can almost be read as a history of failed ideas, and why necessary actions, such as the development of a comprehensive climate management plan, or even some system of governance to comprehensively manage resources, are likely beyond us as a species.  Sadly, how we make decisions and engage in planning further supports a deeply Malthusian view of the world and society.  Given our cognitive inability to engage in effective long-range planning, we act as little more than bacteria, overusing resources until we render the environment we operate in septic to ourselves.  As Kahneman himself laments, despite all his work on the subject, he has been unable to dramatically change the way he intuits the world, and is as susceptible to over-dependence on System 1 thinking as the rest of us.

The second notable explanation for our persistence in magical thinking comes from the researchers Brendan Nyhan and Jason Reifler.  In their study, When Corrections Fail: The Persistence of Political Misperceptions, Nyhan and Reifler examine confirmation bias and its impact on political thinking.  When given data, no matter how compelling, that contradicts a strongly held viewpoint, rather than rationally integrating that data into our understanding of the world and using it to change our beliefs, we tend simply to reject it, and the act of rejection can actually strengthen the initial underlying misconception.  As a result, many political arguments are intractable even when one side has a factual basis for its beliefs while the other operates solely on conjecture.  This tendency appears to render much meaningful behavior change impossible.  New fields have sprung up in response, including community-based social marketing, dedicated to finding alternative means of subverting an embedded ideology.  The approach typically involves dramatically reframing the discussion around a given issue and cultivating agents within an ideological community who are receptive to change, in the hope of spreading different beliefs through the wider community.  It by no means guarantees results, has had mixed success, and can, in some instances, be seen as manipulative.

The cognitive biases outlined above appear to be inextricably linked to how our minds function.  There seems to be something innately human in intuiting the world around us while rejecting unwelcome truths.  There is a reason, after all, that Aristotle and Plato are regarded as the fathers of Western philosophy, while uber-rationalists such as David Hume and Karl Popper occupy a lower spot in the pantheon.  It also seems to explain why we will continue to make decisions about society rooted in ideological belief systems rather than developing truly data-driven public policy.  The question becomes: will sufficient self-awareness of these limitations dramatically change how we see the world?  In light of human history thus far, that would be a wildly optimistic wager.
