10.20.2008

Cause, Effect, Coincidence, Association and Chicanery: part 1

Why do people fall for utter nonsense? Marketeers, psychics, con artists, politicians, quacks, pundits, mystics – the list of people who essentially make their living by lying through their teeth to everyone else seems endless. And it's not as if they are particularly crafty about it or that their lies are that sophisticated. They aren't even very sneaky about it. Most, if not all, of the ridiculous claims made by these people can easily be discounted by one or two relatively simple follow-up questions or straightforward logic. Why then are they still so successful at manipulating people? How can a group of our leaders go so far as to ridicule 'the reality-based community' (who prefer to make decisions in a somewhat logical manner), knowing full well that a majority won't know the difference or at least won't care enough to be concerned? A corollary question is why people are immune to some of these cons but vulnerable to others. Why can one person see through the absurdity of an ancient mythology and still fall for the sleight of hand of a psychic con artist? Since both are selling the same logic-free wares, why accept one and reject the other? How do we compartmentalize logic in one arena and throw it away when we read a fortune cookie? Why can our species articulate splendid methods of deductive and logical reasoning and yet rarely implement them in day-to-day practice?

Simple: humans make decisions using heuristic methods that are highly vulnerable to cognitive biases. A thorough assessment of our biases will take a number of installments, but a discussion of 'Availability Heuristics' is a good place to start.

What are Heuristics and Availability Heuristics? Heuristics are “strategies using readily accessible, though loosely applicable, information to control problem-solving in human beings and machines” (Pearl, 1983). They are remembered patterns that can be used to make educated guesses or invoke 'common sense' in decision-making. Availability heuristics are a well-studied form of cognitive bias in which the greatest decision-making weight is applied to the most easily remembered pattern or heuristic, rather than to the most precise or accurate one, which may be less easily recalled. And that is a bit of a problem, because heuristics are often generated through experiences that are colored by whatever circumstances surround the event that triggers them. Why is that such a problem? Because we incorporate coincident associations into these pearls of wisdom, and they will affect future decisions regardless of whether true causality exists. Causality (a true cause-and-effect linkage from one event to another) is not something that bothered our ancestors as they tried to avoid perils. Better for them to recall vivid patterns of information that they 'associated' with perilous circumstances, causality or no. The price of over-calling peril was usually not that great, while the cost of under-triage could be death. This may explain our tendency to read much more into simple coincidences (to the great delight of con artists) than is warranted by reality. All this of course presupposes that our minds are an ad hoc collection of bits and pieces of re-purposed animal behaviors rather than unique souls endowed by a higher being. Holy texts may argue the latter, but behavioral studies overwhelmingly support the former.
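The distinction can be sketched in a few lines of code. This is a toy model with invented rules and numbers, not a claim about how brains implement anything: the same memory store yields different answers depending on whether selection weights predictive accuracy or ease of recall.

```python
# Toy model of availability bias (illustrative numbers, invented rules):
# each remembered heuristic has a predictive accuracy and a
# 'recall ease' -- how readily it springs to mind under time pressure.

heuristics = [
    {"rule": "dark clouds mean a storm",                   "accuracy": 0.60, "recall_ease": 0.95},
    {"rule": "falling pressure + wind shift mean a storm", "accuracy": 0.90, "recall_ease": 0.25},
]

def deliberate_choice(candidates):
    # A deliberate reasoner weighs heuristics by predictive accuracy.
    return max(candidates, key=lambda h: h["accuracy"])

def availability_choice(candidates):
    # An availability-biased mind grabs whatever is easiest to recall.
    return max(candidates, key=lambda h: h["recall_ease"])

print(deliberate_choice(heuristics)["rule"])    # the accurate but obscure rule
print(availability_choice(heuristics)["rule"])  # the vivid, easily recalled rule
```

The point of the sketch is that both choosers consult exactly the same stored patterns; the bias lives entirely in the selection criterion.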

It is a fact that for every incidence of true causality, there are countless coincident events whose only real association is created, and subsequently faithfully defended, in our minds. The human mind is loath to tolerate uncertainty (another cognitive bias) and readily substitutes coincident events into heuristics, applying them with the same zeal whether they contain any fragment of cause and effect or not. As time goes on we stop trying to use the heuristic to predict future events and begin to search for situations that conform to the circumstances defined by the heuristic. This reinforces its power over our thoughts, whether that power is real or imagined, but also destroys any true predictive value that might have been associated with it in the first place (yes, that last sentence is essentially my personal definition of religion...). Such is the associative power of available heuristics that almost nothing can change their perceived power short of death or significant personal cost.

How do Heuristics and Availability play out in real world examples? As it turns out, nature provides numerous opportunities to study these phenomena. One of the wonderful examples of evolutionary convergence is mimicry – a species evolves characteristics that confer a survival advantage, presumably through similarities to a recognizably dangerous species. Theoretically these similarities confer at least some of their advantage through reduced predation. But for this to be the case, at least 3 things must be true: 1) predators must depend upon heuristics in prey selection; 2) these heuristics must be subject to availability bias; and 3) the mimic must have evolved these characteristics after the dangerous species being imitated. Let's defer discussion of the third criterion for now.

The case of the coral snake and its presumptive mimic, the scarlet king snake, is a great example of heuristics and availability in action in nature. The coral snake is North America's cobra cousin, possessing powerful neurotoxic venom and sporting a jaunty livery of yellow, black and red bands. These distinctive bands alternate 'yellow, red, yellow, black, yellow, red...' and so on. The scarlet king snake also has yellow, red and black bands, but these occur in the pattern 'yellow, black, red, black, yellow...'. In other words, the venomous reptile's red bands touch only yellow bands, while the harmless one's red bands touch only black ones. This mimicry is clearly only an approximation of the dangerous species. If you see them side by side the differences are easy to spot (preferably through glass...). This has led to heuristics like 'yellow and red and you're dead', etc.

Here's where availability comes into play. Consider 4 possible heuristics regarding these snakes: 1) avoid snakes with alternating yellow, red and black bands where the black bands are bordered in yellow; 2) yellow and red and you're dead; 3) avoid yellow, red and black striped snakes; or 4) avoid snakes. Number '4' is clearly the simplest to remember (and probably the people's choice...), so it benefits from 'availability'. For a species that routinely eats snakes, '3' would probably be the heuristic of choice, and an explanation as to why the mimicry need not be that precise, since animals in nature are unlikely to pull out a book on the identifying features of prey. Yeah, you may pass on an occasional meal, but you live to eat another day. Choices '1' or '2' would only be of interest to herpetologists or the occasional amorous king snake.
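The tradeoff among these four heuristics can be made explicit with a toy scoring exercise. The encounter data and the crude string patterns below are invented for illustration: every rule keeps you alive, but the simpler, more 'available' rules cost progressively more meals.

```python
# Scoring the snake heuristics from the text against a toy set of
# encounters. Patterns are simplified strings and the data are
# invented for illustration; only the coral snake is dangerous.

encounters = [
    {"pattern": "yellow,red,yellow,black", "banded": True,  "dangerous": True},   # coral snake
    {"pattern": "yellow,black,red,black",  "banded": True,  "dangerous": False},  # scarlet king snake
    {"pattern": "plain brown",             "banded": False, "dangerous": False},  # harmless brown snake
]

# Heuristics 1 and 2 both reduce to the red-touching-yellow test here.
heuristics = {
    "1/2: red touching yellow": lambda e: "yellow,red" in e["pattern"] or "red,yellow" in e["pattern"],
    "3: avoid banded snakes":   lambda e: e["banded"],
    "4: avoid all snakes":      lambda e: True,
}

results = {}
for name, avoid in heuristics.items():
    missed_dangers = sum(1 for e in encounters if e["dangerous"] and not avoid(e))
    forgone_meals  = sum(1 for e in encounters if not e["dangerous"] and avoid(e))
    results[name] = (missed_dangers, forgone_meals)
    print(f"{name}: missed dangers={missed_dangers}, forgone meals={forgone_meals}")
```

The precise rule costs nothing but is the hardest to recall under pressure; rule 4 is trivially memorable but maximally wasteful, which is exactly the economics described above.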

Remember that logical assessment and deliberate consideration of options is a very recent luxury enjoyed, as far as we can tell, only by technologically advanced humans. Most decisions in nature need to be rapid and decisive if one wishes to avoid being eaten or killed. Close is generally good enough in horseshoes, hand grenades and almost all decisions in nature. (As an aside, it is interesting to hypothesize that the general human aversion to snakes might be a behavioral artifact inherited from our African ancestors, who more routinely encountered nasties like the cobra, puff adder, mamba, etc. as they traveled the African plains; a general serpent-aversion heuristic might have been a good strategy. With no particular survival advantage associated with a love of snakes, this trait may well have persisted. Could this be the instinctive origin of Satan as a serpent?)

Doubt the significance of availability heuristics to cognition? Try to recall the pattern difference between the coral and king snakes. Which one(s) come immediately to mind? Availability heuristics are heavily leveraged in all of our intensive training programs. How many times have you heard military trainers talk about repetitive training so that recruits will 'act instinctively' in combat? The goal of these programs is to overcome existing availability heuristics (such as self-preservation) and replace them with militarily useful ones. (In heuristic terms, 'cowardice' might be better termed either inadequate reprogramming of availability or reversion to primary heuristics under severe stress.) Not everyone in government may believe in evolution, but they surely leverage its implications to accomplish their goals... Think of the GOP strategy of creating availability heuristics like McCain = patriot and security, while Obama = socialist, terrorist, Muslim. The persistence of such associations is a clear example of the power of availability heuristics over decision-making, including on topics as important as national leadership. Marketeers may not study neurochemistry and human evolution, but they put availability to work creating all those little ditties that clog our minds and create the brand loyalty that translates to a healthy corporate bottom line.

So why should any of this be of interest beyond behavioral psychologists and a few geeks, such as myself, working in machine intelligence research? Nothing could be more important. We are facing modern problems and complex global decision environments with cognitive processes that evolved to handle snap decisions in situations where the stakes were individual (or small cohort) survival. Our technological advantages over predators and many historical environmental dangers reduce the need for these snap decisions. But the cost is that these same technologies create more generalized and extensive dangers to the species, dangers that demand deliberate, logical approaches, and we don't have time to evolve them. If we are to survive we must understand how our genes guide our cognition and create societal memes to mitigate our limitations before we make a snap decision to destroy ourselves.

16 comments:

pboyfloyd said...

Good stuff!

Sorry, can't think of anything constructive to say.

Saint Brian the Godless said...

Pliny:

Fascinating read. Interesting way to look at it. I'll offer more substantive commentary tomorrow.

You're a smart one, aren't you?
:-)

Saint Brian the Godless said...

Pliny, I added your place here to my favorite bloglist on my own place. Hope you don't mind.

Saint Brian the Godless said...

If we are to survive we must understand how our genes guide our cognition and create societal memes to mitigate our limitations before we make a snap decision to destroy ourselves.
-------------------
So what you're really saying is, we're all doomed.

Pliny-the-in-Between said...

So what you're really saying is, we're all doomed. SteBtG
---------
With the people I have encountered in some of these blogs, my prediction for survival has improved pretty remarkably. Nov 3 will be a major milestone in determining our overall chances!

Thanks for the link as your site has evolved (or was it intelligently designed ;)) into a great discussion forum.

GearHedEd said...

Pliny,

Great posts in your blog. I'll be adding it to my favs as well.

I've heard (mostly from reading sci-fi from Asimov and Clarke) that the main hurdle to overcome in artificial intelligence creation is a heuristic problem; i.e. that it's currently close to impossible to write algorithms that can make useful decisions with incomplete information. Are you one of the guys working toward a solution, and what's your take on cautionary tales such as the 'Terminator' movies, '2001, A Space Odyssey', 'I, Robot', etc. where thinking machines start to make decisions that we didn't intend and that are harmful to us? Is that just sci-fi, or is there real concern over these issues?

Saint Brian the Godless said...

I am new to this thing, heuristics. I've often wondered how they would ever teach a computer to make complex decisions such as understanding colloquial English and slang in a voice interface. Is that the sort of thing we're talking about here?

As to your comments in the post itself, I guess I see it all as the age-old conflict between BELIEF and THOUGHT. Beliefs are inherently bad for ya, due to the fact that they aren't update-able like THOUGHTS are. So I'm pro-thought.


Love your coral snake-king snake example, btw. Mimicry has always fascinated me. I appreciate your ability to not only understand your field of endeavor but also to be able to have an understanding of the natural world in your arsenal as well. Well-rounded of you.

On mimicry: I like to consider myself an amateur naturalist. My particular areas of interest and familiarity are entomology and herpetology, so your snake example was dear to my heart.

About ten years ago I was in Brazil on a business trip. (Relating to gemstone buying)
We were being shown around by a local man, Misael. A very nice guy. A religious man, a Seventh-Day Adventist, but not a proselytizer at all. I liked him.

Well, he was driving us around in a VW bug, and we went into some museum, and when we got out, there was this big wasp on the roof of the VW.

I pointed to it and asked "Is that dangerous? Does it sting?"

He replied in accented English, "Oh yes, very painful. Don't touch it."

Now, there were four other people watching this, so it was particularly enjoyable to me...

I picked up the wasp.

Misael's eyes bugged out and he said "What are you doing?" but along about then he noticed that I wasn't in any pain.

I opened my hand and let it fly away and then told him "I read a lot about insects. It's not a wasp. It's a kind of moth, that is mimicking a wasp so that the birds think it's one and don't eat it."

I just loved the looks on their faces when I palmed that "wasp..." Too funny.

Pliny-the-in-Between said...

Dear GHE, and SteBG

Thanks for the insights and questions. Your 'wasp' example is great. Nature is so amazing; I've never understood why people expend so much time on metaphysics when the natural world has so much to offer, if only they would go outside and enjoy it.

As to the discussion of Machine Intelligence, I'd like to defer that until my next posting - which will be on that very subject - or my take on it at least if that sounds interesting in any way.

As a prelude, my specific research in the area has been to reject attempts at mimicking human thought for some of the reasons mentioned in this posting - human thought evolved as a series of ad hoc responses to new conditions using re-purposed parts of 'animal brains'. Many of today's problems may be better addressed by new approaches that are not limited by the 'reptile' parts of our brain.

The T2 question is an interesting one that I will cover in the next post.

Pliny-the-in-Between said...

I see it all as the age-old conflict between BELIEF and THOUGHT. Beliefs are inherently bad for ya, due to the fact that they aren't update-able like THOUGHTS are. So Im pro-thought. (SteBG)

---------------------

I agree, but what I would like to see us attempt (as much as possible) is to move the national conversation from beliefs vs. logical thought toward incorporation of behavioral psychology and research into cognitive biases. A big part of my reasoning is that one man's thoughts are another man's beliefs, at least in their respective minds if not in reality. As was well demonstrated in D'Souza's blog, many people 'know' things in absolute terms. The question for the rest of us is whether their 'knowing' has any relationship to reality.

Admittedly this is a very mechanistic view of cognition (which doesn't mean it's wrong, of course ;)). Unsubstantiated beliefs can probably (I admit the 'probably' here) be described and studied as examples of a combination of cognitive biases. That is the sort of thing advocated by Daniel Dennett, etc. (maybe that's a topic worthy of a future posting?)

Thanks for your insights.

Pliny-the-in-Between said...

Clarification:

In discussing mimicry in nature; "these heuristics must be subject to availability bias", it occurs to me that I did not really explain why this should be the case.

(The chief assumption here is that mimicry has evolved to reduce predation on the ersatz species.)

Mimicry must be the result of natural selection. It is extremely improbable that a single mutation or set of mutations would result in an ideal mimic. This suggests that the eventual form we see is the result of several incremental changes, each of which had to confer an increasing advantage in order to become common in the animal's gene pool. Therefore it seems logical that intermediate forms of mimicry must confer at least some advantage. This would not be true if the predator species were able to routinely identify subtle variations. Therefore, if heuristics are important to prey selection, then these heuristics are biased by availability - i.e. the simplest patterns dominate. Otherwise incremental variation would be unlikely to drive evolution toward better mimics.
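This gradient argument is easy to make concrete. Under the toy assumption that a predator's avoidance probability simply tracks resemblance to the dangerous model (a stand-in for a coarse, availability-biased pattern match; all numbers invented), survival improves monotonically with each increment of mimicry, so imperfect intermediate forms are still favored by selection:

```python
import random
random.seed(42)

# Toy selection model: a predator using a coarse pattern match avoids
# a prey item with probability proportional to how closely the prey
# resembles the dangerous model species (similarity 0.0 to 1.0).
# Only the predation channel is modeled; everything else is ignored.

def survival_rate(similarity, trials=20000):
    survived = 0
    for _ in range(trials):
        if random.random() < similarity:  # predator misreads prey as dangerous
            survived += 1
    return survived / trials

# Each incremental gain in resemblance yields an incremental gain in
# survival -- the gradient that lets selection climb toward better mimics.
for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"similarity {s:.2f} -> survival {survival_rate(s):.2f}")
```

If predators instead discriminated perfectly (avoiding only exact matches), the survival curve would be flat until similarity reached 1.0 and intermediate forms would gain nothing, which is the point of the clarification above.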

Saint Brian the Godless said...

On mimicry and related mechanisms:

I was thinking about eyespots. Many moths and butterflies have eyespots on their wings, spots that look like eyes. And not insectoid eyes, either. Reptilian or mammalian eyes by their appearance. So this means that the predators of these moths, probably mostly birds, recognize a pair of eyes as a danger signal, and avoid the moths that have them, thinking they're a reptile or mammal lying in wait.

Saint Brian the Godless said...

Also coming to mind is that bay in Japan where people have been crab fishing for centuries. Over the years, whenever a fisherman caught a crab that seemed to have a design on its back that somewhat resembled a person's face, they threw it back out of superstition. Now, after all this time, the crabs in the bay FREQUENTLY have fairly clear human faces on their backs. They've evolved to have the design that most protects them from humans, because humans have culled out and eaten most of the ones that don't have a facelike design.

Pliny-the-in-Between said...

...that the main hurdle to overcome in artificial intelligence creation is a heuristic problem; i.e. that it's currently close to impossible to write algorithms that can make useful decisions with incomplete information. Are you one of the guys working toward a solution,... GHE

---------
As it turns out it isn't impossible to create these kinds of algorithms - merely really hard...

The heuristic problem has been a big one both from the aspect of making decisions with incomplete data and from the standpoint of the 'garbage in, garbage out' problem that plagues the classic Bayesian systems where incorrect heuristics are learned and later referenced (a problem such systems share with humans).

This particular problem may well have been solved (at least in well defined knowledge domains) though there is a lot of testing that needs to be done before that claim can be verified. Initial work is promising with very good machine concordance with decisions made by domain experts with limited data. As for the garbage in problem this experimental system is designed to be extremely 'skeptical'. It performs extensive causality assessments on candidate heuristics which must be rigorously met before it recommends that they be incorporated into its programming. If these criteria aren't met, it will either reject the heuristic or use it provisionally in order to track trends. (Don't ya wish voters were as methodical...)

Yes, I am one of those geeks working on this problem and the above system is my pride and joy.
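Since the system itself isn't described in detail here, the 'skeptical' gatekeeping can only be sketched generically. The triage logic and thresholds below are invented purely for illustration and are not the actual criteria: a candidate heuristic is accepted only with ample support and few exceptions, rejected when exceptions dominate, and otherwise held provisionally to track trends.

```python
# Generic sketch of 'skeptical' heuristic triage (invented thresholds,
# not the author's actual system): accept a candidate rule only with
# ample support and few exceptions, reject it when contradictions
# dominate, and otherwise hold it provisionally to track trends.

def triage_heuristic(support, exceptions, min_support=50, max_exception_rate=0.05):
    total = support + exceptions
    if total == 0:
        return "provisional"        # no evidence yet; keep watching
    exception_rate = exceptions / total
    if total >= min_support and exception_rate <= max_exception_rate:
        return "accept"             # incorporate into the knowledge base
    if exception_rate > 0.5:
        return "reject"             # contradicted more often than confirmed
    return "provisional"            # track the trend; don't act on it yet

print(triage_heuristic(98, 2))      # well supported, few exceptions
print(triage_heuristic(3, 7))       # contradicted more often than confirmed
print(triage_heuristic(10, 1))      # promising but not enough evidence yet
```

The contrast with the availability-biased chooser in the post is the point: here a rule earns its way in through evidence rather than through being vivid or easy to recall.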

Pliny-the-in-Between said...

SteBG

The 'Samurai crab' case is probably the most detailed example of incremental evolution in response to predator selection bias I can think of. These crabs have benefited from both an availability bias (human heuristics which see faces in almost anything even with minimal data) and our metaphysical predilections.

pboyfloyd said...

People are gullible and easily manipulated towards their own core beliefs and they are 'wise-against' and practically incorrigible when it comes to information that goes against their core beliefs.(worldview?)

I think that individuals' opinions are rooted in their community's opinions.

Stories of kids going off to college and returning home with 'crazy' ideas come to mind.

The manipulators know their target audiences want to believe what they say, they are practically conning themselves.

The 'trick' is in the wording/execution.

For example, no one would vote for a politician who says that his platform involves disregarding the voters' needs entirely.

But that politician can speak positively about how he 'cares' for the country and 'small business' and 'hard working people', a series of meaningless platitudes.

The country and small businesses being 'better off' because people are working hard is not necessarily a good thing for 'you' personally if you are one paycheck away from becoming homeless.

I would suggest that most heuristics are based on the core idea that people have 'free will' and that they have effective choices, neither of which is necessarily true.

Pliny-the-in-Between said...

People are gullible and easily manipulated towards their own core beliefs and they are 'wise-against' and practically incorrigible when it comes to information that goes against their core beliefs.(worldview?)

I think that individual's opinions are rooted in their community's opinions.

pboyfloyd
------------------
I think what you say is correct. Certainly the study of human heuristic mechanisms suggests that we have a lot more hard wiring than we would like to admit, and that our responses are more mechanistic or pre-ordained than we would like. Hindsight bias studies are particularly telling.

" From an article by Eliezer Yudkowsky, the following experiment was described.

"Kamin and Rachlinski (1995) asked two groups to estimate the probability of flood damage caused by blockage of a city-owned
drawbridge. The control group was told only the background information known to the
city when it decided not to hire a bridge watcher. The experimental group was given this
information, plus the fact that a flood had actually occurred. Instructions stated the city
was negligent if the foreseeable probability of flooding was greater than 10%. 76% of the
control group concluded the flood was so unlikely that no precautions were necessary;
57% of the experimental group concluded the flood was so likely that failure to take
precautions was legally negligent. A third experimental group was told the outcome and also explicitly instructed to avoid hindsight bias, which made no difference: 56%
concluded the city was legally negligent. Judges cannot simply instruct juries to avoid
hindsight bias; that debiasing manipulation has no significant effect."

So much for free will.