Natural Election Redux

I’ve never been a fan of political litmus tests. I suspect that being President is challenging enough without having your hands tied behind your back by a bunch of idiotic pledges to avoid this or that specific thing that is the sole reason for being of some special interest group. It’s kind of like asking a pilot to promise that he or she will never turn in this direction or go above this altitude before they take off. Sometimes having the freedom to act as the situation demands is the best way to avoid that mountain up ahead when the weather changes suddenly. I tend to see the world as a pretty complex assortment of gray areas, and I always hope for someone who’s smart, can be calm and deliberate when it’s warranted, and decisive when it’s not. The exact character we need in the White House often ends up being a retrospective assessment based upon the events the President ends up facing.

It’s that last part that’s gotten me to reconsider the idea of a political litmus test. A candidate’s stand on evolutionary theory is being bandied about as such a test by a number of blogs I frequent. I think it’s a good one, and here’s why: A President may be required to make critical decisions that affect us all. Many of those decisions will be based on less than ideal information. I need to be confident that my President is committed to making those decisions on the best evidence at his or her disposal. Rejecting evolution is a clear case of just the opposite. The facts of evolution are some of the most solid knowledge in science, and it’s hard to find anything else at once so elegant and simple and yet so powerful and broad in its explanations of life on earth. At this time in history there are only four reasons one can reject the fact of evolution: religious preference over the facts, ignorance of the basic science, political pandering, being an out and out liar, or some combination of the four.

None of those four reasons are characteristics I’m seeking in a President. If you can't see the overwhelming evidence of evolution then I can't trust you to see the truth in anything less clear cut when the time comes. If you think the jury’s still out on evolution then this judge says the verdict on your qualifications as President is clear. You’ll never get my vote.


Descartes Blanche

Democrats: I think, therefore I feel guilty

Scalia: I think, therefore it should be law

The poor: I think, therefore I know I’m hungry

Republicans: I think, therefore F#%# off

Fundies: They thought for me, so F$%^$ off

Obama: I think, therefore let’s talk

Cheney (and libertarians): Who cares, F%^%&% off


Bottoms Up! (a break from the basic science)

One thing about which scientists, skeptics, philosophers and mystics can agree is that science cannot explain the existence of a consciousness without anatomy.

Those abstract thinkers on the right (the philosophers and mystics) believe this demonstrates a limitation of science. After all, they have defined the attributes of a disembodied mind and science doesn’t explain it to their satisfaction. They roll their eyes in condescension and accuse skeptics and scientists of that dreaded hallmark of concrete thinking, reductionism. After all, it’s intuitively obvious that human minds are very special in a way that cannot be defined by the observed anatomy.

It’s time for skeptics and scientists to stop granting the machinations of philosophers and mystics preferred status.

The issue is not skeptical reductionism. It is, as it has always been, a problem of philosophical overreach and imagined assumptions, matched with an a priori respect afforded such. Science cannot demonstrate the mystical properties of mind for the simple reason that these are made-up constructs to begin with. Philosophers and mystics must demonstrate a sound basis for these claims (other than word games or intuition) or we should just ignore them. Science is not reductionism; it is coherent modeling. Starting from basic reproducible observations of reality, science helps to build models of reality that fit the observable universe with greater and greater precision and accuracy. That these models cannot provide any support for the existence of, for example, a soul is less a problem with the model and more a problem with the philosophy. Philosophers can debate nonexistent realms until the end of time, since they seem more concerned with sounding convincing or defending their intuition than with being right about the true nature of existence.

Coherent modeling is superior to philosophy in this regard for a simple reason: modeling builds a ground-up view of existence, while philosophy starts from the top down and as such is largely based upon a foundation of intuition. A ground-up model is always superior because this method is most easily consistent with Occam’s Razor. When building from the ground up, you reach a point when observed reality is well approximated by the model. That end point may not be intuitive, but it will be the simplest explanation of the observed facts. (Often it isn’t intuitive, but intuition is highly overrated in science.) At this point, a scientist can be fairly comfortable that the model reflects objective reality. A coherent model built upon the details of observed reality is essentially complete once those details have been accounted for. There is no need to account for any missing parts based upon intuition or baseless assumptions.

The top-down philosopher, however, has the added burden of accounting for their initial suppositions. Starting from a series of assumptions that may well have no basis in reality, the philosopher has to account for the disparity between coherent models of reality and their hypothesis. The result? The philosopher declares victory, since science has revealed a gap between their assumptions and demonstrable reality. They announce that this proves the existence of something beyond science! No it doesn't. It's far, far more likely to prove that the assumptions of the philosopher were groundless in the first place. That such gaps are the arbitrary creations of the philosopher seems to be of little consequence. Rare is the philosopher who concedes that their intuition is faulty. No, it must be a problem with the scientific method, which is not good enough to detect the transcendent parts of reality that only they can intuit. Despite the abysmal track record of intuitive thinking in describing objective reality, philosophers cling to it as a badge of honor. Which, I suppose, is why they continue to debate Aquinas while the rest of the world has moved forward seven centuries.

If science supports the view that the evolutionary development of our senses as a means to successfully navigate objective reality, in concert with the growth in our brain’s complexity, led to the perceptions we now endow with mystical powers, rather than the other way around (the latter being a view which depends upon the existence of an abstract consciousness, a thing never seen to exist), then that’s just the way it is. It may be fun to imagine that this were not so, but it’s high time we stopped treating this as anything beyond speculative fiction.

Any philosophy that depends upon the existence of a mind without an underlying structure is guilty of such a top-down bias. No objective evidence exists in support of such minds. This is the mother of all gods of the gaps. It’s based upon the error of assigning far too much credence to our intuition about such things instead of to what we actually observe in nature. The bottom-up approach to neuroscience is producing coherent models of the human mind that fit real observations. If philosophers of the mind want to be taken seriously, then let them first prove that a disembodied mind can in fact exist outside of their intuition. If not, science needs to stop apologizing for the fact that the special nature of humans that we all love so much exists now, where it always has, only in philosophy texts.


Neuro-Not-So-Basics 2: Late Phase Long Term Potentiation of Excitatory Neurons

To rehash the last segment, axonal action potentials of a sufficient frequency (as in number of depolarizations in a given time interval) trigger plastic effects on the target dendrite in addition to the usual depolarizations associated with nerve conduction. Glutamate released by the axon acts across the synaptic cleft to activate ionotropic receptors (AMPA) in the dendrite cell membrane that allow sodium ions (Na+) to enter the cell and alter its baseline polarity, which may trigger an action potential in the target cell. In addition, the repetitive stimulation prevents the dendrite’s potential from returning to baseline before the next stimulation, resulting in a stacking of charges until a critical threshold is exceeded. Once this threshold is exceeded, changes in the dendrite membrane and glutamate binding to NMDA receptors flush magnesium ions (Mg++) out of the receptors, allowing calcium ions (Ca++) to enter the cell. The calcium activates protein kinase C (PKC) and calmodulin kinase II (CaMKII) to phosphorylate the AMPA receptors, making them more efficient at sodium transport. This lowers the threshold of the cell to future stimulation. Additional AMPA receptors already bound to the dendrite’s cell membrane are recruited into the synaptic cleft, completing the early (independent of protein synthesis) phase of Long Term Potentiation (e-LTP). The major elements of e-LTP are displayed in the first Plinygraph.

This bit covers the late phase of Long Term Potentiation. (Note that not every intermediary in the process is presented, nor are all the alternate converging pathways described.) Metabotropic (second messenger) receptors were briefly mentioned in the last installment, but their most significant effects bear on late phase changes in LTP.

Glutamate reversibly binds with the metabotropic complex (the red bits), which ultimately activates adenyl cyclase (AC). AC converts ATP from mitochondria into the cytoplasmic second messenger cyclic AMP (cAMP).

cAMP in combination with protein kinase A (PKA) transmits this message to the nucleus of the stimulated neuron. This is but one of a number of cascades available to the cell, including calmodulin kinase, growth factors, other neurotransmitters, and even cytokines from stress events, that converge within the nucleus to phosphorylate cAMP Response Element-Binding proteins (CREB).

Phosphorylated CREB binds to specific DNA segments in the promoter regions of genes, which can be transcribed into mRNA strands relevant to the translation of the peptide subunits of additional receptors like NMDA. These promoter regions where CREB binds are called cAMP Response Elements (CRE). Bound CREB, in combination with several other proteins, unfolds a segment of DNA and allows RNA Polymerase II to transcribe the relevant mRNA strands.

From the nucleus the mRNA is transported to the endoplasmic reticulum (ER) where protein synthesis generally takes place. ER is one of a number of amazing organelles located within eukaryotic cells. It can be argued that the leap from prokaryote to eukaryote was a far larger change in complexity than the evolution of mammals such as ourselves from single-celled eukaryotes. All of our cells are specialized variations on the basic plan of a eukaryote.

Within the ER, transcribed mRNA copies are translated into new receptor proteins through the interactions of ribosomes, mRNA, and tRNA-bound amino acids. Technically, the subunit peptides of the proteins get produced in the ER and then are assembled in their final form in the Golgi apparatus (left off of the diagrams, which are complex enough already). There is some controversy regarding whether the new receptors are created within the affected dendrite or in the soma, but an observed property of LTP suggests that it is the soma. More on this after a bit.

These newly minted receptors are then transported (usually via microtubules) to the synaptic membranes of the specific dendrite that triggered this cascade, not any others. The plastic change is limited to the dendrite that was subjected to high frequency stimulation. This dendrite receives an added boost in its responsiveness (in addition to e-LTP changes) to future stimuli through the addition of new receptors generated through protein synthesis.

This characteristic of LTP, which limits the plastic effects to the triggering dendrite, demonstrates LTP Selectivity.

But what happens if during this high frequency stimulation, another separate dendrite is experiencing a sub-critical stimulus? Turns out that this other dendrite experiences LTP as well. This argues for soma production of the new receptors, and the presence of a local impulse-driven cofactor that promotes transport of the new receptors since the level of stimulation in the other dendrite is too weak to trigger e-LTP cascades. This phenomenon is called LTP Association. The potential relationship of LTP association to certain cognitive biases will be a topic for the future.
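For the code-inclined, the two properties above can be caricatured in a few lines of Python. This is a made-up, rule-based cartoon, not a model of the biochemistry: the threshold and tag numbers are invented, and real synaptic tagging is far messier.

```python
# A rule-based cartoon of LTP Selectivity and LTP Association.
# All names and numbers are invented for illustration only.

def apply_ltp(dendrite_inputs, critical=3.0, tag_level=1.0):
    """dendrite_inputs: {dendrite name: stimulation level, arbitrary units}.
    Returns the set of dendrites that end up with new receptors.

    Selectivity: silent dendrites are never potentiated.
    Association: if ANY dendrite crosses the critical threshold, the soma
    synthesizes receptors, and every dendrite carrying at least a weak
    activity 'tag' receives some of them too.
    """
    soma_triggered = any(v >= critical for v in dendrite_inputs.values())
    if not soma_triggered:
        return set()
    return {name for name, v in dendrite_inputs.items() if v >= tag_level}

# High-frequency input on A, weak sub-critical input on B, silence on C:
result = apply_ltp({"A": 5.0, "B": 1.5, "C": 0.0})
print(sorted(result))  # A and B are potentiated; the silent C is not
```

The point is only the shape of the logic: a strong input anywhere unlocks soma-level protein synthesis, and any dendrite carrying a local activity tag shares in the spoils.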

This is LTP in a nutshell. LTP results in enhanced receptor response to future stimuli and more receptors. LTP appears to be a key component of the neuroplastic changes that result in the creation of new memories. But as you might expect, there is a lot more to it... Next time? The creation of new synapses and the downregulation of others. After that, the modulation of these processes by other neurotransmitters, the role of nucleic acids, then it's off to the anatomy of the brain.


Neuro-Not-So-Basics: Diving Into a Bit More Detail on Neuroplasticity

The previous installment on neuronal plasticity involved a lot of wand waving in order to keep it very basic. There is far less of that here. This is a more detailed review of neuroplasticity and one of its components, Long Term Potentiation (early phase) (e-LTP). The second installment will cover Long Term Potentiation (late phase). Certainly not all of the known processes involved in learning and memory will be presented in this review, but hopefully it gets us closer to what’s really going on behind our eyeballs. These next two installments will review some of the current knowledge about learning and memory at the synaptic level, and following that, the focus will shift to a review of cellular changes that can create new connections or prune unneeded old ones.

Below is a rendition of an excitatory synapse in the CNS with some of its critical components displayed in their ‘resting state’. Resting state is a misnomer, since a great deal of energy is required to maintain the baseline charge gradient against diffusion. Separated by the synaptic cleft, the axon is the input to the synapse, the dendrite is the receiver and output, and the astrocyte is there to support the metabolic needs of the neurons, though it serves some other vital functions as well. The astrocyte provides some of the interlacing structure that holds the synapse together (not shown for simplicity), and it polices the synaptic cleft, absorbing neurotransmitters and their byproducts between firings. I’m not going into too much detail on that part, but it should be noted that failure of that function by the astrocytes is associated with known diseases including ALS.

The axon terminus contains vacuoles filled with glutamate (glutamic acid or Glu). Glutamate is an amino acid, a flavor enhancer in Chinese food, and the most common neurotransmitter in the CNS.

The synaptic cleft is maintained with a resting excess of sodium (Na+) and calcium (Ca++) ions relative to the interior of the cells.

Embedded into the cell membrane of the dendrite are a number of protein complexes that span its lipid bilayer. Among these are AMPA (alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid) and NMDA (N-methyl-D-aspartate) ionotropic receptors for glutamate (Glu). An ionotropic receptor is one that selectively admits ions to the cytoplasm of the cell when activated (in this case by Glu). There are also metabotropic receptors that bind with transmitter. Metabotropic receptors are those that activate second messenger cascades within the cell. What does that mean? When the transmitter binds to the external portion of the metabotropic complex, it causes a conformational change in the portion of the complex that extends into the cytoplasm. This usually results in a configuration of the protein that allows it to catalyze some important reaction to create an intracellular messenger that communicates with structures within the cytoplasm. The most famous metabotropic complex is the one associated with adenyl cyclase (the Western Union of eukaryotic cells), which creates the ubiquitous intracellular messenger cyclic AMP (which will play a role in the discussion of the late phase of LTP).

So ionotropic receptors allow ions to pass into the cell, while metabotropic receptors switch on some intracellular machinery. You should also note that not all of these receptors are located within the synapse. This will be of some importance after a bit.

This is pretty much the state of things until an action potential makes its way to the synapse as a wave of membrane depolarization moves toward the terminus of the axon.

Once the terminus depolarizes, the vacuoles containing glutamate (Glu) bind with the cell membrane and discharge Glu into the synaptic cleft.

Glutamate binds to AMPA, which opens these ion channels to the cytoplasm, allowing sodium ions (Na+) to rush into the cell.

This changes the electrical potential of the dendritic cell membrane,

Which may or may not be enough to generate a responding action potential in the target neuron. Absent any additional stimulus within a short period of time, the Glu is reabsorbed and the membrane potentials return to baseline. This can occur over and over again without any noticeable plastic change to the synapse.

However, the situation changes considerably if the synapse is subjected to high frequency bombardment. Imagine a situation like the one below, where the axon is being stimulated over and over again with such frequency that the dendrite cannot return to baseline polarity before the next signal hits. Since the dendrite is unable to return to baseline between firings, the potentials begin to stack one onto the next until a critical electrical threshold is exceeded.
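For those who like to tinker, here’s a toy sketch of that stacking effect in Python. It is not a biophysical model; the pulse size, decay rate, and threshold are arbitrary numbers chosen only to show why pulse frequency matters.

```python
import math

# Toy illustration only: a leaky dendrite whose potential decays toward
# baseline between incoming pulses. All units and constants are invented.

def stacked_potential(n_pulses, interval, pulse=1.0, decay=0.5):
    """Potential after n_pulses arriving `interval` time units apart.
    Between pulses the potential decays exponentially toward baseline (0)."""
    v = 0.0
    for _ in range(n_pulses):
        v += pulse                        # each depolarization adds a fixed bump
        v *= math.exp(-decay * interval)  # partial recovery before the next pulse
    return v

THRESHOLD = 3.0  # made-up critical value for triggering the e-LTP cascades

slow = stacked_potential(10, interval=5.0)  # low frequency: full recovery between hits
fast = stacked_potential(10, interval=0.2)  # high frequency: the charges stack

print(f"slow train: {slow:.2f}, fast train: {fast:.2f}, threshold: {THRESHOLD}")
print("threshold exceeded by fast train:", fast > THRESHOLD)
```

Run it and the low-frequency train never gets anywhere near threshold, while the rapid train stacks well past it, which is the whole trick behind e-LTP induction.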

Once this threshold is exceeded several things happen as part of what is called Long Term Potentiation (early phase) (e-LTP).
  1. The electrical charge changes are sufficient to dislodge magnesium ions (Mg++) that normally block the ion channels of NMDA.
  2. Glutamate binds to NMDA and calcium ions (Ca++) enter the cell.
  3. There is plenty of surplus Glu about to bind to the metabotropic receptors in the synapse to activate second messengers in the cytoplasm. (not shown)
  4. Calcium ions in conjunction with the second messengers activate a number of intracellular cascades including protein kinase C (PKC) and calmodulin kinase II (CaMKII).
This activated complex does a couple of things. First, it phosphorylates AMPA, which dramatically improves each molecule's ability to transmit sodium ions to the cytoplasm. This change can persist for some time even if no other changes occur. The enhanced ion channels are far more sensitive to future stimulation and may even promote neuron depolarization at signal frequencies that previously would have been below the threshold for response. So this is a plastic change to the synapse.

The second effect is that AMPA receptors outside of the synapse are drawn into it. The result is more available receptors without the need for additional protein synthesis further strengthening the synapse's responsiveness to future stimulation.

Protein synthesis is vital to the next stage of the process, which is the late phase of Long Term Potentiation (LTP) which fixes and extends these changes. But that is a story for another day. Hope this is interesting.


Neuro Basics, Part 3: the magnitude of neuronal connectivity

In parts one and two we touched on the basic building blocks of the CNS and a couple of the mechanisms that allow our biological neural networks to develop with experience. This part is mercifully short but important (should have done this one before plasticity). Before starting into the higher levels of organization within the CNS, it's good to really nail down the level of complexity that exists at the most basic level. One of the hang-ups people have about a biological explanation for thought is an underappreciation of just how extensive our neural networks really are.

Part one covered very basic structures of the neuron with the picture below.

We covered the fact that the level of connectivity of a real neuron was considerably higher, but how high is it really? The image below is closer to reality. This image looks much more complex than the simplified one above. Surely, it's an accurate illustration of a real neuron.

No, not really. This illustration shows 800 dendrites. Eight hundred individual connections with other neurons to a single neuron. And it's still just a simplification. The average neuron in the average adult human brain has 7000+ connections. So this complex-looking picture is 8-10x too simple to represent the average human neuron.

Now imagine the connections to this one neuron.

In this image, stimulating neurons are the ones with purple dendrites and regulating ones are in green. Some neurons are tasked with suppressing the firing of a neuron and others with stimulating it to pass its signal to those up its chain. This is just the connectivity of one amongst 10-11 billion others. Each of which is connected to a similar number of others.

Looks very complex, doesn't it? The only problem is that these last two pictures would have to include 260x more connections to be an accurate picture of an average neuron. Do the math. Is it really that hard to imagine that some extraordinary processes that at times look magical can be accomplished with the numbers of combinations or permutations that actually exist in our brains?
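Since I said "do the math," here is a quick back-of-the-envelope check in Python using the figures above. (The ~27 connections per picture is inferred from the 260x statement rather than counted, so treat it as an estimate.)

```python
# Back-of-the-envelope check of the connectivity numbers in this post.

connections_per_neuron = 7000   # average adult human neuron, per the text
illustrated = 800               # connections in the 'realistic' illustration

print(connections_per_neuron / illustrated)   # ~8.75, hence "8-10x too simple"
print(connections_per_neuron / 260)           # ~27: roughly what the last pictures show

neurons = 10e9                  # the post's 10-11 billion figure
total_synapses = neurons * connections_per_neuron
print(f"{total_synapses:.0e} total connections")   # on the order of 7e+13
```

Tens of trillions of connections is the kind of combinatorial space where "magical-looking" processes stop looking so magical.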


Basic Science Saturday on the QT

WooHOO! Basic Science Saturday. Be warned! This is a very simplified description and illustration of a very important concept: Quantum Tunneling. It's a curious thing, but I think it’s fairly easy to understand qualitatively, as long as we avoid the usual bad analogies and the math, which can be more than a bit dicey.

I’m going to forgo the usual bad analogies because, given the orders of magnitude that separate us from the quantum world, I think they add more confusion than they’re worth. What works for a single particle is one thing, but what works for an object made up of a trillion trillion particles is something else.

Just how removed the world of one subatomic particle is from our macro world is something I hope to illustrate below. How big is a proton? It’s roughly 10^-13 cm in diameter, although the ‘diameter’ of a proton is sort of a misnomer to begin with, since it’s not like a little rubber ball with discrete boundaries. It’s hard to contextualize an object that small. We need a frame of reference. So let’s, for once, find a use for Charlie Brown to illustrate the size issues.

Take Charlie Brown. (Somebody please take him! How much longer must we endure recycled comics that weren’t funny to begin with? But I digress.)

Let’s imagine mister whiny and boring not with his usual round head, which I have arbitrarily estimated at 38 cm (it’s a big fat head; not really a blockhead but more of a big fat ball of self-loathing), but with a head the size of our sun. Assuming we preserve the relative proportions of the sun to CB’s usual blockhead, how big would a proton be?

Would you believe it would be about 3.7 microns?! Even if whimpering simp Charlie Brown’s head were the size of the orbit of Neptune, the proton would still only be a hair shy of an inch in diameter. So we are talking teensy here. So far removed is this world from our macro one, that we should never expect bodies with our mass and numbers of particles to behave in the manner we’re about to describe. Statistically, it’s essentially impossible.
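If you want to check my arithmetic, here is the scaling in a few lines of Python. The 38 cm head is my own estimate from above; the astronomical sizes are rounded textbook values.

```python
# Scaling check for the Charlie Brown comparison. Head size is the post's
# own estimate; sun and Neptune-orbit sizes are rounded standard values.

head = 0.38              # m, Charlie Brown's head (arbitrary estimate)
proton = 1e-15           # m, i.e. ~10^-13 cm
sun_diameter = 1.39e9    # m
neptune_orbit = 9.0e12   # m, rough diameter of Neptune's orbit

scaled_sun = proton * sun_diameter / head
scaled_neptune = proton * neptune_orbit / head

print(f"head = sun: proton ~ {scaled_sun * 1e6:.1f} microns")
print(f"head = Neptune's orbit: proton ~ {scaled_neptune * 100 / 2.54:.2f} inches")
```

Sure enough: around 3.7 microns in the first case, and just under an inch in the second.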

But down deep at the level of the proton or electron, things are strange. Turns out that’s a really good thing for us.

A proton isn't a discrete particle. It is something that is explained as having wave-particle duality. We have a wave function to describe a proton, and there is the possibility of a discrete solution to the equation at a specific time and place - that’s the particle part. For better or worse I tend to think of subatomic particles as a specific observed instance of a wave function. This actually helps me to mentally accept the duality. Unless we are seeing a specific instance, the proton or electron behaves as a wave function, and therefore even a single electron or proton can appear to be many places at once - just like a wave. On those occasions when a specific instance is being observed, we see more discrete particle behavior. But generally it works to think of it as a fuzzy cloud like the diagram below. Remember, this is just a means to illustrate the idea.

The fuzzy cloud is the wave function which tells us the likelihood of finding the particle solution or a specific instance of the thing at a particular location in space. The denser areas of the cloud correspond to areas where the particle is most commonly found. But it can be found anywhere in this cloud at some times and if we don’t insist on measuring its precise location, it acts as if it’s everywhere in this cloud at once.

The next two diagrams illustrate that second to last point. The particle can be located anywhere in the cloud of the defined wave function, though the probability changes with respect to a particular region. Because the little bugger is so tiny, we can never know both its precise location and its velocity. This concept sometimes gives people fits but it really is fairly simple to grasp. Imagine you are in a pitch black room with a superball bouncing around and you have a strobe light to get a picture of it. (Ok, I have succumbed to the siren's call of bad macro metaphors as well. Someone should have lashed me to the mast.) With the lights out you can hear the thing bouncing around seemingly everywhere, but hit the strobe and you can snap a picture of exactly where it is at that instant. Of course there is a problem when you look at the photo you took with the strobe. You see the superball in all its glory but you can't tell where it's headed or how fast. It's just a picture of a ball in the air. You can't tell where it's headed, what it may hit, or how hard. As to why this is a bad metaphor: it goes back to the issues of scale we mentioned earlier. Light from a strobe hitting the superball has a negligible effect on the velocity of the ball. At the level of a proton, however, the energy of a single photon used to detect its whereabouts imparts tremendous force upon the proton, altering its momentum and path. Measuring its position always influences its path. That's the uncertainty principle.
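To get a feel for why this matters for protons but not for superballs, we can plug numbers into Heisenberg’s relation, dx * dp >= hbar / 2. The scenarios below are illustrative only (rounded constants, made-up superball), not anything from the diagrams.

```python
# How big is the unavoidable velocity uncertainty at each scale?
# Heisenberg: dx * dp >= hbar / 2, so dv >= hbar / (2 * dx * m).

hbar = 1.055e-34   # J*s, reduced Planck constant (rounded)

def min_velocity_uncertainty(mass_kg, dx_m):
    """Smallest possible spread in velocity for a particle localized to dx."""
    return hbar / (2 * dx_m * mass_kg)

proton = min_velocity_uncertainty(1.67e-27, 1e-15)  # proton pinned to ~its own size
ball = min_velocity_uncertainty(0.05, 0.001)        # 50 g superball known to 1 mm

print(f"proton: dv >= {proton:.1e} m/s")     # tens of millions of m/s
print(f"superball: dv >= {ball:.1e} m/s")    # immeasurably tiny
```

Pin a proton down to its own diameter and its velocity is uncertain by roughly ten percent of the speed of light; do the same to a superball and the uncertainty is some thirty orders of magnitude below anything measurable. That is the scale gap in one calculation.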

Now let’s imagine this very tiny wave function approaching some kind of energy barrier (an electric field, for example).

If the barrier is great enough, the particle represented by the wave function will bounce off as we’d expect. The behavior here is very similar to what would be described by classical mechanics. No surprises yet.

But what if the barrier (field) is not so great in size? This next proton approaches a barrier as before. But notice that the region defined by the wave function actually extends beyond the width of the barrier. In other words, in this situation there is some probability that if we collapsed the wave function, the particle would be on the other side of the barrier. It might bounce off as before - then again it might not.

Keeping in mind that this is just an illustration, if the width of the energy barrier is small enough with respect to the wave function of the particle in question, something pretty strange can occur.

The result? The particle may appear to have tunneled through the barrier appearing on the other side with the exact same energy it started with. It’s not magic, it’s just math - really really hard math...
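For the curious, the simplest version of that "really hard math" isn’t so bad: for a rectangular barrier, the transmission probability falls off roughly as exp(-2*kappa*L), where kappa depends on the particle’s mass and how far its energy falls short of the barrier. Here’s a sketch with purely illustrative numbers (an electron and a made-up barrier, not anything from this post).

```python
import math

# Standard textbook estimate for tunneling through a rectangular barrier:
# T ~ exp(-2 * kappa * L), kappa = sqrt(2 * m * (V - E)) / hbar.
# Valid as a rough approximation when T is small. Numbers are illustrative.

HBAR = 1.055e-34   # J*s
EV = 1.602e-19     # joules per electron-volt

def transmission(mass_kg, barrier_ev, energy_ev, width_m):
    """Approximate probability of tunneling through a rectangular barrier."""
    kappa = math.sqrt(2 * mass_kg * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

m_e = 9.11e-31  # electron mass, kg
for width_nm in (0.1, 0.5, 1.0):
    t = transmission(m_e, barrier_ev=1.0, energy_ev=0.5, width_m=width_nm * 1e-9)
    print(f"{width_nm} nm barrier: T ~ {t:.2e}")
```

Notice the exponential dependence on width: thicken the barrier by a factor of ten and the odds of tunneling collapse, which is exactly why the wave function has to "stick out" past the barrier for anything interesting to happen.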

This isn’t just theoretical. It is observable in the behavior of prisms (frustrated total internal reflection) and in scanning tunneling microscopy. It has relevance to superconductors, radioactive decay, processor design limits, and even to fusion reactions in the core of our beloved sun, to name a few.

The sun you say! Turns out, if quantum tunneling weren’t real, we wouldn’t exist. The sun requires it to be a fact for it to generate light and heat as it does. One of the problems scientists and engineers are facing here on earth when trying to develop a working fusion reactor is that they have to raise the temperatures and pressures to levels far higher than exist in the sun’s core. That’s because the normal conditions within the sun’s core are insufficient to overcome the repulsion of like charges when trying to bring two hydrogen nuclei together long enough to fuse. That seems like a potential problem (badda boom!). The like charges create an ‘energy barrier’ (see where this is going?) that prevents this from happening. But because our dear sun is so very very huge (channeling Michael Palin), nuclei can fuse because a certain percentage ‘tunnel’ through the barrier of like charges and fuse. Science doesn’t get any better than that.

Due to the scale involved and the numbers of particles in question, it should also be apparent that this doesn’t mean we can walk through walls from time to time. We’ll just have to be content with the fact that we get to live because of it and stick to the doors.


The Comet’s Tale

Back in the 13th century, while some amazing guys were creating all the big ideas a modern philosopher would ever need, people had some pretty screwy ideas about comets. Among other things, they were bad omens - and not the strike-the-Yucatan-and-annihilate-the-dinosaurs kind of bad omen. No, comets were cosmic pink slips from God. That slow progression across the heavens with a fiery tail was pretty upsetting. It would be centuries before anyone understood what they really are. Far from being messages of doom (but still very much potential agents of same...), they were dirty snowballs with particularly eccentric orbits.

And as people studied them they discovered some other interesting facts. One of these facts was that the tail always pointed away from the sun. That was curious at first. Why would the evaporation products of the coma always form a pattern away from the sun? It suggested the presence of a force or an agent that was pushing the cometary tail particles away from the sun. Turns out that this is the solar wind: that expanding stream of particles constantly being ejected from the sun's surface. Invisible, subtle, but not undetectable. An effect first detected through observation; then a process imagined to explain it; then the mechanism validated through experimentation. Like every other process in nature, the solar wind left signs of its existence long before we had any means of either imagining its existence or detecting it. But what was true elsewhere in nature was true here as well: if it caused some effect, then it left a mark.

Is this one of my usual digressions? I don't think so. Time and again science finds that causes leave fingerprints on their effects in this universe of ours. So why would consciousness be any different? Philosophers who insist on some special status for human cognition not explained by our neuroanatomy must show us the marks left by these metaphysical processes. Even (usually) subtle forces like the solar wind leave a trail of breadcrumbs. Evidence, not words, is what is required.


The Starting Gun

  1. What is the difference between consciousness and instinct? Or better yet, why do we label very similar observed behaviors volitional or conscious acts in humans and instinct in another mammal?
  2. How do you know that someone loves you? Is it a 'feeling' or is it a perception built upon a foundation of little behaviors?