FAIR USE NOTICE

A BEAR MARKET ECONOMICS BLOG

OCCUPY THE SCIENTIFIC METHOD


This site may contain copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in an effort to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a ‘fair use’ of any such copyrighted material as provided for in section 107 of the US Copyright Law.

In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit to those who have expressed a prior interest in receiving the included information for research and educational purposes. For more information go to: http://www.law.cornell.edu/uscode/17/107.shtml

If you wish to use copyrighted material from this site for purposes of your own that go beyond ‘fair use’, you must obtain permission from the copyright owner.


All Blogs licensed under Creative Commons Attribution 3.0

Thursday, December 22, 2011

Does the “Goddamn” Higgs Particle Portend the End of Physics?

Science News



Cross-Check
Critical views of science in the news

Does the “Goddamn” Higgs Particle Portend the End of Physics?

December 17, 2011


What does it say about particle physics that the Higgs boson has generated so much hullabaloo lately? Physicists at the Large Hadron Collider in Switzerland have reportedly glimpsed “tantalizing hints” of the Higgs, which might confer mass on quarks, electrons and other building blocks of our world. Not actual “evidence,” mind you, but “hints” of evidence. “Physicists around the world have something to celebrate this Christmas,” the physicist Michio Kaku exults in The Wall Street Journal.


Actually, the Higgs has long been a mixed blessing for particle physics. In the early 1990s, when physicists were pleading, ultimately in vain, with Congress not to cancel the Superconducting Supercollider, which was sucking up tax dollars faster than a black hole, the Nobel laureate Leon Lederman christened the Higgs “the God particle.” This is scientific hype at its most outrageous. If the Higgs is the “God Particle,” what should we call an even more fundamental particle, like a string? The Godhead Particle? The Mother of God Particle?

Lederman himself confessed that “the Goddamn Particle” might have been a better name for the Higgs, given how hard it had been to detect “and the expense it is causing.” A more fundamental problem is that discovering the Higgs would be a modest, even anti-climactic achievement, relative to the grand ambitions of theoretical physics. The Higgs would serve merely as the capstone of the Standard Model of particle physics, which describes the workings of electromagnetism and the strong and weak nuclear forces. The Standard Model, because it excludes gravity, is an incomplete account of reality; it is like a theory of human nature that excludes sex. Kaku concedes as much, calling the Standard Model “rather ugly” and “a theory that only a mother could love.”

Our best theory of gravity is still general relativity, which does not mesh mathematically with the quantum field theories that comprise the Standard Model. Over the past few decades, theorists have become increasingly obsessed with finding a unified theory, a “theory of everything” that wraps all of nature’s forces into one tidy package. Hearing all the hoopla about the Higgs, the public might understandably assume that it represents a crucial step toward a unified theory, and perhaps at least tentative confirmation of the existence of strings, branes, hyperspaces, multiverses and all the other fantastical eidolons that Kaku, Stephen Hawking, Brian Greene, Lisa Randall and other unification enthusiasts tout in their bestsellers.

But the Higgs doesn’t take us any closer to a unified theory than climbing a tree would take me to the Moon. As I’ve pointed out previously, string theory, loop-space theory and other popular candidates for a unified theory postulate phenomena far too minuscule to be detected by any existing or even conceivable (except in a sci-fi way) experiment. Obtaining the kind of evidence of a string or loop that we have for, say, the top quark would require building an accelerator as big as the Milky Way.

Kaku asserts in The Wall Street Journal that finding the Higgs “is not enough. What is needed is a genuine theory of everything, which can simply and beautifully unify all the forces of the universe into a single coherent whole—a goal sought by Einstein for the last 30 years of his life.” He insists that we are at “the beginning, not the end of physics. The adventure continues.” Maybe. But I’m not hopeful. Whether or not physicists find the Goddamn Particle, the quest for unification, which has given physics its glitter over the past half century, looks increasingly like a dead end.

Almost 10 years ago, I put my money where my mouth is. The Long Now Foundation, a nonprofit that encourages long-term thinking, asked a bunch of people to make bets about trends in science, technology and other realms of culture. I bet Kaku $1,000 that by the year 2020, “no one will have won a Nobel Prize for work on superstring theory, membrane theory or some other unified theory describing all the forces of nature.” (Lee “loop space” Smolin was my original counter-bettor but backed out at the last minute, the big chicken.)

Kaku and I each put up $1,000 in advance, which the Long Now Foundation keeps in escrow. If civilization, or more importantly the Long Now Foundation, still exists in 2020, it will give $2,000 to a charity designated by me (the Nature Conservancy) or Kaku (National Peace Action). In defending my bet, I stated:

“The dream of a unified theory, which some evangelists call a ‘theory of everything,’ will never be entirely abandoned. But I predict that over the next twenty years, fewer smart young physicists will be attracted to an endeavor that has vanishingly little hope of an empirical payoff. Most physicists will come to accept that nature might not share our passion for unity. Physicists have already produced theories–Newtonian mechanics, quantum mechanics, general relativity, nonlinear dynamics–that work extraordinarily well in certain domains, and there is no reason why there should be a single theory that accounts for all the forces of nature. The quest for a unified theory will come to be seen not as a branch of science, which tells us about the real world, but as a kind of mathematical theology.”

I added, however (and this is both mawkish tripe and the truth) that “I would be delighted to lose this bet.”

Image courtesy Wikimedia Commons.

About the Author: Every week, John Horgan takes a puckish, provocative look at breaking science. A former staff writer at Scientific American, he is the author of four books, including The End of Science (Addison Wesley, 1996) and The End of War (McSweeney's Books, January 2012).

The views expressed are those of the author and are not necessarily those of Scientific American.

The 'God Particle' and the Origins of the Universe

The Wall Street Journal

The 'God Particle' and the Origins of the Universe

The search for a unifying theory is nowhere near over.


Physicists around the world have something to celebrate this Christmas. Two groups of them, using the particle accelerator in Switzerland, have announced that they are tantalizingly close to bagging the biggest prize in physics (and a possible Nobel): the elusive Higgs particle, which the media have dubbed the "God particle." Perhaps next year, physicists will pop open the champagne bottles and proclaim they have found this particle.

Finding this missing Higgs particle, or boson, is big business. The European machine searching for it, the Large Hadron Collider, has cost many billions so far and is so huge it straddles the French-Swiss border, near Geneva. At 17 miles in circumference, the colossal structure is the largest machine of science ever built and consists of a gigantic ring in which two beams of protons are sent in opposite directions using powerful magnetic fields.

The collider's purpose is to recreate, on a tiny scale, the instant of genesis. It accelerates protons to 99.999999% the speed of light. When the two beams collide, they release a titanic energy of 14 trillion electron volts and a shower of subatomic particles shooting out in all directions. Huge detectors, the size of large apartment buildings, are needed to record the image of this particle spray.
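As a rough consistency check on the figures above, the Lorentz factor implied by the quoted speed, and the collision energy it yields, can be sketched in a few lines of Python. This is only an illustration: the proton rest energy of 0.938 GeV is the standard value, and the 14 trillion electron volt figure is the combined energy of the two head-on beams.

```python
import math

c_fraction = 0.99999999   # proton speed as a fraction of c, as quoted
m_p_GeV = 0.938           # proton rest energy in GeV (standard value)

# Relativistic Lorentz factor for the quoted speed
gamma = 1.0 / math.sqrt(1.0 - c_fraction**2)

# Energy per beam, and the combined energy of a head-on collision
beam_energy_TeV = gamma * m_p_GeV / 1000.0
collision_energy_TeV = 2 * beam_energy_TeV

print(f"Lorentz factor: {gamma:.0f}")
print(f"Per-beam energy: {beam_energy_TeV:.1f} TeV")
print(f"Combined collision energy: {collision_energy_TeV:.1f} TeV")
```

The result comes out at roughly 13 TeV, in the same ballpark as the 14 TeV design figure quoted in the article (the quoted speed is itself rounded).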

Then supercomputers analyze these subatomic tracks by, in effect, running the video tape backwards. By reassembling the motion of this spray of particles as it emerges from a single point, computers can determine if various exotic subatomic particles were momentarily produced at the instant of the collision.

The theory behind all these particles is called the Standard Model. Billions of dollars, and a shelf full of Nobel Prizes along the way, have culminated in the Standard Model, which accurately describes the behavior of hundreds of subatomic particles. All the pieces of this jigsaw puzzle have been painstakingly created in the laboratory except the last, missing piece: the Higgs particle.

It is a crucial piece because it is responsible for explaining the various masses of the subatomic particles. It was introduced in 1964 by physicist Peter Higgs to explain the wide variation. Until then, a theory of subatomic particles had to assume that the masses of these particles are zero in order to obtain sensible mathematical results. This was a puzzling, disturbing result, since particles like the electron and proton have definite masses. Mr. Higgs showed that by introducing this new particle, one could preserve all the correct mathematical properties and still have non-zero masses for the particles.

While physicists cannot yet brag that they have found the Higgs particle, they have now narrowed down the range of possible masses to between 114 and 131 billion electron volts (more than a hundred times the mass of the proton). With 95% confidence, physicists can rule out various masses for the Higgs particle outside this range.
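The parenthetical comparison to the proton can be checked directly; a quick Python sketch, taking the proton rest energy as the standard 0.938 GeV:

```python
m_higgs_low, m_higgs_high = 114.0, 131.0   # Higgs mass window in GeV, as quoted
m_proton = 0.938                           # proton rest energy in GeV

# Ratio of each end of the Higgs mass window to the proton mass
ratio_low = m_higgs_low / m_proton
ratio_high = m_higgs_high / m_proton
print(f"Higgs/proton mass ratio: {ratio_low:.0f} to {ratio_high:.0f}")
```

Both endpoints land well above 100, consistent with the claim of "over a hundred times more massive than the proton."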

Will finding the Higgs boson be the end of physics? Not by a long shot. The Standard Model only gives us a crude approximation of the rich diversity found in the universe. One embarrassing omission is that the Standard Model makes no mention of gravity, even though gravity holds the Earth and the sun together. In fact, the Standard Model only describes 4% of the matter and energy of the universe (the rest being mysterious dark matter and dark energy).

From a strictly aesthetic point of view, the Standard Model is also rather ugly. The various subatomic particles look like they have been slapped together haphazardly. It is a theory that only a mother could love, and even its creators have admitted that it is only a piece of the true, final theory.

So finding the Higgs particle is not enough. What is needed is a genuine theory of everything, which can simply and beautifully unify all the forces of the universe into a single coherent whole—a goal sought by Einstein for the last 30 years of his life.

The next step beyond the Higgs might be to produce dark matter with the Large Hadron Collider. That may prove even more elusive than the Higgs. Yet dark matter is many times more plentiful than ordinary matter and in fact prevents our Milky Way galaxy from flying apart.

So far, one of the leading candidates to explain dark matter is string theory, which claims that all the subatomic particles of the Standard Model are just vibrations of a tiny string, or rubber band. Remarkably, the huge collection of subatomic particles in the Standard Model emerges as just the first octave of the string. Dark matter would correspond roughly to the next octave of the string.

So finding the Higgs particle would be the beginning, not the end of physics. The adventure continues.

Mr. Kaku, a professor of theoretical physics at CUNY, is author of "Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by 2100" (Doubleday, 2011).

Saturday, December 3, 2011

Project Camelot and the 1960s Epistemological Revolution Rethinking the Politics-patronage-social Science Nexus


SOCIAL STUDIES OF SCIENCE

Project Camelot and the 1960s Epistemological Revolution

Rethinking the Politics-patronage-social Science Nexus

  1. Mark Solovey1

+ Author Affiliations

  1. 125 Irving Terrace, #7, Cambridge, Massachusetts 02138, USA; fax (at Harvard): +1 617 495 3344; solovey@asu.edu

Abstract

Project Camelot, a military-sponsored social science study of revolution, was cancelled in 1965 amidst international and national discussion about the study's political implications. Subsequently, Camelot became the focus of a wide-ranging controversy about the connections between Cold War politics, military patronage, and American social science. This paper argues that following Camelot's demise, efforts to rethink the politics-patronage-social science nexus became an important part of what historian Peter Novick has called 'the epistemological revolution that began in the 1960s'. Novick claims that 'strictly academic' considerations provided the categories of analysis that challenged the scholarly mainstream's commitment to objectivity and related ideals, like value-neutrality and professional autonomy. In contrast, my analysis (which discusses post-WWII military patronage for the social sciences, Camelot's origins and cancellation, the ensuing controversy, and some long-term implications of this controversy) underscores the centrality of political developments and political concerns in that epistemological revolution.


Thursday, October 27, 2011

Transient Global Amnesia

serendip

Transient Global Amnesia


Miranda White

A little while ago, my father and grandfather were driving in our car together. All of a sudden, my grandfather said that he was feeling dizzy and thought the beginnings of a migraine were coming on. My grandfather is extremely healthy and has an amazing memory, so my father was shocked when, not long after, my grandfather asked where Ruthy, his recently deceased wife, was. When my father reminded him that she had died of cancer last year, my grandfather broke into tears, as if he were being told for the first time. In addition, he couldn't even remember what he had just eaten for dinner or any other events of the day. My father drove him straight to the emergency room, worried that he had perhaps just suffered a minor stroke. By the time that he got to the hospital, he was already beginning to regain some of the memories that had been lost. The doctors reassured him that it was not a stroke, but rather a memory disorder called transient global amnesia.

Transient global amnesia (TGA) is a type of amnesia involving a sudden, temporary disturbance in an otherwise healthy person's memory. The other main kinds of amnesia are called anterograde and retrograde amnesia. Anterograde amnesia is a type of memory loss associated with trauma, disease, or emotional events. It is characterized by the inability to remember new information. (1) Retrograde amnesia is associated with the loss of distant memories, usually preceding a given trauma. (2) In transient global amnesia, generally both distant memories and immediate recall are retained, as are language function, attention, and visual-spatial and social skills. However, during the period of amnesia, people suffering from the disorder cannot remember recent occurrences, nor can they retain any new visual or verbal information for more than a couple of minutes. (3) Though patients generally remember their own identities, they are often very confused by their surroundings and the people around them. They continuously ask questions about events that are transpiring, for example where they are, who is with them, and what is happening. However, once they are told, they immediately forget the answer and repeat the question again. (4)

The period of amnesia can last anywhere from one to twenty-four hours. Some people suffer from a headache, dizziness, and nausea while others have only memory loss. TGA generally affects fifty to eighty-year-old men, about 3.4 to 5.2 people per 100,000 per year. (5) People afflicted with transient global amnesia always recover and can remember the memories that were lost during the episode. (6) Once they regain their memory, some people, such as my grandfather, can recall both the episode and the feeling of not being able to remember. However, others never recover the memories of the attack nor the events immediately before.

The cause of TGA remains in dispute. There is convincing evidence that physical or emotional stresses, such as sexual intercourse, immersion in cold water, or strenuous physical exertion, can trigger the associated loss of memory. (7) For example, my grandfather suffered from TGA directly after taking his sister to the hospital. TGA may be the result of a transient ischemic attack, a "mini-stroke." Transient ischemic attacks are caused by a temporary interruption of the blood flow to the brain. (8) Another possible cause of transient global amnesia is a basilar artery migraine, a type of migraine caused by the abnormal constriction and dilatation of vessel walls. (9)

Patients suffering from transient global amnesia have undergone medical imaging, for example magnetic resonance imaging (MRI) and positron emission tomography (PET), in order to find out what biological changes cause a temporary lapse in memory. The symptoms of transient global amnesia seem to be the result of dysfunction in regions of the brain such as the diencephalon and medial temporal lobes. (4) The diencephalon is composed of the thalamus, epithalamus, subthalamus, and hypothalamus. The thalamus is associated with memory, and changes in its structure have been shown to result in amnesia. (10) Some MRIs have shown evidence of changes in the medial temporal lobes, indicating that patients had suffered from a transient ischemic attack. Nonetheless, many people who have undergone such tests have not shown any changes in the functioning of their brains. (4)

These findings are in line with our neurobiological understanding of memory. Under normal functioning, there are three kinds of memory: working memory, declarative memory, and procedural memory. Working memory allows for short-term recollection; for example, it is responsible for your being able to remember the gist of the sentence you just read. It is associated with the temporary storage of verbal and visual information. Verbal working memory is localized to the frontal regions of the left hemisphere, while spatial working memory involves mainly the right hemisphere. Procedural memory is responsible for cognitive and motor skills, all learned, habitual actions, for example, my ability to type this paper without looking at the keyboard or my ability to ride a bicycle. (12) The anatomical basis for procedural memory appears to be the basal ganglia, thalamus, and frontal lobes. Declarative memory, associated with the hippocampus, covers all experiences and conscious memory, including people, events, objects, facts, figures, and names. The region of the brain termed the medial temporal lobe is particularly responsible for declarative memory function.

There is much evidence that damage to the medial temporal lobe severely affects a person's ability to recall and form long-term memories. The most well-known clinical example involves a patient called H.M., who was afflicted with epilepsy. Surgeons removed both of his medial temporal lobes in an attempt to cure him of his disease. However, in so doing, they profoundly damaged his memory. He could no longer form new memories, though all his memories from before the surgery were retained; in other words, he had anterograde amnesia. (11) It therefore appears that a lack of functioning and blood supply in the medial temporal lobe produces the symptoms of transient global amnesia, and results in the inability to make and recall autobiographical memories.

Transient global amnesia fortunately has a very positive prognosis - its effects are never permanent and the episodes last for a relatively short period of time. However, the inability to remember can be extraordinarily frightening. It is a natural experiment because it shows fairly clearly that certain parts of the brain are involved with certain kinds of memory. We often see ourselves as unitary beings, but in fact we are made up of many different processes that make up who we are. Although much of the neurobiology associated with memory remains quite mysterious, transient global amnesia helps highlight the particular machinery of our personal narratives.

References

1) Anterograde amnesia

2) HealthyMe Amnesia

3) E Medicine, Transient Global Amnesia

4) Transient Global Amnesia Case Studies

5) Neuroland, TGA

6) Transient Global Amnesia

7) HealingWell, What Happened to Afterglow

8) Transient Ischemic Attack

9) Basilar Artery Migraine Page

10) The Diencephalon

11) Medial Temporal Lobe

12) The Cognitive and Habit Subsystems, a great image of the anatomy of the brain.

Wednesday, October 26, 2011

The philosophy of “The Matrix”

from molecules to mind

Neurophilosophy

The philosophy of “The Matrix”

Keith R Laws

In The Matrix (Andy and Larry Wachowski, 1999), Keanu Reeves plays a computer programmer who leads a double life as a hacker called “Neo”. After receiving cryptic messages on his computer monitor, Neo begins to search for the elusive Morpheus (Laurence Fishburne), the leader of a clandestine resistance group, who he believes is responsible for the messages. Eventually, Neo finds Morpheus, and is then told that reality is actually very different from what he, and most other people, perceive it to be.

Morpheus tells Neo that human existence is merely a facade. In reality, humans are being ‘farmed’ as a source of energy by a race of sentient, malevolent machines. People actually live their entire lives in pods, with their brains being fed sensory stimuli which give them the illusion of leading ‘ordinary’ lives. Morpheus explains that, up until then, the “reality” perceived by Neo is actually “a computer-generated dreamworld…a neural interactive simulation” called the matrix.

The Matrix is based on a philosophical question posed by the 17th-century French philosopher and mathematician René Descartes. One of Descartes’s most important theses was intellectual autonomy, or the ability to think for oneself. For Descartes, this entails not just having a “good mind”, but also “applying it well”.

Descartes knew that his sensory experiences did not always match reality, and used the Wax Argument to demonstrate how unreliable the senses are: the senses inform us that a piece of wax has a specific shape, texture, smell, etc. But these characteristics soon change when the wax is brought near a flame.

Everything I have accepted up to now as being absolutely true and assured, I have learned from or through the senses. But I have sometimes found that these senses played me false, and it is prudent never to trust entirely those who have once deceived us… Thus what I thought I had seen with my eyes, I actually grasped solely with the faculty of judgment, which is in my mind.

Descartes was therefore suspicious of his percepts, the knowledge he obtained through his senses, and all his own beliefs. He became convinced that one must use one’s mind, rather than one’s senses, to obtain information about the world. In the system of knowledge constructed by Descartes, perception is unreliable as means of gathering information, and the mental process of deduction is the only way to acquire real knowledge of the world.

In Meditations on First Philosophy, published in 1641, he takes this idea to its limits, and comes to the conclusion that perhaps all of his experiences are being conjured up by an evil demon:

…firmly implanted in my mind is the long-standing opinion that there is an omnipotent God who made me the kind of creature that I am. How do I know that he has not brought it about that there is no earth, no sky, no extended thing, no shape, no size, no place, while at the same time ensuring that all these things appear to me to exist just as they do now? What is more, just as I consider that others sometimes go astray in cases where they think they have the most perfect knowledge, how do I know that God has not brought it about that I too go wrong every time I add two and three or count the sides of a square, or in some even simpler matter, if that is imaginable? [but] since he is said to be supremely good…I will suppose…[that] some malicious demon of the utmost power and cunning has employed all his energies in order to deceive me. I shall think that the sky, the air, the earth, colours, shapes, sounds and all external things are merely the delusions of dreams which he has devised to ensnare my judgment.

Descartes therefore approached all knowledge, including his own, from a highly skeptical perspective. Despite his skepticism, Descartes was certain that one could not be fooled about one’s own existence, hence his famous dictum cogito ergo sum (“I think, therefore I am”). With this, Descartes meant that the only thing he did not doubt was his own existence, because the act of thinking about, and doubting, the reality of his perceptions was an affirmation of his existence. By saying “I think therefore I am”, he was defining ‘truth’ in terms of doubt.

Descartes’s argument is an epistemological one. It questions the nature, limits and validity of human knowledge. Instead of inquiring into the nature of reality, Descartes questions his own knowledge and interpretation of reality. Using methodological skepticism, Descartes doubted anything that could be doubted, in order to lay a foundation for genuine knowledge. In terms of epistemology, much of our acquired knowledge is adequate to explain the world, but there is no such thing as “absolute” truth.

A modern version of Descartes’ conundrum is a thought experiment called the ‘brain in a vat’. This is Hilary Putnam‘s version of the argument:

imagine that a human being…has been subjected to an operation by an evil scientist. The person’s brain…has been removed from the body and placed in a vat of nutrients which keeps the brain alive. The nerve endings have been connected to a… computer which causes the person…to have the illusion that everything is perfectly normal. There seem to be people, objects, the sky, etc.; but really, all the person…is experiencing is the result of electronic impulses travelling from the computer to the nerve endings. The computer is so clever that if the person tries to raise his hand, the feedback from the computer will cause him to ‘see’ and ‘feel’ the hand being raised. Moreover, by varying the program, the evil scientist can cause the victim to ‘experience’ (or hallucinate) any situation or environment the evil scientist wishes. He can also obliterate the memory of the brain operation, so that the victim will seem to himself to have always been in this environment. It can even seem to the victim that he is sitting and reading these very words about the amusing but quite absurd supposition that there is an evil scientist who removes people’s brains from their bodies and places them in a vat of nutrients which keep the brains alive.

The brain in a vat, although just a rehash of the argument by Descartes, is more directly related to The Matrix. In the film, the pods in which humans spend their lives represent the vat. The only difference is that, instead of just containing disembodied brains, the pods contain the entire body.

In theory, computers could simulate reality if the sensory stimuli corresponding to human experience could be determined and ‘executed’ as a computer program, which could ‘run’ in some kind of advanced brain implant. In practice, however, even if the exact computations required to generate a simulated constant stream of consciousness were determined, there is no computer in the world that is powerful enough to perform the necessary calculations. The world’s most powerful supercomputer is not powerful enough to process the visual information entering the eye of a fruit fly over a period of one second, let alone generate a stream of consciousness. Some people would, at this point say that, with the acceleration of processing speed and advances in quantum computing, computers may well have the power to simulate human consciousness in the foreseeable future. But that is another argument, which is beyond the scope of this post.

The notion, propounded by Descartes, that all our perceptions are false at first seems ridiculous, but it is, in fact, impossible to disprove. And Descartes was right to distrust his senses. Optical illusions are a good example of sensory stimuli that produce a discrepancy between what we see and what we experience, and there are numerous other examples, such as psychiatric conditions in which visual or auditory hallucinations are symptoms. In the case of optical illusions, we are aware of the discrepancy, but, otherwise, we do not normally question our senses. For Descartes, even the most basic assumption about reality was to be doubted.

Even when we are not looking at optical illusions, and perceive the world as we should be perceiving it, we are still being fooled by our senses. In neurobiological terms, “reality” is little more than a representational model of the world, a construct generated by multiple neural circuits acting in parallel. This model is based on sensory experiences received by the brain via the senses, which can detect only the narrowest range of stimuli. The human eye, for example, is sensitive to electromagnetic radiation with a wavelength of between approximately 400-750 nm (nanometers, billionths of a meter), an infinitesimal fraction of the entire spectrum. In that respect, the other senses are not much different.
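The "infinitesimal fraction" claim can be made concrete with a short Python sketch. The radio-to-gamma bounds used here (10^4 Hz to 10^22 Hz) are an assumed, conventional way of charting the spectrum, which in principle is unbounded:

```python
import math

c = 3.0e8                  # speed of light, m/s
f_red    = c / 750e-9      # frequency at the red end of visible light, Hz
f_violet = c / 400e-9      # frequency at the violet end, Hz

# Because the spectrum spans many orders of magnitude, measure
# bandwidth in decades (powers of ten) rather than in hertz.
visible_decades  = math.log10(f_violet / f_red)
spectrum_decades = math.log10(1e22 / 1e4)   # assumed radio-to-gamma chart

fraction = visible_decades / spectrum_decades
print(f"Visible band: {visible_decades:.2f} of {spectrum_decades:.0f} decades "
      f"({fraction:.1%} of the charted spectrum)")
```

Even on this generous logarithmic scale, the visible band occupies only around one or two percent of the charted spectrum; measured linearly in hertz, the fraction is vastly smaller still.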

Plato alludes to the narrow limits of the senses in this passage from The Republic:

Imagine human beings living in an underground, cavelike dwelling, with an entrance a long way up, which is both open to the light and as wide as the cave itself. They’ve been there since childhood, fixed in the same place, with their necks and legs fettered, able to see only in front of them, because their bonds prevent them from turning their heads around. Light is provided by a fire burning far above and behind them. Also behind them, but on higher ground, there is a path stretching between them and the fire. Imagine that along this path a low wall has been built, like the screen in front of puppeteers above which they show their puppets…Then also imagine that there are people along the wall, carrying all kinds of artifacts that project above it – statues of people and other animals, made out of stone, wood, and every material. And, as you’d expect, some of the carriers are talking, and some are silent.

The cave-dwellers get a hint of reality from the shadows on the walls. They may see a shadow of an object, and construct a mental representation of that object. But, according to Plato, knowing the form of the object is not sufficient to have a full understanding of it, which can only be obtained by more direct experience. For him, the world as we perceive it is no more or less real than that perceived by the people in The Matrix, as neither we, nor they, actually have any direct experience of that world.

In The Marriage of Heaven and Hell, Blake reiterates Plato’s argument, and refers directly to the cave allegory:

If the doors of perception were cleansed every thing would appear to man as it is, infinite. For man has closed himself up, till he sees all things thro’ narrow chinks of his cavern.

For Blake, we are deceiving ourselves about our understanding of reality. Plato’s argument, like that of Descartes, involves deception by other entities. Whereas Descartes believes he is being deceived by his demon about the nature of reality, the cave-dwellers are being deceived by the mysterious puppeteers behind the wall. ‘Reality’ for the cave-dwellers is little more than the shadows dancing on the walls. These are mere impressions of what lies behind the wall, yet the cave-dwellers use them to construct their models of the world, because it is the only information they have.

While we need not be as skeptical as Descartes, we should bear in mind that he was, to a certain extent, correct. But there are no malevolent forces deceiving us about the nature of reality. It is our senses and our brains which deceive us, the former by providing the extremely limited information on which our perception of reality is based, and the latter by using that information to construct models of the world. The truth – believe it or not – is that we all live in a matrix, albeit one composed of several hundred billion neurons and the quadrillion (10^15) or so synapses formed by them.

__________________

The Matrix is both thought-provoking and hugely entertaining. It shamelessly borrows from earlier sci-fi films, such as Blade Runner and The Terminator, but at the same time manages to remain original. For me, it is the hints of philosophy that make The Matrix so watchable. Like some others, I think that the film has more style than substance. The quality of that substance makes up for the quantity, but, like Roger Ebert, I “wanted to be challenged even more” by the film. I would have liked the Wachowski brothers to explore the philosophical aspects a little further, but perhaps the film would have lost its commercial appeal if they had done so.

This documentary, which features Dan Dennett, David Chalmers and Ray Kurzweil, examines other philosophical aspects of The Matrix, such as simulated computerized environments, virtual reality, and artificial intelligence.

http://youtu.be/9q1jHx29C70?hd=1


Reconstructive memory: Confabulating the past, simulating the future

from molecules to mind

Neurophilosophy


Keith R Laws

The term ‘Rashomon effect’ is often used by psychologists in situations where observers give different accounts of the same event, and describes the effect of subjective perceptions on recollection. The phenomenon is named after a 1950 film by the great Japanese director Akira Kurosawa. It was with Rashōmon that Western cinema-goers discovered both Kurosawa and Japanese film in general – the film won the Golden Lion at the Venice Film Festival in 1951, as well as the Academy Award for Best Foreign Language film the following year.

Rashōmon is an adaptation of two short stories by Akutagawa Ryunosuke. Set in the 12th century, the film depicts the trial of a notorious bandit called Tajomaru (played by Kurosawa’s frequent collaborator Toshirô Mifune), who is alleged to have raped a woman and killed her samurai husband. In flashbacks, the incident is recalled by four different witnesses – a woodcutter, a priest, the perpetrator and, via a medium, the murder victim. Each of the testimonies is equally plausible, yet all four contradict one another.

The film is an examination of human nature and the nature of reality. It compels the viewer to seek the truth. Each testimony is influenced by the intentions, experiences and self-perceptions of the witness. They all tell their own ‘truth’, but it is distorted by their past and by their future. Under Kurosawa’s masterful direction, the characters start off happy in the knowledge that they know exactly what happened between the samurai, his wife and the bandit. One by one, each character begins to doubt their own account of the incident. In the end, both the cast and the viewer are left in a state of confusion and bewilderment.

The idea that we do not remember things as they actually happened is usually attributed to Sir Frederic Bartlett (1886-1969), who spent much of his professional career at Cambridge University, where he became head of the psychology department. He describes the process of memory in his classic 1932 book, Remembering: A Study in Experimental and Social Psychology:

Remembering is not a completely independent function, entirely distinct from perceiving, imaging, or even from constructive thinking, but it has intimate relations with them all…One’s memory of an event reflects a blend of information contained in specific traces encoded at the time it occurred, plus inferences based on knowledge, expectations, beliefs, and attitudes derived from other sources.

According to Bartlett, memories are organized within the historical and cultural frameworks (which Bartlett called ‘schemata’) of the individual, and the process of remembering involves the retrieval of information which has been unknowingly altered in order that it is compatible with pre-existing knowledge.

Bartlett’s ideas about how memory works came to him during a game of Chinese whispers, in which a short story is relayed through a chain of people, each of whom makes minor retrieval errors, such that the final retelling may be completely different from the original. One of his experiments involved asking subjects to read a Native American folk story called The War of the Ghosts, and then recall it several times, sometimes up to a year later. He chose it because the cultural context in which it is set was unfamiliar to the participants in his experiments.
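The cumulative degradation in such a chain can be caricatured with a toy model (entirely illustrative – Bartlett's point was precisely that real retrieval errors are systematic distortions toward one's schemata, not random noise): each reteller garbles a small fraction of the words, so fidelity decays with chain length.

```python
import random

def retell(words, error_rate, rng):
    """One link in the chain: each word is independently garbled with some probability."""
    return [w if rng.random() > error_rate else "<garbled>" for w in words]

def serial_reproduction(story, chain_length, error_rate=0.1, seed=0):
    """Pass a story through a chain of retellers, accumulating errors."""
    rng = random.Random(seed)
    words = story.split()
    for _ in range(chain_length):
        words = retell(words, error_rate, rng)
    return words

# A paraphrase of the opening of The War of the Ghosts, used as a stand-in.
original = "two young men went down to the river to hunt seals"
final = serial_reproduction(original, chain_length=10, error_rate=0.1)
intact = sum(a == b for a, b in zip(original.split(), final))
print(f"{intact} of {len(original.split())} words survive 10 retellings")
```

With a 10% per-word error rate, only about 0.9^10 ≈ 35% of the words are expected to survive ten retellings, which is why the final version of a story in such a chain can differ completely from the original.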

Bartlett found that upon recall, the subjects altered the narrative of the story to make it fit in with their existing schemata. Participants omitted information they regarded as irrelevant, changed the emphasis to points they considered to be significant, and rationalized the parts that did not make sense, to make the story more comprehensible to themselves. In other words, memory is reconstructive rather than reproductive.

Although Remembering was largely ignored upon its publication, it is today highly influential. Elizabeth Loftus, a professor of psychology and law at the University of California, Irvine, has devoted her career to studying the reconstructive nature of memory in relation to eyewitness testimony.

Loftus is concerned mainly with how the recollections of eyewitnesses can be deliberately manipulated by misinformation. In extreme cases, this can lead to completely false memories of events that did not take place. One of Loftus’s more famous studies addresses the use of ‘leading’ questions in the courtroom. In the study, students were shown film clips of a car accident, and then asked a question about the accident. Those asked “About how fast were the cars going when they smashed into each other?” gave answers which averaged about 39 mph, whereas those asked “About how fast were the cars going when they contacted each other?” gave answers with an average speed of 32 mph.

Loftus’s research, like Bartlett’s, shows that our memories are often not as accurate as we would like to think they are. The knowledge that memory is to some extent confabulation has very serious implications for the use of eyewitness testimony in the courtroom: if eyewitness testimonies can be unreliable, then the validity of criminal convictions based upon them is open to question.

As well as confabulating the past, the brain also envisages events that have not yet occurred. The process of anticipating oneself attending a future event probably involves drawing on past experiences to generate a ‘simulation’ of the future event. In an essay in this week’s issue of Nature, Daniel Schacter argues that this ‘episodic-future’ thinking is entirely dependent on reconstructive memory:

…future events are not exact replicas of past events, and a memory system that simply stored rote records would not be well-suited to simulating future events. A system built according to constructive principles may be a better tool for the job: it can draw on the elements and gist of the past, and extract, recombine and reassemble them into imaginary events that never occurred in that exact form. Such a system will occasionally produce memory errors, but it also provides considerable flexibility.

Most of the evidence that reconstructive memory may be essential for envisioning future events comes from amnesic patients who also have difficulties picturing themselves in the future, and now there is also some experimental evidence. For example, in a paper published in advance on the Proceedings of the National Academy of Sciences website earlier this week, Szpunar et al describe functional neuroimaging studies which show that some of the brain regions that are activated when recalling a personal memory – the posterior cingulate gyrus, parahippocampal gyrus and left occipital lobe – are also active when thinking about a future event.

References:

Hassabis, et al. (2007). Patients with hippocampal amnesia cannot imagine new experiences. PNAS DOI: 10.1073/pnas.0610561104

Szpunar, K. K., et al. (2007). Neural substrates of envisioning the future. PNAS DOI: 10.1073/pnas.0610082104.

Schacter, D. L. & Addis, D. R. (2007). Constructive memory: The ghosts of past and future. Nature 445: 27-29.

Loftus, E. F. (2003). Our changeable memories: legal and practical implications. Nature Rev. Neurosci. 4: 231-234.

Loftus, E. F. (1975). Leading questions and the eyewitness report. Cognitive Psychology 7: 560-572.

The dissociative fugue state: Forgetting one’s own identity

from molecules to mind
Neurophilosophy



The New York Times has an interesting article about a rare and poorly-understood form of amnesia called dissociative fugue, in which some or all memories of one’s identity become temporarily inaccessible:

Last year a Westchester County lawyer – a 57-year-old husband and father of two, Boy Scout leader and churchgoer – left the garage near his office and disappeared. Six months later he was found living under a new name in a homeless shelter in Chicago, not knowing who he was or where he came from.

Library searches and contact with the Chicago police did not help the man. His true identity was uncovered through an anonymous tip to “America’s Most Wanted.” But when he was contacted by his family, he had no idea who they were.

The fugue state is one of a number of dissociative memory disorders, all of which are characterized by an interruption of, or dissociation from, fundamental aspects of one’s everyday life, such as personal identity and personal history. During the fugue state – which can last from several hours to a few months – individuals forget who they are and take leave of their usual physical surroundings. In a minority of cases, the individual can assume a new identity. Often, the fugue state remains undiagnosed until the individual has emerged from it and can recall their real identity. Upon emerging from the fugue state, the individual is usually surprised to find themselves in unfamiliar surroundings.

The prevalence of dissociative fugue is about 1 in 2,000, but the condition is more common among war veterans and those who have experienced natural disasters or similar traumatic events. For example, the lawyer mentioned in the quote above was a veteran of the Vietnam war, and had walked between the twin towers of the World Trade Center just minutes before the first aircraft struck the north tower on September 11, 2001.

Unlike most forms of amnesia, which are associated with damage to specific parts of the brain (such as the hippocampus), dissociative fugue has no known physical cause. Typically, the memory loss is triggered by a traumatic life event; subsequently, the individual enters the fugue state, during which the retrieval of memories associated with the event is somehow prevented. Thus, the fugue state is psychogenic: psychological factors impinge upon the neurobiological bases of memory retrieval. The memory loss is, however, reversible; once the individual emerges from the fugue state, he or she is once again capable of retrieving the “lost” memories.

Related:

Alien abduction, reincarnation & memory errors

from molecules to mind

Neurophilosophy



Keith R Laws

We rarely remember things as they actually happened. Rather, as memories are encoded, they are altered in order to be made compatible with our existing knowledge; upon retrieval, memories are reconstructed rather than reproduced. Because the extent to which this reconstruction occurs can vary, some memories are very accurate while others are a mixture of fact and fantasy. Yet others – claims of highly implausible events such as alien abduction and reincarnation, for example - are completely fabricated.

Maarten Peters and his colleagues, of the Department of Experimental Psychology at Maastricht University in the Netherlands, investigated the propensity for memory errors in people who make implausible claims. In many cases of false memories, it is very difficult to determine whether or not the perceived events actually occurred – that is, the “ground truth” cannot be established. The experimental group was therefore chosen on the basis of a highly implausible claim – the group consisted of 11 women and 2 men, all of whom claimed to have recollections of a previous life. The performance of this group on a memory task was compared to that of a control group. It was found that people who claim to have lived past lives were more prone than the control group to memory errors. The findings were published recently in the journal Consciousness and Cognition.

The researchers used a modified version of a well established laboratory procedure for eliciting false recall of words. Participants were first asked to read aloud a list of 40 names of non-famous people. Two hours later, they were presented with another list, containing the old non-famous names (those that had been included in the previous list), new non-famous names, and the names of well-known actors, writers and politicians. They were then asked to make “fame judgements” on the names in the second list; that is, they were asked to determine whether or not each of the people on the list was famous. Those in the experimental group were found to be more susceptible than the control group to the “false fame illusion” – they were about twice as likely as controls to identify the old non-famous names as famous.
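The outcome measure in such a paradigm can be sketched as a simple scoring function: for each participant, count how often previously seen (old) non-famous names are judged famous. The names and judgments below are invented placeholders, not the study's actual materials.

```python
def false_fame_rate(judgments, old_nonfamous):
    """Fraction of previously seen non-famous names a participant judged famous.

    judgments: dict mapping name -> True if the participant judged it famous.
    old_nonfamous: names that appeared on the earlier study list.
    """
    hits = sum(judgments[name] for name in old_nonfamous)
    return hits / len(old_nonfamous)

# Invented placeholder data for one participant:
judgments = {
    "Sebastian Weisdorf": True,   # seen before, wrongly judged famous
    "Adrian Marr": False,         # seen before, correctly rejected
    "Valerie Wilde": False,       # new name, correctly rejected
}
print(false_fame_rate(judgments, ["Sebastian Weisdorf", "Adrian Marr"]))  # 0.5
```

Comparing this rate between the experimental and control groups is what yields the "about twice as likely" result reported in the text.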

How might such false memories form? In the study by Peters et al, the misidentification of names occurs because the previous encounter with the non-famous names is mistakenly taken as an indication that those people must be famous. This is an example of what is called a source monitoring error. Recollection of an event involves using information from various sources – one’s own memory, for example, and other people’s accounts of the event. When source monitoring is impaired, one has difficulty attributing where and how information was acquired, but the origins of unreliable pieces of information are unquestioned nevertheless; the unreliable information is therefore easily incorporated into a memory.

Clearly, people can, to a greater or lesser extent, be influenced by the suggestions of others. If one becomes convinced that a suggested event is plausible, one may start to believe that the event has actually taken place. Reiteration of the false memory leads to what is known as the “illusion of truth”. Furthermore, such claims can seem even more realistic when corroborated by other people. Hence, a group of people who share in common claims of an implausible event – such as being abducted by aliens, for example – will be convinced that their claims are real, because they will corroborate each other. The use of leading questions can also result in the inadvertent planting of false memories in people undergoing interrogation or cross-examination. Psychopathology is another cause of false memories; it is well documented that schizophrenics are prone to source monitoring errors in memory and other cognitive processes; thus, schizophrenics believe that the auditory or visual hallucinations they experience are real.

It is now also acknowledged that false memories can be deliberately planted. Psychiatrists have been known to implant completely false memories in their patients during therapy. Most commonly, these false memories are of events such as childhood sexual abuse or participation in Satanic rituals. In some cases, psychiatrists have been sued for malpractice, and plaintiffs have been awarded millions of dollars in damages for the traumas they have experienced as a result.

Peters and his colleagues did not deliberately implant false memories in their participants, but other researchers have shown how easy it is to distort people’s recollections of real events, or to coax people into “remembering” entire events that did not happen. In one experiment, led by Elizabeth Loftus, a professor of psychology at the University of California, Irvine, a false memory of a plausible and mildly traumatic event – getting lost in a shopping mall or a large department store as a child – was implanted in experimental subjects. The subjects were asked to recall childhood events that had been recounted to them by their parents, older siblings or other close relatives. A booklet containing a paragraph about each of the recalled events, and one false memory (of being lost in a shopping mall), was then prepared for each of the participants. The participants were then asked to read each story in their booklet, and to write down what they remembered about each event. It was found that 5 of the 24 subjects thought that they had experienced the false event; some claimed to remember it only partially, while others reported that they remembered it fully. Work by other research groups shows that memories of highly implausible events, and even impossible events, such as alien abduction and reincarnation, can be planted just as easily.

Reference:

Peters, M. J. V., et al. (2007). The false fame illusion in people with memories about a previous life. Consc. Cog. 16: 162-169. [Abstract]

Related:

  • Reconstructive memory: Confabulating the past, simulating the future

    Saturday, October 15, 2011

    The Big Picture: Getting Skeptical About Global Warming Skepticism

    Skeptical Science
    Getting Skeptical About Global Warming Skepticism

    The Big Picture

    Posted on 24 September 2010 by dana1981

    Oftentimes we get bogged down discussing one of the many pieces of evidence behind man-made global warming, and in the process we can't see the forest for the trees. It's important to every so often take a step back and see how all of those trees comprise the forest as a whole. Skeptical Science provides an invaluable resource for examining each individual piece of climate evidence, so let's make use of these individual pieces to see how they form the big picture.

    The Earth is warming

    We know the planet is warming from surface temperature stations and satellites measuring the temperature of the Earth's surface and lower atmosphere. We also have various tools which have measured the warming of the Earth's oceans. Satellites have measured an energy imbalance at the top of the Earth's atmosphere. Glaciers, sea ice, and ice sheets are all receding. Sea levels are rising. Spring is arriving sooner each year. There's simply no doubt - the planet is warming.

    And yes, the warming is continuing. The 2000s were hotter than the 1990s, which were hotter than the 1980s, which were hotter than the 1970s. 2010 is on pace to be at least in the top 3 hottest calendar years on record. In fact, the 12-month running average global temperature broke the record 3 times in 2010, according to NASA GISS data. Sea levels are still rising, ice is still receding, spring is still coming earlier, there's still a planetary energy imbalance, etc. etc. Contrary to what some would like us to believe, the planet has not magically stopped warming.

    Humans are causing this warming

    There is overwhelming evidence that humans are the dominant cause of this warming, mainly due to our greenhouse gas emissions. Based on fundamental physics and math, we can quantify the amount of warming human activity is causing, and verify that we're responsible for essentially all of the global warming over the past 3 decades. In fact we expect human greenhouse gas emissions to cause more warming than we've thus far seen, due to the thermal inertia of the oceans (the time it takes to heat them). Human aerosol emissions are also offsetting a significant amount of the warming by causing global dimming.

    There are numerous 'fingerprints' which we would expect to see from an increased greenhouse effect (i.e. more warming at night, at higher latitudes, upper atmosphere cooling) that we have indeed observed. Climate models have projected the ensuing global warming to a high level of accuracy, verifying that we have a good understanding of the fundamental physics behind climate change.

    Sometimes people ask "what would it take to falsify the man-made global warming theory?". Well, basically it would require that our fundamental understanding of physics be wrong, because that's what the theory is based on. This fundamental physics has been scrutinized through scientific experiments for decades to centuries.

    The warming will continue

    We also know that if we continue to emit large amounts of greenhouse gases, the planet will continue to warm. We know that a doubling of atmospheric CO2 from the pre-industrial level of 280 parts per million by volume (ppmv) to 560 ppmv (we're currently at 390 ppmv) will cause 2–4.5°C of warming – the range of estimates for the climate sensitivity. And we're headed for 560 ppmv in the mid-to-late 21st century if we continue business-as-usual emissions.
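Because equilibrium warming scales roughly with the logarithm of the CO2 concentration ratio, these numbers can be sanity-checked with the standard relation ΔT = S × log2(C/C0), where S is the sensitivity per doubling. This is a sketch only; the 3.0°C mid-range value for S is an assumption within the 2–4.5°C range quoted above.

```python
import math

def equilibrium_warming(c_ppmv, c0_ppmv=280.0, sensitivity=3.0):
    """Equilibrium warming (degrees C) relative to pre-industrial CO2.

    Uses the logarithmic relation dT = S * log2(C/C0); the default
    sensitivity of 3.0 C per doubling is an assumed mid-range value.
    """
    return sensitivity * math.log2(c_ppmv / c0_ppmv)

# At a full doubling (560 ppmv), the warming equals the sensitivity itself:
print(equilibrium_warming(560))                 # 3.0 with the assumed mid-range S
# At roughly the present-day level quoted in the text (390 ppmv):
print(round(equilibrium_warming(390), 2))
```

Note that this gives the eventual equilibrium response; as the text says, thermal inertia of the oceans means the warming observed at any given time lags behind it.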

    The net result will be bad

    There will be some positive results of this continued warming. For example, an open Northwest Passage, enhanced growth for some plants and improved agriculture at high latitudes (though this will require use of more fertilizers), etc. However, the negatives will almost certainly outweigh the positives, by a long shot. We're talking decreased biodiversity, water shortages, increasing heat waves (both in frequency and intensity), decreased crop yields due to these impacts, damage to infrastructure, displacement of millions of people, etc.

    Arguments to the contrary are superficial

    One thing I've found in reading skeptic criticisms of climate science is that they're consistently superficial. For example, the criticisms of James Hansen's 1988 global warming projections never go beyond "he was wrong", when in reality it's important to evaluate what caused the discrepancy between his projections and actual climate changes, and what we can learn from this. And those who argue that "it's the Sun" fail to comprehend that we understand the major mechanisms by which the Sun influences the global climate, and that they cannot explain the current global warming trend. And those who argue "it's just a natural cycle" can never seem to identify exactly which natural cycle can explain the current warming, nor can they explain how our understanding of the fundamental climate physics is wrong.

    There are legitimate unresolved questions

    Much ado is made out of the expression "the science is settled." My personal opinion is that the science is settled in terms of knowing that the planet is warming dangerously rapidly, and that humans are the dominant cause.

    There are certainly unresolved issues. There's a big difference between a 2°C and a 4.5°C warming for a doubling of atmospheric CO2, and it's an important question to resolve, because we need to know how fast the planet will warm in order to know how fast we need to reduce our greenhouse gas emissions. There are significant uncertainties in some feedbacks which play into this question. For example, will clouds act as a net positive feedback (by trapping more heat, causing more warming) or negative feedback (by reflecting more sunlight, causing a cooling effect) as the planet continues to warm?

    These are the sorts of questions we should be debating, and the issues that most climate scientists are investigating. Unfortunately there is a large segment of the population which is determined to continue arguing the resolved questions for which the science has already been settled. And when climate scientists are forced to respond to the constant propagation of misinformation on these settled issues, it just detracts from our investigation of the legitimate, unresolved, important questions.

    The Big Picture

    The big picture is that we know the planet is warming, humans are causing it, there is a substantial risk to continuing on our current path, but we don't know exactly how large the risk is. However, uncertainty regarding the magnitude of the risk is not an excuse to ignore it. We also know that if we continue on a business-as-usual path, the risk of catastrophic consequences is very high. In fact, the larger the uncertainty, the greater the potential for the exceptionally high risk scenario to become reality. We need to continue to decrease the uncertainty, but it's also critical to acknowledge what we know and what questions have been resolved, and that taking no action is not an option.


    Newcomers, Start Here

    Posted on 15 August 2010 by John Cook

    Skeptical Science is based on the notion that science by its very nature is skeptical. Genuine skepticism means you don't take someone's word for it but investigate for yourself. You look at all the facts before coming to a conclusion. In the case of climate science, our understanding of climate comes from considering the full body of evidence.

    In contrast, climate skepticism looks at small pieces of the puzzle while neglecting the full picture. Climate skeptics vigorously attack any evidence for man-made global warming yet uncritically embrace any argument, op-ed, blog or study that refutes global warming. If you begin from a position of climate skepticism, then cherrypick the data that supports your view while fighting tooth and nail against any evidence that contradicts it, I'm sorry, but that's not genuine scientific skepticism.

    So the approach of Skeptical Science is as follows. It looks at the many climate skeptic arguments, exposes how they focus on small pieces of the puzzle and then puts them in their proper context by presenting the full picture. The skeptic arguments are listed by popularity (eg - how often each argument appears in online articles). For the more organised mind, they're also sorted into taxonomic categories.

    Good starting points for newbies

    If you're new to the climate debate (or are of the mind that there's no evidence for man-made global warming), a good starting point is 10 Indicators of Global Warming which lays out the evidence that warming is happening and the follow-up article, 10 Human Fingerprints on Climate Change which lays out the evidence that humans are the cause. More detail is available in empirical evidence that humans are causing global warming. Contrary to what you may have heard, the case for man-made global warming doesn't hang on models or theory - it's built on direct measurements of many different parts of the climate, all pointing to a single, coherent answer.

    Smart Phone Apps

    For smart phone users, the rebuttals to all the skeptic arguments are also available on a number of mobile platforms. The first Skeptical Science app was an iPhone app, released in February 2010. This is updated regularly with the latest content from the website and very accessible in a beautifully designed interface by Shine Technologies. Shine Tech then went on to create a similar Android app which has some extra features missing from the iPhone version. A Nokia app was also created by Jean-François Barsoum (this was one of the 10 finalists in the Calling All Innovators competition).

    As well as the list of rebuttals, Skeptical Science also has a blog where the latest research and developments are examined and discussed. Comments are welcome and the level of discussion is of a fairly high quality thanks to a fairly strict Comments Policy. You need to register a user account to post comments. One thing many regulars are not aware of is you can edit your user account details (to get to this page, click on your username in the left margin).

    Keep up to date by email, RSS, Facebook or Twitter

    To keep up to date on latest additions to the website, sign up to receive new blog posts by email. There's an RSS feed for blog posts and for the engaged commenter, a feed for new user comments. I recommend you follow the Skeptical Science Twitter page as I not only tweet latest blog posts but also any other interesting climate links I happen upon throughout the day. New blog posts are also added to our Facebook page.

    About John Cook

    Lastly, for those wondering about who runs Skeptical Science, the website is maintained by John Cook. I studied physics at the University of Queensland but currently, I'm not a professional scientist - I run this website as a layman. People sometimes wonder why I spend so much time on this site and which group backs me. No group funds me. I receive no funding other than the occasional Paypal donations. As the lack of funding limits how much time I can spend developing the site, donations are appreciated. My motivations are two-fold: as a parent, I care about the world my daughter will grow up in and as a Christian, I feel a strong obligation to the poor and vulnerable who are hardest hit by climate change. Of course these are very personal reasons - I'm sure everyone comes at this from different angles. I go more deeply into my motivations in Why I care about climate change.

    The SkS Team

    However, there are many more people who make invaluable contributions to Skeptical Science. There are a number of authors who write blog posts and are currently in the process of writing all the rebuttals in plain English. Translators from all over the world have translated the rebuttals into 15 different languages. There have been contributors to the one-line responses to skeptic arguments, proofreaders, technical support from boffins who understand computers a lot better than I do, and commenters whose feedback has helped improve and hone the website's content. Skeptical Science has evolved from a small blog into a community of intelligent, engaged people with a commitment to science and our climate.

    Soft sciences are often harder than hard sciences

    Discover (August 1987), by Jared Diamond


    ''The overall correlation between frustration and instability (in 62 countries of the world) was 0.50.'' -- Samuel Huntington, professor of government, Harvard

    ''This is utter nonsense. How does Huntington measure things like social frustration? Does he have a social-frustration meter? I object to the academy's certifying as science what are merely political opinions.'' -- Serge Lang, professor of mathematics, Yale

    ''What does it say about Lang's scientific standards that he would base his case on twenty-year-old gossip?'' . . . ''a bizarre vendetta'' . . . ''a madman . . .'' -- Other scholars, commenting on Lang's attack

    For those who love to watch a dogfight among intellectuals supposedly above such things, it's been a fine dogfight, well publicized in Time and elsewhere. In one corner, political scientist and co-author of The Crisis of Democracy, Samuel Huntington. In the other corner, mathematician and author of Diophantine Approximation on Abelian Varieties with Complex Multiplication, Serge Lang. The issue: whether Huntington should be admitted, over Lang's opposition, to an academy of which Lang is a member. The score after two rounds: Lang 2, Huntington 0, with Huntington still out.

    Lang vs. Huntington might seem like just another silly blood-letting in the back alleys of academia, hardly worth anyone's attention. But this particular dogfight is an important one. Beneath the name calling, it has to do with a central question in science: Do the so-called soft sciences, like political science and psychology, really constitute science at all, and do they deserve to stand beside ''hard sciences,'' like chemistry and physics?

    The arena is the normally dignified and secretive National Academy of Sciences (NAS), an honor society of more than 1,500 leading American scientists drawn from almost every discipline. NAS's annual election of about 60 new members begins long before each year's spring meeting, with a multi-stage evaluation of every prospective candidate by members expert in the candidate's field. Challenges of candidates by the membership assembled at the annual meeting are rare, because candidates have already been so thoroughly scrutinized by the appropriate experts. In my eight years in NAS, I can recall only a couple of challenges before the Lang-Huntington episode, and not a word about those battles appeared in the press.

    At first glance, Huntington's nomination in 1986 seemed a very unlikely one to be challenged. His credentials were impressive: president of the American Political Science Association; holder of a named professorship at Harvard; author of many widely read books, of which one, American Politics: The Promise of Disharmony, got an award from the Association of American Publishers as the best book in the social and behavioral sciences in 1981; and many other distinctions. His studies of developing countries, American politics, and civilian-military relationships received the highest marks from social and political scientists inside and outside NAS. Backers of Huntington's candidacy included NAS members whose qualifications to judge him were beyond question, like Nobel Prize-winning computer scientist and psychologist Herbert Simon.

    If Huntington seemed unlikely to be challenged, Lang was an even more unlikely person to do the challenging. He had been elected to the academy only a year before, and his own specialty of pure mathematics was as remote as possible from Huntington's specialty of comparative political development. However, as Science magazine described it, Lang had previously assumed for himself ''the role of a sheriff of scholarship, leading a posse of academics on a hunt for error,'' especially in the political and social sciences. Disturbed by what he saw as the use of ''pseudo mathematics'' by Huntington, Lang sent all NAS members several thick mailings attacking Huntington, enclosing photocopies of letters describing what scholar A said in response to scholar B's attack on scholar C, and asking members for money to help pay the postage and copying bills. Under NAS rules, a candidate challenged at an annual meeting is dropped unless his candidacy is sustained by two-thirds of the members present and voting. After bitter debates at both the 1986 and 1987 meetings, Huntington failed to achieve the necessary two-thirds support.

    Much impassioned verbiage has to be stripped away from this debate to discern the underlying issue. Regrettably, a good deal of the verbiage had to do with politics. Huntington had done several things that are now anathema in U.S. academia: he received CIA support for some research; he did a study for the State Department in 1967 on political stability in South Vietnam; and he's said to have been an early supporter of the Vietnam war. None of this should have affected his candidacy. Election to NAS is supposed to be based solely on scholarly qualifications; political views are irrelevant. American academics are virtually unanimous in rushing to defend academic freedom whenever a university president or an outsider criticizes a scholar because of his politics. Lang vehemently denied that his opposition was motivated by Huntington's politics. Despite all those things, the question of Huntington's role with respect to Vietnam arose repeatedly in the NAS debates. Evidently, academic freedom means that outsiders can't raise the issue of a scholar's politics but other scholars can.

    It's all the more surprising that Huntington's consulting for the CIA and other government agencies was an issue, when one recalls why NAS exists. Congress established the academy in 1863 to act as official adviser to the U.S. government on questions of science and technology. NAS in turn established the National Research Council (NRC), and NAS and NRC committees continue to provide reports about a wide range of matters, from nutrition to future army materials. As is clear from any day's newspaper, our government desperately needs professionally competent advice, particularly about unstable countries, which are one of Huntington's specialties. So Huntington's willingness to do exactly what NAS was founded to do -- advise the government -- was held against him by some NAS members. How much of a role his politics played in each member's vote will never be known, but I find it unfortunate that they played any role at all.

    I accept, however, that a more decisive issue in the debates involved perceptions of the soft sciences -- e.g., Lang's perception that Huntington used pseudo mathematics. To understand the terms soft and hard science, just ask any educated person what science is. The answer you get will probably involve several stereotypes: science is something done in a laboratory, possibly by people wearing white coats and holding test tubes; it involves making measurements with instruments, accurate to several decimal places; and it involves controlled, repeatable experiments in which you keep everything fixed except for one or a few things that you allow to vary. Areas of science that often conform well to these stereotypes include much of chemistry, physics, and molecular biology. These areas are given the flattering name of hard science, because they use the firm evidence that controlled experiments and highly accurate measurements can provide.

    We often view hard science as the only type of science. But science (from the Latin scientia -- knowledge) is something much more general, which isn't defined by decimal places and controlled experiments. It means the enterprise of explaining and predicting -- gaining knowledge of -- natural phenomena, by continually testing one's theories against empirical evidence. The world is full of phenomena that are intellectually challenging and important to understand, but that can't be measured to several decimal places in labs. They constitute much of ecology, evolution, and animal behavior; much of psychology and human behavior; and all the phenomena of human societies, including cultural anthropology, economics, history, and government.

    These soft sciences, as they're pejoratively termed, are more difficult to study, for obvious reasons. A lion hunt or revolution in the Third World doesn't fit inside a test tube. You can't start it and stop it whenever you choose. You can't control all the variables; perhaps you can't control any variable. You may even find it hard to decide what a variable is. You can still use empirical tests to gain knowledge, but the types of tests used in the hard sciences must be modified. Such differences between the hard and soft sciences are regularly misunderstood by hard scientists, who tend to scorn soft sciences and reserve special contempt for the social sciences. Indeed, it was only in the early 1970s that NAS, confronted with the need to offer the government competent advice about social problems, began to admit social scientists at all. Huntington had the misfortune to become a touchstone of this widespread misunderstanding and contempt.

    While I know neither Lang nor Huntington, the broader debate over soft versus hard science is one that has long fascinated me, because I'm among the minority of scientists who work in both areas. I began my career at the hard pole of chemistry and physics, then took my Ph.D. in membrane physiology, at the hard end of biology. Today I divide my time equally between physiology and ecology, which lies at the soft end of biology. My wife, Marie Cohen, works in yet a softer field, clinical psychology. Hence I find myself forced every day to confront the differences between hard and soft science. Although I don't agree with some of Lang's conclusions, I feel he has correctly identified a key problem in soft science when he asks, ''How does Huntington measure things like social frustration? Does he have a social-frustration meter?'' Indeed, unless one has thought seriously about research in the social sciences, the idea that anyone could measure social frustration seems completely absurd.

    The issue that Lang raises is central to any science, hard or soft. It may be termed the problem of how to ''operationalize'' a concept. (Normally I hate such neologistic jargon, but it's a suitable term in this case.) To compare evidence with theory requires that you measure the ingredients of your theory. For ingredients like weight or speed it's clear what to measure, but what would you measure if you wanted to understand political instability? Somehow, you would have to design a series of actual operations that yield a suitable measurement -- i.e., you must operationalize the ingredients of theory.

    Scientists do this all the time, whether or not they think about it. I shall illustrate operationalizing with four examples from my and Marie's research, progressing from hard science to softer science.

    Let's start with mathematics, often described as the queen of the sciences. I'd guess that mathematics arose long ago when two cave women couldn't operationalize their intuitive concept of ''many.'' One cave woman said, ''Let's pick this tree over here, because it has many bananas.'' The other cave woman argued, ''No, let's pick that tree over there, because it has more bananas.'' Without a number system to operationalize their concept of ''many,'' the two cave women could never prove to each other which tree offered better pickings.

    There are still tribes today with number systems too rudimentary to settle the argument. For example, some Gimi villagers with whom I worked in New Guinea have only two root numbers, iya = 1 and rarido = 2, which they combine to operationalize somewhat larger numbers: 4 = rarido-rarido, 7 = rarido-rarido-rarido-iya, etc. You can imagine what it would be like to hear two Gimi women arguing about whether to climb a tree with 27 bananas or one with 18 bananas.
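The Gimi scheme is an additive numeral system: a count is spelled out as repeated twos plus a trailing one when needed. A minimal sketch of that rule, inferred only from the two examples given in the text (4 = rarido-rarido, 7 = rarido-rarido-rarido-iya):

```python
def to_gimi(n):
    """Express a positive integer in the Gimi two-root system:
    as many 'rarido' (2) terms as fit, plus 'iya' (1) if n is odd."""
    if n < 1:
        raise ValueError("Gimi numerals describe positive counts only")
    words = ["rarido"] * (n // 2) + ["iya"] * (n % 2)
    return "-".join(words)

print(to_gimi(4))   # rarido-rarido
print(to_gimi(7))   # rarido-rarido-rarido-iya
```

Spelling out 27 this way takes thirteen raridos and an iya, which makes vivid why comparing 27 bananas with 18 is so hard in such a system.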

    Now let's move to chemistry, less queenly and more difficult to operationalize than mathematics but still a hard science. Ancient philosophers speculated about the ingredients of matter, but not until the eighteenth century did the first modern chemists figure out how to measure these ingredients. Analytical chemistry now proceeds by identifying some property of a substance of interest, or of a related substance into which the first can be converted. The property must be one that can be measured, like weight, or the light the substance absorbs, or the amount of neutralizing agent it consumes.

    For example, when my colleagues and I were studying the physiology of hummingbirds, we knew that the little guys liked to drink sweet nectar, but we would have argued indefinitely about how sweet sweet was if we hadn't operationalized the concept by measuring sugar concentrations. The method we used was to treat a glucose solution with an enzyme that liberates hydrogen peroxide, which reacts (with the help of another enzyme) with another substance called dianisidine to make it turn brown, whereupon we measured the brown color's intensity with an instrument called a spectrophotometer. A pointer's deflection on the spectrophotometer dial let us read off a number that provided an operational definition of sweet. Chemists use that sort of indirect reasoning all the time, without anyone considering it absurd.

    My next-to-last example is from ecology, one of the softer of the biological sciences, and certainly more difficult to operationalize than chemistry. As a bird watcher, I'm accustomed to finding more species of birds in a rain forest than in a marsh. I suspect intuitively that this has something to do with a marsh being a simply structured habitat, while a rain forest has a complex structure that includes shrubs, lianas, trees of all heights, and crowns of big trees. More complexity means more niches for different types of birds. But how do I operationalize the idea of habitat complexity, so that I can measure it and test my intuition?

    Obviously, nothing I do will yield as exact an answer as in the case where I read sugar concentrations off a spectrophotometer dial. However, a pretty good approximation was devised by one of my teachers, the ecologist Robert MacArthur, who measured how far a board at a certain height above the ground had to be moved in a random direction away from an observer standing in the forest (or marsh) before it became half obscured by the foliage. That distance is inversely proportional to the density of the foliage at that height. By repeating the measurement at different heights, MacArthur could calculate how the foliage was distributed over various heights.

    In a marsh all the foliage is concentrated within a few feet of the ground, whereas in a rain forest it's spread fairly equally from the ground to the canopy. Thus the intuitive idea of habitat complexity is operationalized as what's called a foliage height diversity index, a single number. MacArthur's simple operationalization of these foliage differences among habitats, which at first seemed to resist having a number put on them, proved to explain a big part of the habitats' differences in numbers of bird species. It was a significant advance in ecology.
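The foliage height diversity index described above is commonly computed as a Shannon-style diversity over the proportions of total foliage found in each height layer. The sketch below uses that formula with invented layer proportions (not MacArthur's data) to show why a marsh, with its foliage packed into one low layer, scores lower than a rain forest with foliage spread across layers:

```python
import math

def foliage_height_diversity(proportions):
    """Shannon-style index H = -sum(p * ln p), where each p is the
    proportion of total foliage in one height layer."""
    assert abs(sum(proportions) - 1.0) < 1e-9, "proportions must sum to 1"
    return -sum(p * math.log(p) for p in proportions if p > 0)

# Hypothetical proportions: marsh foliage almost entirely near the ground,
# rain-forest foliage spread evenly over low, middle, and canopy layers.
marsh = foliage_height_diversity([0.95, 0.04, 0.01])
forest = foliage_height_diversity([0.34, 0.33, 0.33])
print(marsh < forest)  # True: the structurally complex habitat scores higher
```

A single number like this is exactly what "operationalizing habitat complexity" means: the intuition about structure becomes something two ecologists can compare.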

    For the last example let's take one of the softest sciences, one that physicists love to deride: clinical psychology. Marie works with cancer patients and their families. Anyone with personal experience of cancer knows the terror that a diagnosis of cancer brings. Some doctors are more frank with their patients than others, and doctors appear to withhold more information from some patients than from others. Why?

    Marie guessed that these differences might be related to differences in doctors' attitudes toward things like death, cancer, and medical treatment. But how on earth was she to operationalize and measure such attitudes, convert them to numbers, and test her guesses? I can imagine Lang sneering ''Does she have a cancer-attitude meter?''

    Part of Marie's solution was to use a questionnaire that other scientists had developed by extracting statements from sources like tape-recorded doctors' meetings and then asking other doctors to express their degree of agreement with each statement. It turned out that each doctor's responses tended to cluster in several groups, in such a way that his responses to one statement in a cluster were correlated with his responses to other statements in the same cluster. One cluster proved to consist of expressions of attitudes toward death, a second cluster consisted of expressions of attitudes toward treatment and diagnosis, and a third cluster consisted of statements about patients' ability to cope with cancer. The responses were then employed to define attitude scales, which were further validated in other ways, like testing the scales on doctors at different stages in their careers (hence likely to have different attitudes). By thus operationalizing doctors' attitudes, Marie discovered (among other things) that doctors most convinced about the value of early diagnosis and aggressive treatment of cancer are the ones most likely to be frank with their patients.
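The clustering in such a questionnaire rests on ordinary correlations between doctors' responses to pairs of statements: two statements belong in one cluster when agreement with one predicts agreement with the other. A minimal sketch with invented 1-to-5 agreement ratings (the statements and numbers are illustrative, not from Marie's study):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical ratings from six doctors on two death-attitude statements;
# a high correlation suggests the statements sit in the same cluster.
stmt_a = [5, 4, 2, 1, 4, 3]
stmt_b = [4, 5, 2, 1, 5, 2]
print(round(pearson(stmt_a, stmt_b), 2))  # 0.85
```

This is also the kind of number behind Huntington's 0.50 correlation between frustration and instability: an operationalized concept, scored per case, correlated with another.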

    In short, all scientists, from mathematicians to social scientists, have to solve the task of operationalizing their intuitive concepts. The book by Huntington that provoked Lang's wrath discussed such operationalized concepts as economic well-being, political instability, and social and economic modernization. Physicists have to resort to very indirect (albeit accurate) operationalizing in order to ''measure'' electrons. But the task of operationalizing is inevitably more difficult and less exact in the soft sciences, because there are so many uncontrolled variables. In the four examples I've given, number of bananas and concentration of sugar can be measured to more decimal places than can habitat complexity and attitudes toward cancer.

    Unfortunately, operationalizing lends itself to ridicule in the social sciences, because the concepts being studied tend to be familiar ones that all of us fancy we're experts on. Anybody, scientist or no, feels entitled to spout forth on politics or psychology, and to heap scorn on what scholars in those fields write. In contrast, consider the opening sentences of Lang's paper Diophantine Approximation on Abelian Varieties with Complex Multiplication: ''Let A be an abelian variety defined over a number field K. We suppose that A is embedded in projective space. Let AK be the group of points on A rational over K.'' How many people feel entitled to ridicule these statements while touting their own opinions about abelian varieties?

    No political scientist at NAS has challenged a mathematical candidate by asking ''How does he measure things like 'many'? Does he have a many-meter?'' Such questions would bring gales of laughter over the questioner's utter ignorance of mathematics. It seems to me that Lang's question ''How does Huntington measure things like social frustration?'' betrays an equal ignorance of how the social sciences make measurements.

    The ingrained labels ''soft science'' and ''hard science'' could be replaced by hard (i.e., difficult) science and easy science, respectively. Ecology and psychology and the social sciences are much more difficult and, to some of us, intellectually more challenging than mathematics and chemistry. Even if NAS were just an honorary society, the intellectual challenge of the soft sciences would by itself make them central to NAS.

    But NAS is more than an honorary society; it's a conduit for advice to our government. As to the relative importance of soft and hard science for humanity's future, there can be no comparison. It matters little whether we progress with understanding the diophantine approximation. Our survival depends on whether we progress with understanding how people behave, why some societies become frustrated, whether their governments tend to become unstable, and how political leaders make decisions like whether to press a red button. Our National Academy of Sciences will cut itself out of intellectually challenging areas of science, and out of the areas where NAS can provide the most needed scientific advice, if it continues to judge social scientists from a posture of ignorance.

    COPYRIGHT 1987 Discover
    COPYRIGHT 2004 Gale Group