FAIR USE NOTICE

A BEAR MARKET ECONOMICS BLOG

OCCUPY THE SCIENTIFIC METHOD


This site may contain copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in an effort to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a ‘fair use’ of any such copyrighted material as provided for in section 107 of the US Copyright Law.

In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit to those who have expressed a prior interest in receiving the included information for research and educational purposes. For more information go to: http://www.law.cornell.edu/uscode/17/107.shtml

If you wish to use copyrighted material from this site for purposes of your own that go beyond ‘fair use’, you must obtain permission from the copyright owner.


All Blogs licensed under Creative Commons Attribution 3.0

Saturday, February 20, 2010

Brain at the breaking point: the mechanics behind traumatic brain injuries




Study that stretched and strained neural connections could yield insights into traumatic injury

Image caption: Broken axon. Sudden forces cause microtubules running inside axons to break (arrow), leading to axon swelling and damage, a new study shows. The work may have implications for understanding traumatic brain injury. Credit: D. Smith

SAN DIEGO — Rigid pathways in brain cell connections buckle and break when stretched, scientists report, a finding that could aid in the understanding of exactly what happens when traumatic brain injuries occur.

Up to 20 percent of combat soldiers and an estimated 1.4 million U.S. civilians sustain traumatic brain injuries each year. But the mechanics behind these injuries have remained mysterious.

New research, described February 19 at the annual meeting of the American Association for the Advancement of Science, suggests exactly how a blow to the brain disrupts this complex organ.

The brain “is not like the heart. If you lose a certain percentage of your heart muscle, then you’ll have a certain cardiac output,” says Geoffrey Manley, a neurologist at the University of California, San Francisco. Rather, the brain is an organ of connections. Car crashes, bomb blasts and falls can damage these intricate links, and even destroying a small number of them can cause devastating damage.

“You can have very small lesions in very discrete pathways which can have phenomenal impact,” says Manley, who did not participate in the study. One of the challenges brain injury researchers face, he says, is that “we’re not really embracing this idea of functional connectivity.”

Recently, researchers have found that sudden blows can damage the long fibers, called axons, that extend from brain cells, sometimes breaking the links between brain cells. But researchers didn’t know exactly what inside the axon snapped. The new research, conducted by Douglas Smith of the University of Pennsylvania and colleagues, finds that tiny tracks called microtubules are damaged inside axons by forces similar to those that cause traumatic brain injury.

Microtubules extend down the length of axons and serve as “superhighways of protein transfer,” says Smith. Brain cells rely on microtubules to move important cellular material out to the end of the axons. When Smith and colleagues quickly stretched brain cells growing on a silicone membrane, the microtubules inside the axons immediately buckled and broke, spilling their contents. “This disconnection at various discrete points spells disaster, and things are just dumped out at that site,” Smith says. “Microtubules are the stiffest component in axons, and they can’t tolerate that rapid, dynamic stretch.”

Smith points out that the duration of the stress applied is crucial to how well the axons — and microtubules — withstand damage. Like Silly Putty pulled apart slowly, axons can adjust to gradual stretching, Smith says. But sudden forces, like those that happen in blasts and car crashes, would cause the Silly Putty to snap.

In their lab dish experiments with brain cells on silicone, the researchers were able to minimize microtubule damage with a drug called Taxol, commonly used to treat cancer. But it’s too early to say whether the drug would work in people with traumatic brain injuries.

Figuring out exactly what happens in traumatic brain injuries could lead to new ways to help patients, Manley says. Currently, traumatic brain injury research is in “the abyss between bench and bedside,” he says.

Thursday, February 18, 2010

Healing touch: the key to regenerating bodies

New Scientist


Healing touch: the key to regenerating bodies



Video: Regenerating bodies

YOU started life as a single cell. Now you are made of many trillions. There are more cells in your body than there are stars in the galaxy. Every day billions of these cells are replaced. And if you hurt yourself, billions more cells spring up to repair broken blood vessels and make new skin, muscle or even bone.

Even more amazing than the staggering number of cells, though, is the fact that, by and large, they all know what to do - whether to become skin or bone and so on. The question is, how?

"Cells don't have eyes or ears," says Dennis Discher, a biophysical engineer at the University of Pennsylvania in Philadelphia. "If you were blind and deaf, you'd get around by touch and smell. You'd feel a soft chair to sit on, a hard wall to avoid, or whether you're walking on carpet or concrete."

Until recently, the focus was all on "smell": that is, on how cells respond to chemical signals such as growth factors. Biologists thought of cells as automatons that blindly followed the orders they were given. In recent years, however, it has started to become clear that the sense of touch is vital as well, allowing cells to work out for themselves where they are and what they should be doing. Expose stem cells to flowing fluid, for instance, and they turn into blood vessels.

What is emerging is a far more dynamic picture of growth and development, with a great deal of interplay between cells, genes and our body's internal environment. This may explain why exercise and physical therapy are so important to health and healing - if cells don't get the right physical cues when you are recovering from an injury, for instance, they won't know what to do. It also helps explain how organisms evolve new shapes - the better cells become at sensing what they should do, the fewer genetic instructions they need to be given.

The latest findings are also good news for people who need replacement tissues and organs. If tissue engineers can just provide the right physical environment, it should make it easier to transform stem cells into specific tissues and create complex, three-dimensional organs that are as good as the real thing. And doctors are already experimenting with ways of using tactile cues to improve wound healing and regeneration.

Biologists have long suspected that mechanical forces may help shape development. "A hundred years ago, people looked at embryos and saw that it was an incredibly physical process," says Donald Ingber, head of Harvard University's Wyss Institute for Biologically Inspired Engineering. "Then when biochemistry and molecular biology came in, the baby was thrown out with the bath water and everybody just focused on chemicals and genes."

While it was clear that physical forces do play a role - for example, astronauts living in zero gravity suffer bone loss - until recently there was no way to measure and experiment with the tiny forces experienced by individual cells. Only in the past few years, as equipment like atomic force microscopes has become more common, have biologists, physicists and tissue engineers begun to get to grips with how forces shape cells' behaviour.

One of the clearest examples comes from Discher and his colleagues, who used atomic force microscopy to measure the stiffness of a variety of tissues and gel pads. Then they grew human mesenchymal stem cells - the precursors of bone, muscle and many other tissue types - on the gels. In each case, the cells turned into the tissue that most closely matched the stiffness of the gel.

The softest gels, which were as flabby as brain tissue, gave rise to nerve cells. In contrast, gels that were 10 times stiffer - like muscle tissue - generated muscle cells, and yet stiffer gels gave rise to bone (Cell, vol 126, p 677). "What's surprising is not that there are tactile differences between one tissue and another," says Discher. After all, doctors rely on such differences every time they palpate your abdomen. "What's surprising is that cells feel that difference."
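
The relative stiffnesses are the heart of the result. As a rough illustration (and not part of Discher's study), the sketch below maps substrate stiffness, expressed only as a multiple of brain-tissue stiffness, onto the lineages described above; the cut-off values are assumptions chosen purely for illustration.

# Illustrative sketch of the stiffness-to-lineage relationship described above.
# Stiffness is expressed as a multiple of brain-tissue stiffness; the cut-offs
# are assumed values for illustration, not measurements from the study.

def predicted_lineage(relative_stiffness: float) -> str:
    """Guess the fate of a mesenchymal stem cell from substrate stiffness."""
    if relative_stiffness < 3:        # roughly as flabby as brain tissue
        return "nerve-like"
    elif relative_stiffness < 30:     # about ten times stiffer, like muscle
        return "muscle-like"
    else:                             # stiffer still, like pre-bone tissue
        return "bone-like"

for gel in (1, 10, 100):
    print(f"gel at {gel}x brain stiffness -> {predicted_lineage(gel)}")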

The details of how they do this are now emerging. Most cells other than blood cells live within a fibrous extracellular matrix. Each cell is linked to this matrix by proteins in its membrane called integrins, and the cell's internal protein skeleton is constantly tugging on these integrins to create a taut, tuned whole. "There's isometric tension that you don't see," says Ingber. In practice, this means changes in external tension - such as differences in the stiffness of the matrix, or the everyday stresses and strains of normal muscle movement - can be transmitted into the cell and ultimately to the nucleus, where they can direct the cell's eventual fate.

Since stem cells have yet to turn into specific cell types, biologists expected them to be extra sensitive to the environment, and this does indeed seem to be the case. Ning Wang, a bioengineer at the University of Illinois at Urbana-Champaign, found that the embryonic stem cells of mice are much softer than other, more specialised cells. This softness means that tiny external forces can deform the cells and influence their development (Nature Materials, vol 9, p 82).

For instance, if stem cells are exposed to flowing fluid, they turn into the endothelial cells that line the inner surface of blood vessels. In fact, fluid flow - particularly pulses that mimic the effect of a beating heart - is proving crucial for growing replacement arteries in the laboratory. The rhythmic stress helps align the fibres of the developing artery, making them twice as strong, says Laura Niklason, a tissue engineer at Yale University. A biotech company Niklason founded, called Humacyte, has begun animal testing on arteries grown this way.

Surprisingly, pulsatile motion can help heal injuries in situ too. At Harvard, Ingber and his colleague Dennis Orgill are treating patients with difficult-to-heal wounds by implanting a small sponge in the wound and connecting this to a pump. The pump sucks the cells surrounding the wound in and out of the sponge's pores, distorting them by about 15 to 20 per cent - an almost ideal stimulus for inducing the cells to grow and form blood vessels and thus boost the healing process, says Ingber.

Meanwhile, tissue engineers are finding that they can grow far better bone and cartilage by mimicking the stresses that the tissues normally experience in the body. For instance, human cartilage grown in the lab is usually nowhere near as strong as the real thing. Recently, however, Clark Hung, a biomedical engineer at Columbia University in New York City, has grown cartilage that matches its natural counterpart strength for strength. The secret, he has found, is rhythmically squeezing the cartilage as it grows to mimic the stress of walking.

Hung says this is partly because the pressure helps to pump nutrients into cartilage, which has no blood vessels. But his experiments suggest that the loading alone also plays an important role. His team hopes the engineered cartilage will eventually be used to resurface arthritic human joints.

Even relatively mild stresses make a big difference. Attempts to grow replacement bone by placing stem cells in a culture chamber of the desired shape have not been very successful, with the cells often dying or producing only weak bone. But Gordana Vunjak-Novakovic, a biomedical engineer also at Columbia, has found that mimicking the internal flow of fluid that growing bones normally experience helps maximise strength. Last year, her team used this approach to successfully grow a replica of part of the temporomandibular joint in the jaw from human stem cells, producing a naturally shaped, fully viable bone after just five weeks.

"If you don't stimulate bone cells, they don't do much," says Vunjak-Novakovic. "But if you do, they wake up and start making bone at a higher rate."

There is still a long way to go, however. The replica bone lacks the thin layer of cartilage that lines the real bone, and it also lacks a blood supply, so it begins to starve as soon as it is removed from the culture chamber.

Again, though, the answer could be to provide the cells with the right physical cues. For example, Vunjak-Novakovic has used lasers to drill channels in the scaffolds used to grow heart muscle in the lab. When fluid begins flowing through these channels, endothelial cells move in to line the channels while muscle cells move away. "Each of the cells will find its own niche," she says. Her team is now testing to see whether stem cells will turn into endothelial cells in the channels and into muscle cells elsewhere. Early results suggest that they will.

Even small differences in forces can influence development. Christopher Chen of the University of Pennsylvania grew flat sheets of mesenchymal stem cells and exposed them to a mixture of growth factors for bone and marrow development. The cells on the edges of the sheets, which were exposed to the greatest stresses, turned into bone cells, while those in the middle turned into the fat cells found in marrow, as in real bone (Stem Cells, vol 26, p 2921).

If this kind of sorting-out according to physical forces is widespread in development, it could be very good news for tissue engineers. Instead of having to micromanage the process of producing a replacement organ, they need only to provide the right cues and let the cells do the rest.

Indeed, it makes a lot of sense for some developmental decisions to be "devolved" to cells. The growth of tissues like muscles, bone, skin and blood vessels has to be coordinated as our bodies develop and adapt to different activities and injuries. A rigid genetic programme could easily be derailed, whereas using tactile cues as guides allows tissues to adapt quickly as conditions change - for instance, carrying heavy loads will make our bones grow stronger.

This kind of plasticity may play a vital role in evolution as well as during the lifetime of individuals. When the ancestors of giraffes acquired mutations that made their necks longer, for instance, they did not have to evolve a whole new blueprint for making necks. Instead, the nerves, muscles and skin would have grown proportionately without needing further changes in instructions. The result of this plasticity is a developmental programme that is better able to cope with evolutionary changes, says Ingber.

There is, however, a drawback. When disease or injury changes the stiffness of a tissue, things can go awry. Some researchers suspect that tissue stiffening plays a role in multiple sclerosis, in which nerves lose their protective myelin sheath (Journal of Biology, vol 8, p 78). It may also play a role in some cancers (see "Lumps and bumps").

It could also explain why many tissues fail to heal perfectly after an injury. To prevent infection, the body needs to patch up wounds as quickly as possible. So it uses a form of collagen that is easier to assemble than the normal one. "It's a quick patch, things are sealed off and you go on - but it's not perfect regeneration," says Discher. The quick-fix collagen is stiffer than normal tissue, as anyone with a large scar will tell you.

After a heart attack, for example, the dead portion of the heart muscle scars over. Why, Discher wondered, don't heart muscle cells then replace the scar tissue? To find out, he and his colleagues grew embryonic heart cells on matrixes of differing stiffness. When the matrix was the same stiffness as healthy heart muscle, the cells grew normally and beat happily. But if the matrix was as stiff as scar tissue, the cells gradually stopped beating (Journal of Cell Science, vol 121, p 3794).

The constant work of trying to flex the stiffer matrix wears the cells out, Discher thinks. "It's like pushing on a brick wall. Finally, they give up."

Discher believes the solution may lie in finding a way to soften the scar tissue so that heart cells can repopulate it. Several enzymes, such as matrix metalloproteinases and collagenases, might do the job, but overdoing it could be risky. "If you degrade the matrix too much, you lose the patch," he warns.

The stiffness of scar tissue may also prevent regeneration in nerve injury, because nerve cells prefer the softest of surroundings. "It might just be that the growing tip of the axon senses that there's a stiff wall ahead of it and doesn't grow through because of that," speculates Jochen Guck, a biophysicist at the University of Cambridge in the UK.

There is still a long way to go before we fully understand how cells sense and respond to the forces on them. But it is becoming clear that the touchy-feely approach could be the key to regenerating the body.

Lumps and bumps

Many tumours are stiffer than the tissues in which they form - after all, doctors often first detect many cancers of organs such as the breast and prostate by feeling a hard lump. Some researchers now suspect that this stiffness is not always just a consequence of the cancer. It may be a cause as well.

A team led by Paul Janmey, a biophysicist at the University of Pennsylvania in Philadelphia, has found that the cycle of cell division in breast cells stops when they are grown on a soft gel, keeping them in a quiescent state (Current Biology, vol 19, p 1511). Anything that signals stiffness - even just touching a cell with a rigid probe - can be enough to start it dividing again.

Similarly, when Valerie Weaver, a cancer biologist at the University of California, San Francisco, and her team used chemicals to soften the extracellular matrix in which breast cells were growing in the lab, they found the cells were less likely to become malignant (Cell, vol 139, p 891). If her findings are confirmed, they could explain why women with denser breast tissue are more likely to develop breast cancer.

Some researchers, too, have reported seeing tumours form around the scars from breast-implant surgery. "This needs to be looked at again," says Weaver. If the link is confirmed, it might be possible to block tumour growth by interfering with the way cells detect stiffness.

Bob Holmes is a consultant for New Scientist based in Edmonton, Canada

Chromosome caps presage the brain's decline, Longer telomeres associated with multivitamin use

New Scientist

by Anil Ananthaswamy

Chromosome caps presage the brain's decline

Longer telomere length will become one ingredient in a recipe for successful mental and bodily ageing.

A SIGN of a cell's age could help predict the onset of dementia. Elderly people are more likely to develop cognitive problems if their telomeres - the stretches of DNA that cap the ends of chromosomes - are shorter than those of their peers.

The shortening of telomeres is linked to reduced lifespan, heart disease and osteoarthritis. Telomeres naturally shorten with age as cells divide, but also contract when cells experience oxidative damage linked to metabolism. Such damage is associated with cognitive problems like dementia. Thomas von Zglinicki at Newcastle University, UK, showed in 2000 that people with dementia not caused by Alzheimer's tended to have shorter telomeres than people without dementia.

To see if healthy individuals with short telomeres are at risk of developing dementia, Kristine Yaffe at the University of California, San Francisco, and colleagues followed 2734 physically fit adults with an average age of 74.

Yaffe's team tracked them for seven years and periodically assessed memory, language, concentration, attention, motor and other skills. At the start, the researchers measured the length of telomeres in blood cells and grouped each person according to short, medium or long telomeres.

After accounting for differences in age, race, sex and education, the researchers found that those with long telomeres experienced less cognitive decline compared to those with short or medium-length telomeres (Neurobiology of Aging, DOI: 10.1016/j.neurobiolaging.2009.12.006).

Von Zglinicki calls the work a "carefully done, large study", but notes that short telomeres by themselves are not enough to predict whether an individual will get dementia.

The key, says Ian Deary at the University of Edinburgh, UK, will be to combine telomere length with other biomarkers. "Most likely, longer telomere length will become one ingredient in a recipe for successful mental and bodily ageing."

___________________________________________________

A study conducted by researchers at the National Institutes of Health has provided the first epidemiologic evidence that the use of multivitamins by women is associated with longer telomeres: the protective caps at the ends of chromosomes that shorten with the aging of a cell. The study was reported online on March 11, 2009 in the American Journal of Clinical Nutrition.

Telomere length has been proposed as a marker of biological aging. Shorter telomeres have been linked with higher mortality within a given period of time and an increased risk of some chronic diseases.

For the current research, Honglei Chen and colleagues evaluated 586 participants aged 35 to 74 in the Sister Study, an ongoing prospective cohort of healthy sisters of breast cancer patients. Dietary questionnaires completed upon enrollment collected information concerning food and nutritional supplement intake. Stored blood samples were analyzed for leukocyte (white blood cell) DNA telomere length.

Sixty-five percent of the participants reported using multivitamin supplements at least once per month, and 74 percent of these users consumed them daily. Eighty-nine percent of all multivitamin users consumed one-a-day multivitamin formulas, 21 percent consumed antioxidant combinations, and 17 percent were users of "stress-tabs" or B complex vitamins.

The researchers found 5.1 percent longer telomeres on average in daily users of multivitamins compared with nonusers. Increased telomere length was associated with one-a-day and antioxidant formula use, but not with stress-tabs or B complex vitamins. Individual vitamin B12 supplements were associated with increased telomere length, and iron supplements with shorter telomeres. When nutrients from food were analyzed, vitamins C and E emerged as protective against telomere loss.

In their discussion of the findings, the authors explain that telomeres are particularly vulnerable to oxidative stress. Additionally, inflammation induces oxidative stress and lowers the activity of telomerase, the enzyme that is responsible for maintaining telomeres. Because dietary antioxidants, B vitamins, and specific minerals can help reduce oxidative stress and inflammation, they may be useful for the maintenance of telomere length. In fact, vitamins C and E have been shown in cell cultures to retard telomere shortening and increase cellular life span.

"Our study provides preliminary evidence linking multivitamin use to longer leukocyte telomeres," the authors conclude. "This finding should be further evaluated in future epidemiologic studies and its implications concerning aging the etiology of chronic diseases should be carefully evaluated."

Iran showing fastest scientific growth of any country

by Debora MacKenzie

It might be the Chinese year of the tiger, but scientifically, 2010 is looking like Iran's year.

Scientific output has grown 11 times faster in Iran than the world average, faster than any other country. A survey of the number of scientific publications listed in the Web of Science database shows that growth in the Middle East – mostly in Turkey and Iran – is nearly four times faster than the world average.

Science-Metrix, a data-analysis company in Montreal, Canada, has published a detailed report (PDF) on "geopolitical shifts in knowledge creation" since 1980. "Asia is catching up even more rapidly than previously thought, Europe is holding its position more than most would expect, and the Middle East is a region to watch," says the report's author, Eric Archambault.

World scientific output grew steadily, from 450,000 papers a year in 1980 to 1,500,000 in 2009. Asia as a whole surpassed North America last year.
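
For a sense of scale, the short sketch below converts the two world totals quoted above into a compound annual growth rate and shows what growing "11 times faster" would mean for doubling time. It is only a rough illustration based on the figures in this article: reading "11 times faster" as eleven times the world's annual growth rate is an assumption, and none of this is taken from the Science-Metrix report itself.

import math

# World totals quoted above
papers_1980 = 450_000
papers_2009 = 1_500_000
years = 2009 - 1980

# Compound annual growth rate of world output
world_rate = (papers_2009 / papers_1980) ** (1 / years) - 1
print(f"world output growth: {world_rate:.1%} per year")            # roughly 4 per cent per year

# If a country's output grows eleven times faster (assumed here to mean
# eleven times the annual rate), its doubling time shrinks sharply.
fast_rate = 11 * world_rate
doubling_years = math.log(2) / math.log(1 + fast_rate)
print(f"at 11x that rate: {fast_rate:.0%} per year, doubling every {doubling_years:.1f} years")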

Nuclear, nuclear, nuclear

Archambault notes that Iran's publications have emphasised inorganic and nuclear chemistry, nuclear and particle physics and nuclear engineering. Publications in nuclear engineering grew 250 times faster than the world average – although medical and agricultural research also increased.

Science-Metrix also predicts that this year, China will publish as many peer-reviewed papers in natural sciences and engineering as the US. If current trends continue, by 2015 China will match the US across all disciplines – although the US may publish more in the life and social sciences until 2030.

China's prominence in world science is known to have been growing, but Science-Metrix has discovered that its output of peer-reviewed papers has been growing more than five times faster than that of the US.

Euro-puddings

Meanwhile, "European attitudes towards collaboration are bearing fruit", writes Archambaut. While Asia's growth in output was mirrored by North America's fall, Europe, which invests heavily in cross-border scientific collaboration, held its own, and now produces over a third of the world's science, the largest regional share. Asia produces 29 per cent and North America 28 per cent.

Scientific output fell in the former Soviet Union after its collapse in 1991 and only began to recover in 2006. Latin America and the Caribbean together grew fastest of any region, although their combined share of world science is still small. Growth in Oceania, Europe and Africa has stayed at about the same rate over the past 30 years. Only North American scientific output has grown "considerably slower" than the world as a whole.

"The number of papers is a first-order metric that doesn't capture quality," admits Archambaut. There are measures for quality, such as the number of times papers are cited, and "Asian science does tend to be less cited overall".

But dismissing the Asian surge on this basis is risky, he feels. "In the 1960s, when Japanese cars started entering the US market, US manufacturers dismissed their advance based on their quality" – but then lost a massive market share to Japan. The important message, he says, is that "Asia is becoming the world leader in science, with North America progressively left behind".

Wednesday, February 17, 2010

HERETICAL THOUGHTS ABOUT SCIENCE AND SOCIETY

EDGE
The Third Culture



HERETICAL THOUGHTS ABOUT SCIENCE AND SOCIETY


By Freeman Dyson


FREEMAN DYSON is professor of physics at the Institute for Advanced Study, in Princeton. His professional interests are in mathematics and astronomy. Among his many books are Disturbing the Universe, Infinite in All Directions, Origins of Life, From Eros to Gaia, Imagined Worlds, and The Sun, the Genome, and the Internet. His most recent book, Many Colored Glass: Reflections on the Place of Life in the Universe (Page Barbour Lectures), is being published this month by University of Virginia Press.


1. The Need for Heretics

In the modern world, science and society often interact in a perverse way. We live in a technological society, and technology causes political problems. The politicians and the public expect science to provide answers to the problems. Scientific experts are paid and encouraged to provide answers. The public does not have much use for a scientist who says, “Sorry, but we don’t know”. The public prefers to listen to scientists who give confident answers to questions and make confident predictions of what will happen as a result of human activities. So it happens that the experts who talk publicly about politically contentious questions tend to speak more clearly than they think. They make confident predictions about the future, and end up believing their own predictions. Their predictions become dogmas which they do not question. The public is led to believe that the fashionable scientific dogmas are true, and it may sometimes happen that they are wrong. That is why heretics who question the dogmas are needed.

As a scientist I do not have much faith in predictions. Science is organized unpredictability. The best scientists like to arrange things in an experiment to be as unpredictable as possible, and then they do the experiment to see what will happen. You might say that if something is predictable then it is not science. When I make predictions, I am not speaking as a scientist. I am speaking as a story-teller, and my predictions are science-fiction rather than science. The predictions of science-fiction writers are notoriously inaccurate. Their purpose is to imagine what might happen rather than to describe what will happen. I will be telling stories that challenge the prevailing dogmas of today. The prevailing dogmas may be right, but they still need to be challenged. I am proud to be a heretic. The world always needs heretics to challenge the prevailing orthodoxies. Since I am a heretic, I am accustomed to being in the minority. If I could persuade everyone to agree with me, I would not be a heretic.

We are lucky that we can be heretics today without any danger of being burned at the stake. But unfortunately I am an old heretic. Old heretics do not cut much ice. When you hear an old heretic talking, you can always say, “Too bad he has lost his marbles”, and pass on. What the world needs is young heretics. I am hoping that one or two of the people who read this piece may fill that role.

Two years ago, I was at Cornell University celebrating the life of Tommy Gold, a famous astronomer who died at a ripe old age. He was famous as a heretic, promoting unpopular ideas that usually turned out to be right. Long ago I was a guinea-pig in Tommy’s experiments on human hearing. He had a heretical idea that the human ear discriminates pitch by means of a set of tuned resonators with active electromechanical feedback. He published a paper explaining how the ear must work, [Gold, 1948]. He described how the vibrations of the inner ear must be converted into electrical signals which feed back into the mechanical motion, reinforcing the vibrations and increasing the sharpness of the resonance. The experts in auditory physiology ignored his work because he did not have a degree in physiology. Many years later, the experts discovered the two kinds of hair-cells in the inner ear that actually do the feedback as Tommy had predicted, one kind of hair-cell acting as electrical sensors and the other kind acting as mechanical drivers. It took the experts forty years to admit that he was right. Of course, I knew that he was right, because I had helped him do the experiments.

Later in his life, Tommy Gold promoted another heretical idea, that the oil and natural gas in the ground come up from deep in the mantle of the earth and have nothing to do with biology. Again the experts are sure that he is wrong, and he did not live long enough to change their minds. Just a few weeks before he died, some chemists at the Carnegie Institution in Washington did a beautiful experiment in a diamond anvil cell, [Scott et al., 2004]. They mixed together tiny quantities of three things that we know exist in the mantle of the earth, and observed them at the pressure and temperature appropriate to the mantle about two hundred kilometers down. The three things were calcium carbonate which is sedimentary rock, iron oxide which is a component of igneous rock, and water. These three things are certainly present when a slab of subducted ocean floor descends from a deep ocean trench into the mantle. The experiment showed that they react quickly to produce lots of methane, which is natural gas. Knowing the result of the experiment, we can be sure that big quantities of natural gas exist in the mantle two hundred kilometers down. We do not know how much of this natural gas pushes its way up through cracks and channels in the overlying rock to form the shallow reservoirs of natural gas that we are now burning. If the gas moves up rapidly enough, it will arrive intact in the cooler regions where the reservoirs are found. If it moves too slowly through the hot region, the methane may be reconverted to carbonate rock and water. The Carnegie Institute experiment shows that there is at least a possibility that Tommy Gold was right and the natural gas reservoirs are fed from deep below. The chemists sent an E-mail to Tommy Gold to tell him their result, and got back a message that he had died three days earlier. Now that he is dead, we need more heretics to take his place.

2. Climate and Land Management

The main subject of this piece is the problem of climate change. This is a contentious subject, involving politics and economics as well as science. The science is inextricably mixed up with politics. Everyone agrees that the climate is changing, but there are violently diverging opinions about the causes of change, about the consequences of change, and about possible remedies. I am promoting a heretical opinion, the first of three heresies that I will discuss in this piece.

My first heresy says that all the fuss about global warming is grossly exaggerated. Here I am opposing the holy brotherhood of climate model experts and the crowd of deluded citizens who believe the numbers predicted by the computer models. Of course, they say, I have no degree in meteorology and I am therefore not qualified to speak. But I have studied the climate models and I know what they can do. The models solve the equations of fluid dynamics, and they do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry and the biology of fields and farms and forests. They do not begin to describe the real world that we live in. The real world is muddy and messy and full of things that we do not yet understand. It is much easier for a scientist to sit in an air-conditioned building and run computer models, than to put on winter clothes and measure what is really happening outside in the swamps and the clouds. That is why the climate model experts end up believing their own models.

There is no doubt that parts of the world are getting warmer, but the warming is not global. I am not saying that the warming does not cause problems. Obviously it does. Obviously we should be trying to understand it better. I am saying that the problems are grossly exaggerated. They take away money and attention from other problems that are more urgent and more important, such as poverty and infectious disease and public education and public health, and the preservation of living creatures on land and in the oceans, not to mention easy problems such as the timely construction of adequate dikes around the city of New Orleans.

I will discuss the global warming problem in detail because it is interesting, even though its importance is exaggerated. One of the main causes of warming is the increase of carbon dioxide in the atmosphere resulting from our burning of fossil fuels such as oil and coal and natural gas. To understand the movement of carbon through the atmosphere and biosphere, we need to measure a lot of numbers. I do not want to confuse you with a lot of numbers, so I will ask you to remember just one number. The number that I ask you to remember is one hundredth of an inch per year. Now I will explain what this number means. Consider the half of the land area of the earth that is not desert or ice-cap or city or road or parking-lot. This is the half of the land that is covered with soil and supports vegetation of one kind or another. Every year, it absorbs and converts into biomass a certain fraction of the carbon dioxide that we emit into the atmosphere. Biomass means living creatures, plants and microbes and animals, and the organic materials that are left behind when the creatures die and decay. We don’t know how big a fraction of our emissions is absorbed by the land, since we have not measured the increase or decrease of the biomass. The number that I ask you to remember is the increase in thickness, averaged over one half of the land area of the planet, of the biomass that would result if all the carbon that we are emitting by burning fossil fuels were absorbed. The average increase in thickness is one hundredth of an inch per year.

The point of this calculation is the very favorable rate of exchange between carbon in the atmosphere and carbon in the soil. To stop the carbon in the atmosphere from increasing, we only need to grow the biomass in the soil by a hundredth of an inch per year. Good topsoil contains about ten percent biomass, [Schlesinger, 1977], so a hundredth of an inch of biomass growth means about a tenth of an inch of topsoil. Changes in farming practices such as no-till farming, avoiding the use of the plow, cause biomass to grow at least as fast as this. If we plant crops without plowing the soil, more of the biomass goes into roots which stay in the soil, and less returns to the atmosphere. If we use genetic engineering to put more biomass into roots, we can probably achieve much more rapid growth of topsoil. I conclude from this calculation that the problem of carbon dioxide in the atmosphere is a problem of land management, not a problem of meteorology. No computer model of atmosphere and ocean can hope to predict the way we shall manage our land.
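
Dyson's number can be checked with rough arithmetic. The sketch below is a back-of-the-envelope version of the calculation, not part of the essay; the emission rate, land area, carbon content of biomass and biomass density are assumed round figures, chosen only to show that the order of magnitude comes out as he says.

# Back-of-the-envelope check of the "hundredth of an inch per year" figure.
# Every input is an assumed round number for illustration, not a measurement.

carbon_emitted = 8e15        # grams of carbon emitted per year (~8 GtC/yr, assumed)
land_area = 1.5e14           # total land area of the earth, square meters
vegetated_fraction = 0.5     # half the land is not desert, ice-cap, city or road
carbon_in_biomass = 0.5      # dry biomass is roughly half carbon by weight (assumed)
biomass_density = 1.0        # grams per cubic centimeter (assumed, about that of water)

area_cm2 = land_area * vegetated_fraction * 1e4        # square meters -> square centimeters
biomass_added = carbon_emitted / carbon_in_biomass     # grams of new biomass per year

# Thickness of the new biomass layer if all emitted carbon ended up in it
thickness_inch = biomass_added / (biomass_density * area_cm2) / 2.54
print(f"biomass layer: about {thickness_inch:.3f} inch per year")         # roughly a hundredth of an inch

# Good topsoil is roughly ten percent biomass, so the equivalent topsoil
# layer is about ten times thicker.
print(f"topsoil layer: about {thickness_inch / 0.10:.2f} inch per year")  # roughly a tenth of an inch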

Here is another heretical thought. Instead of calculating world-wide averages of biomass growth, we may prefer to look at the problem locally. Consider a possible future, with China continuing to develop an industrial economy based largely on the burning of coal, and the United States deciding to absorb the resulting carbon dioxide by increasing the biomass in our topsoil. The quantity of biomass that can be accumulated in living plants and trees is limited, but there is no limit to the quantity that can be stored in topsoil. To grow topsoil on a massive scale may or may not be practical, depending on the economics of farming and forestry. It is at least a possibility to be seriously considered, that China could become rich by burning coal, while the United States could become environmentally virtuous by accumulating topsoil, with transport of carbon from mine in China to soil in America provided free of charge by the atmosphere, and the inventory of carbon in the atmosphere remaining constant. We should take such possibilities into account when we listen to predictions about climate change and fossil fuels. If biotechnology takes over the planet in the next fifty years, as computer technology has taken it over in the last fifty years, the rules of the climate game will be radically changed.

When I listen to the public debates about climate change, I am impressed by the enormous gaps in our knowledge, the sparseness of our observations and the superficiality of our theories. Many of the basic processes of planetary ecology are poorly understood. They must be better understood before we can reach an accurate diagnosis of the present condition of our planet. When we are trying to take care of a planet, just as when we are taking care of a human patient, diseases must be diagnosed before they can be cured. We need to observe and measure what is going on in the biosphere, rather than relying on computer models.

Everyone agrees that the increasing abundance of carbon dioxide in the atmosphere has two important consequences, first a change in the physics of radiation transport in the atmosphere, and second a change in the biology of plants on the ground and in the ocean. Opinions differ on the relative importance of the physical and biological effects, and on whether the effects, either separately or together, are beneficial or harmful. The physical effects are seen in changes of rainfall, cloudiness, wind-strength and temperature, which are customarily lumped together in the misleading phrase “global warming”. In humid air, the effect of carbon dioxide on radiation transport is unimportant because the transport of thermal radiation is already blocked by the much larger greenhouse effect of water vapor. The effect of carbon dioxide is important where the air is dry, and air is usually dry only where it is cold. Hot desert air may feel dry but often contains a lot of water vapor. The warming effect of carbon dioxide is strongest where air is cold and dry, mainly in the arctic rather than in the tropics, mainly in mountainous regions rather than in lowlands, mainly in winter rather than in summer, and mainly at night rather than in daytime. The warming is real, but it is mostly making cold places warmer rather than making hot places hotter. To represent this local warming by a global average is misleading.

The fundamental reason why carbon dioxide in the atmosphere is critically important to biology is that there is so little of it. A field of corn growing in full sunlight in the middle of the day uses up all the carbon dioxide within a meter of the ground in about five minutes. If the air were not constantly stirred by convection currents and winds, the corn would stop growing. About a tenth of all the carbon dioxide in the atmosphere is converted into biomass every summer and given back to the atmosphere every fall. That is why the effects of fossil-fuel burning cannot be separated from the effects of plant growth and decay. There are five reservoirs of carbon that are biologically accessible on a short time-scale, not counting the carbonate rocks and the deep ocean which are only accessible on a time-scale of thousands of years. The five accessible reservoirs are the atmosphere, the land plants, the topsoil in which land plants grow, the surface layer of the ocean in which ocean plants grow, and our proved reserves of fossil fuels. The atmosphere is the smallest reservoir and the fossil fuels are the largest, but all five reservoirs are of comparable size. They all interact strongly with one another. To understand any of them, it is necessary to understand all of them.

As an example of the way different reservoirs of carbon dioxide may interact with each other, consider the atmosphere and the topsoil. Greenhouse experiments show that many plants growing in an atmosphere enriched with carbon dioxide react by increasing their root-to-shoot ratio. This means that the plants put more of their growth into roots and less into stems and leaves. A change in this direction is to be expected, because the plants have to maintain a balance between the leaves collecting carbon from the air and the roots collecting mineral nutrients from the soil. The enriched atmosphere tilts the balance so that the plants need less leaf-area and more root-area. Now consider what happens to the roots and shoots when the growing season is over, when the leaves fall and the plants die. The new-grown biomass decays and is eaten by fungi or microbes. Some of it returns to the atmosphere and some of it is converted into topsoil. On the average, more of the above-ground growth will return to the atmosphere and more of the below-ground growth will become topsoil. So the plants with increased root-to-shoot ratio will cause an increased transfer of carbon from the atmosphere into topsoil. If the increase in atmospheric carbon dioxide due to fossil-fuel burning has caused an increase in the average root-to-shoot ratio of plants over large areas, then the possible effect on the top-soil reservoir will not be small. At present we have no way to measure or even to guess the size of this effect. The aggregate biomass of the topsoil of the planet is not a measurable quantity. But the fact that the topsoil is unmeasurable does not mean that it is unimportant.

At present we do not know whether the topsoil of the United States is increasing or decreasing. Over the rest of the world, because of large-scale deforestation and erosion, the topsoil reservoir is probably decreasing. We do not know whether intelligent land-management could increase the growth of the topsoil reservoir by four billion tons of carbon per year, the amount needed to stop the increase of carbon dioxide in the atmosphere. All that we can say for sure is that this is a theoretical possibility and ought to be seriously explored.

3. Oceans and Ice-ages

Another problem that has to be taken seriously is a slow rise of sea level which could become catastrophic if it continues to accelerate. We have accurate measurements of sea level going back two hundred years. We observe a steady rise from 1800 to the present, with an acceleration during the last fifty years. It is widely believed that the recent acceleration is due to human activities, since it coincides in time with the rapid increase of carbon dioxide in the atmosphere. But the rise from 1800 to 1900 was probably not due to human activities. The scale of industrial activities in the nineteenth century was not large enough to have had measurable global effects. So a large part of the observed rise in sea level must have other causes. One possible cause is a slow readjustment of the shape of the earth to the disappearance of the northern ice-sheets at the end of the ice age twelve thousand years ago. Another possible cause is the large-scale melting of glaciers, which also began long before human influences on climate became significant. Once again, we have an environmental danger whose magnitude cannot be predicted until we know more about its causes, [Munk, 2002].

The most alarming possible cause of sea-level rise is a rapid disintegration of the West Antarctic ice-sheet, which is the part of Antarctica where the bottom of the ice is far below sea level. Warming seas around the edge of Antarctica might erode the ice-cap from below and cause it to collapse into the ocean. If the whole of West Antarctica disintegrated rapidly, sea-level would rise by five meters, with disastrous effects on billions of people. However, recent measurements of the ice-cap show that it is not losing volume fast enough to make a significant contribution to the presently observed sea-level rise. It appears that the warming seas around Antarctica are causing an increase in snowfall over the ice-cap, and the increased snowfall on top roughly cancels out the decrease of ice volume caused by erosion at the edges. The same changes, increased melting of ice at the edges and increased snowfall adding ice on top, are also observed in Greenland. In addition, there is an increase in snowfall over the East Antarctic Ice-cap, which is much larger and colder and is in no danger of melting. This is another situation where we do not know how much of the environmental change is due to human activities and how much to long-term natural processes over which we have no control.

Another environmental danger that is even more poorly understood is the possible coming of a new ice-age. A new ice-age would mean the burial of half of North America and half of Europe under massive ice-sheets. We know that there is a natural cycle that has been operating for the last eight hundred thousand years. The length of the cycle is a hundred thousand years. In each hundred-thousand year period, there is an ice-age that lasts about ninety thousand years and a warm interglacial period that lasts about ten thousand years. We are at present in a warm period that began twelve thousand years ago, so the onset of the next ice-age is overdue. If human activities were not disturbing the climate, a new ice-age might already have begun. We do not know how to answer the most important question: do our human activities in general, and our burning of fossil fuels in particular, make the onset of the next ice-age more likely or less likely?

There are good arguments on both sides of this question. On the one side, we know that the level of carbon dioxide in the atmosphere was much lower during past ice-ages than during warm periods, so it is reasonable to expect that an artificially high level of carbon dioxide might stop an ice-age from beginning. On the other side, the oceanographer Wallace Broecker [Broecker, 1997] has argued that the present warm climate in Europe depends on a circulation of ocean water, with the Gulf Stream flowing north on the surface and bringing warmth to Europe, and with a counter-current of cold water flowing south in the deep ocean. So a new ice-age could begin whenever the cold deep counter-current is interrupted. The counter-current could be interrupted when the surface water in the Arctic becomes less salty and fails to sink, and the water could become less salty when the warming climate increases the Arctic rainfall. Thus Broecker argues that a warm climate in the Arctic may paradoxically cause an ice-age to begin. Since we are confronted with two plausible arguments leading to opposite conclusions, the only rational response is to admit our ignorance. Until the causes of ice-ages are understood, we cannot know whether the increase of carbon-dioxide in the atmosphere is increasing or decreasing the danger.

4. The Wet Sahara

My second heresy is also concerned with climate change. It is about the mystery of the wet Sahara. This is a mystery that has always fascinated me. At many places in the Sahara desert that are now dry and unpopulated, we find rock-paintings showing people with herds of animals. The paintings are abundant, and some of them are of high artistic quality, comparable with the more famous cave-paintings in France and Spain. The Sahara paintings are more recent than the cave-paintings. They come in a variety of styles and were probably painted over a period of several thousand years. The latest of them show Egyptian influences and may be contemporaneous with early Egyptian tomb paintings. Henri Lhote’s book, “The Search for the Tassili Frescoes”, [Lhote, 1958], is illustrated with reproductions of fifty of the paintings. The best of the herd paintings date from roughly six thousand years ago. They are strong evidence that the Sahara at that time was wet. There was enough rain to support herds of cows and giraffes, which must have grazed on grass and trees. There were also some hippopotamuses and elephants. The Sahara then must have been like the Serengeti today.

At the same time, roughly six thousand years ago, there were deciduous forests in Northern Europe where the trees are now conifers, proving that the climate in the far north was milder than it is today. There were also trees standing in mountain valleys in Switzerland that are now filled with famous glaciers. The glaciers that are now shrinking were much smaller six thousand years ago than they are today. Six thousand years ago seems to have been the warmest and wettest period of the interglacial era that began twelve thousand years ago when the last Ice Age ended. I would like to ask two questions. First, if the increase of carbon dioxide in the atmosphere is allowed to continue, shall we arrive at a climate similar to the climate of six thousand years ago when the Sahara was wet? Second, if we could choose between the climate of today with a dry Sahara and the climate of six thousand years ago with a wet Sahara, should we prefer the climate of today? My second heresy answers yes to the first question and no to the second. It says that the warm climate of six thousand years ago with the wet Sahara is to be preferred, and that increasing carbon dioxide in the atmosphere may help to bring it back. I am not saying that this heresy is true. I am only saying that it will not do us any harm to think about it.

The biosphere is the most complicated of all the things we humans have to deal with. The science of planetary ecology is still young and undeveloped. It is not surprising that honest and well-informed experts can disagree about facts. But beyond the disagreement about facts, there is another deeper disagreement about values. The disagreement about values may be described in an over-simplified way as a disagreement between naturalists and humanists. Naturalists believe that nature knows best. For them the highest value is to respect the natural order of things. Any gross human disruption of the natural environment is evil. Excessive burning of fossil fuels is evil. Changing nature’s desert, either the Sahara desert or the ocean desert, into a managed ecosystem where giraffes or tunafish may flourish, is likewise evil. Nature knows best, and anything we do to improve upon Nature will only bring trouble.

The humanist ethic begins with the belief that humans are an essential part of nature. Through human minds the biosphere has acquired the capacity to steer its own evolution, and now we are in charge. Humans have the right and the duty to reconstruct nature so that humans and biosphere can both survive and prosper. For humanists, the highest value is harmonious coexistence between humans and nature. The greatest evils are poverty, underdevelopment, unemployment, disease and hunger, all the conditions that deprive people of opportunities and limit their freedoms. The humanist ethic accepts an increase of carbon dioxide in the atmosphere as a small price to pay, if world-wide industrial development can alleviate the miseries of the poorer half of humanity. The humanist ethic accepts our responsibility to guide the evolution of the planet.

The sharpest conflict between naturalist and humanist ethics arises in the regulation of genetic engineering. The naturalist ethic condemns genetically modified food-crops and all other genetic engineering projects that might upset the natural ecology. The humanist ethic looks forward to a time not far distant, when genetically engineered food-crops and energy-crops will bring wealth to poor people in tropical countries, and incidentally give us tools to control the growth of carbon dioxide in the atmosphere. Here I must confess my own bias. Since I was born and brought up in England, I spent my formative years in a land with great beauty and a rich ecology which is almost entirely man-made. The natural ecology of England was uninterrupted and rather boring forest. Humans replaced the forest with an artificial landscape of grassland and moorland, fields and farms, with a much richer variety of plant and animal species. Quite recently, only about a thousand years ago, we introduced rabbits, a non-native species which had a profound effect on the ecology. Rabbits opened glades in the forest where flowering plants now flourish. There is no wilderness in England, and yet there is plenty of room for wild-flowers and birds and butterflies as well as a high density of humans. Perhaps that is why I am a humanist.

To conclude this piece I come to my third and last heresy. My third heresy says that the United States has less than a century left of its turn as top nation. Since the modern nation-state was invented around the year 1500, a succession of countries have taken turns at being top nation, first Spain, then France, Britain, America. Each turn lasted about 150 years. Ours began in 1920, so it should end about 2070. The reason why each top nation’s turn comes to an end is that the top nation becomes over-extended, militarily, economically and politically. Greater and greater efforts are required to maintain the number one position. Finally the over-extension becomes so extreme that the structure collapses. Already we can see in the American posture today some clear symptoms of over-extension. Who will be the next top nation? China is the obvious candidate. After that it might be India or Brazil. We should be asking ourselves, not how to live in an America-dominated world, but how to prepare for a world that is not America-dominated. That may be the most important problem for the next generation of Americans to solve. How does a people that thinks of itself as number one yield gracefully to become number two?

I am telling the next generation of young students, who will still be alive in the second half of our century, that misfortunes are on the way. Their precious Ph.D., or whichever degree they went through long years of hard work to acquire, may be worth less than they think. Their specialized training may become obsolete. They may find themselves over-qualified for the available jobs. They may be declared redundant. The country and the culture to which they belong may move far away from the mainstream. But these misfortunes are also opportunities. It is always open to them to join the heretics and find another way to make a living. With or without a Ph.D., there are big and important problems for them to solve.

I will not attempt to summarize the lessons that my readers should learn from these heresies. The main lesson that I would like them to take home is that the long-range future is not predetermined. The future is in their hands. The rules of the world-historical game change from decade to decade in unpredictable ways. All our fashionable worries and all our prevailing dogmas will probably be obsolete in fifty years. My heresies will probably also be obsolete. It is up to them to find new heresies to guide our way to a more hopeful future.

5. Bad Advice to a Young Scientist

Sixty years ago, when I was a young and arrogant physicist, I tried to predict the future of physics and biology. My prediction was an extreme example of wrongness, perhaps a world record in the category of wrong predictions. I was giving advice about future employment to Francis Crick, the great biologist who died in 2005 after a long and brilliant career. He discovered, with Jim Watson, the double helix. They discovered the double helix structure of DNA in 1953, and thereby gave birth to the new science of molecular genetics. Eight years before that, in 1945, before World War 2 came to an end, I met Francis Crick for the first time. He was in Fanum House, a dismal office building in London where the Royal Navy kept a staff of scientists. Crick had been working for the Royal Navy for a long time and was depressed and discouraged. He said he had missed his chance of ever amounting to anything as a scientist. Before World War 2, he had started a promising career as a physicist. But then the war hit him at the worst time, putting a stop to his work in physics and keeping him away from science for six years. The six best years of his life, squandered on naval intelligence, lost and gone forever. Crick was good at naval intelligence, and did important work for the navy. But military intelligence bears the same relation to intelligence as military music bears to music. After six years doing this kind of intelligence, it was far too late for Crick to start all over again as a student and relearn all the stuff he had forgotten. No wonder he was depressed. I came away from Fanum House thinking, “How sad. Such a bright chap. If it hadn’t been for the war, he would probably have been quite a good scientist”.

A year later, I met Crick again. The war was over and he was much more cheerful. He said he was thinking of giving up physics and making a completely fresh start as a biologist. He said the most exciting science for the next twenty years would be in biology and not in physics. I was then twenty-two years old and very sure of myself. I said, “No, you’re wrong. In the long run biology will be more exciting, but not yet. The next twenty years will still belong to physics. If you switch to biology now, you will be too old to do the exciting stuff when biology finally takes off”. Fortunately, he didn’t listen to me. He went to Cambridge and began thinking about DNA. It took him only seven years to prove me wrong. The moral of this story is clear. Even a smart twenty-two-year-old is not a reliable guide to the future of science. And the twenty-two-year-old has become even less reliable now that he is eighty-two.

[Excerpted from A Many-Colored Glass: Reflections on the Place of Life in the Universe (Page-Barbour Lectures) by Freeman Dyson, University of Virginia Press, 2007.]


Copyright © 2007 by Edge Foundation, Inc. All rights reserved.

Wednesday, February 10, 2010

Brain surgery boosts spirituality: Lose a tumour, gain self-transcendence.

Scientific American / Nature | February 10, 2010


By Janelle Weaver

Removing part of the brain can induce inner peace, according to researchers from Italy. Their study provides the strongest evidence to date that spiritual thinking arises in, or is limited by, specific brain areas.

To investigate the neural basis of spirituality, Cosimo Urgesi, a cognitive neuroscientist at the University of Udine, and his colleagues turned to people with brain tumours to assess the feeling before and after surgery. Three to seven days after the removal of tumours from the posterior part of the brain, in the parietal cortex, patients reported feeling a greater sense of self-transcendence. This was not the case for patients with tumours removed from the frontal regions of the brain.

"Self-transcendence used to be considered just by philosophers and crank new age people," says co-author Salvatore Aglioti, a cognitive neuroscientist at the Sapienza University of Rome. "This is the first really close-up study on spirituality. We're dealing with a complex phenomenon that's close to the essence of being human."

The authors pinpointed two parts of the brain that, when damaged, led to increases in spirituality: the left inferior parietal lobe and the right angular gyrus. These areas at the back of the brain are involved in how we perceive our bodies in spatial relation to the external world. The authors of the study in the journal Neuron [1] say that their findings support the connection between mystic experiences and feeling detached from the body.

"The most surprising part was the rapidity of the change," says Urgesi. "This discovery shows that some complex personality traits are more malleable than previously thought."

The science of spirituality

The researchers interviewed 88 people with brain tumours of various severities. Twenty of these people had benign tumours, and although they underwent surgery, no tissue was removed. All 88 people participated in interviews about their religious habits and beliefs, and before and after surgery they answered a series of true-or-false questions that assessed spirituality. The questionnaire tapped into three main components of self-transcendence: losing yourself in the moment, feeling connected to other people and nature, and believing in a higher power. Examples of the items on the questionnaire include: "I often become so fascinated with what I'm doing that I get lost in the moment - like I'm detached from time and place" and "I sometimes feel so connected to nature that everything seems to be part of one living organism."

The researchers then mapped the precise areas of the patients' brains where they had lesions as a result of surgery. Previous studies have shown that a broad network of frontal and parietal brain regions underlies religious beliefs [2,3,4,5]. But spirituality does not seem to involve exactly the same regions of the brain as religion.
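To make the logic of that comparison concrete, here is a minimal sketch, in Python, of the kind of pre-versus-post comparison the article describes. It is purely illustrative: the patient numbers are invented, scoring is reduced to a simple count, and names such as patients and mean_change are placeholders rather than anything from the actual study, which used a standardized self-transcendence scale and formal lesion-mapping statistics.

    # Purely illustrative sketch -- invented numbers, not data from the study.
    from statistics import mean

    # Each record: (lesion_site, score_before_surgery, score_after_surgery).
    # Scores stand in for answers on a self-transcendence questionnaire
    # (higher = stronger sense of self-transcendence).
    patients = [
        ("posterior", 12, 18),
        ("posterior", 10, 15),
        ("posterior", 14, 17),
        ("anterior", 11, 11),
        ("anterior", 13, 12),
        ("anterior", 9, 10),
    ]

    def mean_change(records, site):
        """Average post-minus-pre change in score for one lesion group."""
        return mean(post - pre for s, pre, post in records if s == site)

    for site in ("posterior", "anterior"):
        print(f"{site} lesions: mean change = {mean_change(patients, site):+.1f}")

The point is only the shape of the analysis: a change score for each patient, averaged within each lesion group, with the posterior (parietal) group expected to show the larger increase.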

In the past, neurologists have observed spiritual changes in patients with brain damage, but it is not something they systematically evaluate. "We usually stay away from it, not because it's not an important topic, but because it's very private and personal," says Rik Vandenberghe, a neurologist at the University Hospital Gasthuisberg in Leuven, Belgium. "This paper is very interesting, but like many pioneering studies, it leaves open many questions." Vandenberghe, who uses a similar lesion-mapping technique, says the data should be interpreted with caution. "It's very unlikely that something like self-transcendence is localizable to just two brain areas," he says.

Coarse measure

Probably the most worrisome aspect of the study is the way the authors measured self-transcendence. "It's important to recognize that the whole study is based on changes in one self-report measure, which is a coarse measure that includes some strange items," says cognitive neuroscientist Richard Davidson of the University of Wisconsin-Madison. "In the future, it will be important to understand why lesions in the parietal cortex induce changes on this scale."

"Self-transcendence is an abstract concept, and different people will attribute different meanings to the word," says Vandenberghe. Patient self reporting is not always accurate, he says, adding that tapping into spirituality with more rigorous behavioural measures and pinpointing the specific thoughts and feelings that constitute it are the obvious next steps.

In future studies, Urgesi would like to measure other aspects of spirituality and determine how long changes in spirituality last in patients. He'd also like to inactivate parietal regions in healthy subjects using transcranial magnetic stimulation (TMS), a non-invasive technique that temporarily changes neural activity in a specific region, to see if he can induce immediate changes in self-transcendence. He envisions a day when TMS can be used to increase the feeling of self-transcendence in people with neurological or psychological disorders.


Perspective: Transitioning from Pet to Peer


So what exactly is a pet, and how can you avoid becoming one?

At professional meetings, people compliment you on your talks. You're often picked for oral, rather than poster, presentations. You are invited to give talks at other institutions and to serve on professional committees. Your university often calls on you to be their public face: You are trotted out for interviews with the press, asked to give research seminars to alumni, and invited to meet with potential donors. It's easy to feel good about the direction your career is heading in.

Then, not long before you come up for tenure, you are told there are questions about your case and that you urgently need to strengthen your tenure dossier. What happened?

Research shows that most successful academic careers have four stages: first you are an apprentice, then a colleague, then a mentor, and finally a sponsor. Apprentices learn the trade from a mentor and earn their status through the field's structured norms. For apprentices to become colleagues, senior colleagues must come to value their judgment, to view them as serious thinkers whose contributions matter, and to consider them worthy bosses -- "acceptable as their department chair," as one of our colleagues put it.

That respect is earned by serving an apprenticeship. Scientists viewed as apprentices who go on to meet their institution's requirements typically sail through the transition to colleague and peer.

But it is our observation that not every probationary faculty member is viewed as an apprentice, and hence, not every one has the same opportunity to make that transition. Some -- including some of the most outwardly promising, such as the hot young scientist described above -- come to be viewed as what we call "pets" instead of apprentices. Pets may have a harder time attaining the status of colleague within their department, and the early praise and high-profile roles they are offered can instill a false sense of security that puts them at risk of a negative tenure decision.

So what exactly is a pet, and how can you avoid becoming one?

In our experience, an early-career scientist can end up being a pet for a number of reasons, but most share this in common: They are different in some way. The difference could be gender or ethnicity, but it could also have to do with institutional pedigree or the type of research. Often, their research is interdisciplinary. Pets may have a way of expressing themselves that is unusual for their field. They may, for example, be media-savvy -- a characteristic their scientist colleagues might not respect. Pets are valued for their diversity, but not as members -- or potential members -- of the in-group.

Apprentices, in contrast to pets, are mentored to conduct research that has been "certified" as mainstream and, by the local definition, at the cutting edge. They publish with -- and often look, dress, and use the same style of discourse as -- their mentors and eminent people in the field. Indeed, they may be chosen as apprentices precisely because they fit so well the demographics, background, attitudes, values, and beliefs of their established colleagues. As perceived shared identity increases, so does mentoring.

In their early years, pets get -- or appear to get -- feedback similar to what apprentices receive. Indeed, distinctiveness can be an advantage and lead to special opportunities. But when their research diverges from the local mainstream, their colleagues may regard it as peripheral, unimportant, or lacking in rigor. Meanwhile, those early signals may lead pets to believe that they are on the right track. But the same qualities that led to early recognition may be penalized at this later, critical stage. They feel betrayed when questions are raised late in the game.

Although we are not aware of research addressing this issue, there is related research from the women and science literature. Study after study has shown that early in their careers, women feel supported by their departments, don't experience stereotyping, and assume that being female will not present any real difficulties.

But, as documented in a high-profile Massachusetts Institute of Technology report, as women progress they become increasingly aware of gender discrimination. This increased awareness has been attributed to the fact that the junior ranks are more diverse, so judgments at that early stage are inherently more fair. But it may also be because senior colleagues have so far compared these newcomers only with each other; they haven't yet started comparing them with themselves.

Although being distinctive within a junior cohort has value, problems arise when tenure committees start asking: Is the candidate on track to become a researcher such as Dr. X? This older group is not diverse, and because it helped to establish the norms of the field, it knows them well and enforces them. They know that anyone they grant tenure to is likely someday to be their department chair. To ensure they're likely to agree with your future judgments, they need your criteria to match theirs. The diversity that was so attractive when you were younger becomes uncomfortable.

What to look for, and what to do about it

From our description, it should be clear that becoming a pet is undesirable. Are you heading down that path? How can you know? Early in their probationary period, pets get lots of reassurance, but if you pay close attention, you can sense potential trouble.

- People seem to value your presence more than your actual contributions.

- People express surprise at your high performance in a mainstream activity or subtly attribute your success to your difference.

- You get nice feedback but don't get much helpful advice; for example, you are complimented on how well you spoke during your presentation, but no one tells you how to improve it.

- You are called by your first name and introduced informally in situations where your peers are referred to as Dr. X and Professor Y and given carefully prepared introductions.

- You feel different from your colleagues and sense they also consider you different in some way.

- Because of your difference, no one in your department feels they can mentor you effectively.

If you suspect you may be becoming -- or have already become -- a pet, the most important thing is not to be seduced by the early attention you receive, and to focus on meeting the most rigorous tenure standards.

- Build a tenure dossier your senior colleagues and the distinguished scientists in your field would consider strong. Publish in the highest-impact journals and, even if your work is interdisciplinary, get some papers published in mainstream journals known and valued by your department.

- In presentations, articles, proposals, research statements, and discussions, make sure that you articulate why your research matters. Describe it in ways that make it difficult for others to dismiss it as fringe, niche, optional, or lacking in rigor.

- Accept only invitations that advance your scientific reputation; avoid those that could be labeled as "service."

- When you give presentations, include references to mainstream work so that you don't isolate yourself or your scholarship. Provide session chairs with a brief biography so that it is easy for them to state your credentials when they introduce you.

- Present your research at the annual meetings of your professional society, not just at interdisciplinary and specialty venues. Propose special sessions where you and your research can be seen as leading the mainstream. Invite prominent researchers to present in your session. This can help to shape people's perceptions of the field you play in.

- Be proactive about getting to know, and then seeking research advice and mentoring from, people who are leaders in your field. Including eminent scholars as co-authors on presentations, publications, and proposals may be helpful. But do this selectively. Some will assume that primary credit for a piece of work belongs with the most established co-author.

The consequences of being viewed as different due to your gender, race, research, or institutional pedigree are complex and difficult to overcome. The key is to recognize what's happening and to not let the early, positive attention distract you from building an impervious record of scholarship.

Acknowledgements

This material is based upon work supported by the U.S. National Science Foundation (NSF) under Cooperative Agreement SBE-0245014, ADVANCE at the Columbia University Earth Institute. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF.


Text corrected, 2 February 2010.

Stephanie Pfirman is Hirschorn Professor and chair of the environmental science department at Barnard College and a member of Columbia University's Earth Institute (EI) ADVANCE program, both in New York City. Caryn J. Block is an associate professor of social-organizational psychology at Teachers College, Columbia University. Robin Bell is Doherty Senior Research Scientist at the Lamont-Doherty Earth Observatory at Columbia and a member of the university's EI ADVANCE program. Loriann Roberson is a professor of psychology and education in the Social-Organizational Psychology Program at Teachers College, Columbia University. Patricia Culligan is a professor of civil engineering and engineering mechanics and a member of the EI ADVANCE program.

10.1126/science.caredit.a1000011