This site may contain copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in an effort to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues. We believe this constitutes a ‘fair use’ of any such copyrighted material as provided for in section 107 of the US Copyright Law.

In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit to those who have expressed a prior interest in receiving the included information for research and educational purposes. For more information go to: http://www.law.cornell.edu/uscode/17/107.shtml

If you wish to use copyrighted material from this site for purposes of your own that go beyond ‘fair use’, you must obtain permission from the copyright owner.

FAIR USE NOTICE: This page may contain copyrighted material the use of which has not been specifically authorized by the copyright owner. This website distributes this material without profit to those who have expressed a prior interest in receiving the included information for scientific, research and educational purposes. We believe this constitutes a fair use of any such copyrighted material as provided for in 17 U.S.C. § 107.

Read more at: http://www.etupdates.com/fair-use-notice/#.UpzWQRL3l5M | ET. Updates

All Blogs licensed under Creative Commons Attribution 3.0

Saturday, December 18, 2010

Physics & Math / Subatomic Particles: Back from the Future

Discover Magazine. Science, Technology and The Future

Back From the Future

A series of quantum experiments shows that measurements performed in the future can influence the present. Does that mean the universe has a destiny—and the laws of physics pull us inexorably toward our prewritten fate?

by Zeeya Merali; photography by Adam Magyar

From the April 2010 issue; published online August 26, 2010

Also see the other articles in this issue's special Beyond Einstein section: Is the Search for Immutable Laws of Nature a Wild-Goose Chase and The Mystery of the Rocketing Particles That Shouldn't Exist.

Jeff Tollaksen may well believe he was destined to be here at this point in time. We’re on a boat in the Atlantic, and it’s not a pleasant trip. The torrential rain obscures the otherwise majestic backdrop of the volcanic Azorean islands, and the choppy waters are causing the boat to lurch. The rough sea has little effect on Tollaksen, barely bringing color to his Nordic complexion. This is second nature to him; he grew up around boats. Everyone would agree that events in his past have prepared him for today’s excursion. But Tollaksen and his colleagues are investigating a far stranger possibility: It may be not only his past that has led him here today, but his future as well.

Tollaksen’s group is looking into the notion that time might flow backward, allowing the future to influence the past. By extension, the universe might have a destiny that reaches back and conspires with the past to bring the present into view. On a cosmic scale, this idea could help explain how life arose in the universe against tremendous odds. On a personal scale, it may make us question whether fate is pulling us forward and whether we have free will.

The boat trip has been organized as part of a conference sponsored by the Foundational Questions Institute to highlight some of the most controversial areas in physics. Tollaksen’s idea certainly meets that criterion. And yet, as crazy as it sounds, this notion of reverse causality is gaining ground. A succession of quantum experiments confirms its predictions—showing, bafflingly, that measurements performed in the future can influence results that happened before those measurements were ever made.

As the waves pound, it’s tough to decide what is more unsettling: the boat’s incessant rocking or the mounting evidence that the arrow of time—the flow that defines the essential narrative of our lives—may be not just an illusion but a lie.

Tollaksen, currently at Chapman University in Orange County, California, developed an early taste for quantum mechanics, the theory that governs the motion of particles in the subatomic world. He skipped his final year of high school, instead attending physics lectures by the charismatic Nobel laureate Richard Feynman at Caltech in Pasadena and learning of the paradoxes that still fascinate and frustrate physicists today.

Primary among those oddities was the famous uncertainty principle, which states that you can never know all the properties of a particle at the same time. For instance, it is impossible to measure both where the particle is and how fast it is moving; the more accurately you determine one aspect, the less precisely you can measure the other. At the quantum scale, particles also have curiously split personalities that allow them to exist in more than one place at the same time—until you take a look and check up on them. This fragile state, in which the particle can possess multiple contradictory attributes, is called a superposition. According to the standard view of quantum mechanics, measuring a particle’s properties is a violent process that instantly snaps the particle out of superposition and collapses it into a single identity. Why and how this happens is one of the central mysteries of quantum mechanics.

“The textbook view of measurements in quantum mechanics is inspired by biology,” Tollaksen tells me on the boat. “It’s similar to the idea that you can’t observe a system of animals without affecting them.” The rain is clearing, and the captain receives radio notification that some dolphins have been spotted a few minutes away; soon we’re heading toward them. Our attempts to spy on these animals serve as the zoological equivalent of what Tollaksen terms “strong measurements”—the standard type in quantum mechanics—because they are anything but unobtrusive. The boat is loud; it churns up water as it speeds to the location. When the dolphins finally show themselves, they swim close to the boat, arcing through the air and playing to their audience. According to conventional quantum mechanics, it is similarly impossible to observe a quantum system without interacting with the particles and destroying the fragile quantum behavior that existed before you looked.

Most physicists accept these peculiar restrictions as part and parcel of the theory. Tollaksen was not so easily appeased. “I was smitten, and I knew there was no chance I was ever going to do anything else with my life,” he recalls. On Feynman’s advice, the teenager moved to Boston to study physics at MIT. But he missed the ocean. “For the first time in my life, I lost the background sound of surf,” he says. “That was actually traumatic.”

Mindful that a job in esoteric physics might not be the best way to put food on his family’s table, Tollaksen worked on a computing start-up company while pursuing his Ph.D. But if the young man wasn’t sure of his calling, fate quickly gave him a nudge when a physicist named Yakir Aharonov visited the neighboring Boston University. Aharonov, now at Chapman with Tollaksen, was renowned for having codiscovered a bizarre quantum mechanical effect in which particles can be affected by electric and magnetic fields, even in regions where those fields should have no reach. But Tollaksen was most taken by another area of Aharonov’s research: a time-twisting interpretation of quantum mechanics.

“Aharonov was one of the first to take seriously the idea that if you want to understand what is happening at any point in time, it’s not just the past that is relevant. It’s also the future,” Tollaksen says. In particular, Aharonov reanalyzed the indeterminism that forms the backbone of quantum mechanics. Before quantum mechanics arrived on the scene, physicists believed that the laws of physics could be used to determine the future of the universe and every object within it. By this thinking, if we knew the properties of every particle on the planet we could, in principle, calculate any person’s fate; we could even calculate all the thoughts in his or her head.

That belief crumbled when experiments began to reveal the indeterministic effects of quantum mechanics—for instance, in the radioactive decay of atoms. The problem goes like this, Tollaksen says: Take two radioactive atoms, so identical that “even God couldn’t see the difference between them.” Then wait. The first atom might decay a minute later, but the second might go another hour before decaying. This is not just a thought experiment; it can really be seen in the laboratory. There is nothing to explain the different behaviors of the two atoms, no way to predict when they will decay by looking at their history, and—seemingly—no definitive cause that produces these effects. This indeterminism, along with the ambiguity inherent in the uncertainty principle, famously rankled Einstein, who fumed that God doesn’t play dice with the universe.

It bothered Aharonov as well. “I asked, what does God gain by playing dice?” he says. Aharonov accepted that a particle’s past does not contain enough information to fully predict its fate, but he wondered, if the information is not in its past, where could it be? After all, something must regulate the particle’s behavior. His answer—which seems inspired and insane in equal measure—was that we cannot perceive the information that controls the particle’s present behavior because it does not yet exist.

“Nature is trying to tell us that there is a difference between two seemingly identical particles with different fates, but that difference can only be found in the future,” he says. If we’re willing to unshackle our minds from our preconceived view that time moves in only one direction, he argues, then it is entirely possible to set up a deterministic theory of quantum mechanics.

In 1964 Aharonov and his colleagues Peter Bergmann and Joel Lebowitz, all then at Yeshiva University in New York, proposed a new framework called time-symmetric quantum mechanics. It could produce all the same treats as the standard form of quantum mechanics that everyone knew and loved, with the added benefit of explaining how information from the future could fill in the indeterministic gaps in the present. But while many of Aharonov’s colleagues conceded that the idea was built on elegant mathematics, its philosophical implications were hard to swallow. “Each time I came up with a new idea about time, people thought that something must be wrong,” he says.

Perhaps because of the cognitive dissonance the idea engendered, time-symmetric quantum mechanics did not catch on. “For a long time, it was nothing more than a curiosity for a few philosophers to discuss,” says Sandu Popescu at the University of Bristol, in England, who works on the time-symmetric approach with Aharonov. Clearly Aharonov needed concrete experiments to demonstrate that actions carried out in the future could have repercussions in the here and now.

Through the 1980s and 1990s, Tollaksen teamed up with Aharonov to design such upside-down experiments, in which outcome was determined by events occurring after the experiment was done. Generally the protocol included three steps: a “preselection” measurement carried out on a group of particles; an intermediate measurement; and a final, “postselection” step in which researchers picked out a subset of those particles on which to perform a third, related measurement. To find evidence of backward causality—information flowing from the future to the past—the experiment would have to demonstrate that the effects measured at the intermediate step were linked to actions carried out on the subset of particles at a later time.

Tollaksen and Aharonov proposed analyzing changes in a quantum property called spin, roughly analogous to the spin of a ball but with some important differences. In the quantum world, a particle can spin only two ways, up or down, with each direction assigned a fixed value (for instance, 1 or –1). First the physicists would measure spin in a set of particles at 2 p.m. and again at 2:30 p.m. Then on another day they would repeat the two tests, but also measure a subset of the particles a third time, at 3 p.m. If the predictions of backward causality were correct, then for this last subset, the spin measurement conducted at 2:30 p.m. (the intermediate time) would be dramatically amplified. In other words, the spin measurements carried out at 2 p.m. and those carried out at 3 p.m. together would appear to cause an unexpected increase in the intensity of spins measured in between, at 2:30 p.m. The predictions seemed absurd, as ridiculous as claiming that you could measure the position of a dolphin off the Atlantic coast at 2 p.m. and again at 3 p.m., but that if you checked on its position at 2:30 p.m., you would find it in the middle of the Mediterranean.

And the amplification would not be restricted to spin; other quantum properties would be dramatically increased to bizarrely high levels too. The idea was that ripples of the measurements carried out in the future could beat back to the present and combine with effects from the past, like waves combining and peaking below a boat, setting it rocking on the rough sea. The smaller the subsample chosen for the last measurement, the more dramatic the effects at intermediate times should be, according to Aharonov’s math. It would be hard to account for such huge amplifications in conventional physics.
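The amplification Aharonov's math predicts can be seen in the standard weak-value expression, A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩, which blows up when the preselected state |ψ⟩ and the postselected state |φ⟩ are nearly orthogonal—exactly the "smaller subsample" situation described above. As a rough illustration (the particular states and the parameter eps below are invented for this sketch, not taken from the experiments in the article), a few lines of Python show a spin observable whose eigenvalues are only ±1 yielding a weak value near 200:

```python
import numpy as np

# Pauli z matrix: the spin observable, with eigenvalues +1 and -1
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def weak_value(pre, post, op):
    """Standard weak-value expression <post|op|pre> / <post|pre>."""
    return (post.conj() @ op @ pre) / (post.conj() @ pre)

# Hypothetical preselected state: spin up along x, (|up> + |down>)/sqrt(2)
pre = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Hypothetical postselected state chosen nearly orthogonal to it;
# the smaller eps, the rarer the postselection succeeds and the
# larger the amplification
eps = 0.01
post = np.array([1, eps - 1], dtype=complex)
post /= np.linalg.norm(post)

wv = weak_value(pre, post, sz)
print(wv.real)  # (2 - eps) / eps = 199, far outside the eigenvalue range [-1, +1]
```

The normalization factors cancel in the ratio, so the result reduces to (2 − eps)/eps: shrinking the postselected subsample (smaller eps) drives the intermediate reading arbitrarily high, just as the text describes.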

For years this prediction was more philosophical than physical because it did not seem possible to perform the suggested experiments. All the team’s proposed tests hinged on being able to make measurements of the quantum system at some intermediate time; but the physics books said that doing so would destroy the quantum properties of the system before the final, postselection step could be carried out. Any attempt to measure the system would collapse its delicate quantum state, just as chasing dolphins in a boat would affect their behavior. Use this kind of invasive, or strong, measurement to check on your system at an intermediate time, and you might as well take a hammer to your apparatus.

By the late 1980s, Aharonov had seen a way out: He could study the system using so-called weak measurements. (Weak measurements involve the same equipment and techniques as traditional ones, but the “knob” controlling the power of the observer’s apparatus is turned way down so as not to disturb the quantum properties in play.) In quantum physics, the weaker the measurement, the less precise it can be. Perform just one weak measurement on one particle and your results are next to useless. You may think that you have seen the required amplification, but you could just as easily dismiss it as noise or an error in your apparatus.

The way to get credible results, Tollaksen realized, was with persistence, not intensity. By 2002 physicists attuned to the potential of weak measurements were repeating their experiments thousands of times, hoping to build up a bank of data persuasively showing evidence of backward causality through the amplification effect.

Just last year, physicist John Howell and his team from the University of Rochester reported success. In the Rochester setup, laser light was measured and then shunted through a beam splitter. Part of the beam passed right through the mechanism, and part bounced off a mirror that moved ever so slightly, due to a motor to which it was attached. The team used weak measurements to detect the deflection of the reflected laser light and thus to determine how much the motorized mirror had moved.

That is the straightforward part. Searching for backward causality required looking at the impact of the final measurement and adding the time twist. In the Rochester experiment, after the laser beams left the mirrors, they passed through one of two gates, where they could be measured again—or not. If the experimenters chose not to carry out that final measurement, then the deflected angles measured in the intermediate phase were boringly tiny. But if they performed the final, postselection step, the results were dramatically different. When the physicists chose to record the laser light emerging from one of the gates, then the light traversing that route, alone, ended up with deflection angles amplified by a factor of more than 100 in the intermediate measurement step. Somehow the later decision appeared to affect the outcome of the weak, intermediate measurements, even though they were made at an earlier time.

This amazing result confirmed a similar finding reported a year earlier by physicists Onur Hosten and Paul Kwiat at the University of Illinois at Urbana-Champaign. They had achieved an even larger laser amplification, by a factor of 10,000, when using weak measurements to detect a shift in a beam of polarized light moving between air and glass.

For Aharonov, who has been pushing the idea of backward causality for four decades, the experimental vindication might seem like a time to pop champagne corks, but that is not his style. “I wasn’t surprised; it was what I expected,” he says.

Paul Davies, a cosmologist at Arizona State University in Tempe, admires the fact that Aharonov’s team has always striven to verify its claims experimentally. “This isn’t airy-fairy philosophy—these are real experiments,” he says. Davies has now joined forces with the group to investigate the framework’s implications for the origin of the cosmos (See “Does the Universe Have a Destiny?” below).

Vlatko Vedral, a quantum physicist at the University of Oxford, agrees that the experiments confirm the existence and power of weak measurements. But while the mathematics of the team’s framework offers a valid explanation for the experimental results, Vedral believes these results alone will not be enough to persuade most physicists to buy into the full time-twisting logic behind it.

For Tollaksen, though, the results are awe-inspiring and a bit scary. “It is upsetting philosophically,” he concedes. “All these experiments change the way that I relate to time, the way I experience myself.” The results have led him to wrestle with the idea that the future is set. If the universe has a destiny that is already written, do we really have a free choice in our actions? Or are all our choices predetermined to fit the universe’s script, giving us only the illusion of free will?

Tollaksen ponders the philosophical dilemma. Was he always destined to become a physicist? If so, are his scientific achievements less impressive because he never had any choice other than to succeed in this career? If I time-traveled back from the 21st century to the shores of Lake Michigan where Tollaksen’s 13-year-old self was reading the works of Feynman and told him that in the future I met him in the Azores and his fate was set, could his teenage self—just to spite me—choose to run off and join the circus or become a sailor instead?

The free will issue is something that Tollaksen has been tackling mathematically with Popescu. The framework does not actually suggest that people could time-travel to the past, but it does allow a concrete test of whether it is possible to rewrite history. The Rochester experiments seem to demonstrate that actions carried out in the future—in the final, postselection step—ripple back in time to influence and amplify the results measured in the earlier, intermediate step. Does this mean that when the intermediate step is carried out, the future is set and the experimenter has no choice but to perform the later, postselection measurement? It seems not. Even in instances where the final step is abandoned, Tollaksen has found, the intermediate weak measurement remains amplified, though now with no future cause to explain its magnitude at all.

I put it to Tollaksen straight: This finding seems to make a mockery of everything we have discussed so far.

Tollaksen is smiling; this is clearly an argument he has been through many times. The result of that single experiment may be the same, he explains, but remember, the power of weak measurements lies in their repetition. No single measurement can ever be taken alone to convey any meaning about the state of reality. Their inherent error is too large. “Your pointer will still read an amplified result, but now you cannot interpret it as having been caused by anything other than noise or a blip in the apparatus,” he says.

In other words, you can see the effects of the future on the past only after carrying out millions of repeat experiments and tallying up the results to produce a meaningful pattern. Focus on any single one of them and try to cheat it, and you are left with a very strange-looking result—an amplification with no cause—but its meaning vanishes. You simply have to put it down to a random error in your apparatus. You win back your free will in the sense that if you actually attempt to defy the future, you will find that it can never force you to carry out postselection experiments against your wishes. The math, Tollaksen says, backs him on this interpretation: The error range in single intermediate weak measurements that are not followed up by the required postselection will always be just enough to dismiss the bizarre result as a mistake.
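Tollaksen's point about repetition can be mimicked with a toy simulation (the numbers below are invented for illustration; real weak-measurement data are far subtler): each individual pointer reading is swamped by noise, but the average over many runs resolves the amplified shift, with the error bar shrinking as the square root of the number of runs.

```python
import numpy as np

rng = np.random.default_rng(0)

true_shift = 0.5     # the amplified signal being estimated (arbitrary units)
noise_sigma = 50.0   # per-run pointer noise, a hundred times larger
n_runs = 1_000_000

readings = true_shift + noise_sigma * rng.normal(size=n_runs)

# One run alone says nothing: its error bar dwarfs the signal,
# so any single amplified reading is indistinguishable from a glitch
single_run = readings[0]

# Averaging n_runs readings shrinks the error by sqrt(n_runs)
estimate = readings.mean()
std_error = noise_sigma / np.sqrt(n_runs)   # = 0.05 here

print(single_run)   # could land anywhere within roughly +/- 150
print(estimate)     # close to 0.5, resolved to about +/- 0.05
```

This is the sense in which no single measurement "can ever be taken alone to convey any meaning": only the tally over many repetitions separates the pattern from apparatus noise.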

Tollaksen sums up this confounding argument with one of his favorite quotes, from the ancient Jewish sage Rabbi Akiva: “All is foreseen; but freedom of choice is given.” Or as Tollaksen puts it, “I can have my cake and eat it too.” He laughs.

Here, finally, is the answer to Aharonov’s opening question: What does God gain by playing dice with the universe? Why must the quantum world always retain a degree of fuzziness when we try to look at it through the time slice of the present? That loophole is needed so that the future can exert an overall pull on the present, without ever being caught in the act of doing it in any particular instance.

“The future can only affect the present if there is room to write its influence off as a mistake,” Aharonov says.

Whether this realization is a masterstroke of genius that explains the mechanism for backward causality or an admission that the future’s influence on the past can never fully be proven is open to debate. Andrew Jordan, who designed the Rochester laser amplification experiment with Howell, notes that there is even fundamental controversy over whether his results support Aharonov’s version of backward causality. No one disputes his team’s straightforward experimental results, but “there is much philosophical thought about what weak values really mean, what they physically correspond to—if they even really physically correspond to anything at all,” Jordan says. “My view is that we don’t have to interpret them as a consequence of the future’s influencing the present, but rather they show us that there is a lot about quantum mechanics that we still have to understand.” Nonetheless, he is open to being convinced otherwise: “A year from now, I may well change my mind.”

Popescu argues that the Rochester findings are hugely important because they open the door to a completely new range of laboratory explorations based on weak measurements. In starting from the conventional interpretation of quantum mechanics, physicists had not realized such measurements were possible. “With his work on weak measurements, Aharonov began to pose questions about what is possible in quantum mechanics that nobody had ever even thought could be articulated,” Popescu says.

Aharonov remains circumspect. He has spent most of his adult life waiting for recognition of the merit of his theory. If it is destined that mainstream physics should finally take serious notice of his time-twisting ideas, then so it will be.

And Tollaksen? He too is at one with his destiny. A few months ago he moved to Laguna Beach, California. “I’m in a house where I can hear the surf again—what a relief,” he says. He feels that he is finally back to where he was always meant to be.


Does the Universe Have a Destiny?

Is feedback from the future guiding the development of life, the universe, and, well, everything? Paul Davies at Arizona State University in Tempe and his colleagues are investigating whether the universe has a destiny—and if so, whether there is a way to detect its eerie influence.

Cosmologists have long been puzzled about why the conditions of our universe—for example, its rate of expansion—provide the ideal breeding ground for galaxies, stars, and planets. If you rolled the dice to create a universe, odds are that you would not get one as handily conducive to life as ours is. Even if you could take life for granted, it’s not clear that 14 billion years is enough time for it to evolve by chance. But if the final state of the universe is set and is reaching back in time to influence the early universe, it could amplify the chances of life’s emergence.

With Alonso Botero at the University of the Andes in Colombia, Davies has used mathematical modeling to show that bookending the universe with particular initial and final states affects the types of particles created in between. “We’ve done this for a simplified, one-dimensional universe, and now we plan to move up to three dimensions,” Davies says. He and Botero are also searching for signatures that the final state of the universe could retroactively leave on the relic radiation of the Big Bang, which could be picked up by the Planck satellite launched last year.

Ideally, Davies and Botero hope to find a single cosmic destiny that can explain three major cosmological enigmas. The first mystery is why the expansion of the universe is currently speeding up; the second is why some cosmic rays appear to have energies higher than the bounds of normal physics allow; and the third is how galaxies acquired their magnetic fields. “The goal is to find out whether Mother Nature has been doing her own postselections, causing these unexpected effects to appear,” Davies says.

Bill Unruh of the University of British Columbia in Vancouver, a leading physicist, is intrigued by Davies’s idea. “This could have real implications for whatever the universe was like in its early history,” he says.

It's About Time: The Scientific Evidence for Psi Experiences

The Huffington Post

Cassandra Vieten

Posted: December 17, 2010 09:01 AM

OK readers, later in this article, I'm going to use an example that will involve either a garden, a sailboat, a running man or a train. Can you accurately guess which one? In a forthcoming issue of the Journal of Personality and Social Psychology (JPSP), Cornell psychology professor Daryl Bem has published an article that suggests you can, possibly more often than the 25 percent of the time on average you might expect just by chance.

Entitled "Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect," the paper presents evidence from nine experiments involving over 1,000 subjects suggesting that events in the future may influence events in the past -- a concept known as "retrocausation." In some of the experiments, students were able to guess at future events at levels of accuracy beyond what would be expected by chance. In others, events that took place in the future appeared to influence those in the past, such as one in which rehearsing a list of words enhanced recall of those words, with the twist that the rehearsal took place after the test of recall.
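Claims like Bem's ultimately come down to counting hits against a chance baseline—here 25 percent for a four-way guess. A toy binomial calculation (the trial and hit counts below are invented for illustration, not Bem's actual data) shows why sample size matters: a hit rate a few points above chance is only mildly surprising over 100 trials, but the same rate sustained over 1,000 trials almost never happens by luck.

```python
from math import comb

def p_at_least(k, n, p=0.25):
    """Chance of scoring k or more hits in n four-way guesses by luck alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: the same 33% hit rate at two sample sizes
print(p_at_least(33, 100))    # a few percent: suggestive, easy to dismiss
print(p_at_least(330, 1000))  # vanishingly small: hard to write off as luck
```

This is why the large subject counts, and the prospect of replication, carry more weight than any single striking session.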

As Director of Research at the Institute of Noetic Sciences, where, among other things, we study experiences that seem to transcend the usual boundaries of time or space (generically called "psi" experiences), I've already received a slew of comments and queries regarding the pre-print of the article that is making the rounds.

The comments range from, "Wow, that's amazing!" to, "That's not possible -- there must be some mistake." But most responses are along the lines of "Hello?? This isn't news. Hundreds of articles reporting significant results on psi experiments have already been published in dozens of academic journals. What's the big deal?"

So what is notable about the current publication? To begin, Bem is not just any psychologist; he is one of the most prominent psychologists in the world (he was probably mentioned in your Psych 101 textbook, and may even have co-authored it). And JPSP is not just any journal; it sits atop the psychology journal heap, and the article, especially given its premise, was subjected to rigorous peer review (in which scientific colleagues critique the article and decide whether it is worthy of publication). Also, Bem intentionally adopted well-accepted research protocols that are simple and replicable, albeit with a few key twists: they don't require lots of special equipment, and the analyses are straightforward. Even so, whether the larger scientific community will pay attention to this study remains to be seen.

This raises the question: Why is the existing literature on psi phenomena routinely dismissed by the scientific community and virtually ignored within the broader academic community? As science journalist Jonah Lehrer says about research findings on psi phenomena, "They've been demonstrated dozens of times, often by reputable scientists ... Why, then, do serious scientists dismiss the possibility of psi? Why do rational people assume that parapsychology is bullshit? Because these exciting results have consistently failed the test of replication."

Such assertions drive some of my colleagues crazy; they point to a large body of literature in which psi experiments have been replicated numerous times over many decades, involving dozens of independent scientists and thousands of subjects, and published in peer-reviewed journals. Still, the majority of the scientific community has largely dismissed the concept of psi -- no matter how reputable the investigator or prestigious his or her affiliation -- as frivolous, artifactual, not replicable, or having effect sizes so small as to be meaningless regardless of statistical significance. Worse, skeptics accuse psi researchers of being outright fraudulent, or well-meaning but delusional. Young scientists are regularly advised to stay far away from studying psi and warned about the ATF (the anti-tenure factor) associated with such interests. Senior scientists, including Nobel laureates, have been known to be disinvited from giving talks when their interest in psi is discovered. Even religious scholars, who make it their business to examine the spiritual aspects of human experience, have trouble with psi.

With respect to effect sizes, yes, if you look at the results of lots of studies combined, psi effects are statistically significant, though small. However, a double standard is applied to the potential importance of small effects. The effect sizes reported in Bem's and many previous psi studies were frequently much larger than the effect sizes associated with many well-accepted scientific facts, like taking aspirin to prevent heart attacks, for example, or the risks of blood clots from taking Tamoxifen.

More importantly, though, even if we were to agree that "size does matter" and that these effects are generally small, let's remember that it shouldn't be possible to peer into the future at all, even a little, given what we generally understand about how the world works. Time is only supposed to go one way. Perception is supposed to be limited to the past or the present and only to those phenomena immediately and locally accessible by our five senses. When exceptions to these rules are observed, particularly under controlled laboratory conditions, they deserve a closer look.

Take running the four-minute mile. If we as scientists had studied even thousands of people in the 1950s, we might have concluded that running a four-minute mile was not humanly possible. Over time, however, it was found that a few people could actually do it -- an extremely small effect to be sure, but these anomalies proved that it was, in fact, possible. Not only do we now know that running a four-minute mile is possible, it is the standard for professional middle-distance runners (for those of you paying attention, that was the example with the running man).

Perhaps the oft-quoted maxim "extraordinary claims require extraordinary evidence" should be accompanied by a counter-maxim: "extraordinary anomalies deserve special attention." For example, a new drug that relieved depression in one out of 100 people might not be worth a second glance, but if a new drug were claimed to cure AIDS in one out of 100 patients, it would justify further examination. When evidence runs contrary to prior probabilities, it calls for special consideration, not a knee-jerk dismissal.

As for replication, as noted earlier, psi proponents argue that there have been numerous replications -- often far more than many other scientifically supported "facts" that are taken for granted. Indeed, scientists familiar with this area of research view Bem's studies as clever conceptual replications that rest upon a large body of previous work. These scientists are now going beyond the idea of mere existence of these effects and forging ahead into studying what conditions may enhance them -- inherent individual traits, training, genetics? In small, underfunded labs around the world, scientists are working to improve research designs, measures and methods to better study psi.

There is also a growing recognition that it might not be as simple as developing one good experiment and then replicating it to death. An article published in the Dec. 13, 2010 issue of The New Yorker highlights a phenomenon that is well known to scientists, not only in the field of psi but across many disciplines: initial experiments can show very strong results, but when the experiments are repeated again and again, the effects can decline. Gamblers may recognize this phenomenon as "beginner's luck." Of course this isn't true for all natural phenomena: drop a rock and it will head toward the ground pretty much every time. But for more complex phenomena, we may need to contend with the "decline effect," along with observer effects and other design and measurement complexities.

Does this mean that the effects aren't real and that these topics are inherently "unscientific" and shouldn't be studied? Of course not. Recall that in the early 19th century it took Faraday years to convince his colleagues of electromagnetic induction, and he still did not live to see the validation of his idea that electromagnetic forces extend into the empty space around a conductor. Many research topics are extremely complex, requiring decades of research and all kinds of new measures, methods, controls and technologies to adequately explore them. Cancer remains a profound mystery despite the efforts of tens of thousands of scientists and billions of dollars spent looking for a cure. Sequencing the human genome was a vast and complicated undertaking. Even "evidence-based" drugs for treating depression, on which a multi-billion-dollar industry is based, are being called into question as not much better than a placebo after all. Unless the object of study is extremely simple, science is mostly a long, winding, painstaking, incremental and challenging pursuit.

Problems with fluctuating effect sizes, experimenter effects, finding adequate controls and so on are inherent in studying phenomena with complex interactions and poorly understood mechanisms. So I don't think we can attribute resistance to evidence for psi to these, nor can we blame complexities of measurement, difficulties with replication or even the challenge of pinning down an underlying theory. I think it is fear that some of our most cherished beliefs about how the world works, and about who and what we are, may be wrong. On a deeper level, there may be a collective, protracted, post-traumatic stress disorder left over from the period in human history when reliance on blind faith in supernatural explanations of reality led to a very dark time: priests determined what was true, and rational thought and systematic observation were prohibited.

Bem's article and its supporting body of literature, combined with serious discussions of retrocausation in physics, suggest that retrocausation in human experience may indeed be possible. But the real significance of the article lies in the fact that the dialogue about psi has been brought once again into the arena of intelligent debate in a public forum, where it deserves to be. While a long period of cautiousness regarding the commingling of science and anything considered supernatural -- like perceiving the future or the impact of consciousness on physical systems -- has been an understandable and adaptive response, surely we can trust ourselves in the 21st century to examine these issues intelligently without losing our heads. Such examination may lead to radical revisions in our understanding of how the world works and our human potentials.

Saturday, February 20, 2010

Brain at the breaking point: the mechanics behind traumatic brain injuries

Study that stretched and strained neural connections could yield insights into traumatic injury


[Image: A broken axon. Sudden forces cause microtubules running inside axons to break (arrow), leading to axon swelling and damage, a new study shows. The work may have implications for understanding traumatic brain injury. Credit: D. Smith]

SAN DIEGO — Rigid pathways in brain cell connections buckle and break when stretched, scientists report, a finding that could aid in the understanding of exactly what happens when traumatic brain injuries occur.

Up to 20 percent of combat soldiers and an estimated 1.4 million U.S. civilians sustain traumatic brain injuries each year. But the mechanics behind these injuries have remained mysterious.

New research, described February 19 at the annual meeting of the American Association for the Advancement of Science, suggests exactly how a blow to the brain disrupts this complex organ.

The brain “is not like the heart. If you lose a certain percentage of your heart muscle, then you’ll have a certain cardiac output,” says Geoffrey Manley, a neurologist at the University of California, San Francisco. Rather, the brain is an organ of connections. Car crashes, bomb blasts and falls can damage these intricate links, and even destroying a small number of them can cause devastating damage.

“You can have very small lesions in very discrete pathways which can have phenomenal impact,” says Manley, who did not participate in the study. One of the challenges brain injury researchers face, he says, is that “we’re not really embracing this idea of functional connectivity.”

Recently, researchers have found that sudden blows can damage the long fibers, called axons, that extend from brain cells, sometimes breaking the links between brain cells. But researchers didn’t know exactly what inside the axon snapped. The new research, conducted by Douglas Smith of the University of Pennsylvania and colleagues, finds that tiny tracks called microtubules inside axons are damaged by forces similar to those that cause traumatic brain injury.

Microtubules extend down the length of axons and serve as “superhighways of protein transfer,” says Smith. Brain cells rely on microtubules to move important cellular material out to the end of the axons. When Smith and colleagues quickly stretched brain cells growing on a silicone membrane, the microtubules inside the axons immediately buckled and broke, spilling their contents. “This disconnection at various discrete points spells disaster, and things are just dumped out at that site,” Smith says. “Microtubules are the stiffest component in axons, and they can’t tolerate that rapid, dynamic stretch.”

Smith points out that the duration of the stress applied is crucial to how well the axons — and microtubules — withstand damage. Like Silly Putty pulled apart slowly, axons can adjust to gradual stretching, Smith says. But sudden forces, like those that happen in blasts and car crashes, would cause the Silly Putty to snap.

In their lab dish experiments with brain cells on silicone, the researchers were able to minimize microtubule damage with a drug called Taxol, commonly used to treat cancer. But it’s too early to say whether the drug would work in people with traumatic brain injuries.

Figuring out exactly what happens in traumatic brain injuries could lead to new ways to help patients, Manley says. Currently, traumatic brain injury research is in “the abyss between bench and bedside,” he says.

Thursday, February 18, 2010

Healing touch: the key to regenerating bodies

New Scientist



YOU started life as a single cell. Now you are made of many trillions. There are more cells in your body than there are stars in the galaxy. Every day billions of these cells are replaced. And if you hurt yourself, billions more cells spring up to repair broken blood vessels and make new skin, muscle or even bone.

Even more amazing than the staggering number of cells, though, is the fact that, by and large, they all know what to do - whether to become skin or bone and so on. The question is, how?

"Cells don't have eyes or ears," says Dennis Discher, a biophysical engineer at the University of Pennsylvania in Philadelphia. "If you were blind and deaf, you'd get around by touch and smell. You'd feel a soft chair to sit on, a hard wall to avoid, or whether you're walking on carpet or concrete."

Until recently, the focus was all on "smell": that is, on how cells respond to chemical signals such as growth factors. Biologists thought of cells as automatons that blindly followed the orders they were given. In recent years, however, it has started to become clear that the sense of touch is vital as well, allowing cells to work out for themselves where they are and what they should be doing. Expose stem cells to flowing fluid, for instance, and they turn into blood vessels.


What is emerging is a far more dynamic picture of growth and development, with a great deal of interplay between cells, genes and our body's internal environment. This may explain why exercise and physical therapy are so important to health and healing - if cells don't get the right physical cues when you are recovering from an injury, for instance, they won't know what to do. It also helps explain how organisms evolve new shapes - the better cells become at sensing what they should do, the fewer genetic instructions they need to be given.

The latest findings are also good news for people who need replacement tissues and organs. If tissue engineers can just provide the right physical environment, it should make it easier to transform stem cells into specific tissues and create complex, three-dimensional organs that are as good as the real thing. And doctors are already experimenting with ways of using tactile cues to improve wound healing and regeneration.

Biologists have long suspected that mechanical forces may help shape development. "A hundred years ago, people looked at embryos and saw that it was an incredibly physical process," says Donald Ingber, head of Harvard University's Wyss Institute for Biologically Inspired Engineering. "Then when biochemistry and molecular biology came in, the baby was thrown out with the bath water and everybody just focused on chemicals and genes."

While it was clear that physical forces do play a role - for example, astronauts living in zero gravity suffer bone loss - until recently there was no way to measure and experiment with the tiny forces experienced by individual cells. Only in the past few years, as equipment like atomic force microscopes has become more common, have biologists, physicists and tissue engineers begun to get to grips with how forces shape cells' behaviour.

One of the clearest examples comes from Discher and his colleagues, who used atomic force microscopy to measure the stiffness of a variety of tissues and gel pads. Then they grew human mesenchymal stem cells - the precursors of bone, muscle and many other tissue types - on the gels. In each case, the cells turned into the tissue that most closely matched the stiffness of the gel.

The softest gels, which were as flabby as brain tissue, gave rise to nerve cells. In contrast, gels that were 10 times stiffer - like muscle tissue - generated muscle cells, and yet stiffer gels gave rise to bone (Cell, vol 126, p 677). "What's surprising is not that there are tactile differences between one tissue and another," says Discher. After all, doctors rely on such differences every time they palpate your abdomen. "What's surprising is that cells feel that difference."

The details of how they do this are now emerging. Most cells other than blood cells live within a fibrous extracellular matrix. Each cell is linked to this matrix by proteins in its membrane called integrins, and the cell's internal protein skeleton is constantly tugging on these integrins to create a taut, tuned whole. "There's isometric tension that you don't see," says Ingber. In practice, this means changes in external tension - such as differences in the stiffness of the matrix, or the everyday stresses and strains of normal muscle movement - can be transmitted into the cell and ultimately to the nucleus, where they can direct the cell's eventual fate.

Since stem cells have yet to turn into specific cell types, biologists expected them to be extra sensitive to the environment, and this does indeed seem to be the case. Ning Wang, a bioengineer at the University of Illinois at Urbana-Champaign, found that the embryonic stem cells of mice are much softer than other, more specialised cells. This softness means that tiny external forces can deform the cells and influence their development (Nature Materials, vol 9, p 82).

For instance, if stem cells are exposed to flowing fluid, they turn into the endothelial cells that line the inner surface of blood vessels. In fact, fluid flow - particularly pulses that mimic the effect of a beating heart - is proving crucial for growing replacement arteries in the laboratory. The rhythmic stress helps align the fibres of the developing artery, making them twice as strong, says Laura Niklason, a tissue engineer at Yale University. A biotech company Niklason founded, called Humacyte, has begun animal testing on arteries grown this way.

Surprisingly, pulsatile motion can help heal injuries in situ too. At Harvard, Ingber and his colleague Dennis Orgill are treating patients with difficult-to-heal wounds by implanting a small sponge in the wound and connecting this to a pump. The pump sucks the cells surrounding the wound in and out of the sponge's pores, distorting them by about 15 to 20 per cent - an almost ideal stimulus for inducing the cells to grow and form blood vessels and thus boost the healing process, says Ingber.

Meanwhile, tissue engineers are finding that they can grow far better bone and cartilage by mimicking the stresses that the tissues normally experience in the body. For instance, human cartilage grown in the lab is usually nowhere near as strong as the real thing. Recently, however, Clark Hung, a biomedical engineer at Columbia University in New York City, has grown cartilage that matches its natural counterpart strength for strength. The secret, he has found, is rhythmically squeezing the cartilage as it grows to mimic the stress of walking.


Hung says this is partly because the pressure helps to pump nutrients into cartilage, which has no blood vessels. But his experiments suggest that the loading alone also plays an important role. His team hopes the engineered cartilage will eventually be used to resurface arthritic human joints.

Even relatively mild stresses make a big difference. Attempts to grow replacement bone by placing stem cells in a culture chamber of the desired shape have not been very successful, with the cells often dying or producing only weak bone. But Gordana Vunjak-Novakovic, a biomedical engineer also at Columbia, has found that mimicking the internal flow of fluid that growing bones normally experience helps maximise strength. Last year, her team used this approach to successfully grow a replica of part of the temporomandibular joint in the jaw from human stem cells, producing a naturally shaped, fully viable bone after just five weeks.

"If you don't stimulate bone cells, they don't do much," says Vunjak-Novakovic. "But if you do, they wake up and start making bone at a higher rate."

There is still a long way to go, however. The replica bone lacks the thin layer of cartilage that lines the real bone, and it also lacks a blood supply, so it begins to starve as soon as it is removed from the culture chamber.

Again, though, the answer could be to provide the cells with the right physical cues. For example, Vunjak-Novakovic has used lasers to drill channels in the scaffolds used to grow heart muscle in the lab. When fluid begins flowing through these channels, endothelial cells move in to line the channels while muscle cells move away. "Each of the cells will find its own niche," she says. Her team is now testing to see whether stem cells will turn into endothelial cells in the channels and into muscle cells elsewhere. Early results suggest that they will.

Even small differences in forces can influence development. Christopher Chen of the University of Pennsylvania grew flat sheets of mesenchymal stem cells and exposed them to a mixture of growth factors for bone and marrow development. The cells on the edges of the sheets, which were exposed to the greatest stresses, turned into bone cells, while those in the middle turned into the fat cells found in marrow, as in real bone (Stem Cells, vol 26, p 2921).

If this kind of sorting-out according to physical forces is widespread in development, it could be very good news for tissue engineers. Instead of having to micromanage the process of producing a replacement organ, they need only to provide the right cues and let the cells do the rest.


Indeed, it makes a lot of sense for some developmental decisions to be "devolved" to cells. The growth of tissues like muscles, bone, skin and blood vessels has to be coordinated as our bodies develop and adapt to different activities and injuries. A rigid genetic programme could easily be derailed, whereas using tactile cues as guides allows tissues to adapt quickly as conditions change - for instance, carrying heavy loads will make our bones grow stronger.

This kind of plasticity may play a vital role in evolution as well as during the lifetime of individuals. When the ancestors of giraffes acquired mutations that made their necks longer, for instance, they did not have to evolve a whole new blueprint for making necks. Instead, the nerves, muscles and skin would have grown proportionately without needing further changes in instructions. The result of this plasticity is a developmental programme that is better able to cope with evolutionary changes, says Ingber.

There is, however, a drawback. When disease or injury changes the stiffness of a tissue, things can go awry. Some researchers suspect that tissue stiffening plays a role in multiple sclerosis, in which nerves lose their protective myelin sheath (Journal of Biology, vol 8, p 78). It may also play a role in some cancers (see "Lumps and bumps").

It could also explain why many tissues fail to heal perfectly after an injury. To prevent infection, the body needs to patch up wounds as quickly as possible. So it uses a form of collagen that is easier to assemble than the normal one. "It's a quick patch, things are sealed off and you go on - but it's not perfect regeneration," says Discher. The quick-fix collagen is stiffer than normal tissue, as anyone with a large scar will tell you.

After a heart attack, for example, the dead portion of the heart muscle scars over. Why, Discher wondered, don't heart muscle cells then replace the scar tissue? To find out, he and his colleagues grew embryonic heart cells on matrixes of differing stiffness. When the matrix was the same stiffness as healthy heart muscle, the cells grew normally and beat happily. But if the matrix was as stiff as scar tissue, the cells gradually stopped beating (Journal of Cell Science, vol 121, p 3794).

The constant work of trying to flex the stiffer matrix wears the cells out, Discher thinks. "It's like pushing on a brick wall. Finally, they give up."

Discher believes the solution may lie in finding a way to soften the scar tissue so that heart cells can repopulate it. Several enzymes, such as matrix metalloproteinases and collagenases, might do the job, but overdoing it could be risky. "If you degrade the matrix too much, you lose the patch," he warns.

The stiffness of scar tissue may also prevent regeneration in nerve injury, because nerve cells prefer the softest of surroundings. "It might just be that the growing tip of the axon senses that there's a stiff wall ahead of it and doesn't grow through because of that," speculates Jochen Guck, a biophysicist at the University of Cambridge in the UK.

There is still a long way to go before we fully understand how cells sense and respond to the forces on them. But it is becoming clear that the touchy-feely approach could be the key to regenerating the body.

Lumps and bumps

Many tumours are stiffer than the tissues in which they form - after all, doctors often first detect many cancers of organs such as the breast and prostate by feeling a hard lump. Some researchers now suspect that this stiffness is not always just a consequence of the cancer. It may be a cause as well.

A team led by Paul Janmey, a biophysicist at the University of Pennsylvania in Philadelphia, has found that the cycle of cell division in breast cells stops when they are grown on a soft gel, keeping them in a quiescent state (Current Biology, vol 19, p 1511). Anything that signals stiffness - even just touching a cell with a rigid probe - can be enough to start it dividing again.

Similarly, when Valerie Weaver, a cancer biologist at the University of California at San Francisco, and her team used chemicals to soften the extracellular matrix in which breast cells were growing in the lab they found the cells were less likely to become malignant (Cell, vol 139, p 891). If her findings are confirmed, they could explain why women with denser breast tissue are more likely to develop breast cancer.

Some researchers, too, have reported seeing tumours form around the scars from breast-implant surgery. "This needs to be looked at again," says Weaver. If the link is confirmed, it might be possible to block tumour growth by interfering with the way cells detect stiffness.

Bob Holmes is a consultant for New Scientist based in Edmonton, Canada

Chromosome caps presage the brain's decline; longer telomeres associated with multivitamin use

New Scientist


by Anil Ananthaswamy

Chromosome caps presage the brain's decline


A SIGN of a cell's age could help predict the onset of dementia. Elderly people are more likely to develop cognitive problems if their telomeres - the stretches of DNA that cap the ends of chromosomes - are shorter than those of their peers.

The shortening of telomeres is linked to reduced lifespan, heart disease and osteoarthritis. Telomeres naturally shorten with age as cells divide, but also contract when cells experience oxidative damage linked to metabolism. Such damage is associated with cognitive problems like dementia. Thomas von Zglinicki at Newcastle University, UK, showed in 2000 that people with dementia not caused by Alzheimer's tended to have shorter telomeres than people without dementia.

To see if healthy individuals with short telomeres are at risk of developing dementia, Kristine Yaffe at the University of California, San Francisco, and colleagues followed 2734 physically fit adults with an average age of 74.

Yaffe's team tracked them for seven years and periodically assessed memory, language, concentration, attention, motor and other skills. At the start, the researchers measured the length of telomeres in blood cells and grouped each person according to short, medium or long telomeres.

After accounting for differences in age, race, sex and education, the researchers found that those with long telomeres experienced less cognitive decline compared to those with short or medium-length telomeres (Neurobiology of Aging, DOI: 10.1016/j.neurobiolaging.2009.12.006).
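The basic design of this kind of analysis -- group participants into thirds by telomere length, then compare average decline per group -- can be sketched as follows. The data and the decline scores below are invented for illustration; they are not taken from Yaffe's study.

```python
# Hedged sketch of a tertile analysis like the one described above:
# split people into short/medium/long telomere groups, then compare the
# mean cognitive-decline score in each group. All numbers are made up.
def tertile_groups(records):
    """Split (telomere_length, decline) records into thirds by length."""
    ordered = sorted(records, key=lambda r: r[0])
    n = len(ordered)
    return {
        "short": ordered[: n // 3],
        "medium": ordered[n // 3 : 2 * n // 3],
        "long": ordered[2 * n // 3 :],
    }

def mean_decline(group):
    """Average decline score across a group of records."""
    return sum(decline for _, decline in group) / len(group)

# Invented example data: (telomere length in kilobases, decline score).
records = [
    (4.9, 8.0), (5.1, 7.5), (5.4, 7.0),
    (5.8, 6.0), (6.0, 5.5), (6.2, 5.0),
    (6.6, 3.5), (6.9, 3.0), (7.2, 2.5),
]

groups = tertile_groups(records)
for name in ("short", "medium", "long"):
    print(name, round(mean_decline(groups[name]), 2))
```

The actual study additionally adjusted for age, race, sex and education before comparing groups, which this toy sketch does not attempt.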

Von Zglinicki calls the work a "carefully done, large study", but notes that short telomeres by themselves are not enough to predict whether an individual will get dementia.

The key, says Ian Deary at the University of Edinburgh, UK, will be to combine telomere length with other biomarkers. "Most likely, longer telomere length will become one ingredient in a recipe for successful mental and bodily ageing."



A study conducted by researchers at the National Institutes of Health has provided the first epidemiologic evidence that the use of multivitamins by women is associated with longer telomeres: the protective caps at the ends of chromosomes that shorten with the aging of a cell. The study was reported online on March 11, 2009 in the American Journal of Clinical Nutrition.

Telomere length has been proposed as a marker of biological aging. Shorter telomeres have been linked with higher mortality within a given period of time and an increased risk of some chronic diseases.

For the current research, Honglei Chen and colleagues evaluated 586 participants aged 35 to 74 in the Sister Study, an ongoing prospective cohort of healthy sisters of breast cancer patients. Dietary questionnaires completed upon enrollment collected information concerning food and nutritional supplement intake. Stored blood samples were analyzed for leukocyte (white blood cell) DNA telomere length.

Sixty-five percent of the participants reported using multivitamin supplements at least once per month, and 74 percent of these users consumed them daily. Eighty-nine percent of all multivitamin users took one-a-day multivitamin formulas, 21 percent took antioxidant combinations, and 17 percent were users of "stress-tabs" or B-complex vitamins.

The researchers found 5.1 percent longer telomeres on average in daily users of multivitamins compared with nonusers. Increased telomere length was associated with one-a-day and antioxidant formula use, but not with stress-tabs or B-complex vitamins. Individual vitamin B12 supplements were associated with longer telomeres, and iron supplements with shorter telomeres. When nutrients from food were analyzed, vitamins C and E emerged as protective against telomere loss.

In their discussion of the findings, the authors explain that telomeres are particularly vulnerable to oxidative stress. Additionally, inflammation induces oxidative stress and lowers the activity of telomerase, the enzyme responsible for maintaining telomeres. Because dietary antioxidants, B vitamins and specific minerals can help reduce oxidative stress and inflammation, they may be useful for the maintenance of telomere length. In fact, vitamins C and E have been shown in cell cultures to retard telomere shortening and increase cellular life span.

"Our study provides preliminary evidence linking multivitamin use to longer leukocyte telomeres," the authors conclude. "This finding should be further evaluated in future epidemiologic studies, and its implications concerning aging and the etiology of chronic diseases should be carefully evaluated."

Iran showing fastest scientific growth of any country

by Debora MacKenzie

It might be the Chinese year of the tiger, but scientifically, 2010 is looking like Iran's year.

Scientific output has grown 11 times faster in Iran than the world average, faster than any other country. A survey of the number of scientific publications listed in the Web of Science database shows that growth in the Middle East – mostly in Turkey and Iran – is nearly four times faster than the world average.

Science-Metrix, a data-analysis company in Montreal, Canada, has published a detailed report (PDF) on "geopolitical shifts in knowledge creation" since 1980. "Asia is catching up even more rapidly than previously thought, Europe is holding its position more than most would expect, and the Middle East is a region to watch," says the report's author, Eric Archambault.

World scientific output grew steadily, from 450,000 papers a year in 1980 to 1,500,000 in 2009. Asia as a whole surpassed North America last year.
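Those two endpoints imply a fairly steady compound growth rate, which is easy to check; the calculation below uses only the 450,000 and 1,500,000 figures quoted above.

```python
# Compound annual growth rate implied by the world output figures above:
# 450,000 papers a year in 1980 growing to 1,500,000 a year in 2009.
papers_1980 = 450_000
papers_2009 = 1_500_000
years = 2009 - 1980  # 29 years

# CAGR: the constant yearly rate that turns the 1980 figure into the 2009 one.
cagr = (papers_2009 / papers_1980) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 4 per cent growth per year
```

A country growing "11 times faster" than a baseline of about 4 per cent a year is doubling its output every couple of years, which is what makes the Iran figure striking.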

Nuclear, nuclear, nuclear

Archambault notes that Iran's publications have emphasised inorganic and nuclear chemistry, nuclear and particle physics and nuclear engineering. Publications in nuclear engineering grew 250 times faster than the world average – although medical and agricultural research also increased.

Science-Metrix also predicts that this year, China will publish as many peer-reviewed papers in natural sciences and engineering as the US. If current trends continue, by 2015 China will match the US across all disciplines – although the US may publish more in the life and social sciences until 2030.

China's prominence in world science is known to have been growing, but Science-Metrix has discovered that its output of peer-reviewed papers has been growing more than five times faster than that of the US.


Meanwhile, "European attitudes towards collaboration are bearing fruit", writes Archambault. While Asia's growth in output was mirrored by North America's fall, Europe, which invests heavily in cross-border scientific collaboration, held its own and now produces over a third of the world's science, the largest regional share. Asia produces 29 per cent and North America 28 per cent.

Scientific output fell in the former Soviet Union after its collapse in 1991 and only began to recover in 2006. Latin America and the Caribbean together grew fastest of any region, although their share of world science is still small. Growth in Oceania, Europe and Africa has stayed at about the same rate over the past 30 years. Only North American scientific output has grown "considerably slower" than the world as a whole.

"The number of papers is a first-order metric that doesn't capture quality," admits Archambault. There are measures for quality, such as the number of times papers are cited, and "Asian science does tend to be less cited overall".

But dismissing the Asian surge on this basis is risky, he feels. "In the 1960s, when Japanese cars started entering the US market, US manufacturers dismissed their advance based on their quality" – but then lost a massive market share to Japan. The important message, he says, is that "Asia is becoming the world leader in science, with North America progressively left behind".


Wednesday, February 17, 2010


The Third Culture


By Freeman Dyson


FREEMAN DYSON is professor of physics at the Institute for Advanced Study, in Princeton. His professional interests are in mathematics and astronomy. Among his many books are Disturbing the Universe, Infinite in All Directions, Origins of Life, From Eros to Gaia, Imagined Worlds, and The Sun, the Genome, and the Internet. His most recent book, A Many-Colored Glass: Reflections on the Place of Life in the Universe (Page-Barbour Lectures), is being published this month by University of Virginia Press.

1. The Need for Heretics

In the modern world, science and society often interact in a perverse way. We live in a technological society, and technology causes political problems. The politicians and the public expect science to provide answers to the problems. Scientific experts are paid and encouraged to provide answers. The public does not have much use for a scientist who says, “Sorry, but we don’t know”. The public prefers to listen to scientists who give confident answers to questions and make confident predictions of what will happen as a result of human activities. So it happens that the experts who talk publicly about politically contentious questions tend to speak more clearly than they think. They make confident predictions about the future, and end up believing their own predictions. Their predictions become dogmas which they do not question. The public is led to believe that the fashionable scientific dogmas are true, and it may sometimes happen that they are wrong. That is why heretics who question the dogmas are needed.

As a scientist I do not have much faith in predictions. Science is organized unpredictability. The best scientists like to arrange things in an experiment to be as unpredictable as possible, and then they do the experiment to see what will happen. You might say that if something is predictable then it is not science. When I make predictions, I am not speaking as a scientist. I am speaking as a story-teller, and my predictions are science-fiction rather than science. The predictions of science-fiction writers are notoriously inaccurate. Their purpose is to imagine what might happen rather than to describe what will happen. I will be telling stories that challenge the prevailing dogmas of today. The prevailing dogmas may be right, but they still need to be challenged. I am proud to be a heretic. The world always needs heretics to challenge the prevailing orthodoxies. Since I am a heretic, I am accustomed to being in the minority. If I could persuade everyone to agree with me, I would not be a heretic.

We are lucky that we can be heretics today without any danger of being burned at the stake. But unfortunately I am an old heretic. Old heretics do not cut much ice. When you hear an old heretic talking, you can always say, “Too bad he has lost his marbles”, and pass on. What the world needs is young heretics. I am hoping that one or two of the people who read this piece may fill that role.

Two years ago, I was at Cornell University celebrating the life of Tommy Gold, a famous astronomer who died at a ripe old age. He was famous as a heretic, promoting unpopular ideas that usually turned out to be right. Long ago I was a guinea-pig in Tommy’s experiments on human hearing. He had a heretical idea that the human ear discriminates pitch by means of a set of tuned resonators with active electromechanical feedback. He published a paper explaining how the ear must work, [Gold, 1948]. He described how the vibrations of the inner ear must be converted into electrical signals which feed back into the mechanical motion, reinforcing the vibrations and increasing the sharpness of the resonance. The experts in auditory physiology ignored his work because he did not have a degree in physiology. Many years later, the experts discovered the two kinds of hair-cells in the inner ear that actually do the feedback as Tommy had predicted, one kind of hair-cell acting as electrical sensors and the other kind acting as mechanical drivers. It took the experts forty years to admit that he was right. Of course, I knew that he was right, because I had helped him do the experiments.

Later in his life, Tommy Gold promoted another heretical idea, that the oil and natural gas in the ground come up from deep in the mantle of the earth and have nothing to do with biology. Again the experts are sure that he is wrong, and he did not live long enough to change their minds. Just a few weeks before he died, some chemists at the Carnegie Institution in Washington did a beautiful experiment in a diamond anvil cell, [Scott et al., 2004]. They mixed together tiny quantities of three things that we know exist in the mantle of the earth, and observed them at the pressure and temperature appropriate to the mantle about two hundred kilometers down. The three things were calcium carbonate which is sedimentary rock, iron oxide which is a component of igneous rock, and water. These three things are certainly present when a slab of subducted ocean floor descends from a deep ocean trench into the mantle. The experiment showed that they react quickly to produce lots of methane, which is natural gas. Knowing the result of the experiment, we can be sure that big quantities of natural gas exist in the mantle two hundred kilometers down. We do not know how much of this natural gas pushes its way up through cracks and channels in the overlying rock to form the shallow reservoirs of natural gas that we are now burning. If the gas moves up rapidly enough, it will arrive intact in the cooler regions where the reservoirs are found. If it moves too slowly through the hot region, the methane may be reconverted to carbonate rock and water. The Carnegie Institution experiment shows that there is at least a possibility that Tommy Gold was right and the natural gas reservoirs are fed from deep below. The chemists sent an E-mail to Tommy Gold to tell him their result, and got back a message that he had died three days earlier. Now that he is dead, we need more heretics to take his place.

2. Climate and Land Management

The main subject of this piece is the problem of climate change. This is a contentious subject, involving politics and economics as well as science. The science is inextricably mixed up with politics. Everyone agrees that the climate is changing, but there are violently diverging opinions about the causes of change, about the consequences of change, and about possible remedies. I am promoting a heretical opinion, the first of three heresies that I will discuss in this piece.

My first heresy says that all the fuss about global warming is grossly exaggerated. Here I am opposing the holy brotherhood of climate model experts and the crowd of deluded citizens who believe the numbers predicted by the computer models. Of course, they say, I have no degree in meteorology and I am therefore not qualified to speak. But I have studied the climate models and I know what they can do. The models solve the equations of fluid dynamics, and they do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry and the biology of fields and farms and forests. They do not begin to describe the real world that we live in. The real world is muddy and messy and full of things that we do not yet understand. It is much easier for a scientist to sit in an air-conditioned building and run computer models, than to put on winter clothes and measure what is really happening outside in the swamps and the clouds. That is why the climate model experts end up believing their own models.

There is no doubt that parts of the world are getting warmer, but the warming is not global. I am not saying that the warming does not cause problems. Obviously it does. Obviously we should be trying to understand it better. I am saying that the problems are grossly exaggerated. They take away money and attention from other problems that are more urgent and more important, such as poverty and infectious disease and public education and public health, and the preservation of living creatures on land and in the oceans, not to mention easy problems such as the timely construction of adequate dikes around the city of New Orleans.

I will discuss the global warming problem in detail because it is interesting, even though its importance is exaggerated. One of the main causes of warming is the increase of carbon dioxide in the atmosphere resulting from our burning of fossil fuels such as oil and coal and natural gas. To understand the movement of carbon through the atmosphere and biosphere, we need to measure a lot of numbers. I do not want to confuse you with a lot of numbers, so I will ask you to remember just one number. The number that I ask you to remember is one hundredth of an inch per year. Now I will explain what this number means. Consider the half of the land area of the earth that is not desert or ice-cap or city or road or parking-lot. This is the half of the land that is covered with soil and supports vegetation of one kind or another. Every year, it absorbs and converts into biomass a certain fraction of the carbon dioxide that we emit into the atmosphere. Biomass means living creatures, plants and microbes and animals, and the organic materials that are left behind when the creatures die and decay. We don’t know how big a fraction of our emissions is absorbed by the land, since we have not measured the increase or decrease of the biomass. The number that I ask you to remember is the increase in thickness, averaged over one half of the land area of the planet, of the biomass that would result if all the carbon that we are emitting by burning fossil fuels were absorbed. The average increase in thickness is one hundredth of an inch per year.

The point of this calculation is the very favorable rate of exchange between carbon in the atmosphere and carbon in the soil. To stop the carbon in the atmosphere from increasing, we only need to grow the biomass in the soil by a hundredth of an inch per year. Good topsoil contains about ten percent biomass, [Schlesinger, 1977], so a hundredth of an inch of biomass growth means about a tenth of an inch of topsoil. Changes in farming practices such as no-till farming, avoiding the use of the plow, cause biomass to grow at least as fast as this. If we plant crops without plowing the soil, more of the biomass goes into roots which stay in the soil, and less returns to the atmosphere. If we use genetic engineering to put more biomass into roots, we can probably achieve much more rapid growth of topsoil. I conclude from this calculation that the problem of carbon dioxide in the atmosphere is a problem of land management, not a problem of meteorology. No computer model of atmosphere and ocean can hope to predict the way we shall manage our land.
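The two figures above — a hundredth of an inch of biomass and a tenth of an inch of topsoil per year — can be sanity-checked with a back-of-the-envelope calculation. Every input below is an assumed round number chosen for illustration (the essay gives none of them explicitly): total fossil-fuel emissions of the period, the land and vegetated fractions, the carbon content and density of biomass.

```python
# Rough check of Dyson's "hundredth of an inch per year" figure.
# All inputs are assumed round values, not measurements from the essay.

EARTH_SURFACE_M2 = 5.1e14        # total surface area of the Earth
LAND_FRACTION = 0.29             # share of the surface that is land
VEGETATED_FRACTION = 0.5         # Dyson's "half of the land area"
EMISSIONS_TONNES_C = 7e9         # fossil-fuel carbon per year, circa 2007
CARBON_SHARE_OF_BIOMASS = 0.5    # dry biomass is roughly half carbon
BIOMASS_DENSITY_T_M3 = 1.0      # assume density comparable to water
BIOMASS_SHARE_OF_TOPSOIL = 0.10  # Dyson's "about ten percent biomass"

# Vegetated area over which the new biomass is spread.
area_m2 = EARTH_SURFACE_M2 * LAND_FRACTION * VEGETATED_FRACTION

# Tonnes of biomass needed to absorb one year's emitted carbon.
biomass_tonnes = EMISSIONS_TONNES_C / CARBON_SHARE_OF_BIOMASS

# Thickness of that biomass layer, converted from meters to inches.
thickness_in = biomass_tonnes / BIOMASS_DENSITY_T_M3 / area_m2 / 0.0254

# Corresponding topsoil layer, since topsoil is only ~10% biomass.
topsoil_in = thickness_in / BIOMASS_SHARE_OF_TOPSOIL

print(f"biomass layer: {thickness_in:.3f} in/yr")
print(f"topsoil layer: {topsoil_in:.3f} in/yr")
```

With these round inputs the biomass layer comes out at a few thousandths of an inch per year — the same order as Dyson's hundredth of an inch — and the topsoil layer at roughly a tenth of an inch, matching the conversion in the text.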

Here is another heretical thought. Instead of calculating world-wide averages of biomass growth, we may prefer to look at the problem locally. Consider a possible future, with China continuing to develop an industrial economy based largely on the burning of coal, and the United States deciding to absorb the resulting carbon dioxide by increasing the biomass in our topsoil. The quantity of biomass that can be accumulated in living plants and trees is limited, but there is no limit to the quantity that can be stored in topsoil. To grow topsoil on a massive scale may or may not be practical, depending on the economics of farming and forestry. It is at least a possibility to be seriously considered, that China could become rich by burning coal, while the United States could become environmentally virtuous by accumulating topsoil, with transport of carbon from mine in China to soil in America provided free of charge by the atmosphere, and the inventory of carbon in the atmosphere remaining constant. We should take such possibilities into account when we listen to predictions about climate change and fossil fuels. If biotechnology takes over the planet in the next fifty years, as computer technology has taken it over in the last fifty years, the rules of the climate game will be radically changed.

When I listen to the public debates about climate change, I am impressed by the enormous gaps in our knowledge, the sparseness of our observations and the superficiality of our theories. Many of the basic processes of planetary ecology are poorly understood. They must be better understood before we can reach an accurate diagnosis of the present condition of our planet. When we are trying to take care of a planet, just as when we are taking care of a human patient, diseases must be diagnosed before they can be cured. We need to observe and measure what is going on in the biosphere, rather than relying on computer models.

Everyone agrees that the increasing abundance of carbon dioxide in the atmosphere has two important consequences, first a change in the physics of radiation transport in the atmosphere, and second a change in the biology of plants on the ground and in the ocean. Opinions differ on the relative importance of the physical and biological effects, and on whether the effects, either separately or together, are beneficial or harmful. The physical effects are seen in changes of rainfall, cloudiness, wind-strength and temperature, which are customarily lumped together in the misleading phrase “global warming”. In humid air, the effect of carbon dioxide on radiation transport is unimportant because the transport of thermal radiation is already blocked by the much larger greenhouse effect of water vapor. The effect of carbon dioxide is important where the air is dry, and air is usually dry only where it is cold. Hot desert air may feel dry but often contains a lot of water vapor. The warming effect of carbon dioxide is strongest where air is cold and dry, mainly in the arctic rather than in the tropics, mainly in mountainous regions rather than in lowlands, mainly in winter rather than in summer, and mainly at night rather than in daytime. The warming is real, but it is mostly making cold places warmer rather than making hot places hotter. To represent this local warming by a global average is misleading.

The fundamental reason why carbon dioxide in the atmosphere is critically important to biology is that there is so little of it. A field of corn growing in full sunlight in the middle of the day uses up all the carbon dioxide within a meter of the ground in about five minutes. If the air were not constantly stirred by convection currents and winds, the corn would stop growing. About a tenth of all the carbon dioxide in the atmosphere is converted into biomass every summer and given back to the atmosphere every fall. That is why the effects of fossil-fuel burning cannot be separated from the effects of plant growth and decay. There are five reservoirs of carbon that are biologically accessible on a short time-scale, not counting the carbonate rocks and the deep ocean which are only accessible on a time-scale of thousands of years. The five accessible reservoirs are the atmosphere, the land plants, the topsoil in which land plants grow, the surface layer of the ocean in which ocean plants grow, and our proved reserves of fossil fuels. The atmosphere is the smallest reservoir and the fossil fuels are the largest, but all five reservoirs are of comparable size. They all interact strongly with one another. To understand any of them, it is necessary to understand all of them.

As an example of the way different reservoirs of carbon dioxide may interact with each other, consider the atmosphere and the topsoil. Greenhouse experiments show that many plants growing in an atmosphere enriched with carbon dioxide react by increasing their root-to-shoot ratio. This means that the plants put more of their growth into roots and less into stems and leaves. A change in this direction is to be expected, because the plants have to maintain a balance between the leaves collecting carbon from the air and the roots collecting mineral nutrients from the soil. The enriched atmosphere tilts the balance so that the plants need less leaf-area and more root-area. Now consider what happens to the roots and shoots when the growing season is over, when the leaves fall and the plants die. The new-grown biomass decays and is eaten by fungi or microbes. Some of it returns to the atmosphere and some of it is converted into topsoil. On the average, more of the above-ground growth will return to the atmosphere and more of the below-ground growth will become topsoil. So the plants with increased root-to-shoot ratio will cause an increased transfer of carbon from the atmosphere into topsoil. If the increase in atmospheric carbon dioxide due to fossil-fuel burning has caused an increase in the average root-to-shoot ratio of plants over large areas, then the possible effect on the top-soil reservoir will not be small. At present we have no way to measure or even to guess the size of this effect. The aggregate biomass of the topsoil of the planet is not a measurable quantity. But the fact that the topsoil is unmeasurable does not mean that it is unimportant.

At present we do not know whether the topsoil of the United States is increasing or decreasing. Over the rest of the world, because of large-scale deforestation and erosion, the topsoil reservoir is probably decreasing. We do not know whether intelligent land-management could increase the growth of the topsoil reservoir by four billion tons of carbon per year, the amount needed to stop the increase of carbon dioxide in the atmosphere. All that we can say for sure is that this is a theoretical possibility and ought to be seriously explored.

3. Oceans and Ice-ages

Another problem that has to be taken seriously is a slow rise of sea level which could become catastrophic if it continues to accelerate. We have accurate measurements of sea level going back two hundred years. We observe a steady rise from 1800 to the present, with an acceleration during the last fifty years. It is widely believed that the recent acceleration is due to human activities, since it coincides in time with the rapid increase of carbon dioxide in the atmosphere. But the rise from 1800 to 1900 was probably not due to human activities. The scale of industrial activities in the nineteenth century was not large enough to have had measurable global effects. So a large part of the observed rise in sea level must have other causes. One possible cause is a slow readjustment of the shape of the earth to the disappearance of the northern ice-sheets at the end of the ice age twelve thousand years ago. Another possible cause is the large-scale melting of glaciers, which also began long before human influences on climate became significant. Once again, we have an environmental danger whose magnitude cannot be predicted until we know more about its causes, [Munk, 2002].

The most alarming possible cause of sea-level rise is a rapid disintegration of the West Antarctic ice-sheet, which is the part of Antarctica where the bottom of the ice is far below sea level. Warming seas around the edge of Antarctica might erode the ice-cap from below and cause it to collapse into the ocean. If the whole of West Antarctica disintegrated rapidly, sea-level would rise by five meters, with disastrous effects on billions of people. However, recent measurements of the ice-cap show that it is not losing volume fast enough to make a significant contribution to the presently observed sea-level rise. It appears that the warming seas around Antarctica are causing an increase in snowfall over the ice-cap, and the increased snowfall on top roughly cancels out the decrease of ice volume caused by erosion at the edges. The same changes, increased melting of ice at the edges and increased snowfall adding ice on top, are also observed in Greenland. In addition, there is an increase in snowfall over the East Antarctic Ice-cap, which is much larger and colder and is in no danger of melting. This is another situation where we do not know how much of the environmental change is due to human activities and how much to long-term natural processes over which we have no control.

Another environmental danger that is even more poorly understood is the possible coming of a new ice-age. A new ice-age would mean the burial of half of North America and half of Europe under massive ice-sheets. We know that there is a natural cycle that has been operating for the last eight hundred thousand years. The length of the cycle is a hundred thousand years. In each hundred-thousand year period, there is an ice-age that lasts about ninety thousand years and a warm interglacial period that lasts about ten thousand years. We are at present in a warm period that began twelve thousand years ago, so the onset of the next ice-age is overdue. If human activities were not disturbing the climate, a new ice-age might already have begun. We do not know how to answer the most important question: do our human activities in general, and our burning of fossil fuels in particular, make the onset of the next ice-age more likely or less likely?

There are good arguments on both sides of this question. On the one side, we know that the level of carbon dioxide in the atmosphere was much lower during past ice-ages than during warm periods, so it is reasonable to expect that an artificially high level of carbon dioxide might stop an ice-age from beginning. On the other side, the oceanographer Wallace Broecker [Broecker, 1997] has argued that the present warm climate in Europe depends on a circulation of ocean water, with the Gulf Stream flowing north on the surface and bringing warmth to Europe, and with a counter-current of cold water flowing south in the deep ocean. So a new ice-age could begin whenever the cold deep counter-current is interrupted. The counter-current could be interrupted when the surface water in the Arctic becomes less salty and fails to sink, and the water could become less salty when the warming climate increases the Arctic rainfall. Thus Broecker argues that a warm climate in the Arctic may paradoxically cause an ice-age to begin. Since we are confronted with two plausible arguments leading to opposite conclusions, the only rational response is to admit our ignorance. Until the causes of ice-ages are understood, we cannot know whether the increase of carbon-dioxide in the atmosphere is increasing or decreasing the danger.

4. The Wet Sahara

My second heresy is also concerned with climate change. It is about the mystery of the wet Sahara. This is a mystery that has always fascinated me. At many places in the Sahara desert that are now dry and unpopulated, we find rock-paintings showing people with herds of animals. The paintings are abundant, and some of them are of high artistic quality, comparable with the more famous cave-paintings in France and Spain. The Sahara paintings are more recent than the cave-paintings. They come in a variety of styles and were probably painted over a period of several thousand years. The latest of them show Egyptian influences and may be contemporaneous with early Egyptian tomb paintings. Henri Lhote’s book, “The Search for the Tassili Frescoes”, [Lhote, 1958], is illustrated with reproductions of fifty of the paintings. The best of the herd paintings date from roughly six thousand years ago. They are strong evidence that the Sahara at that time was wet. There was enough rain to support herds of cows and giraffes, which must have grazed on grass and trees. There were also some hippopotamuses and elephants. The Sahara then must have been like the Serengeti today.

At the same time, roughly six thousand years ago, there were deciduous forests in Northern Europe where the trees are now conifers, proving that the climate in the far north was milder than it is today. There were also trees standing in mountain valleys in Switzerland that are now filled with famous glaciers. The glaciers that are now shrinking were much smaller six thousand years ago than they are today. Six thousand years ago seems to have been the warmest and wettest period of the interglacial era that began twelve thousand years ago when the last Ice Age ended. I would like to ask two questions. First, if the increase of carbon dioxide in the atmosphere is allowed to continue, shall we arrive at a climate similar to the climate of six thousand years ago when the Sahara was wet? Second, if we could choose between the climate of today with a dry Sahara and the climate of six thousand years ago with a wet Sahara, should we prefer the climate of today? My second heresy answers yes to the first question and no to the second. It says that the warm climate of six thousand years ago with the wet Sahara is to be preferred, and that increasing carbon dioxide in the atmosphere may help to bring it back. I am not saying that this heresy is true. I am only saying that it will not do us any harm to think about it.

The biosphere is the most complicated of all the things we humans have to deal with. The science of planetary ecology is still young and undeveloped. It is not surprising that honest and well-informed experts can disagree about facts. But beyond the disagreement about facts, there is another deeper disagreement about values. The disagreement about values may be described in an over-simplified way as a disagreement between naturalists and humanists. Naturalists believe that nature knows best. For them the highest value is to respect the natural order of things. Any gross human disruption of the natural environment is evil. Excessive burning of fossil fuels is evil. Changing nature’s desert, either the Sahara desert or the ocean desert, into a managed ecosystem where giraffes or tunafish may flourish, is likewise evil. Nature knows best, and anything we do to improve upon Nature will only bring trouble.

The humanist ethic begins with the belief that humans are an essential part of nature. Through human minds the biosphere has acquired the capacity to steer its own evolution, and now we are in charge. Humans have the right and the duty to reconstruct nature so that humans and biosphere can both survive and prosper. For humanists, the highest value is harmonious coexistence between humans and nature. The greatest evils are poverty, underdevelopment, unemployment, disease and hunger, all the conditions that deprive people of opportunities and limit their freedoms. The humanist ethic accepts an increase of carbon dioxide in the atmosphere as a small price to pay, if world-wide industrial development can alleviate the miseries of the poorer half of humanity. The humanist ethic accepts our responsibility to guide the evolution of the planet.

The sharpest conflict between naturalist and humanist ethics arises in the regulation of genetic engineering. The naturalist ethic condemns genetically modified food-crops and all other genetic engineering projects that might upset the natural ecology. The humanist ethic looks forward to a time not far distant, when genetically engineered food-crops and energy-crops will bring wealth to poor people in tropical countries, and incidentally give us tools to control the growth of carbon dioxide in the atmosphere. Here I must confess my own bias. Since I was born and brought up in England, I spent my formative years in a land with great beauty and a rich ecology which is almost entirely man-made. The natural ecology of England was uninterrupted and rather boring forest. Humans replaced the forest with an artificial landscape of grassland and moorland, fields and farms, with a much richer variety of plant and animal species. Quite recently, only about a thousand years ago, we introduced rabbits, a non-native species which had a profound effect on the ecology. Rabbits opened glades in the forest where flowering plants now flourish. There is no wilderness in England, and yet there is plenty of room for wild-flowers and birds and butterflies as well as a high density of humans. Perhaps that is why I am a humanist.

To conclude this piece I come to my third and last heresy. My third heresy says that the United States has less than a century left of its turn as top nation. Since the modern nation-state was invented around the year 1500, a succession of countries have taken turns at being top nation, first Spain, then France, Britain, America. Each turn lasted about 150 years. Ours began in 1920, so it should end about 2070. The reason why each top nation’s turn comes to an end is that the top nation becomes over-extended, militarily, economically and politically. Greater and greater efforts are required to maintain the number one position. Finally the over-extension becomes so extreme that the structure collapses. Already we can see in the American posture today some clear symptoms of over-extension. Who will be the next top nation? China is the obvious candidate. After that it might be India or Brazil. We should be asking ourselves, not how to live in an America-dominated world, but how to prepare for a world that is not America-dominated. That may be the most important problem for the next generation of Americans to solve. How does a people that thinks of itself as number one yield gracefully to become number two?

I am telling the next generation of young students, who will still be alive in the second half of our century, that misfortunes are on the way. Their precious Ph.D., or whichever degree they went through long years of hard work to acquire, may be worth less than they think. Their specialized training may become obsolete. They may find themselves over-qualified for the available jobs. They may be declared redundant. The country and the culture to which they belong may move far away from the mainstream. But these misfortunes are also opportunities. It is always open to them to join the heretics and find another way to make a living. With or without a Ph.D., there are big and important problems for them to solve.

I will not attempt to summarize the lessons that my readers should learn from these heresies. The main lesson that I would like them to take home is that the long-range future is not predetermined. The future is in their hands. The rules of the world-historical game change from decade to decade in unpredictable ways. All our fashionable worries and all our prevailing dogmas will probably be obsolete in fifty years. My heresies will probably also be obsolete. It is up to them to find new heresies to guide our way to a more hopeful future.

5. Bad Advice to a Young Scientist

Sixty years ago, when I was a young and arrogant physicist, I tried to predict the future of physics and biology. My prediction was an extreme example of wrongness, perhaps a world record in the category of wrong predictions. I was giving advice about future employment to Francis Crick, the great biologist who died in 2005 after a long and brilliant career. With Jim Watson, he discovered the double helix structure of DNA in 1953, and thereby gave birth to the new science of molecular genetics. Eight years before that, in 1945, before World War 2 came to an end, I met Francis Crick for the first time. He was in Fanum House, a dismal office building in London where the Royal Navy kept a staff of scientists. Crick had been working for the Royal Navy for a long time and was depressed and discouraged. He said he had missed his chance of ever amounting to anything as a scientist. Before World War 2, he had started a promising career as a physicist. But then the war hit him at the worst time, putting a stop to his work in physics and keeping him away from science for six years. The six best years of his life, squandered on naval intelligence, lost and gone forever. Crick was good at naval intelligence, and did important work for the navy. But military intelligence bears the same relation to intelligence as military music bears to music. After six years doing this kind of intelligence, it was far too late for Crick to start all over again as a student and relearn all the stuff he had forgotten. No wonder he was depressed. I came away from Fanum House thinking, “How sad. Such a bright chap. If it hadn’t been for the war, he would probably have been quite a good scientist”.

A year later, I met Crick again. The war was over and he was much more cheerful. He said he was thinking of giving up physics and making a completely fresh start as a biologist. He said the most exciting science for the next twenty years would be in biology and not in physics. I was then twenty-two years old and very sure of myself. I said, “No, you’re wrong. In the long run biology will be more exciting, but not yet. The next twenty years will still belong to physics. If you switch to biology now, you will be too old to do the exciting stuff when biology finally takes off”. Fortunately, he didn’t listen to me. He went to Cambridge and began thinking about DNA. It took him only seven years to prove me wrong. The moral of this story is clear. Even a smart twenty-two-year-old is not a reliable guide to the future of science. And the twenty-two-year-old has become even less reliable now that he is eighty-two.

[Excerpted from Many Colored Glass: Reflections on the Place of Life in the Universe (Page Barbour Lectures) by Freeman Dyson, University of Virginia Press, 2007.]

John Brockman, Editor and Publisher
Russell Weinberger, Associate Publisher

contact: editor@edge.org
Copyright © 2007 by Edge Foundation, Inc.
All Rights Reserved.