This site may contain copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in an effort to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues. We believe this constitutes a ‘fair use’ of any such copyrighted material as provided for in Section 107 of the US Copyright Law.

In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit to those who have expressed a prior interest in receiving the included information for research and educational purposes. For more information go to: http://www.law.cornell.edu/uscode/17/107.shtml

If you wish to use copyrighted material from this site for purposes of your own that go beyond ‘fair use’, you must obtain permission from the copyright owner.

All Blogs licensed under Creative Commons Attribution 3.0

Saturday, January 29, 2011

Technological Fundamentalism: Why bad things happen when humans play God

Dissident Voice: a radical newsletter in the struggle for peace and social justice

If humans were smart, we would bet on our ignorance.

That advice comes early in the Hebrew Bible. Adam and Eve’s banishment in chapters two and three of Genesis can be read as a warning that hubris is our tragic flaw. In the garden, God told them they could eat freely of every tree but the tree of knowledge of good and evil. This need not be understood as a command that people must stay stupid, but only that we resist the temptation to believe that we are godlike and can competently manipulate the complexity of the world.

We aren’t, and we can’t, which is why we should always remember that we are far more ignorant than we are knowledgeable. It’s true that in the past few centuries, we humans have dramatically expanded our understanding of how the world works through modern science. But we would be sensible to listen to plant geneticist Wes Jackson, one of the leaders in the sustainable agriculture movement, who suggests that we adopt “an ignorance-based worldview” that could help us understand these limits.1 Jackson, cofounder of The Land Institute research center, argues that such an approach would help us ask important questions that go beyond the available answers and challenge us to force existing knowledge out of its categories. Putting the focus on what we don’t know can remind us of the need for humility and limit the damage we do.

This call for humility is an antidote to the various fundamentalisms that threaten our world today. I use the term “fundamentalism” to describe any intellectual, political, or theological position that asserts an absolute certainty in the truth and/or righteousness of a belief system. Fundamentalism is an extreme form of hubris—overconfidence not only in one’s beliefs but in the ability of humans to understand complex questions definitively. Fundamentalism isn’t unique to religious people but is instead a feature of a certain approach to the world, rooted in mistaking limited knowledge for wisdom.

In ascending order of threat, these fundamentalisms are religious, national, market, and technological. All share some similar characteristics, while each poses a particular threat to democracy and sustainable life on the planet.

Religious fundamentalism is the most contested of the four, and hence is the one most often critiqued. National fundamentalism routinely unleashes violence that leads to critique, though most often the critique focuses on other nations’ hyperpatriotic fundamentalism rather than our own. And as the prophets of neoliberalism’s dream of unrestrained capitalism are exposed as false prophets, criticism of market fundamentalism is moving slowly from the left to the mainstream.

Religious, national, and market fundamentalisms are frightening, but they may turn out to be less dangerous than our society’s technological fundamentalism.

Technological fundamentalists believe that the increasing use of ever more sophisticated high-energy, advanced technology is always a good thing and that any problems caused by the unintended consequences of such technology eventually can be remedied by more technology. Those who question such declarations are often said to be “anti-technology,” which is a meaningless insult. All human beings use technology of some kind, whether stone tools or computers. An anti-fundamentalist position is not that all technology is bad, but that the introduction of new technology should be evaluated carefully on the basis of its effects—predictable and unpredictable—on human communities and the non-human world, with an understanding of the limits of our knowledge.

Our experience with unintended consequences is fairly extensive. For example, there’s the case of automobiles and the burning of petroleum in internal-combustion engines, which give us the ability to travel considerable distances with a fair amount of individual autonomy. This technology also has given us traffic jams and road rage, strip malls and smog, while contributing to climate destabilization that threatens the ability of the ecosphere to sustain human life as we know it. We haven’t quite figured out how to cope with these problems, and in retrospect it might have been wise to go slower in the development of a system geared toward private, individual transportation based on the car, with more attention to potential consequences.2

Or how about CFCs and the ozone hole? Chlorofluorocarbons have a variety of industrial, commercial, and household applications, including in air conditioning. They were thought to be a miracle chemical when introduced in the 1930s—non-toxic, non-flammable, and non-reactive with other chemical compounds. But in the 1980s, researchers began to understand that while CFCs are stable in the troposphere, when they move to the stratosphere and are broken down by strong ultraviolet light they release chlorine atoms that deplete the ozone layer. This unintended effect deflated the exuberance a bit. Depletion of the ozone layer means that more UV radiation reaches the Earth’s surface, and overexposure to UV radiation is a cause of skin cancer, cataracts, and immune suppression.

But wait, the technological fundamentalists might argue, our experience with CFCs refutes your argument—humans got a handle on that one and banned CFCs, and now the ozone hole is closing. True enough, but what lessons have been learned? Society didn’t react to the news about CFCs by thinking about ways to step back from a developed world that has become dependent on air conditioning, but instead looked for replacements to keep the air conditioning running.3 So the reasonable question is: When will the unintended effects of the CFC replacements become visible? If not the ozone hole, what’s next? There’s no way to predict, but it seems reasonable to ask the question and sensible to assume the worst.

We don’t have to look far for evidence that our hubris is creating the worst. Every measure of the health of the ecosphere—groundwater depletion, topsoil loss, chemical contamination, increased toxicity in our own bodies, the number and size of “dead zones” in the oceans, accelerating extinction of species and reduction of bio-diversity—suggests we may be past the point of restoration. As Jackson’s example suggests, scientists themselves often recognize the threat and turn away from the hubris of technological fundamentalism. This powerful warning of ecocide came from 1,700 of the world’s leading scientists:

Human beings and the natural world are on a collision course. Human activities inflict harsh and often irreversible damage on the environment and on critical resources. If not checked, many of our current practices put at serious risk the future that we wish for human society and the plant and animal kingdoms, and may so alter the living world that it will be unable to sustain life in the manner that we know. Fundamental changes are urgent if we are to avoid the collision our present course will bring about.4

That statement was issued in 1992, and in the past two decades we have yet to change course and instead pursue ever riskier projects. As the most easily accessible oil is exhausted, we feed our energy/affluence habit by drilling in deep water and processing tar sands, guaranteeing the destruction of more ecosystems. We extract more coal through mountain-top removal, guaranteeing the destruction of more ecosystems.5 And we take technological fundamentalism to new heights by considering large-scale climate engineering projects—known as geo-engineering or planetary engineering, typically involving either carbon-dioxide removal from the atmosphere or solar-radiation management—as a “solution” to climate destabilization.

The technological fundamentalism that animates these delusional plans makes it clear why Wes Jackson’s call for an ignorance-based worldview is so important. If we were to step back and confront honestly the technologies we have unleashed—out of that hubris, believing our knowledge is adequate to control the consequences of our science and technology—I doubt any of us would ever get a good night’s sleep. We humans have been overdriving our intellectual headlights for thousands of years, most dramatically in the twentieth century when we ventured with reckless abandon into two places where we had no business going—the atom and the cell.

On the former: The deeper we break into the energy package, the greater the risks. Building fires with sticks gathered from around the camp is relatively easy to manage, but breaking into increasingly earlier material of the universe—such as fossil fuels and, eventually, uranium—is quite a different project, more complex and far beyond our capacity to control. Likewise, manipulating plants through traditional selective breeding is local and manageable, whereas breaking into the workings of the gene—the foundational material of life—takes us into places we have no way to understand.

These technological endeavors suggest that the Genesis story was prescient; our taste of the fruit of the tree of knowledge of good and evil appears to have been ill-advised, given where it has led us. We live now in the uncomfortable position of realizing we have moved too far and too fast, outstripping our capacity to manage safely the world we have created. The answer is not some naïve return to a romanticized past, but a recognition of what we have created and a systematic evaluation to determine how to recover from our most dangerous missteps.

A good first step is to adopt an ignorance-based worldview, to heed the warning against hubris that appears in the most foundational stories—religious and secular—of every culture. That would not only increase our chances of survival, but in Jackson’s words, make possible “a more joyful participation in our engagement with the world.”

  1. Wes Jackson, “Toward an Ignorance-Based Worldview,” The Land Report, Spring 2005, 14-16. See also Bill Vitek and Wes Jackson, eds., The Virtues of Ignorance: Complexity, Sustainability, and the Limits of Knowledge (Lexington: University Press of Kentucky, 2008).
  2. Jane Holtz Kay, Asphalt Nation: How the Automobile Took Over America and How We Can Take It Back (New York: Crown, 1997).
  3. Stan Cox, Losing Our Cool: Uncomfortable Truths About Our Air-Conditioned World (and Finding New Ways to Get Through the Summer) (New York: New Press, 2010).
  4. Henry Kendall, a Nobel Prize physicist and former chair of the Union of Concerned Scientists’ board of directors, was the primary author of the “World Scientists’ Warning to Humanity.”
  5. Naomi Klein, “Addicted to Risk,” TEDWomen conference, December 8, 2010.

Robert Jensen is a professor of journalism at the University of Texas at Austin and author of Citizens of the Empire: The Struggle to Claim Our Humanity and Getting Off: Pornography and the End of Masculinity (South End Press, 2007). His latest book is All My Bones Shake: Seeking a Progressive Path to the Prophetic Voice, published by Soft Skull Press. He can be reached at: rjensen@uts.cc.utexas.edu. Read other articles by Robert, or visit Robert's website.

This article was posted on Saturday, January 29th, 2011 at 8:00am and is filed under Environment, Neoliberalism, Religion, Science/Technology.

Friday, January 28, 2011

"The Hidden Reality": The multiple universe, explained

The idea of the parallel world has fascinated writers for centuries. A new book explains the science behind it

Thursday, January 27, 2011

How Corporate Journalism Happily Lost Interest in Climate Change

Dissident Voice: a radical newsletter in the struggle for peace and social justice

The Empty Press Room

In the media’s coverage of climate change, are we really still stuck on square one of some ghastly board game?

Global warming was recognised as a hugely serious problem as far back as 1988, when the United Nations set up the Intergovernmental Panel on Climate Change (IPCC). Since then the science has become more solid, more detailed, in fact, irrefutable: the risk of dangerous climate change has risen alarmingly, yet the corporate media has continued to bury serious debate on what to do about it. According to NASA researchers at the Goddard Institute for Space Studies (GISS) in New York, global surface temperatures in 2010 tied with 2005 as the warmest on record.

“If the warming trend continues, as is expected, if greenhouse gases continue to increase, the 2010 record will not stand for long,” says James Hansen, the director of GISS.

“Global temperature is rising as fast in the past decade as in the prior two decades, despite year-to-year fluctuations associated with the El Niño-La Niña cycle of tropical ocean temperature”, Hansen and colleagues report. 1

The very stability of the Earth’s climate system is on the brink. Even an overall global warming of two degrees Celsius (2C) would be “a guaranteed disaster”, warns Hansen:

It is equivalent to the early Pliocene epoch [between about 5.3 and 2.6 million years ago] when the sea level was 25 m higher. What we don’t know is how long it takes ice sheets to disintegrate, but we know we’d be starting a process which then is going to be out of control. 2

Hansen believes that the UN climate talks in Mexico last December were “doomed to failure” since they did not address the fundamental physical constraints of the Earth’s climate system and how to live within them. These constraints and – crucially – how they are under threat by a rampant system of corporate globalisation are taboo subjects for the corporate media.

Anything beyond a 2C rise may well be catastrophic for humanity. Runaway global warming could be triggered if methane deposits under melting Arctic permafrost were to be released into the atmosphere.

Kevin Anderson, the director of the Tyndall Centre for Climate Change Research, is another senior climate scientist who is deeply worried:

There is currently nothing substantive to suggest we are heading for anything other than a 4C rise in temperature, possibly as early as the 2060s. Yet over a pint of ale or sharing a coffee it is hard to find any scientist seriously engaged in climate change who considers a 4C rise within this century as anything other than catastrophic for both human society and ecosystems. 3

Meanwhile, powerful states and corporations are accelerating the rate of planetary consumption. There are occasional nice-sounding “green” promises and aspirations. But in the age of WikiLeaks and the Palestine Papers, we know that powerful and ugly interest groups are really in charge, wheeling and dealing for short-term power and profit behind the benevolent rhetoric.

Climategate As An Excuse For Media Disinterest?

Remember those leaked emails involving climate scientists at the University of East Anglia and colleagues around the world? The lazily dubbed “Climategate” affair generated a huge media storm in a teacup with even the Guardian, the supposed flagship newspaper of environment reporting, culpable. In November 2010, one year after the storm broke, senior NASA climate scientist Gavin Schmidt noted on the excellent RealClimate blog:

As we predicted, no inquiries found anyone guilty of misconduct, no science was changed and no papers retracted. In the meantime we’ve had one of the hottest years on record, scientists continue to do science, and politicians…. well, they continue to do what politicians do. 4

As Schmidt observes, before the hacking of climate emails the media had responsibly begun to avoid the wackier ‘global warming is a hoax’ advocates. Yes, sceptics were occasionally interviewed. But they were becoming slightly more reasonable, at least accepting that carbon dioxide is a greenhouse gas, for example. Now, however, warns Schmidt:

Since the emails were released, and despite the fact that there is no evidence within them to support any of these claims of fraud and fabrication, the UK media has opened itself so wide to the spectrum of thought on climate that the GW hoaxers now suddenly find themselves well within the mainstream. Nothing has changed the self-evident ridiculousness of their arguments, but their presence at the media table has meant that the more reasonable critics seem far more centrist than they did a few months ago.

Schmidt cites a few examples of the corporate media driving the ‘balance’ of climate debate towards the cliff edge – and beyond:

[Lord] Monckton being quoted as a ‘prominent climate sceptic’ on the front page of the New York Times [...]; The Guardian digging up baseless fraud accusations against a scientist at SUNY [the State University of New York] that had already been investigated and dismissed; The Sunday Times ignoring experts telling them the IPCC was right in favor of the anti-IPCC meme of the day; The Daily Mail making up quotes that fit their GW hoaxer narrative; The Daily Express breathlessly proclaiming the whole thing a ‘climate con’; The Sunday Times (again) dredging up unfounded accusations of corruption in the surface temperature data sets. All of these stories are based on the worst kind of oft-rebunked nonsense and they serve to make the more subtle kind of scepticism pushed by [Bjorn ‘The Skeptical Environmentalist’] Lomborg et al seem almost erudite. 5

Ben Stewart, media director of Greenpeace UK, is clear that the media, and not climate scientists, are to blame for any extra public confusion or scepticism:

The public haven’t read a thousand emails from scientists they have never heard of. The emails didn’t change the way that carbon dioxide traps heat in the atmosphere, but the media created a situation that presented a false symmetry between the various sides of the debate. 6

Despite the massive media attention devoted to the leaked emails and to the absurd claims of extreme climate sceptics, public concern about climate instability rightly remains high. Bob Ward, policy and communications director at the Grantham Research Institute on Climate Change and the Environment, accepts that there may have been some fallout from the media’s irresponsible reporting – a confused public easing off the pressure on politicians to reduce emissions. But on the seriousness of the climate threat, itself, Ward says:

I haven’t seen any evidence there has been any big change in public opinion. 7

No thanks to the corporate media.

One Observer editorial last year noted correctly that the leaked emails had had ‘a disproportionate effect in stifling public urgency over climate change.’ 8 Wringing their hands, the paper’s editors complained that it was “baffling” why “it should be so hard to turn a matter of near certain scientific urgency into political action.”

Tragically, like their corporate colleagues elsewhere, the Observer’s editors appear oblivious to the corporate-driven greed of global capitalism that threatens billions of people. No wonder it is “profoundly depressing that the chances of concerted global action to protect the environment seem to be receding.” It is a platitude and a slippery diversion to say, as the Observer does, “We must restart the fight against global warming”.

Where is the Observer editorial call to “restart the fight against corporate domination of society”? Where is their urging of mass action to oppose government and business policies and practices that are steering us towards the edge of the climate abyss? When the paper writes of David Cameron and Nick Clegg, “Their claimed ambitions to take a lead on climate change really are a worthy object of scepticism”, we may greet such an obvious statement with muted applause. But we should express the same scepticism of the Observer and the rest of the corporate media when it comes to the need for urgent, radical and far-reaching analysis and action on the climate crisis.

The Empty Room

Last month’s UN climate talks in Cancun, Mexico, were never going to save the planet. Indeed, they seem to have been regarded as a minor side-show by many of the world’s political leaders and news media. Amy Goodman of the US-based Democracy Now! was a rare exception with daily in-depth reports and interviews. Here is how she presented one item with a wry note of irony:

Well, I’m Amy Goodman, here in Cancun. We’re covering the UN global warming summit. You know, last year this time, we were covering the Copenhagen summit. The press room was packed. There were thousands of journalists. It’s empty now. I mean, it’s nice to have printers and computers galore, but with no one in the room but folks who are cleaning up and keeping it tidy and IT people galore, well, I don’t think this was just meant for me. But I think there’s a bigger story here about the lack of interest in the Cancun meeting as the world is getting warmer. 9

Goodman continued:

Am I being unfair? Across the hallway is the writing press room. Oh, it has seats for hundreds and hundreds of journalists. And there are now, what, maybe three? All this, as the earth gets hotter and hotter.

Democracy Now! reviewed the transcripts of the previous week’s evening news broadcasts on ABC, CBS and NBC. The Cancun talks were not mentioned a single time by any of the networks.

  1. NASA Goddard Institute for Space Studies, “NASA Research Finds 2010 Tied for Warmest Year on Record”, press release, January 12, 2011
  2. Phil England, “Tax on carbon: The only way to save our planet?”, Independent, January 4, 2011
  3. Kevin Anderson, “Viewpoint: Small steps offer no respite from climate effects”, BBC News, December 15, 2010
  4. Gavin Schmidt, “One year later”, RealClimate, November 20, 2010
  5. Gavin Schmidt, “Whatevergate”, RealClimate, February 16, 2010; see embedded links in Schmidt’s article for the original sources
  6. David Adam, environment correspondent, “How has ‘Climategate’ affected the battle against climate change?”, Guardian, July 8, 2010
  7. Adam, op. cit.
  8. “We must restart the fight against global warming”, Observer, August 1, 2010
  9. Amy Goodman, “Pressing the Silence: At the UN Climate Change Conference, the Media Center is Oddly Quiet”, Democracy Now!, December 6, 2010

Media Lens is a UK-based media watchdog group headed by David Edwards and David Cromwell. The second Media Lens book, NEWSPEAK in the 21st Century by David Edwards and David Cromwell, was published in 2009 by Pluto Press. Read other articles by Media Lens, or visit Media Lens's website.

This article was posted on Thursday, January 27th, 2011 at 8:00am and is filed under Global Warming, Media.

Wednesday, January 19, 2011

Defining Critical Thinking

The Critical Thinking Community
Foundation for Critical Thinking

Defining Critical Thinking

Critical thinking...the awakening of the intellect to the study of itself.

Critical thinking is a rich concept that has been developing throughout the past 2500 years. The term "critical thinking" has its roots in the mid-late 20th century. We offer here overlapping definitions which together form a substantive, transdisciplinary conception of critical thinking.

Critical Thinking as Defined by the National Council for Excellence in Critical Thinking, 1987

A statement by Michael Scriven & Richard Paul, presented at the 8th Annual International Conference on Critical Thinking and Education Reform, Summer 1987.

Critical thinking is the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action. In its exemplary form, it is based on universal intellectual values that transcend subject matter divisions: clarity, accuracy, precision, consistency, relevance, sound evidence, good reasons, depth, breadth, and fairness.

It entails the examination of those structures or elements of thought implicit in all reasoning: purpose, problem, or question-at-issue; assumptions; concepts; empirical grounding; reasoning leading to conclusions; implications and consequences; objections from alternative viewpoints; and frame of reference. Critical thinking — in being responsive to variable subject matter, issues, and purposes — is incorporated in a family of interwoven modes of thinking, among them: scientific thinking, mathematical thinking, historical thinking, anthropological thinking, economic thinking, moral thinking, and philosophical thinking.

Critical thinking can be seen as having two components: 1) a set of information and belief generating and processing skills, and 2) the habit, based on intellectual commitment, of using those skills to guide behavior. It is thus to be contrasted with: 1) the mere acquisition and retention of information alone, because it involves a particular way in which information is sought and treated; 2) the mere possession of a set of skills, because it involves the continual use of them; and 3) the mere use of those skills ("as an exercise") without acceptance of their results.

Critical thinking varies according to the motivation underlying it. When grounded in selfish motives, it is often manifested in the skillful manipulation of ideas in service of one’s own, or one's group’s, vested interest. As such it is typically intellectually flawed, however pragmatically successful it might be. When grounded in fairmindedness and intellectual integrity, it is typically of a higher order intellectually, though subject to the charge of "idealism" by those habituated to its selfish use.

Critical thinking of any kind is never universal in any individual; everyone is subject to episodes of undisciplined or irrational thought. Its quality is therefore typically a matter of degree and dependent on, among other things, the quality and depth of experience in a given domain of thinking or with respect to a particular class of questions. No one is a critical thinker through-and-through, but only to such-and-such a degree, with such-and-such insights and blind spots, subject to such-and-such tendencies towards self-delusion. For this reason, the development of critical thinking skills and dispositions is a life-long endeavor.

Another Brief Conceptualization of Critical Thinking

Critical thinking is self-guided, self-disciplined thinking which attempts to reason at the highest level of quality in a fair-minded way. People who think critically consistently attempt to live rationally, reasonably, empathically. They are keenly aware of the inherently flawed nature of human thinking when left unchecked. They strive to diminish the power of their egocentric and sociocentric tendencies. They use the intellectual tools that critical thinking offers – concepts and principles that enable them to analyze, assess, and improve thinking. They work diligently to develop the intellectual virtues of intellectual integrity, intellectual humility, intellectual civility, intellectual empathy, intellectual sense of justice and confidence in reason. They realize that no matter how skilled they are as thinkers, they can always improve their reasoning abilities and they will at times fall prey to mistakes in reasoning, human irrationality, prejudices, biases, distortions, uncritically accepted social rules and taboos, self-interest, and vested interest. They strive to improve the world in whatever ways they can and contribute to a more rational, civilized society. At the same time, they recognize the complexities often inherent in doing so. They avoid thinking simplistically about complicated issues and strive to appropriately consider the rights and needs of relevant others. They recognize the complexities in developing as thinkers, and commit themselves to life-long practice toward self-improvement. They embody the Socratic principle: The unexamined life is not worth living, because they realize that many unexamined lives together result in an uncritical, unjust, dangerous world.
~ Linda Elder, September, 2007

Why Critical Thinking?
The Problem
Everyone thinks; it is our nature to do so. But much of our thinking, left to itself, is biased, distorted, partial, uninformed or downright prejudiced. Yet the quality of our life and that of what we produce, make, or build depends precisely on the quality of our thought. Shoddy thinking is costly, both in money and in quality of life. Excellence in thought, however, must be systematically cultivated.

A Definition
Critical thinking is that mode of thinking - about any subject, content, or problem - in which the thinker improves the quality of his or her thinking by skillfully taking charge of the structures inherent in thinking and imposing intellectual standards upon them.

The Result
A well cultivated critical thinker:

  • raises vital questions and problems, formulating them clearly and precisely;
  • gathers and assesses relevant information, using abstract ideas to interpret it effectively;
  • comes to well-reasoned conclusions and solutions, testing them against relevant criteria and standards;
  • thinks openmindedly within alternative systems of thought, recognizing and assessing, as need be, their assumptions, implications, and practical consequences; and
  • communicates effectively with others in figuring out solutions to complex problems.

Critical thinking is, in short, self-directed, self-disciplined, self-monitored, and self-corrective thinking. It presupposes assent to rigorous standards of excellence and mindful command of their use. It entails effective communication and problem solving abilities and a commitment to overcome our native egocentrism and sociocentrism. (Taken from Richard Paul and Linda Elder, The Miniature Guide to Critical Thinking Concepts and Tools, Foundation for Critical Thinking Press, 2008).

Critical Thinking Defined by Edward Glaser

In a seminal 1941 study of critical thinking and education, Edward Glaser defined critical thinking as follows: “The ability to think critically, as conceived in this volume, involves three things: (1) an attitude of being disposed to consider in a thoughtful way the problems and subjects that come within the range of one's experiences, (2) knowledge of the methods of logical inquiry and reasoning, and (3) some skill in applying those methods. Critical thinking calls for a persistent effort to examine any belief or supposed form of knowledge in the light of the evidence that supports it and the further conclusions to which it tends. It also generally requires ability to recognize problems, to find workable means for meeting those problems, to gather and marshal pertinent information, to recognize unstated assumptions and values, to comprehend and use language with accuracy, clarity, and discrimination, to interpret data, to appraise evidence and evaluate arguments, to recognize the existence (or non-existence) of logical relationships between propositions, to draw warranted conclusions and generalizations, to put to test the conclusions and generalizations at which one arrives, to reconstruct one's patterns of beliefs on the basis of wider experience, and to render accurate judgments about specific things and qualities in everyday life.” (Edward M. Glaser, An Experiment in the Development of Critical Thinking, Teachers College, Columbia University, 1941).


Thursday, January 13, 2011

The Violent Universe: As in the Heavens, So too on Earth

The Indian physicist Subrahmanyan Chandrasekhar (1910-1995) was remembered in several fascinating and inspiring articles in the December 2010 issue of Physics Today. Perhaps the most stimulating one of them, written by Freeman Dyson, is freely available to non-subscribers on the Physics Today website. See “Chandrasekhar’s Role in 20th Century Science” by Freeman Dyson.


Chandrasekhar’s role in 20th-century science

Once the astrophysics community had come to grips with a calculation performed by a 19-year-old student sailing off to graduate school, the heavens could never again be seen as a perfect and tranquil dominion.

December 2010, page 44

In 1946 Subrahmanyan Chandrasekhar gave a talk at the University of Chicago entitled “The Scientist.”1 He was then 35 years old, less than halfway through his life and less than a third of the way through his career as a scientist, but already he was reflecting deeply on the meaning and purpose of his work. His talk was one of a series of public lectures organized by Robert Hutchins, then the chancellor of the university. The list of speakers was impressive; it included Frank Lloyd Wright, Arnold Schoenberg, and Marc Chagall. That list proves two things. It shows that Hutchins was an impresario with remarkable powers of persuasion, and that he already recognized Chandra as a world-class artist whose medium happened to be theories of the universe rather than music or paint. I say “Chandra” because that is the name his friends used for him when he was alive.

Basic science and derived science

Chandra began his talk with a description of two kinds of scientific inquiry. “I want to draw your attention to one broad division of the physical sciences which has to be kept in mind, the division into a basic science and a derived science. Basic science seeks to analyze the ultimate constitution of matter and the basic concepts of space and time. Derived science, on the other hand, is concerned with the rational ordering of the multifarious aspects of natural phenomena in terms of the basic concepts.”

As examples of basic science, Chandra mentioned the discovery of the atomic nucleus by Ernest Rutherford and the discovery of the neutron by James Chadwick. Each of those discoveries was made by a simple experiment that revealed the existence of a basic building block of the universe. Rutherford discovered the nucleus by shooting alpha particles at a thin gold foil and observing that some of the particles bounced back. Chadwick discovered the neutron by shooting alpha particles at a beryllium target and observing that the resulting radiation collided with other nuclei in the way expected for a massive neutral twin of the proton. As an example of derived science, Chandra mentioned the discovery by Edmond Halley in 1705 that the comet now bearing his name had appeared periodically in the sky at least four times in recorded history and that its elliptical orbit was described by Newton’s law of gravitation. He also noted the discovery by William Herschel in 1803 that the orbits of binary stars are governed by the same law of gravitation operating beyond our solar system. The observations of Halley and Herschel did not reveal new building blocks, but they vastly extended the range of phenomena that the basic science of Newton could explain.

Chandra also described the particular examples of basic and derived science that played the decisive role in his own intellectual development. In 1926, when Chandra was 15 years old but already a physics student at Presidency College in Madras (now Chennai), India, Enrico Fermi and Paul Dirac independently discovered the basic concepts of Fermi–Dirac statistics: If a bunch of electrons is distributed over a number of quantum states, each quantum state can be occupied by at most one electron, and the probability that a state is occupied is a simple function of the temperature. Those basic properties of electrons were a cornerstone of the newborn science of quantum mechanics. They paved the way to the solution of one of the famous unsolved problems of condensed-matter physics, explaining why the specific heats of solid materials decrease with temperature and go rapidly to zero as the temperature goes to zero.
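The “simple function of the temperature” mentioned above is the Fermi–Dirac distribution; writing it out makes the statement concrete (standard textbook notation, not part of Dyson's article):

```latex
% Probability that a single-electron quantum state of energy E is
% occupied, at temperature T and chemical potential \mu:
f(E) = \frac{1}{e^{(E-\mu)/k_B T} + 1}
% As T \to 0 this becomes a step function: states below \mu are filled
% (f = 1) and states above are empty (f = 0) -- the degenerate limit
% that governs electrons in cold metals and in white dwarf interiors,
% and explains why their specific heat vanishes as T \to 0.
```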

Two years later, in 1928, the famous German professor Arnold Sommerfeld, one of the chief architects of quantum mechanics, visited Presidency College. Chandra was well prepared. He had read and understood Sommerfeld’s classic textbook, Atomic Structure and Spectral Lines. He boldly introduced himself to Sommerfeld, who took the time to tell him about the latest work of Fermi and Dirac. Sommerfeld gave the young Chandra the galley proofs of his paper on the electron theory of metals, a yet-to-be-published article that gave the decisive confirmation of Fermi–Dirac statistics. Sommerfeld’s paper was a masterpiece of derived science, showing how the basic concepts of Fermi and Dirac could explain in detail why metals exist and how they behave. The Indian undergraduate was one of the first people in the world to read it.

Two years after his meeting with Sommerfeld, at the ripe old age of 19, Chandra sailed on the steamship Pilsna to enroll as a graduate student at Cambridge University. He was to work there with Ralph Fowler, who had used Fermi–Dirac statistics to explain the properties of white dwarf stars—stars that have exhausted their supply of nuclear energy by burning hydrogen to make helium or carbon and oxygen. White dwarfs collapse gravitationally to a density many thousands of times greater than normal matter, and then slowly cool down by radiating away their residual heat. Fowler’s triumph of derived science included a calculation of the relation between the density and mass of a white dwarf, and his result agreed well with the scanty observations available at that time. With the examples of Sommerfeld and Fowler to encourage him, Chandra was sailing to England with the intention of making his own contribution to derived science.

A sea change

Aboard the Pilsna, Chandra quickly found a way to move forward. The calculations of Sommerfeld and Fowler had assumed that the electrons were nonrelativistic particles obeying the laws of Newtonian mechanics. That assumption was certainly valid for Sommerfeld. Electrons in metals at normal densities have speeds that are very small compared with the speed of light. But for Fowler, the assumption of Newtonian mechanics was not so safe. Electrons in the central regions of white dwarf stars might be moving fast enough to make relativistic effects important. So Chandra spent his free time on the ship repeating Fowler’s calculation of the behavior of a white dwarf star, but with the electrons obeying the laws of Einstein’s special relativity instead of the laws of Newton. Fowler had calculated that for a given chemical composition, the density of a white dwarf would be proportional to the square of its mass. That made sense from an intuitive point of view. The more massive the star, the stronger the force of gravity and the more tightly the star would be squeezed together. The more massive stars would be smaller and fainter, which explained the fact that no white dwarfs much more massive than the Sun had been seen.
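Fowler's scaling of density with the square of the mass follows from a short order-of-magnitude argument, sketched here in modern textbook form (not Fowler's original notation):

```latex
% Nonrelativistic degenerate electrons give a pressure P \propto \rho^{5/3};
% hydrostatic equilibrium of a star of mass M and radius R requires
% P \propto G M^2 / R^4; and the mean density is \rho \propto M / R^3. Hence
\frac{M^{5/3}}{R^{5}} \propto \frac{M^{2}}{R^{4}}
\;\Longrightarrow\;
R \propto M^{-1/3}
\;\Longrightarrow\;
\rho \propto \frac{M}{R^{3}} \propto M^{2}.
% More massive white dwarfs are smaller and denser, as Fowler found.
```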

To his amazement, Chandra found that the change from Newton to Einstein has a drastic effect on the behavior of white dwarf stars. It makes the matter in the stars more compressible, so that the density becomes greater for a star of given mass. The density does not merely increase faster as the mass increases, it tends to infinity as the mass reaches a finite value, the Chandrasekhar limit. Provided its mass is below the limit, physicists can model a white dwarf star with relativistic electrons and obtain a unique mass–density relation; there are no models for white dwarfs with mass greater than the Chandrasekhar limit. The limiting mass depends on the chemical composition of the star. For stars that have burned up all their hydrogen, it is about 1.5 times the mass of the Sun.
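As a numerical sanity check on the limiting mass quoted above, the standard closed-form expression for the Chandrasekhar mass can be evaluated directly. This is a sketch using rounded CODATA constants; the prefactor comes from the n = 3 polytrope solution, and the value μ_e = 2 corresponds to a star that has burned up all its hydrogen:

```python
# Order-of-magnitude evaluation of the Chandrasekhar limit using the
# standard expression M_Ch = omega3 * (sqrt(3*pi)/2) * (hbar*c/G)^(3/2) / (mu_e*m_H)^2.
import math

hbar = 1.0546e-34    # reduced Planck constant, J s
c = 2.9979e8         # speed of light, m/s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.6726e-27     # hydrogen (proton) mass, kg
M_sun = 1.989e30     # solar mass, kg
mu_e = 2.0           # mean molecular weight per electron (He, C, O composition)
omega3 = 2.01824     # Lane-Emden constant for the n = 3 polytrope

M_ch = omega3 * math.sqrt(3 * math.pi) / 2 * (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2

print(f"Chandrasekhar limit ~ {M_ch / M_sun:.2f} solar masses")
```

The result comes out near 1.4 solar masses, consistent with the rounder figure of "about 1.5" quoted in the article for hydrogen-exhausted stars.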

Chandra finished his calculation before he reached England and never had any doubt that his conclusion was correct. When he arrived in Cambridge and showed his results to Fowler, Fowler was friendly but unconvinced and unwilling to sponsor Chandra’s paper for publication by the Royal Society in London. Chandra did not wait for Fowler’s approval but sent a brief version of the paper to the Astrophysical Journal in the US.2 The journal sent it for refereeing to Carl Eckart, a famous geophysicist who did not know much about astronomy. Eckart recommended that it be accepted, and it was published a year later. Chandra had a cool head. He had no wish to engage in public polemics with the British dignitaries who failed to understand his argument. He published his work quietly in a reputable astronomical journal and then waited patiently for the next generation of astronomers to recognize its importance. Meanwhile, he would remain on friendly terms with Fowler and the rest of the British academic establishment, and he would find other problems of derived science that his mastery of mathematics and physics would allow him to solve.

The decline and fall of Aristotle

Astronomers had good reason in 1930 to react with skepticism to Chandra’s statements. The implications of his discovery of a limiting mass were totally baffling. All over the sky, we see an abundance of stars cheerfully shining with masses greater than the limit. Chandra’s calculation says that when those stars burn up their nuclear fuel, there will exist no equilibrium states into which they can cool down. What, then, can a massive star do when it runs out of fuel? Chandra had no answer to that question, and neither did anyone else when he raised it in 1930.

The answer was discovered in 1939 by J. Robert Oppenheimer and his student Hartland Snyder. They published their solution in a paper, “On Continued Gravitational Contraction.”3 In my opinion, it was Oppenheimer’s most important contribution to science. Like Chandra’s contribution nine years earlier, it was a masterpiece of derived science, taking some of Einstein’s basic equations and showing that they give rise to startling and unexpected consequences in the real world of astronomy. The difference between Chandra and Oppenheimer was that Chandra started with the 1905 theory of special relativity, whereas Oppenheimer started with Einstein’s 1915 theory of general relativity. In 1939 Oppenheimer was one of the few physicists who took general relativity seriously. At that time it was an unfashionable subject, of interest mainly to philosophers and mathematicians. Oppenheimer knew how to use it as a working tool, to answer questions about real objects in the sky.

Oppenheimer and Snyder accepted Chandra’s conclusion that there exists no static equilibrium state for a cold star with mass larger than the Chandrasekhar limit. Therefore, the fate of a massive star at the end of its life must be dynamic. They worked out the solution to the equations of general relativity for a massive star collapsing under its own weight and discovered that the star is in a state of permanent free fall—that is, the star continues forever to fall inward toward its center. General relativity allows that paradoxical behavior because the time measured by an observer outside the star runs faster than the time measured by an observer inside the star. The time measured on the outside goes all the way from now to the end of the universe, while the time measured on the inside runs only for a few days. During the gravitational collapse, the inside observer sees the star falling freely at high speed, while the outside observer sees it quickly slowing down. The state of permanent free fall is, so far as we know, the actual state of every massive object that has run out of fuel. We know that such objects are abundant in the universe. We call them black holes.
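The asymmetry between the two observers' clocks can be made quantitative with a standard result of general relativity (not spelled out in the article): for a clock held at radius r outside a mass M, proper time τ is related to far-away coordinate time t by

```latex
\frac{d\tau}{dt} = \sqrt{1 - \frac{2GM}{r c^{2}}}\,,
% which tends to zero as r approaches the Schwarzschild radius
% r_s = 2GM/c^2. A distant observer therefore sees the collapsing
% surface slow down and freeze near r_s, while a clock riding the
% collapse records only a finite, short time -- the "permanent free
% fall" of Oppenheimer and Snyder.
```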

With several decades of hindsight, we can see that Chandra’s discovery of a limiting mass and the Oppenheimer–Snyder discovery of permanent free fall were major turning points in the history of science. Those discoveries marked the end of the Aristotelian vision that had dominated astronomy for 2000 years: the heavens as the realm of peace and perfection, contrasted with Earth as the realm of strife and change. Chandra and Oppenheimer demonstrated that Aristotle was wrong. In a universe dominated by gravitation, no peaceful equilibrium is possible. During the 1930s, between the theoretical insights of Chandra and Oppenheimer, Fritz Zwicky’s systematic observations of supernova explosions confirmed that we live in a violent universe.4 In the same decade, Zwicky discovered the dark matter whose gravitation dominates the dynamics of large-scale structures. After 1939, astronomers slowly and reluctantly abandoned the Aristotelian universe as more evidence accumulated of violent events in the heavens. Radio and x-ray telescopes revealed a universe full of shock waves and high-temperature plasmas, with outbursts of extreme violence associated in one way or another with black holes.

Every child learning science in school and every viewer watching popular scientific documentary programs on television now knows that we live in a violent universe. The “violent universe” has become a part of the prevailing culture. We know that an asteroid collided with Earth 65 million years ago and caused the extinction of the dinosaurs. We know that every heavy atom of silver or gold was cooked in the core of a massive star before being thrown out into space by a supernova explosion. We know that life survived on our planet for billions of years because we are living in a quiet corner of a quiet galaxy, far removed from the explosive violence that we see all around us in more turbulent parts of the universe. Astronomy has changed its character totally during the past 100 years. A century ago the main theme of astronomy was to explore a quiet and unchanging landscape. Today the main theme is to observe and explain the celestial fireworks that are the evidence of violent change. That radical transformation in our picture of the universe began on the good ship Pilsna when the 19-year-old Chandra discovered that there can be no stable equilibrium state for a massive star.

New ideas confront the old order

It has always seemed strange to me that the work of the three main pioneers of the violent universe—Chandra, Oppenheimer, and Zwicky—received so little recognition and acclaim at the time when it was done. Those discoveries were neglected, in part, because all three pioneers came from outside the astronomical profession. The professional astronomers of the 1930s were conservative in their view of the universe and in their social organization. They saw the universe as a peaceful domain that they knew how to explore with the standard tools of their trade. They were not inclined to take seriously the claims of interlopers with new ideas and new tools. It was easy for the astronomers to ignore the outsiders because the new discoveries did not fit into the accepted ways of thinking and the discoverers did not fit into the established astronomical community.

In addition to those general considerations, which applied to all three of the scientists, individual circumstances contributed to the neglect of their work. For Chandra, the special circumstances were the personalities of Arthur Eddington and Edward Arthur Milne, who were the leading astronomers in England when Chandra arrived from India. Eddington and Milne had their own theories of stellar structure in which they firmly believed; both of those were inconsistent with Chandra’s calculation of a limiting mass. The two astronomers promptly decided that Chandra’s calculation was wrong and never accepted the physical facts on which it was based.

Zwicky confronted an even worse situation at Caltech, where the astronomy department was dominated by Edwin Hubble and Walter Baade. Zwicky belonged to the physics department and had no official credentials as an astronomer. Hubble and Baade believed that Zwicky was crazy, and he believed that they were stupid. Both beliefs had some basis in fact. Zwicky had beaten the astronomers at their own game of observing the heavens, using a wide-field camera that could cover the sky 100 times faster than could other telescope cameras existing at that time. Zwicky then made an enemy of Baade by accusing him of being a Nazi. As a result of that and other incidents, Zwicky’s discoveries were largely ignored for the next 20 years.

The neglect of Oppenheimer’s greatest contribution to science was mostly due to an accident of history. His paper with Snyder, establishing in four pages the physical reality of black holes, was published in the Physical Review on 1 September 1939, the same day Adolf Hitler sent his armies into Poland and began World War II. In addition to the distraction created by Hitler, the same issue of the Physical Review contained the monumental paper by Niels Bohr and John Wheeler on the theory of nuclear fission—a work that spelled out, for all who could read between the lines, the possibilities of nuclear power and nuclear weapons.5 It is not surprising that the understanding of black holes was pushed aside by the more urgent excitements of war and nuclear energy.

Each of the three pioneers, after a brief period of revolutionary discovery and a short publication, lost interest in fighting for the revolution. Chandra enjoyed seven peaceful years in Europe before moving to America, mostly working, without revolutionary implications, on the theory of normal stars. Zwicky, after finishing the sky survey that revealed dark matter and several types of supernovae, became involved in military problems as World War II was beginning; ultimately, he became an expert in rocketry. Oppenheimer, after discovering the most important astronomical consequence of general relativity, turned his attention to mundane nuclear explosions and became the director of the Los Alamos laboratory.

When I tried in later years to start a conversation with Oppenheimer about the importance of black holes in the evolution of the universe, he was as unwilling to talk about them as he was to talk about his work at Los Alamos. Oppenheimer suffered from an extreme form of the prejudice prevalent among theoretical physicists, overvaluing pure science and undervaluing derived science. For Oppenheimer, the only activity worthy of the talents of a first-rate scientist was the search for new laws of nature. The study of the consequences of old laws was an activity for graduate students or third-rate hacks. He had no desire in later years to return to the study of black holes, the area in which he had made his most important contribution to science. Indeed, Oppenheimer might have continued to make important contributions in the 1950s, when black holes were an unfashionable subject, but he preferred to follow the latest fashion. Oppenheimer and Zwicky did not, like Chandra, live long enough to see their revolutionary ideas adopted by a younger generation and absorbed into the mainstream of astronomy.

From stellar structure to Shakespeare

Chandra would spend 5–10 years on each field that he wished to study in depth. He would take a year to master the subject, a few more years to publish a series of journal articles demolishing the problems that he could solve, and then a few more years writing a definitive book that surveyed the subject as he left it for his successors. Once the book was finished, he left that field alone and looked for the next topic to study.

That pattern was repeated eight times and recorded in the dates and titles of Chandra’s books. An Introduction to the Study of Stellar Structure (University of Chicago Press, 1939) summarizes his work on the internal structure of white dwarfs and other types of stars. Principles of Stellar Dynamics (University of Chicago Press, 1942) describes his highly original work on the statistical theory of stellar motions in clusters and in galaxies. Radiative Transfer (Clarendon Press, 1950) gives the first accurate theory of radiation transport in stellar atmospheres. Hydrodynamic and Hydromagnetic Stability (Clarendon Press, 1961) provides a foundation for the theory of all kinds of astronomical objects—including stars, accretion disks, and galaxies—that may become unstable as a result of differential rotation. Ellipsoidal Figures of Equilibrium (Yale University Press, 1969) solves an old problem by finding all the possible equilibrium configurations of an incompressible liquid mass rotating in its own gravitational field. The problem had been studied by the great mathematicians of the 19th century—Carl Jacobi, Richard Dedekind, Peter Lejeune Dirichlet, and Bernhard Riemann—who were unable to determine which of the various configurations were stable. In the introduction to his book, Chandra remarks,

These questions were to remain unanswered for more than a hundred years. The reason for this total neglect must in part be attributed to a spectacular discovery by Poincaré, which channeled all subsequent investigations along directions which appeared rich with possibilities; but the long quest it entailed turned out in the end to be after a chimera.

After the ellipsoidal figures opus came a gap of 15 years before the appearance of the next book, The Mathematical Theory of Black Holes (Clarendon Press, 1983). Those 15 years were the time during which Chandra worked hardest and most intensively on the subject closest to his heart: the precise mathematical description of black holes and their interactions with surrounding fields and particles. His book on black holes was his farewell to technical research, just as The Tempest was William Shakespeare’s farewell to writing plays. After the book was published, Chandra lectured and wrote about nontechnical themes, about the works of Shakespeare and Beethoven and Shelley, and about the relationship between art and science. A collection of his lectures for the general public was published in 1987 with the title Truth and Beauty.1

During the years of his retirement, he spent much of his time working his way through Newton’s Principia. Chandra reconstructed every proposition and every demonstration, translating the geometrical arguments of Newton into the algebraic language familiar to modern scientists. The results of his historical research were published shortly before his death in his last book, Newton’s “Principia” for the Common Reader (Clarendon Press, 1995). To explain why he wrote the book, he said, “I am convinced that one’s knowledge of the Physical Sciences is incomplete without a study of the Principia in the same way that one’s knowledge of Literature is incomplete without a knowledge of Shakespeare.”6

Chandra’s work on black holes was the most dramatic example of his commitment to derived science as a tool for understanding nature. Our basic understanding of the nature of space and time rests on two foundations: first, the equations of general relativity discovered by Einstein, and second, the black hole solutions of those equations discovered by Karl Schwarzschild and Roy Kerr and explored in depth by Chandra. To write down the basic equations is a big step toward understanding, but it is not enough. To reach a real understanding of space and time, it is necessary to construct solutions of the equations and to explore all their unexpected consequences. Chandra never said that he understood more about space and time than Einstein, but he did. So long as Einstein did not accept the existence of black holes, his understanding of space and time was far from complete.
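The Schwarzschild solution mentioned above is the simplest of those black hole solutions. In the usual coordinates, the line element outside a nonrotating mass M reads (standard form, added here for concreteness):

```latex
ds^{2} = -\left(1 - \frac{2GM}{r c^{2}}\right) c^{2}\, dt^{2}
       + \left(1 - \frac{2GM}{r c^{2}}\right)^{-1} dr^{2}
       + r^{2}\left( d\theta^{2} + \sin^{2}\theta \, d\varphi^{2} \right),
% with the event horizon at r_s = 2GM/c^2. Kerr's 1963 solution
% generalizes this to rotating black holes; Chandra's book explores
% the perturbations and stability of both.
```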

When I was a student at Cambridge, I studied with Chandra’s friend Godfrey Hardy, a pure mathematician who shared Chandra’s views about British imperialism and Indian politics. When I came, Hardy was old and he spent most of his time writing books. With the arrogance of youth, I asked Hardy why he wasted his time writing books instead of doing research. Hardy replied, “Young men should prove theorems. Old men should write books.” That was good advice that I have never forgotten. Chandra followed it too. I do not know whether he learned it from Hardy.

This article is based on a talk I gave for the Chandrasekhar Centennial Symposium at the University of Chicago on 16 October 2010.

Freeman Dyson is a retired professor at the Institute for Advanced Study in Princeton, New Jersey.


  1. S. Chandrasekhar, Truth and Beauty: Aesthetics and Motivations in Science, U. Chicago Press, Chicago (1987).
  2. S. Chandrasekhar, Astrophys. J. 74, 81 (1931).
  3. J. R. Oppenheimer, H. Snyder, Phys. Rev. 56, 455 (1939).
  4. See, for example, F. Zwicky, Morphological Astronomy, Springer, Berlin (1957), sec. 8 and 9.
  5. N. Bohr, J. A. Wheeler, Phys. Rev. 56, 426 (1939).
  6. S. Chandrasekhar, Curr. Sci. 67, 495 (1994).
  7. Ref. 2, reprinted in K. C. Wali, A Quest for Perspectives: Selected Works of S. Chandrasekhar, with Commentary, vol. 1, Imperial College Press, London (2001), p. 13.

Astronomers Describe Violent Universe

WASHINGTON (AP) - The deeper astronomers gaze into the cosmos, the more they find it's a bizarre and violent universe. The research findings from this week's annual meeting of U.S. astronomers range from blue orphaned baby stars to menacing "rogue" black holes that roam our galaxy, devouring any planets unlucky enough to be within their limited reach.

"It's an odd universe we live in," said Vanderbilt University astronomer Kelly Holley-Bockelmann. She presented her theory on rogue black holes at the American Astronomical Society's meeting in Austin, Texas, earlier this week.

It should be noted that she's not worried and you shouldn't be either. The odds of one of these black holes swallowing up Earth or the sun or wreaking other havoc are somewhere around 1 in 10 quadrillion in any given year.

"This is the glory of the universe," added J. Craig Wheeler, president of the astronomy association. "What is odd and what is normal is changing."

Just five years ago, astronomers were gazing at a few thousand galaxies where stars formed in a bizarre and violent manner. Now the number is in the millions, thanks to more powerful telescopes and supercomputers to crunch the crucial numbers streaming in from space, said Wheeler, a University of Texas astronomer.

Scientists are finding that not only are they improving their understanding of the basic questions of the universe—such as how did it all start and where is it all going—they also keep stumbling upon unexpected, hard-to-explain cosmic quirks and the potential, but comfortably distant, dangers.

Much of what they keep finding plays out like a stellar version of a violent Quentin Tarantino movie. The violence surrounds and approaches Earth, even though our planet is safe and "in a pretty quiet neighborhood," said Wheeler, author of the book "Cosmic Catastrophes."

One example is an approaching gas cloud discussed at the meeting Friday. The cloud has a mass 1 million times that of the sun. It is 47 quadrillion miles away. But it's heading toward our Milky Way galaxy at 150 miles per second. And when it hits, there will be fireworks that form new stars and "really light up the neighborhood," said astronomer Jay Lockman at the National Radio Astronomy Observatory in West Virginia.

But don't worry. It will hit a part of the Milky Way far from Earth and the biggest collision will be 40 million years in the future.

The giant cloud has been known for more than 40 years, but only now have scientists realized how fast it's moving. So fast, Lockman said, that "we can see it sort of plowing up a wave of galactic material in front of it."

When astronomers this week unveiled a giant map of mysterious dark matter in a supercluster of galaxies, they explained that the violence of the cramped-together galaxies is so great that there is now an accepted vocabulary for various types of cosmic brutal behavior.

The gravitational force between the clashing galaxies can cause "slow strangulation," in which crucial gas is gradually removed from the victim galaxy. "Stripping" is a more violent process in which the larger galaxy rips gas from the smaller one. And then there's "harassment," which is a quick fly-by encounter, said astronomer Meghan Gray of the University of Nottingham in the United Kingdom.

Gray's presentation essentially showed the victims of galaxy-on-galaxy violence. She and her colleagues are trying to figure out the how the dirty deeds were done.

In the past few days, scientists have unveiled plenty to ooh and aah over:

_ Photos of "blue blobs" that astronomers figure are orphaned baby stars. They're called orphans because they were "born in the middle of nowhere" instead of within gas clouds, said Catholic University of America astronomer Duilia F. de Mello.

_ A strange quadruplet of four hugging stars, which may eventually help astronomers understand better how stars form.

_ A young star surrounded by dust that may eventually become a planet. It's nicknamed "the moth," because the interaction of star and dust is shaped like one.

_ A spiral galaxy with two pairs of arms spinning in opposite directions, like a double pinwheel. It defies what astronomers believe should happen. It is akin to one of those spinning-armed flamingo lawn ornaments, said astronomer Gene Byrd of the University of Alabama.

_ The equivalent of post-menopausal stars giving unlikely birth to new planets. Most planets form soon after their star does, but astronomers found two older stars, one at least 400 million years old, with new planets.

"Intellectually and spiritually, if I can use that word with a lower case 's,' it's awe-inspiring," Wheeler said. "It's a great universe."