The smallest words in our vocabulary often reveal the most about us, including our levels of honesty and thinking style
STOP for a moment and think about your
most recent conversation, email, tweet or text message. Perhaps you
think you said something about dinner plans, domestic chores or work.
And you probably did. But at the same time, you said much more. The
precise words you used revealed more about you than you can imagine.
Hidden inside language are small,
stealthy words that can reveal a great deal about your personality,
thinking style, emotional state and connections with others. These words
account for less than 0.1 per cent of your vocabulary but make up more
than half of the words commonly used. Your brain is not wired to notice
them but if you pay close attention, you will start to see their subtle
power.
I'm a social psychologist whose
interest in these words came about almost accidentally. In the early
1980s, I stumbled on a finding that fascinated me. People who reported
having a traumatic experience and who kept the experience a secret had
far more health problems than people who talked openly. Why would
keeping a secret be so unhealthy? If you asked people to write about
their secrets, would their health improve? The answer, I soon
discovered, was yes.
As part of this work, we developed a
computer program to analyse the language people used when they wrote
about traumas. We made numerous discoveries using this tool, such as the
value of using words associated with positive emotions.
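To make the idea concrete, here is a minimal sketch of the kind of word-category counting such a program performs. The category lists and sample text below are invented for illustration; the actual program (which grew into Pennebaker's LIWC tool) relies on much larger, validated dictionaries.

```python
from collections import Counter
import re

# Toy category lists -- illustrative only, not the validated dictionaries
# a real word-counting program would use.
POSITIVE_EMOTION = {"happy", "love", "calm", "relief", "hope"}
FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def category_rates(text):
    """Return each category's share of the total word count, as a percentage."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = sum(counts.values()) or 1
    def rate(category):
        return 100.0 * sum(counts[w] for w in category) / total
    return {
        "positive_emotion": rate(POSITIVE_EMOTION),
        "first_person_singular": rate(FIRST_PERSON_SINGULAR),
    }

sample = "I felt relief after I finally wrote about it. My hope is that I can stay calm."
print(category_rates(sample))
```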
However, our most striking discovery
was not about the content of people's writing but the style. In
particular, we found that the use of pronouns – I, me, we, she, they –
mattered enormously. The more people changed from using first-person
singular pronouns (I, me, my) to using other pronouns (we, you, she,
they) from one piece of writing to the next, the better their health
became. Their word use reflected their psychological state.
This was the prelude to a more
substantial discovery that has become my life's work. I found myself
reading endless reams of text to analyse language style. For example, I
wondered if there were any gender distinctions and found that yes, there
were significant differences.
As I played with more and more words,
certain patterns kept recurring. Not only was gender a factor; there
were also large differences in language style as a function of people's age,
social class, emotional state, level of honesty, personality, degree of
formality, leadership ability, quality of relationships and so on. Word
use was associated with almost every dimension of social psychology I
studied.
I'm now convinced that by
understanding language style, we gain a far clearer sense of the social
and psychological processes affecting our behaviours.
What do I mean by style? In any given
sentence, there are two basic types of word. The first is content words,
which provide meaning. These include nouns (table, uncle), verbs (to
love, to walk), adjectives (blue, mouthwatering) and adverbs (sadly,
hungrily).
The other type is "function" words.
These serve quieter, supporting roles – connecting, shaping and
organising the content words. They are what determines style.
Function words include pronouns (I,
she, it), articles (a, an, the), prepositions (up, with), auxiliary
verbs (is, don't), negations (no, never), conjunctions (but, and),
quantifiers (few, most) and common adverbs (very, really). By
themselves, they don't have much meaning. Whereas a content word such as
"table" can trigger an image in everyone's mind, try to imagine "that"
or "really" or "the".
Why make such a big deal about these
words? Because they are the keys to the soul. OK, maybe that's an
overstatement, but bear with me.
Function words are psychologically
very revealing. They are used at high rates, while also being short and
hard to detect. They are processed in the brain differently than content
words. And, critically, they require social skills to use properly.
It's about time that these forgettable little words got their due.
In November 1863, four months after
the devastating Battle of Gettysburg, Abraham Lincoln delivered one of
the most significant speeches in American history:
Four score and seven years ago our
fathers brought forth, upon this continent, a new nation, conceived in
Liberty, and dedicated to the proposition that all men are created
equal.
Now we are engaged in a great civil
war, testing whether that nation, or any nation so conceived, and so
dedicated, can long endure. We are met here on a great battlefield of
that war. We have come to dedicate a portion of it as a final resting
place for those who here gave their lives that that nation might live.
It is altogether fitting and proper that we should do this.
But in a larger sense we can not
dedicate – we can not consecrate – we can not hallow this ground. The
brave men, living and dead, who struggled, here, have consecrated it far
above our poor power to add or detract. The world will little note, nor
long remember, what we say here, but can never forget what they did
here.
It is for us, the living, rather to be dedicated
here to the unfinished work which they have, thus far, so nobly carried
on. It is rather for us to be here dedicated to the great task remaining
before us – that from these honored dead we take increased devotion to
that cause for which they here gave the last full measure of devotion –
that we here highly resolve that these dead shall not have died in vain;
that this nation shall have a new birth of freedom; and that this
government of the people, by the people, for the people, shall not
perish from the earth.
Close your eyes and reflect on the
content of the speech. Which words occurred most frequently? Most people
say "nation", "war", "men" and possibly "dead". Not so. The most
commonly used word is "that", followed by "the". Only one content word
is in the top 15 – "nation". It is remarkable that such a great speech
can be largely composed of small, insignificant words.
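If you want to check the claim, a few lines of code will do it. The sketch below counts word frequencies in the text of the address quoted above (lower-cased, punctuation stripped); it is a quick illustration, not the analysis tool described in this article.

```python
from collections import Counter
import re

# Paste the full text of the address (quoted above) into this string.
gettysburg = """Four score and seven years ago our fathers brought forth, upon this
continent, a new nation, conceived in Liberty, and dedicated to the proposition
that all men are created equal. ..."""

# Lower-case the text, strip punctuation and count every remaining word.
words = re.findall(r"[a-z]+", gettysburg.lower())
for word, count in Counter(words).most_common(15):
    print(f"{word:>10} {count}")
```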
But this is typical. A very small
number of function words account for most of the words we hear, read and
say. Over the past 20 years, my colleagues and I have analysed billions
of written and spoken words and compiled a list of the most common (see diagram).
Every one of the top 20 is a function word; together they account for
almost 30 per cent of all words that we use, read and hear. English has
about 450 common function words in total, which account for 55 per cent
of all the words we use.
To put this into perspective, the
average English speaker has a vocabulary of perhaps 100,000 words. More
than 99.9 per cent of this is made up of content words but these account
for less than half of the words we use. This split is comparable in
other languages.
Function words are both short and hard
to perceive. One reason we have trouble spotting their high rate of
usage is that our brains naturally slide over them. We automatically
focus on content words as they provide the basic who, what and where of a
conversation.
This distinction can also be seen in
people with brain damage. Occasionally, a person will have a brain
injury that affects their ability to use content words but not function
words. Injuries in other areas can produce the opposite results.
The two brain regions of interest are
Broca's and Wernicke's areas. If a person with damage to their Broca's
area were asked to describe a picture of, say, a girl and an old woman,
he or she might say, "girl… ummm… woman… ahh… picture, uhhh… old."
Someone with a damaged Wernicke's area might say, "Well, right here is
one of them and I think she's next to that one. So if I see over there
you'll see her too." To say that Broca's area controls style words and
Wernicke's controls content words is a gross oversimplification.
Nevertheless, it points to the fact that the distinction between content
and style words is occurring at a fairly basic level in the brain.
Interestingly, Broca's area is in the frontal lobe of the brain, which
controls a number of social skills.
Brain research, then, supports the
conclusion that function words are related to our social worlds. To see
just how social, imagine finding this note on the street:
HE IS AROUND BUT I DON'T KNOW WHERE. I WILL BE BACK SOON. DON'T DO IT!
The note is grammatically correct and
is understandable in a certain sense, but we have no real idea what it
means. Every word is a function word. Whoever wrote the note had a
shared understanding with its intended recipient of who "he" is, what
"it" refers to, and so on.
Now you find out the note was written by Bob to Julia, who had the following phone conversation a few minutes earlier:
Bob: Hi, you caught me at a crazy time. I've got to go out but I'll leave a note on the door.
Julia: Great. I need the accountant to sign my expense form. Do you know where he is?
Bob: I'll see if he's in.
Julia: Did I tell you that I'm thinking of taking up smoking again? I know it annoys you.
Bob: Are you nuts? Let's talk about this.
All of a sudden, the note makes sense.
Function words require social skills
to use properly. The speaker assumes the listener knows who everyone is
and the listener must know the speaker to follow the conversation. The
ability to understand a simple conversation packed full of function
words demands social knowledge. All function words work in this way. The
ability to use them is a marker of basic social skills – and analysing
how people use function words reveals a great deal about their social
worlds.
That is not to say a single sentence is particularly
revealing. If you mention "a chair" versus "that chair", it says very
little about you. But what if we monitored your words over the course of
a week? What if we found that you use "a" and "the" at high rates, or
hardly at all?
In fact, there are people who use
articles at very high rates and others who rarely use them. Men tend to
use them at higher rates than women. Gender aside, high article users
tend to be more organised, emotionally stable, conscientious,
politically conservative and older.
Now things start to get interesting.
It seems the use of articles can tell us about the ways people think,
feel and connect with others. The same is true for pronouns,
prepositions, and virtually all function words.
One area where this is useful is in
personality research. As you might guess, different patterns of function
words reveal important parts of people's personalities.
In one experiment, we analysed
hundreds of essays written by my students and we identified three very
different writing styles: formal, analytic and narrative.
Formal writing often appears stiff,
sometimes humourless, with a touch of arrogance. It includes high rates
of articles and prepositions but very few I-words, and infrequent
discrepancy words, such as "would", and adverbs. Formality is related to
a number of important personality traits. Those who score highest in
formal thinking tend to be more concerned with status and power and are
less self-reflective. They drink and smoke less and are more mentally
healthy, but also tend to be less honest. As people age, their writing
styles tend to become more formal.
Analytical writing, meanwhile, is all
about making distinctions. These people attain higher grades, tend to be
more honest, and are more open to new experiences. They also read more
and have more complex views of themselves.
Narrative writers are natural
storytellers. The function words that generally reveal storytelling
involve people, past-tense verbs and inclusive words such as "with" and
"together". People who score high for narrative writing tend to have
better social skills, more friends and rate themselves as more outgoing.
By watching how people use function
words, we gain insight into how they think, how they organise their
worlds and how they relate to other people.
This work on personality only
scratches the surface. We have also found that function words can detect
emotional states, spot when people are lying, predict where they rank
in social hierarchies and the quality of their relationships. They
reveal much about the dynamics within groups. They can be used to
identify the authors of disputed texts, and much more.
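As a rough illustration of the authorship idea (my own simplification, not the method used in the studies mentioned here): if every writer leaves a characteristic fingerprint in how often they use common function words, two texts can be compared by comparing those rates. The tiny word list and cosine measure below are assumptions made purely for the example.

```python
from collections import Counter
import math
import re

# A tiny, illustrative set of English function words; real studies use hundreds.
FUNCTION_WORDS = ["the", "a", "an", "i", "we", "you", "that", "of", "to",
                  "and", "in", "is", "it", "not", "with", "but", "on", "for"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(p, q):
    """Cosine similarity between two function-word profiles (1.0 = identical mix)."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0

text_a = "I think that we should go, but not today."
text_b = "We thought that it was not the right day to go."
print(similarity(profile(text_a), profile(text_b)))
```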
The smallest, stealthiest words in our vocabulary often reveal the most about us.
James W. Pennebaker is chair of the department of psychology at the University of Texas at Austin. This article is based on his new book, The Secret Life of Pronouns: What our words say about us (Bloomsbury Press). You can find out more and analyse your own words at secretlifeofpronouns.com
KNOW YOURSELF
If you had access to all the words you used, what
could you learn about yourself? Using a recording device programmed to
switch on for about 30 seconds once every 12 to 14 minutes, I have been
able to analyse my family's interactions.
The first weekend I wore it seemed uneventful. But
when I transcribed my recording I was distressed to see the way I spoke
to my 12-year-old son. My tone was often detached. I used big words,
lots of articles and few pronouns. My language was warmer with my wife
and daughter.
The experience had a profound effect on me.
Thereafter, I made a conscious attempt to be warmer and more
psychologically available to my son.
I have also analysed my language in emails,
classroom lectures, articles and letters. Sometimes my language is
predictable, sometimes it isn't. And when it isn't, I learn something
about myself.
Here we will discuss the problem of denialists, their standard
arguing techniques, how to identify denialists and/or cranks, and
discuss topics of general interest such as skepticism, medicine, law and
science. I’ll be taking on denialists in the sciences, while my
brother, Chris, will be geared more towards the legal and policy
implications of industry groups using denialist arguments to prevent
sound policies.
First of all, we have to get some basic terms defined for all of our new readers.
Denialism is the employment of rhetorical tactics to give the
appearance of argument or legitimate debate, when in actuality there is
none. These false arguments are used when one has few or no facts to
support one’s viewpoint against a scientific consensus or against
overwhelming evidence to the contrary. They are effective in distracting
from actual useful debate using emotionally appealing, but ultimately
empty and illogical assertions.
Examples of common topics in which denialists employ their tactics
include: Creationism/Intelligent Design, Global Warming denialism,
Holocaust denial, HIV/AIDS denialism, 9/11 conspiracies, Tobacco
Carcinogenicity denialism (the first organized corporate campaign),
anti-vaccination/mercury autism denialism and anti-animal testing/animal
rights extremist denialism. Denialism spans the ideological spectrum,
and is about tactics rather than politics or partisanship. Chris will be
covering denialism of industry groups, such as astroturfing, and the
use of a standard and almost sequential set of denialist arguments that
he discusses in his Denialist Deck of Cards.
Five general tactics are used by denialists to sow confusion: conspiracy,
selectivity (cherry-picking), fake experts, impossible expectations
(also known as moving goalposts), and general fallacies of logic.
Throughout this first week we’ll be discussing each of these five
tactics in turn to give examples of how they are used, and how to
recognize their implementation. We’ll also introduce our handy little
icon scheme that we’ll attach to each post discussing denialists. If you
just can’t wait a whole week, well, visit our old blog’s definition to see what we’re talking about.
Finally, some ground rules. We don’t argue with cranks. Part of
understanding denialism is knowing that it’s futile to argue with them,
and giving them yet another forum is unnecessary. They also have the
advantage of just being able to make things up and it takes forever to
knock down each argument as they’re only limited by their imagination
while we’re limited by things like logic and data. Recognizing denialism
also means recognizing that you don’t need to, and probably shouldn’t,
argue with it. Denialists are not honest brokers in the debate (you’ll
hear me harp on this a lot). They aren’t interested in truth, data, or
informative discussion; they’re interested in their world view being the
only one, and they’ll say anything to try to bring this about. We feel
that once you’ve shown that what they say is deceptive, or prima facie
absurd, you don’t have to spend a graduate career dissecting it and
taking it apart. It’s more like a “rule-of-thumb” approach to bad
scientific argument. That’s not to say we won’t discuss science or our
posts with people who want to honestly be informed, we just don’t want
to argue with cranks. We have work to do.
Second, denialism isn’t about name-calling or the psychological
coping mechanism of denial. The first reaction of any denialist to being
labeled such is to merely reply, “you’re the denialist” or to redefine
the terms so that it excludes them (usually comparing themselves to
Galileo in the process). However, denialism is about tactics
that are used to frustrate legitimate discussion, it is not about simply
name-calling. It’s about how you engage in a debate when you have no data (the key difference between denialists and the paradigm-shifters of yesteryear). There are a few more common defenses that we’ll discuss in time.
So while the denialists will inevitably show up and suggest my belief
in the validity of carbon dating shows I’m a Bible denialist, or my
inability to recognize the wisdom of some HIV/AIDS crank shows I don’t
understand biology, we won’t tend to engage them. They’re cranks and we
aim to show how you can instantly recognize and dismiss crank arguments.
Finally, just because some people believe stupid things doesn’t
make them denialists. A lot of people get suckered in by denialist
arguments and benefit from having the record corrected or being shown
how to recognize good scientific debate versus unsound denialist
debates. We aren’t suggesting everybody who has a few wacky ideas is a
crank; part of the reason denialists abound and are often successful in
bringing the masses over to their side is that their arguments don’t
necessarily sound insane to the uninitiated. Denialist arguments are
emotionally appealing and work on a lot of people. We’re trying to
inform people about denialism and how to recognize denialist arguments
so that ultimately they will be less effective in swaying those that may
not be fully informed about science. Hopefully, by creating awareness
of the ground rules of legitimate scientific debate, citizens, policy
makers, and the media may better distinguish between sound and unsound
scientific debate.
What are denialist conspiracy theories and why should people be
instantly distrustful of them? And what do they have to do with
denialism?
Almost every denialist argument will eventually devolve into a
conspiracy. This is because denialist theories that oppose
well-established science eventually need to assert deception on the part
of their opponents to explain things like why every reputable
scientist, journal, and opponent seems to be able to operate from the
same page. In the crank mind, it isn’t because their opponents are
operating from the same set of facts, it’s that all their opponents are liars (or fools) who are using the same false set of information.
But how could it be possible, for instance, for nearly every
scientist in a field to be working together to promote a falsehood? People
who believe this is possible simply have no practical understanding of
how science works as a discipline. For one, scientists don’t just
publish articles that reaffirm a consensus opinion. Articles that just
rehash what is already known or say “everything is the same” aren’t
interesting and don’t get into good journals. Scientific journals are
only interested in articles that extend knowledge, or challenge
consensus (using data of course). Articles getting published in the big
journals like Science or Nature are often revolutionary (and not
infrequently wrong), challenge the expectations of scientists or
represent some phenomenal experiment or hard work (like the human genome
project). The idea that scientists would keep some kind of exceptional
secret is absurd; the idea that, in the instance of evolution deniers, we only
believe in evolution because we’ve been infiltrated by a cabal of
“materialists” is even more absurd. This is not to say that real
conspiracies never occur, but the assertion of a conspiracy in the
absence of evidence (or by tying together weakly correlated and
nonsensical data) is usually the sign of a crackpot. Belief in the
Illuminati, Zionist conspiracies, 9/11 conspiracies, Holocaust denial
conspiracies, materialist atheist evolution conspiracies, global warming
science conspiracies, UFO government conspiracies, pharmaceutical
companies suppressing altie-med conspiracies, or what have you,
almost always rests upon some unnatural suspension of disbelief in the
conspiracy theorist that is the sign of a truly weak mind. Hence, our
graphic to denote the presence of these arguments – the tinfoil hat.
Another common conspiratorial attack on consensus science (without
data) is that science is just some old-boys club (not saying it’s
entirely free of it but…) and we use peer-review to silence dissent.
This is a frequent refrain of HIV/AIDS denialists like Dean Esmay or Global Warming denialists like Richard Lindzen
trying to explain why mainstream scientists won’t publish their BS. The
fact is that good science speaks for itself, and peer-reviewers are
willing to publish things that challenge accepted facts if the data are
good. If you’re just a denialist cherry-picking data and nitpicking the
work of others, you’re out of luck. Distribution of scientific funding
(another source of conspiracy from denialists) is similarly based on
novelty and is not about repeating some kind of party line. Yes, it’s
based on study-sections and peer-review of grants, but the idea that the
only studies that get funded are ones that affirm existing science is
nuts; if anything, it’s the opposite.
Lately, there’s been a lot of criticism of the excess focus on
novelty in distribution of funding and in what gets accepted into
journals. I encourage all scientists and those interested in science to
watch this video of John Ioannidis giving grand rounds at NIH on how
science gets funded, published, and sadly, often proven wrong. I put it
up at google video. He is the author of “Why most published research findings are false”
published in PLoS last year. It’s proof that science is perfectly
willing to be critical of itself, more than happy to publish exceptional
things that often turn out wrong, but ultimately, highly
self-correcting.
I realize it’s an hour long, but it’s really a great talk.
For our next installment of the big five tactics in denialism we’ll
discuss the tactic of selectivity, or cherry-picking of data.
Denialists tend to cite single papers supporting their idea (often you
have to squint to see how it supports their argument). Similarly they
dig up discredited or flawed papers either to suggest they are supported
by the scientific literature, or to disparage a field making it appear
the science is based on weak research. Quote mining is also an example
of “selective” argument, by using a statement out of context, just like
using papers or data out of context, they are able to sow confusion.
Here at denialism blog we’ll use the cherries to denote the presence of
selectivity in a denialist screed.
Examples abound, such as when HIV/AIDS denialists harp about Gallo
fudging the initial identification of HIV (a famous dispute about
whether or not he stole Montagnier’s virus) to suggest the virus was
never actually identified or that the field rests on a weak foundation.
Jonathan Wells likes to harp endlessly about Haeckel’s embryos to
suggest that the tens of thousands of other papers on the subject of
evolution, and the entire basis of genetics, biology and biochemistry
are wrong.
One of the main reasons this is such an effective tactic to use on
science is that when something is shown to be incorrect, we can’t
“purge” the literature so the bad papers stay there forever. Only when a
paper is retracted is the literature actually restored, and there’s a
lot of research and researchers that got things wrong on the way to
figuring out a problem. It’s really just the nature of research: we make
mistakes, but the self-correcting nature of science helps get us
incrementally closer to some form of scientific truth. It is up to the
individual researcher to read and quote more than the papers that
support their foregone conclusion, as one has to develop theories that
effectively synthesize all the data and represent an understanding of an entire field, not just quote the data one likes.
Then there is the issue of selective quotation of perfectly good
science or scientists. For example, see our post on how the Family
Research Council misrepresents data on contraception to promote their political agenda. Talk Origins has an entire quote-mine project devoted to documenting how creationists misrepresent scientists to advance their agenda.
This tendency towards quote-mining and misrepresentation of science
is really the clearest proof of the dishonesty inherent in denialist
tactics (with the possible exception in the case of Intelligent Design
Creationism of the wedge document
– but an internal statement of denialists’ goals is usually hard to
come by). Selectivity is exceedingly common, and proof that many
denialists aren’t just intellectually, but morally bankrupt.
You know who they are – those organizations that have words like
“freedom”, “rights”, “choice” and “consumer” in their names but always
shill for corporate interests…those occasional MDs or engineers
creationists find who will say evolution has nothing to do with
science. They are the fake experts.
But how do we tell which experts are fake and which are real?
To figure out who is a fake expert you have to figure out what a real
expert is. My definition would be that a real expert is someone with a
thorough understanding of the field they are discussing, who accurately
represents the scientific literature and the state of understanding of
the scientific enterprise. There has been some other discussion on
ScienceBlogs from Janet at Adventures in Ethics and Science; it reiterates some of the same points in relation to what she
feels comfortable discussing as an expert. It also stresses the
importance of context in evaluating the validity of expert opinion. But
I’m not the god of the dictionary so let’s consider some other
definitions.
The OED gives the definition
simply as “One whose special knowledge or skill causes him to be
regarded as an authority; a specialist. Also attrib., as in expert
evidence, witness, etc.”
I don’t think this is adequate to describe what we really mean
though, that is, how do you identify a trusted source of scientific
information?
Legally (in the US), scientific expertise had been defined by whether
the testimony the expert provided conformed to the so-called Frye rule
from 1923 until 1993, when the Daubert v. Merrell Dow Pharmaceuticals
case changed the definition to be consistent with the Federal Rules of
Evidence. The Frye rule was that scientific testimony was valid if the
theory it was based on was “generally accepted”; that is, it was
admissible if the theory on which the evidence was based had a somewhat
arbitrary critical mass of followers in the scientific field.
In many ways Daubert was a big improvement, although it puts more
onus on the judge to determine if the science presented should be
considered valid, as it merely stated that experts were defined by the
Federal Rules of Evidence, which allow the judge to determine:
If scientific, technical, or other specialized knowledge
will assist the trier of fact to understand the evidence or to determine
a fact in issue, a witness qualified as an expert by knowledge, skill,
experience, training, or education, may testify thereto in the form of
an opinion or otherwise.
(A good article on this issue here from the NEJM and a more updated article.)
Luckily the justices didn’t just leave it at the Federal Rules of
Evidence, and Justice Blackmun created a set of guidelines for judges to
determine if the expert was “reliable”. They require the theory
presented by the witness to have undergone peer review, show
falsifiability, empirical testing, reproducibility, and a known error
rate for a scientific theory to have some validity in addition
to the general acceptance rule of Frye. While the individual states
remain a patchwork of Frye, Daubert, and Frye-plus rules for
admissibility of evidence, at least federally this is the new
requirement (although it still does suffer from being a bit vague).
The experts that present such evidence must have some credentials
and/or experience with the discipline, and the evidence they present
must pass these tests. It’s actually not a half-bad way to identify a
trusted source, in particular if the judge is intellectually honest
about the witness meeting these requirements. Although the law currently
allows a lot of latitude on this as it’s really up to the judge to
determine if the expert testimony satisfies the Daubert requirements.
The commonalities between the different accepted definitions are that experts have experience in their field,
and they can provide useful answers that are consistent with the state of
knowledge in that field. The legal definition appears
more stringent, in that it requires the expert to speak in a clear
fashion and discuss science that actually meets Popperian requirements
of epistemology (falsifiability, testing, etc.) – but I’m not about to
jump into that quagmire today.
Clearly, the exact definition of what an “expert” is still eludes us,
but it becomes readily apparent from the legal, dictionary and common
practice definitions employed by scientists what experts are not.
They aren’t merely an empty set of credentials and they aren’t merely
people who have at some point published in some random field. Even the
rather silly expert wiki would seem to agree on this.
Therefore I would say a fake expert is usually somebody who
is relied upon for their credentials rather than any real experience in
the field at issue, who will promote arguments that are inconsistent
with the literature, aren’t generally accepted by those who study the
field in question, and/or whose theories aren’t consistent with
established epistemological requirements for scientific inquiry. Sheesh.
I just described Michael Egnor, Bill Dembski, Michael Fumento, Patrick
Michaels, Steven Milloy, Richard Lindzen…
So, in honor of the false experts hired by everyone from creationists
to global warming deniers, I present to you, the thinking chimp. Our
mascot of the false expert, who isn’t as good at telling you accurate
information about science as he is at flinging poo.
**Janet points us to another post of hers discussing how to identify a trusted source.
I’m sorry for mixing terminologies. But moving goalposts isn’t
adequate to describe the full hilarity of the kinds of arguments
denialists make. For instance, the goalposts never have to be moved when
they require evidence that places them somewhere in the land before
time. What I mean is the use, by denialists, of the absence of complete
and absolute knowledge of a subject to prevent implementation of sound
policies, or acceptance of an idea or a theory.
So while moving goalposts describes a way of continuing to avoid
acceptance of a theory after scientists have obligingly provided
additional evidence that was a stated requirement for belief, impossible
expectations describes a way to make it impossible for scientists to ever prove anything to the satisfaction of the denialist. They’re related though so we’ll group both together.
Let’s take the example of the global warming deniers. One finds that
they harp endlessly about models, how much models suck, how you can’t
model anything, on and on and on. True, models are hard; anything
designed to prognosticate such a large set of variables as those
involved in climate is going to be highly complex, and I’ll admit, I
don’t understand them worth a damn. Climate science in general is beyond
me, and I read the papers in Science and Nature that come out, blink a
few times, and then read the editor’s description to see why I should
care. But with or without models, which I do trust the scientists and
peer-reviewers involved to test adequately, that doesn’t change the fact
that actual measurement of global mean temperature is possible, and is
showing an alarmingly steep increase post-industrialization.
The next thing the global warming deniers harp on is how we
don’t have enough records of temperature to make an educated statement
about whether our climate is really heating up that much, as the
instrumental record only goes about 150 years back. Then you show them
proxy records that go back a thousand years, and after they’re done
accusing people of falsifying, they say it’s still not enough, then you
go back a few tens of thousands of years, and it’s still not enough,
then finally you go back about 750 thousand years and they say that’s
just 0.02% of the earth’s history! That’s like a blink of the eye in
terms of the earth’s climate. Then you sigh and wish for a painless death.
I’ll let RealClimate
fight the fights over proxy records and CO2 lag, because, simply, they
know a lot more than me, and if you really want to argue with global
warming denialists I recommend reading A Few Things Ill Considered’s FAQ
first. But what I can recognize is the tendency of the global warming
deniers to constantly move the goalposts back and back, and once they
whip out the argument that we’ve only got proxy measurements for a fraction
of the earth’s life (a mere few hundred thousand years), you know they’ve
graduated to impossible expectations.
A person who wasn’t just obviously stonewalling would say after
you’ve shown them this much data that maybe we should take the data as
is before we’re all under water. You don’t need to know the position of
every molecule of air on the planet, throughout the entire history of
earth to make a prudent judgement about avoiding dramatic climate
change. (If they say that we don’t know what the ideal is, say, “yeah, but
Florida will still be under water”.) You don’t need to know the
position of every molecule in the galaxy before deciding you need to
jump out of the way of a speeding train. Similarly, we don’t need to
have a perfect model of the earth’s climate to understand that all the
current data and simulations suggest decreasing carbon output is of
critical importance right now, and not when humans have obtained some
impossible level of scientific knowledge.
The honorary gif for making these tiresome arguments is – the goalpost (and no, Chris, you may not animate it).
P.S. This does not mean that I endorse all efforts to model complex
systems. In the future I’ll probably complain about some modeling
implementation of systems biology which I tend to think is total BS.
I’ll explain the difference then.
P.P.S. To see an example of some really hilarious creationist goalpost moving see our post on Michael Egnor demanding biologists provide an answer for something he can’t even define.
Almost everybody knows about the fallacies of logic,
formal and informal, that are routinely used in arguments with
denialists. While these fallacies aren’t airtight proofs that
an argument is wrong, they are good rules of
thumb to tell when you’re listening to bunk, and if you listen to
denialists you’ll hear plenty. I wish they’d teach these to high school
students as a required part of their curriculum, but it probably would
decrease the efficacy of advertisement on future consumers.
The problem comes when the denialists get a hold of the fallacies then accuse you, usually, of ad hominem! It goes like this.
Denialist says something wacky…
Commenter or blogger corrects their mistake…
Denialist says same thing, changes argument slightly…
Commenter or blogger again corrects their mistake…
Denialist says something even wackier, says it disproves all of a field of science…
Commenter or blogger, exasperated, corrects it and threatens disemvowelment…
Denialist restates original wacky argument…
Commenter or blogger’s head explodes, calls denialist an idiot.
Denialist says he won because commenter or blogger resorted to ad hominem.
The thing to remember about logical fallacies is that their violation
isn’t proof or disproof of the validity of the opponent’s argument.
Your opponent might just be an idiot, but ultimately right. Some people
just don’t know how to argue or keep their temper. Logical fallacies are
rules of thumb to identify when portions of arguments are poorly
constructed or likely irrational. They are dependent on context, and
aren’t really rigorous proofs of the validity or invalidity of any
argument.
Further, some fallacies, like ad hominem, are poorly understood, so
when an opponent says you’re wrong because of this, this and this,
therefore you’re an idiot, the poor victim of the ad hominem feels like
they can claim victory over the argument, when in reality ad hominem
refers to the dismissal of an argument by just insulting the person.
Time and time again you see someone exasperated by the crank who won’t
turn despite being shown again and again where their error is, and
finally just calls the guy an idiot. That’s actually not an ad hominem.
That might be totally true and highly relevant to the argument at hand.
Sometimes people are just too stupid or too ignorant to realize when
they’ve been soundly thrashed, and true cranks will stubbornly go on,
and on and on…
But that doesn’t mean the fallacies of logic aren’t useful
as rules of thumb for detecting the BS. The ones you hear most are
arguments from metaphor or analogy (prime creationist tactic), appeals
to consequence (creationist and global warming denier), appeals to
ignorance (all – see moving goalposts), appeals to authority (all),
straw men and red herrings.
Take, for instance, the classic creationist example of using the analogy of
the mouse-trap to suggest “irreducible complexity” as a problem for
biology. Fallacies let you dismiss this instantly by saying, “analogies
aren’t science, pal, how about some data.” Analogies are often helpful for
getting concepts across, but you routinely see them used by denialists
as evidence. And more frequently you see their analogies aren’t
even apt. For instance, the mouse-trap is perfectly functional as its
constituent parts. It’s a platform, a spring and a hook; just because
they’re not assembled doesn’t mean they’ve lost their function. They
just can’t kill mice anymore unless you throw them with sufficient
velocity at rodents. Similarly the watchmaker analogy, the jet airplane
analogy, or the endless silly analogy about arsonists and design I saw a
few months ago. Ugh. Pointless. Don’t even bother. When you see
things like this being used to challenge actual honest-to-goodness data,
you’re done. If you spend too much time looking for a
method to the madness you’ll end up like our poor robot. He’s the mascot
for logical fallacies.
Poor guy. One too many fallacies, now he’s broken.
Well, I’ve outlined what I think are the critical components of
successful crankiness. Ideally, this will serve as a guide to those of
you who want to come up with a stupid idea, and then defend it against
all evidence to the contrary.
Here’s how you do it:
Step one: Develop a wacky idea.
It is critical that your wacky idea be something pretty
extraordinary. A good crank shoots for the stars. You don’t defend to
the death some simple opinion, like Coke is better than Pepsi. You’ve
got to think big! You’ve got to do something like deny HIV causes AIDS,
or relativity, or reject an entire field of biology, or deny the earth
is older than 6000 years. If you can’t think of anything, try reading
the Bible for claims that are now obviously ludicrous – like the
possibility of climbing into heaven using a ladder. Insist on its
literal truth.
The thing you deny has to be something that’s so obvious to the
majority of people that when they hear it, they want to hear an
explanation, if only because it’s clearly going to be nuts.
This is critical to all successive steps. If you don’t say something
outrageous and contrarian, no one will ever see you as the iconoclastic
genius that you are.
The presentation of this idea is also important. Remember that really
important people with really important ideas don’t have time for
grammar or spelling. Also try interesting use of punctuation!!!!,
CAPITALization and textcolor. When you EMPHASIZE things people will inevitably take you more seriously.
Make sure that you develop new physical laws, name them after
yourself, and if you must cite anything, either cite your own name or
work, or that of another crank. If you’re feeling bold cite some famous
scientist, like Einstein, but don’t list a specific passage, just assume
that they said or did something that supports your idea. After all
you’re both geniuses, you must think alike!
It’s also important during your research of this new idea, never to
be worried about preserving the original intent of other authors you
quote or cite. If any words they say can be construed to mean something
else, that’s ok too. Academic license is part of academic freedom.
Whenever possible try to include figures. Line drawings and diagrams
with complicated mathematical symbols are ideal. Remember, most people
don’t know calculus, so include equations you find in other books to prove
the mathematical or physical relationship you have discovered. The type
of people who will believe your idea aren’t big into checking others’
work for consistency, so it will be OK. Those that do would never
believe you anyway, but by the time they get around to that, you’ll have
a cult following.
Step two: Disseminate your idea
This can be done many ways.
The old-school method is to spend your day job writing angry letters
to politicians, newspaper editors, and anyone else you think
might listen to you.
Cranks with independent wealth can self-publish their own book (I
have many of these provided courtesy of an astronomer friend whose
institute regularly receives such works and places them in their “crank
file”). A book lends credibility, especially to other cranks who think
that anyone who could actually focus their intellects for long enough to
write a book, must be onto something. Ideally, send your book
to scientists in the field you are trying to undermine, they’ll know
just where to put them. If your idea has a more mainstream appeal, send
it to church leaders and various pundits who might give it some play in
their pulpits.
These days, technology has provided us what is known as a blog. Your
target audience, despite the improvements in technology, are just as
likely not to care as before. Less so, because now they don’t even have
to experience the inconvenience of opening your crank letter or having
to file your crank book. The secret to generating traffic then is
exploiting the fact that the internet gives access to all sorts of
people who will be irritated by your mere presence. Leave comments in
others blogs that describe how you have solved this big problem, where
everyone else has failed. Ideally, get a minion to constantly extol your
virtues and genius. If one is lacking just sockpuppet yourself from
another computer. It’s not even necessary to leave comments at science
blogs or (real) skeptic sites. Any site will do, bother cat fanciers,
tech geeks, whoever. Traffic will inevitably follow.
Technology has also made it easy to make videos and DVDs, and
provided internet radio outlets for crankery. Do you have a new idea for
how the twin towers fell? Well, put it up on YouTube and embed it in
your blog.
Podcasts also serve this function nicely – and since none of your
critics will waste their time transcribing the nonsense you say in order
to debunk it, videos and podcasts tend to be a good way to avoid excess
criticism.
Do you have access to a religious mailing list? Send out your
informational DVD on your new proof that all science is a lie to those
that might receive it as gospel.
If you’re very adventurous, try submitting a paper to a scientific
journal. First try big: Science and Nature are ideal. If it’s medicine
try the New England Journal or JAMA – they are pretty good examples of
the stodgy orthodoxy who will no doubt persecute you. When they reject
your paper, remember, you’re just like Galileo, or Einstein. They
rejected your ideas because they’re just not ready to accept them. Remember, you’re a skeptic! You’re one of those people keeping science honest by making them consider new ideas
(except when they’re very old ideas recycled). Don’t let them brush you
off easily; resend your manuscript multiple times. If they reject it,
claim victory! It means you’re a true original. You’ve come up
with something the scientific establishment just can’t deal with because
of their small-mindedness and bigotry. Ideally keep sending it to
publications, to editors at their home addresses, to their children’s
schools, etc. If they get a restraining order, claim victory! You’ve been
persecuted! You are now a true heir to Galileo.
If you want your manuscript (it may make you sound smarter to call it
your “treatise” or “monograph”) to actually get published, try
something like Medical Hypotheses.
Journals with an impact factor of less than 1 might actually be
desperate enough to publish something cranky, especially if you can
jargonize it enough to make yourself sound smart, or create enough fake
data to trick the editors. If it has to do with global warming consider a Wall Street Journal Op-Ed. The Creation Research Quarterly
is perfect for anything disproving some facet of evolution, geology,
astronomy, or physics. You don’t have to be a creationist for them to
like your crank theory, anything that pokes holes in dastardly consensus
science is a victory.
Then try journals that don’t require real experiments, rigorous trial
design, peer review or anything that actually indicates actual science
has been done. Other cranks in your “field” may have started just such a
journal – like the Journal of 9/11 studies.
There are about as many places that will publish crank work as there
are crank ideas, so don’t stop trying! If you get your ideas published in
such a journal, claim victory! You have mainstream acceptance and a
publication record now.
There are also many message boards that might like your idea. If you have a crazy new ideology about evolution try the International Society for Complexity Information and Design. If you have a new idea for what causes AIDS, a great starting point is the Dissident Action Group.
Search for forums that might be amenable to your idea and post it
there. Make sure to re-post it after every ten replies or so, so people
can read it again. Another good starting place is Newiki
which has the stunning tagline “If Copernicus or Galileo were alive
today, this is where you would find their work.” They clearly love the
crankery.
Finally, don’t forget other cranks are an excellent resource! Cranks
usually like to hear about other cranks’ ideas, even if they conflict
with their own crank ideas (9/11 conspiracy cranks might be an
exception). Remember, intellectual consistency doesn’t matter
as long as you are both criticizing the orthodoxy. These other cranks
can mention your idea. They will undoubtedly find it “interesting” if
they mention it, even if they don’t agree with all aspects of it. See our recent post on Denyse O’Leary and the Creation Museum,
a perfect example. Ideally they will link your site, join your webring,
mention your ideas, and many other cranks will promptly arrive to
acknowledge your genius (sorry, only other cranks will ever do this –
ever). Don’t forget this means you will have to help them promote their
crank ideas.
Cranks also have a major presence on radio – both internet and terrestrial. Are you anything like this crank? Or this one? Maybe they’ll have you on their radio show to discuss your new crackpot theory.
Follow these steps and soon your idea will be a topic of discussion
everywhere. Don’t forget to routinely make claims that the views of
orthodox science are imperiled by the threat of acceptance of your
ideas; it will make people more likely to believe your later claims of
persecution and to visit your site to see if you’ve figured out that you haven’t
changed anything. Frequently suggest that the valid scientific theory is debunked,
or will be within a decade, and routinely declare victory over
the mainstream theory.
Step three: (Not) Responding to Criticism
All great minds will be criticized by peon scientists who have grown fat
and bloated with public grant funds. They’ve been feeding at the public
coffers for so long, they wouldn’t know an original idea if it fell out
of the ether and struck them on their thick skulls. Here are some
simple responses to common criticisms:
Accusation: “You haven’t published in a real peer-reviewed journal”
Response: Either say “Peer review is just an old-boys network for peon
scientists to pat each other on the back”, or accuse journal editors of
persecuting you. Compare yourself to Galileo.
Accusation: “You don’t have solid proof”
Response: Either restate what you said already, restate it slightly
differently, call your accuser a name, or suggest they are part of the
conspiracy to hide the truth. Compare yourself to Galileo.
Accusation: “Because of X, Y, and Z, your theory is false and you’re an idiot”
Response: Yell “That’s Ad Hominem – I win the argument” (and that they’ve persecuted you).
Accusation: “Because of X, Y and Z, you are wrong”
Response: If they fail to call you an idiot, there are a few ways to
respond to this. Either nitpick an aspect of their argument so that you
can ignore the rest while diverting the discussion into a meaningless
tangent. Or cut and paste large sections of print or references to
papers that may or may not agree with you (the exhaustion strategy).
Finally, it’s always a good idea to just ignore them and restate your
original argument. Alternatively demand they provide you with *scientific* evidence that their theory is the correct one. If they do, ignore it and restate your original argument.
Accusation: “No credible scientists or scientific agencies believe this theory”
Response: “That’s because they’re part of a conspiracy to hide the
truth!” In addition assert motives for the conspiracy like maintaining
control over the populace, spreading materialistic atheist dogma,
acquiring grant money, etc. Don’t forget to challenge orthodoxy and
compare yourself to Galileo! He was persecuted by the orthodoxy too!
Remember, whenever a majority of scientists believe anything, that means
it’s wrong. Cite Kuhn, compare yourself to Galileo again.
If they show up at your blog and leave comments, remember to delete
anything critical at all, dissent must not be tolerated on your home
turf. Anything critical might damage the proof of your unassailable
intellect, and the absence of critique will make it appear as if your
critics are afraid to engage you on your own turf.
You see? It’s easy! All you have to do is ignore anything that contradicts your theory, nitpick others’ arguments, force them
to explain themselves, accuse them of lying, accuse them of conspiring
against the truth, exhaust them with dumps of links or citations, repeat
yourself, and compare yourself to Galileo, because he had problems
convincing the orthodoxy too. Also, don’t forget to call yourself a
skeptic, or dissident, or iconoclast.
Step four: Get Persecuted!
You haven’t graduated to being a full crank until you’ve been persecuted. Here are some suggestions:
If you are faculty at a university, make sure to write a book about
your crank idea. When the other members of the department decide to deny
you tenure because of your moronic ideas or call you an idiot, claim
persecution!
If you work at an office, make sure you spend your time promoting
your crank idea. Tell everybody about it. Send mass emails about it.
Leave copies of your “monograph” where your boss and others can find it –
like the breakroom. If you’re fired for pursuing your crankery on the
job, claim persecution!
If someone shows up at your website or forum and points out the flaws in your argument, claim persecution!
If anyone calls you an idiot, a moron, a pseudoscientist, a crank, or a denialist, claim persecution!
If people don’t immediately accept your idea upon hearing it, claim persecution!
If they won’t teach your idea in public schools as fact, claim persecution!
If they won’t teach the controversy over your ideas in public schools, claim persecution!
If people criticize journals for publishing your papers, claim persecution!
If people circulate petitions against teaching your ideas, claim persecution!
If a journalist covers only the scientific side and doesn’t cover yours, claim persecution!
If no one visits your site or listens to you, claim persecution!
If no one persecutes you, claim persecution!
In this modern world there is such a thing as “parity of ideas”.
Everything must be balanced against its opposite. If anyone says
anything that contradicts you, it is your right to be able to counter
what they say for “balance”, even if you don’t have proof or
credibility. If they don’t do this you are being persecuted.
You see? It’s easy to be a crank. Just follow these simple guidelines and remember, you’re never wrong. No matter what.
I’d like to thank Chris Noble (not for being a crank or anything but for this idea) and lab lemming’s pseudoscientific method for inspiration for this post.
*Update* I’ve added some additional material based on comments (Thanks Pat, Marc and Mongrel)
Here at denialism blog, we’re very interested in what makes people
cranks. Not only how one defines crankish behavior, but literally how
people develop unreasonable attitudes about the world in the face of
evidence to the contrary. Our definition of a crank, loosely, is a
person who has unreasonable ideas about established science or facts
that will not relent in defending their own, often laughable, version of
the truth. Central to the crank is the “overvalued idea”. That is some
idea they’ve incorporated into their world view that they will not
relinquish for any reason. Common overvalued ideas that are a source of
crankery range from bigotry, antisemitism (Holocaust deniers), biblical
literalism (creationists – especially YECs) and egotism (as it relates to
the complete unwillingness to ever be proven wrong) to an indiscriminate
obsession with possessing “controversial” or iconoclastic ideas. Some
people just love believing in things that no one in their right mind
does, out of some obscure idea that it makes them seem smart or
different.
The OED definition of a crank seems to be a little old-fashioned:
5. colloq. (orig. U.S.). A person with a mental twist; one who is apt
to take up eccentric notions or impracticable projects; esp. one who is
enthusiastically possessed by a particular crotchet or hobby; an
eccentric, a monomaniac. [This is prob. a back-formation from CRANKY,
sense 4.] Also attrib. and Comb.
The OED etymology suggests the word has been in use for about 180 years, but I don’t think the definition really took shape until that Nature quote in 1906, which very poetically describes the problem. Cranks aren’t interested in debate, nor do they respond to reason; they’ll just blather on about their idiotic pet theory until
everyone in the room has fled or opened a vein. Another take on that
quote might be that a crank can only be turned one way, which would fit with the mechanical metaphor and suggest they’re only ever interested in spouting one line of reasoning.
Cranks overestimate their own knowledge and ability, and underestimate that of acknowledged experts.
Cranks insist that their alleged discoveries are urgently important.
Cranks rarely if ever acknowledge any error, no matter how trivial.
Cranks love to talk about their own beliefs, often in inappropriate
social situations, but they tend to be bad listeners, and often appear
to be uninterested in anyone else’s experience or opinions.
Now, in our terminology not every denialist is a crank, but cranks
use pretty much exclusively denialist arguments to make their point.
Cranks are a bit more deserving of pity, a bit closer to delusion and
mental illness than the pure denialist, who knows that they are spouting
BS to sow confusion.
Most people have a pretty good gestalt for what one is, and the
standard definitions are pretty accurate. But we’re more interested in
how people, sometimes perfectly reasonable people, turn into cranks. An
interesting resource to understand the phenomenon is this article in the Journal of Personality and Social Psychology
by Justin Kruger and David Dunning about how people who are incompetent
not only have an inflated sense of their own competence, but are also
incapable of even recognizing competence. Take for example this figure
from the paper (it’s not Wiley so hopefully I won’t be sued). It’s
pretty self-explanatory.
What’s even more amazing is that when the researchers then shared the performance of other participants with the people who performed poorly (hoping that they would then adjust their self-perception downward), the poor performers failed to adjust their self-assessment. In other words, they are completely unaware of their own incompetence, and can’t detect competence in others.
Now, doesn’t this explain a lot? It explains the tendency of cranks
not to care if other cranks (and denialists in general for that matter)
have variations on their own crazy ideas, just as long as the other
cranks are opposing the same perceived incorrect truth. Cranks and
denialists aren’t honest brokers in a debate, they stand outside of it
and just shovel horse manure into it to try to sow confusion and doubt
about real science. They don’t care if some other crank or denialist
comes along and challenges the prevailing theory by tossing cow manure,
as long as what they’re shoveling stinks.
For instance, you notice that Dembski doesn’t spend a whole lot of time attacking Ken Ham, nor does the DI seem to care a great deal about any kind of internal consistency of ideas. Michael Behe, for example, is a raging “Darwinist” compared to Michael Egnor, whom any scientifically competent person would recognize as, well, a crank.
So what we have from the DI, the other denialists and their
organizations, is evidence of people with no competence in understanding
science, who overestimate their own abilities, and are incapable of
recognizing competence in others.
Next time I think we’ll discuss how people might start out as
reasonable people and then become cranks because they’re more interested
in being “right” than actually pursuing any kind of scientific truth.
Some cranks seem to be defined not so much by incompetence, but by their
obsession with their overvalued idea that ruins their ability to think
rationally.
P.S. I wrote this piece over the weekend and then PZ published this piece on a crank named Gilder
on Sunday. He’s a real textbook case. Crazy, gibbering, throwing out
lingo and jargon, and clearly not competent to recognize that he’s
completely wrong about information theory.
**Kruger, J., & Dunning, D. (1999). Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134.
It’s good news though! A description of the tactics and appropriate response to denialism was published in the European Journal of Public Health
by authors Pascal Diethelm and Martin McKee. It’s entitled “Denialism:
what is it and how should scientists respond?” and I think it does an
excellent job explaining the harms of denialism and its critical elements, as well as providing interesting historical examples of
corporate denialism on the part of tobacco companies.
In particular, I found their inclusion of the tactic of inversionism interesting:
There is also a variant of conspiracy theory,
inversionism, in which some of one’s own characteristics and motivations
are attributed to others. For example, tobacco companies describe
academic research into the health effects of smoking as the product of
an ‘anti-smoking industry’, described as ‘a vertically integrated,
highly concentrated, oligopolistic cartel, combined with some public
monopolies’ whose aim is to ‘manufacture alleged evidence, suggestive
inferences linking smoking to various diseases and publicity and
dissemination and advertising of these so-called findings to the widest
possible public’.9
This is in a subsection on their coverage of conspiracy in denialism
and it rings very true. Often those who function with a conspiratorial
mindset project their motives, tactics, and style of thinking on their
opponents. It’s nice to have a word for it, but I usually think of inversionism as the tendency of some to readily believe anything that inverts a commonly held belief, a tendency which many scientists manifest, probably because the science and facts often contradict intuition and “common-sense” beliefs. Maybe we can think of a better word for this, or maybe simply refer to it as projection.
I also enjoy their conclusion:
Whatever the motivation, it is important to recognize
denialism when confronted with it. The normal academic response to an
opposing argument is to engage with it, testing the strengths and
weaknesses of the differing views, in the expectations that the truth
will emerge through a process of debate. However, this requires that
both parties obey certain ground rules, such as a willingness to look at
the evidence as a whole, to reject deliberate distortions and to accept
principles of logic. A meaningful discourse is impossible when one
party rejects these rules. Yet it would be wrong to prevent the
denialists having a voice. Instead, we argue, it is necessary to shift
the debate from the subject under consideration, instead exposing to
public scrutiny the tactics they employ and identifying them publicly
for what they are. An understanding of the five tactics listed above
provides a useful framework for doing so.
Excellent! I couldn’t have said it better myself.
Many of the letters in reply are also pretty fascinating. You see a lot of feelings of persecution:
Clearly, no dissent is allowable from the doctrines of
tobacco control in Diethelm’s and McKee’s perspective. This perspective
brands hundreds of reputable scientists throughout the world as
denialists, no different from Holocaust deniers. While I disagree
wholeheartedly with these scientists, I will stand up for their right to
express their dissenting opinions without having their characters
assassinated because of the direction, rather than the scientific
reasonableness, of their positions.
Criticism of scientifically untenable positions is suppression of dissent! It’s punishing heresy! Pointing out that denialists use dishonest methods is like Hitler! This is the classic example of self-persecution you always see when it’s made clear that the methods of denialists are no different, from Holocaust denial to evolution denial. This, whether they like it or not, is a factual statement.
Denialism is pretty predictable and consistent in form no matter what
the topic. The response is inevitably “You’re comparing me to a
holocaust denier!”, when in reality all we’re doing is comparing the
tactics. Holocaust deniers and tobacco/cancer denialists are both human
beings – is that an unfair comparison? Or, one could argue that those who deny tobacco smoke causes cancer might actually be worse than Holocaust deniers, as Holocaust deniers, while they are despicable bigots, are not defending an ongoing campaign of death.
The authors’ reply is perfect.
Finally, D&M consider Galileo as a reference that
tobacco “denialists” should refrain from citing in support of their
unacceptable views. What is not understood here is that the problems
related to tobacco and drug research and policy are very similar. From
there, Galileo is and will remain a universal reference [13].
The ego on these people is astounding. There might be one or two
people who are as paradigm-shifting as Galileo in a generation, or even a
century, but all these cranks seem very comfortable in assuming his
mantle. Some humility please. Their reply again
is perfect. But I worry. At some point this will just devolve into
arguing with cranks; something to be avoided at all costs. And when you
consider one of the complaining letters is from a guy who doesn’t even
think nicotine is addictive, well, what’s the point of arguing?
Diethelm, P., & McKee, M. (2009). Denialism: what is it and how should scientists respond? European Journal of Public Health, 19(1), 2–4. DOI: 10.1093/eurpub/ckn139
A fascinating paper well worth reading is Denialism: what is it and how should scientists respond? (Diethelm & McKee 2009)
(H/T to Jeremy Kemp for the heads-up). While the focus is on public
health issues, it nevertheless establishes some useful general
principles on the phenomenon of scientific denialism. A vivid example is
the President of South Africa, Thabo Mbeki, who argued against the
scientific consensus that HIV caused AIDS. This led to policies
preventing thousands of HIV positive mothers in South Africa from
receiving anti-retrovirals. It's estimated these policies led to the
loss of more than 330,000 lives (Chigwedere 2008). Clearly the consequences of denying science can be dire, even fatal.
The authors define denialism as "the employment of rhetorical
arguments to give the appearance of legitimate debate where there is
none, an approach that has the ultimate goal of rejecting a proposition
on which a scientific consensus exists". They go on to identify 5 characteristics common to most forms of denialism, first suggested by Mark and Chris Hoofnagle:
Conspiracy theories
When the overwhelming body of scientific opinion believes something is true, the denialist won’t admit scientists have independently studied the evidence to reach the same conclusion. Instead, they claim scientists are engaged in a complex and secretive conspiracy. The South African government of Thabo Mbeki was heavily influenced by conspiracy theorists claiming that HIV was not the cause of AIDS. When such fringe groups gain the ear of policy makers who cease to base their decisions on science-based evidence, the human impact can be disastrous.
Fake experts
These are individuals purporting to be experts but whose views are inconsistent with established knowledge. Fake experts have been used extensively by the tobacco industry, which developed a strategy to recruit scientists who would counteract the growing evidence on the harmful effects of second-hand smoke. This tactic is often complemented by denigration of established experts, seeking to discredit their work. Tobacco denialists have frequently attacked Stanton Glantz, professor of medicine at the University of California, for his exposure of tobacco industry tactics, labelling his research ‘junk science’.
Cherry picking
This involves selectively drawing on isolated papers that challenge the consensus to the neglect of the broader body of research. An example is a paper describing intestinal abnormalities in 12 children with autism, which suggested a possible link with immunization. This has been used extensively by campaigners against immunization, even though 10 of the paper’s 13 authors subsequently retracted the suggestion of an association.
Impossible expectations of what research can deliver
The tobacco company Philip Morris tried to promote a new standard for the conduct of epidemiological studies. These stricter guidelines would have invalidated in one sweep a large body of research on the health effects of cigarettes.
Misrepresentation and logical fallacies
Logical fallacies include the use of straw men, where the opposing argument is misrepresented, making it easier to refute. For example, the US Environmental Protection Agency (EPA) determined in 1992 that environmental tobacco smoke was carcinogenic. This was attacked as nothing less than a ‘threat to the very core of democratic values and democratic public policy’.
Why is it important to define the tactics of denialism? Good faith
discussion requires consideration of the full body of scientific
evidence. This is difficult when confronted with rhetorical techniques
which are designed to distort and distract. Identifying and publicly exposing these tactics is the first step in redirecting discussion back to a focus on the science.
This is not to say all global warming skeptic arguments employ denialist tactics. And it’s certainly not advocating attacking people’s motives. On the contrary, in most cases a focus on motives rather than methods is counterproductive. Here are some examples of denialist tactics in use in the climate debate:
Cherry picking
This usually involves a focus on a single paper to the neglect of the rest of the peer-reviewed research. A recent example is the Lindzen-Choi paper that finds low climate sensitivity (around 0.5°C for doubled CO2). This neglects all the research using independent techniques studying different time periods that finds our climate has high sensitivity (around 3°C for doubled CO2). This includes research using a similar approach to Lindzen-Choi but with more global coverage.
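To make the difference between those two sensitivity values concrete, here is a rough back-of-the-envelope sketch. It assumes nothing from the papers discussed here, only the standard textbook approximation that equilibrium warming scales with the logarithm of the CO2 ratio:

\Delta T_{\mathrm{eq}} \approx S \cdot \frac{\ln(C/C_0)}{\ln 2}

where S is the sensitivity to doubled CO2, so for a doubling (C/C_0 = 2) the eventual warming is simply S. For the roughly 40% rise in CO2 since pre-industrial times (C/C_0 ≈ 1.4), a sensitivity of 3°C implies about 3 × ln(1.4)/ln(2) ≈ 1.5°C of eventual warming, whereas the Lindzen-Choi value of 0.5°C implies only about 0.2°C.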
Update 16 April 2012: Many thanks to Mark Hoofnagle
for pointing out that the 5 characteristics of science denial didn't
originate in Diethelm and McKee's paper but in an article written by Mark and Chris Hoofnagle.
This is an article very worth reading for anyone interested in climate
change and public discourse about science. Credit has been updated
accordingly.
Climate contrarians often exhibit what we have described as the 5 characteristics of scientific denialism.
These characteristics involve various ways in which people will deny
scientific reality by rejecting and misrepresenting data and evidence.
These characteristics are often exhibited in the form of a Gish Gallop,
which describes the technique of repeating a large number of incorrect
or misleading statements in such a short amount of time that it
becomes difficult to refute them all.
The first characteristic of scientific denialism Carter checks off is that of Fake Experts:
“These are individuals purporting to be experts but whose views are inconsistent with established knowledge.”
In this article, Carter rattles off one wholly unsubstantiated claim
after another. We’ll look at the accuracy of these claims below
(suffice it to say most of Carter’s assertions are false), but just as
importantly, Carter makes no effort to support his assertions. The
article contains no references, no links, just seemingly factual
statements which the reader is expected to believe, presumably because
Carter is a climate expert. In fact, the article closes by giving
Carter’s supposed qualifications:
Bob Carter, a paleoclimatologist at James Cook
University, Australia, and a chief science advisor for the
International Climate Science Coalition, is in Canada on a 10-day tour…
Carter’s article deals with climate science and economics,
so being a paleoclimatologist would certainly make him a credible
speaker on the science – if it were true. However, Carter has
published very few climate-related papers. His only climate science
publication in the past 7 years is McLean et al. (2009), on which Carter was listed as the third of three authors. That paper made assertions not supported by its scientific evidence, was immediately refuted by Foster et al. (2010), and was the basis of one of the worst global temperature predictions in history.
Bob Carter has a long and distinguished scientific career – in
marine geology. He is a marine geologist, not a paleoclimatologist.
Normally we don’t place very much emphasis on a person’s background
because the content and accuracy of their comments are what matters,
and that is true of Carter’s article as well (whose content we will
address below). However, in this case Carter has asked the Financial Post
readers to believe him by posing as a fake expert, claiming
credentials which he has not earned. And citing fake experts is a
classic characteristic of scientific denialism. Carter has perhaps
taken this to the extreme by himself playing the role of the fake
expert.
Misrepresentations and Logical Fallacies
Another characteristic of scientific denialism involves
misrepresentations and logical fallacies, and Carter’s article has both
in spades.
“Over the last 18 months, policymakers in Canada, the
U.S. and Japan have quietly abandoned the illusory goal of preventing
global warming by reducing carbon dioxide emissions. Instead, an
alternative view has emerged regarding the most cost-effective way in
which to deal with the undoubted hazards of climate change.
This view points toward setting a policy of preparation for, and
adaptation to, climatic events and change as they occur, which is
distinctly different from the former emphasis given by most Western
parliaments to the mitigation of global warming by curbing carbon
dioxide emissions.”
Here Carter starts out with a glaring logical fallacy, assuming
that the failure to pass greenhouse gas emissions reduction legislation
in these 3 countries means they must have decided that adapting to the
consequences of climate change is more cost-effective. This is pure
fiction.
In the USA there is currently a major divide between the two major political parties
regarding climate change, and the lack of climate legislation is a
consequence of that divide, not a result of climate adaption becoming
cheaper than mitigation. Similarly the current Canadian government is a
conservative one which still speaks about the importance of reducing greenhouse gas emissions, despite its failure to take serious actions to accomplish that goal.
In fact, at the same time as Carter’s article was published, all 3 nations were participating in the Bonn Climate Change Conference,
attempting to negotiate agreements to reduce emissions. All 3 nations
have agreed to a goal (limiting global warming to 2°C above
pre-industrial levels) which cannot be achieved without major emissions
cuts. The USA has implemented regulations on CO2 emissions and new fuel efficiency standards, not to mention individual state legislation (like California’s carbon cap and trade system). Canada and Japan have also implemented various energy efficiency policies.
Bob Carter has not given us any reason to believe otherwise. He has
simply asserted that because policymakers have failed to reduce
emissions, they must believe that adaption is cheaper. This logical
fallacy is Carter’s second characteristic of scientific denial.
Radiosonde Own Goal
Carter proceeds to try and convince his readers that the planet
hasn’t even warmed significantly. He approaches this argument by first
attempting to pooh-pooh the surface temperature record, as climate
contrarians so often do, and then proceeding to claim the satellites
show no significant warming. In the process, Carter scores a major
‘own goal’.
“For many different reasons, which include various types
of bias, error and unaccounted-for artifacts, the thermometer record
provides only an indicative history of average global temperature over
the last 150 years.”
“The 1979-2011 satellite MSU (Microwave Sounding Units)
record is our only acceptably accurate estimate of average global
temperature, yet being but 32 years in length it represents just one
climate data point. The second most reliable estimate of global
temperature, collected by radiosondes on weather balloons, extends back
to 1958, and the portion that overlaps with the MSU record matches it
well.
Taken together, these two temperature records indicate that no significant warming trend has occurred since 1958…”
This gross misrepresentation of the radiosonde record
(misrepresentation being another of those pesky characteristics of
scientific denial) is where Carter scores an own goal. While there is
no basis to the claim that the radiosonde record (instruments on
weather balloons) is more reliable than the surface temperature record,
more importantly, the radiosonde record actually shows more warming than the surface record (Figure 2).
Carter’s argument amounts to ‘you shouldn’t trust the warming in the
surface temperature record, you should trust the radiosonde record’,
yet the radiosonde record shows even more warming. However, to realize
this, Carter’s audience would have to fact check him and look up the
data themselves, as we have.
Perhaps Carter was banking on the
financial newspaper and climate denialist blog readers not bothering to
fact check the claims of a “paleoclimatologist.”
Note also that Carter has cherrypicked lower atmosphere
temperatures, whereas the vast majority of global warming (about 90%)
goes into heating the oceans, which continue to accumulate energy at a rapid rate.
Step Dysfunctions
Next up, Carter misrepresents the temperature data once again.
“…both [radiosondes and satellites] exhibit a 0.2C step increase in average global temperature across the strong 1998 El Niño.”
Carter proceeds to illustrate a third characteristic of scientific denial, cherrypicking.
“In addition, the recently quiet Sun, and the lack of
warming over at least the last 15 years — and that despite a 10%
increase in atmospheric carbon dioxide level, which represents 34% of
all post-industrial emissions — indicates that the alarmist global
warming hypothesis is wrong and that cooling may be the greatest
climate hazard over coming decades.”
Here Carter cherrypicks the low solar activity of the past decade,
blaming it for the dampened surface warming, while ignoring that
temperatures and solar activity have not been remotely correlated for
the past 40 years (Figure 3).
Figure 3: Global surface temperature (red, NASA GISS) and total solar irradiance (blue, 1880 to 1978 from Solanki, 1979 to 2009 from PMOD).
For the nitpickers amongst us, we should also note that the
CO2 increase over the past 15 years (29 parts per million by volume
[ppmv]) is only 26% of the increase since pre-industrial times
(approximately 112 ppmv), not 34%, as Carter asserts. However,
arithmetic errors are the least of Carter’s problems in this article.
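For the record, the check is simple arithmetic (using only the numbers cited above, no new data):

\frac{29\ \mathrm{ppmv}}{112\ \mathrm{ppmv}} \approx 0.26

or about 26%. To reach Carter’s figure of 34%, the 15-year increase would have had to be roughly 0.34 × 112 ≈ 38 ppmv.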
‘Paleoclimatologist’ Unfamiliar with Paleoclimate Data
Carter proceeds to misrepresent the paleoclimate record.
“…numerous high-quality paleoclimate records, and
especially those from ice cores and deep-sea mud cores, demonstrate
that no unusual or untoward changes in climate occurred in the 20th and
early 21st century.”
The hockey league of paleoclimate reconstructions disagrees, as shown in Figure 4. There is also a new hockey stick from Australia (Gergis et al. 2012), where Carter resides.
Figure 4: Various northern hemisphere temperature reconstructions (Mann et al 2008).
Anthropogenic Denial
Next up, Carter misrepresents the body of climate change attribution research.
“…no compelling empirical evidence yet exists for a measurable, let alone worrisome, human impact on global temperature.”
“…a policy of adaptation is also strongly precautionary
against any (possibly dangerous) human-caused climate trends that might
emerge in the future.”
Adaptation is not precautionary; it’s reactionary. A precautionary policy would involve efforts to prevent dangerously rapid climate change from happening by reducing greenhouse gas emissions.
Carter’s Unscientific Approach
Although Carter’s article may seem convincing to his intended
audience, it is not because his arguments are at all factually correct,
but instead because he has employed a Gish Gallop of scientific denial
characteristics. However, this is clearly not an appropriate approach
for a scientist speaking to the general public through the mainstream
media. A scientist should always support his claims and ensure that
they are factually accurate. And this is certainly not Carter’s first
such Gish Gallop – the Commonwealth Scientific and Industrial Research
Organisation (CSIRO) previously debunked another of Carter’s articles here, and Skeptical Science has examined several others here.
If Bob Carter wants to influence the climate discussion, we believe
he should subject his arguments to peer-reviewed scrutiny, rather than
cobbling together Gish Gallops of scientific denial and
misrepresentations for the financial media and denialist blogs.
This piece was originally published at Skeptical Science and was reprinted with permission.