FAIR USE NOTICE

A BEAR MARKET ECONOMICS BLOG

OCCUPY THE SCIENTIFIC METHOD


This site may contain copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in an effort to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a ‘fair use’ of any such copyrighted material as provided for in Section 107 of the US Copyright Law.

In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit to those who have expressed a prior interest in receiving the included information for research and educational purposes. For more information go to: http://www.law.cornell.edu/uscode/17/107.shtml

If you wish to use copyrighted material from this site for purposes of your own that go beyond ‘fair use’, you must obtain permission from the copyright owner.

FAIR USE NOTICE: This page may contain copyrighted material the use of which has not been specifically authorized by the copyright owner. This website distributes this material without profit to those who have expressed a prior interest in receiving the included information for scientific, research and educational purposes. We believe this constitutes a fair use of any such copyrighted material as provided for in 17 U.S.C. § 107.

Read more at: http://www.etupdates.com/fair-use-notice/#.UpzWQRL3l5M | ET. Updates

All Blogs licensed under Creative Commons Attribution 3.0

Wednesday, December 30, 2015

Scientists Are Beginning to Figure Out Why Conservatives Are…Conservative





Ten years ago, it was wildly controversial to talk about psychological differences between liberals and conservatives. Today, it's becoming hard not to.


Tue Jul. 15, 2014 5:00 AM EDT



Scientists are using eye-tracking devices to detect automatic response differences between liberals and conservatives.

You could be forgiven for not having browsed yet through the latest issue of the journal Behavioral and Brain Sciences. If you care about politics, though, you'll find a punchline therein that is pretty extraordinary.
Behavioral and Brain Sciences employs an unusual practice called "Open Peer Commentary": An article of major significance is published, a large number of fellow scholars comment on it, and then the original author responds to all of them. The approach has many virtues, one of which is that it lets you see where a community of scholars and thinkers stands with respect to a controversial or provocative scientific idea. And in the latest issue of the journal, this process reveals the following conclusion: A large body of political scientists and political psychologists now concur that liberals and conservatives disagree about politics in part because they are different people at the level of personality, psychology, and even traits like physiology and genetics.
That's a big deal. It challenges everything that we thought we knew about politics—upending the idea that we get our beliefs solely from our upbringing, from our friends and families, from our personal economic interests, and calling into question the notion that in politics, we can really change (most of us, anyway).
It is a "virtually inescapable conclusion" that the "cognitive-motivational styles of leftists and rightists are quite different."
The occasion of this revelation is a paper by John Hibbing of the University of Nebraska and his colleagues, arguing that political conservatives have a "negativity bias," meaning that they are physiologically more attuned to negative (threatening, disgusting) stimuli in their environments. (The paper can be read for free here.) In the process, Hibbing et al. marshal a large body of evidence, including their own experiments using eye trackers and other devices to measure the involuntary responses of political partisans to different types of images. One finding? That conservatives respond much more rapidly to threatening and aversive stimuli (for instance, images of "a very large spider on the face of a frightened person, a dazed individual with a bloody face, and an open wound with maggots in it," as one of their papers put it).
In other words, the conservative ideology, and especially one of its major facets—centered on a strong military, tough law enforcement, resistance to immigration, widespread availability of guns—would seem well tailored for an underlying, threat-oriented biology.
The authors go on to speculate that this ultimately reflects an evolutionary imperative. "One possibility," they write, "is that a strong negativity bias was extremely useful in the Pleistocene," when it would have been super-helpful in preventing you from getting killed. (The Pleistocene epoch lasted from roughly 2.5 million years ago until 12,000 years ago.) We had John Hibbing on the Inquiring Minds podcast earlier this year, and he discussed these ideas in depth; you can listen here.
Hibbing and his colleagues make an intriguing argument in their latest paper, but what's truly fascinating is what happened next. Twenty-six different scholars or groups of scholars then got an opportunity to tee off on the paper, firing off a variety of responses. But as Hibbing and colleagues note in their final reply, out of those responses, "22 or 23 accept the general idea" of a conservative negativity bias, and simply add commentary to aid in the process of "modifying it, expanding on it, specifying where it does and does not work," and so on. Only about three scholars or groups of scholars seem to reject the idea entirely.
That's pretty extraordinary, when you think about it. After all, one of the teams of commenters includes New York University social psychologist John Jost, who drew considerable political ire in 2003 when he and his colleagues published a synthesis of existing psychological studies on ideology, suggesting that conservatives are characterized by traits such as a need for certainty and an intolerance of ambiguity. Now, writing in Behavioral and Brain Sciences in response to Hibbing roughly a decade later, Jost and fellow scholars note that
There is by now evidence from a variety of laboratories around the world using a variety of methodological techniques leading to the virtually inescapable conclusion that the cognitive-motivational styles of leftists and rightists are quite different. This research consistently finds that conservatism is positively associated with heightened epistemic concerns for order, structure, closure, certainty, consistency, simplicity, and familiarity, as well as existential concerns such as perceptions of danger, sensitivity to threat, and death anxiety. [Italics added]
Back in 2003, Jost and his team were blasted by Ann Coulter, George Will, and National Review for saying this; congressional Republicans began probing into their research grants; and they got lots of hate mail. But what's clear is that today, they've more or less triumphed. They won a field of converts to their view and sparked a wave of new research, including the work of Hibbing and his team.
"One possibility," note the authors, "is that a strong negativity bias was extremely useful in the Pleistocene," when it would have been super-helpful in preventing you from getting killed.
Granted, there are still many issues yet to be worked out in the science of ideology. Most of the commentaries on the new Hibbing paper are focused on important but not-paradigm-shifting side issues, such as the question of how conservatives can have a higher negativity bias, and yet not have neurotic personalities. (Actually, if anything, the research suggests that liberals may be the more neurotic bunch.) Indeed, conservatives tend to have a high degree of happiness and life satisfaction. But Hibbing and colleagues find no contradiction here. Instead, they paraphrase two other scholarly commentators (Matt Motyl of the University of Virginia and Ravi Iyer of the University of Southern California), who note that "successfully monitoring and attending negative features of the environment, as conservatives tend to do, may be just the sort of tractable task…that is more likely to lead to a fulfilling and happy life than is a constant search for new experience after new experience."
All of this matters, of course, because we still operate in politics and in media as if minds can be changed by the best honed arguments, the most compelling facts. And yet if our political opponents are simply perceiving the world differently, that idea starts to crumble. Out of the rubble just might arise a better way of acting in politics that leads to less dysfunction and less gridlock…thanks to science.

Sunday, December 27, 2015

WHAT WAS DARWIN'S ALGORITHM?





To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.




The synthetic path to investigating the world is the logical space occupied by the physicist Murray Gell-Mann, the biologist Stuart Kauffman, the computer scientist Christopher G. Langton, and the physicist J. Doyne Farmer, and their colleagues in and around Los Alamos and the Santa Fe Institute.
The Santa Fe Institute was founded in 1984 by a group that included Gell-Mann, then at the California Institute of Technology, and the Los Alamos chemist George Cowan. Some say it came into being as a haven for bored physicists. Indeed, the end of the reductionist program in physics may well be an epistemological demise, in which the ultimate question is neither asked nor answered but instead the terms of the inquiry are transformed. This is what is happening in Santa Fe.
Murray Gell-Mann, widely acknowledged as one of the greatest particle physicists of the century (another being his late Caltech colleague, Richard Feynman), received a Nobel Prize for work in the 1950s and 1960s leading up to his proposal of the quark model. At a late stage in his career, he has turned to the study of complex adaptive systems.


Gell-Mann's model of the world is based on information; he connects the reductionist, fundamental laws of physics — the simple rules — with the complexity that emerges from those rules and with what he terms "frozen accidents" — that is, historical happenstance. He has given a name to this activity: "plectics," which is the study of simplicity and complexity as it is manifested not just in nature but in such phenomena as language and economics. At the institute, he provides encouragement, experience, prestige, and his vast reservoir of scientific knowledge to a younger group of colleagues, who are mostly involved in developing computational models based on simple rules that allow the emergence of complex behavior.
Stuart Kauffman is a theoretical biologist who studies the origin of life and the origins of molecular organization. Twenty-five years ago, he developed the Kauffman models, which are random networks exhibiting a kind of self-organization that he terms "order for free." Kauffman is not easy. His models are rigorous, mathematical, and, to many of his colleagues, somewhat difficult to understand. A key to his worldview is the notion that convergent rather than divergent flow plays the deciding role in the evolution of life. With his colleague Christopher G. Langton, he believes that the complex systems best able to adapt are those poised on the border between order and chaos.
Kauffman asks a question that goes beyond those asked by other evolutionary theorists: if selection is operating all the time, how do we build a theory that combines self-organization (order for free) and selection? The answer lies in a "new" biology, somewhat similar to that proposed by Brian Goodwin, in which natural selection is married to structuralism.
Christopher G. Langton has spent years studying evolution through the prism of computer programs. His work has focused on abstracting evolution from that upon which it acts. He has created "nature" in the computer, and his work has given rise to a new discipline called AL, or artificial life. This is the study of "virtual ecosystems," in which populations of simplified "animals" interact, reproduce, and evolve. Langton takes a bottom-up approach to the study of life, intelligence, and consciousness which resonates with the work of Marvin Minsky, Roger Schank, and Daniel C. Dennett. By vitalizing abstraction, Langton hopes to illuminate things about life that are not apparent in looking at life itself.
J. Doyne Farmer is one of the pioneers of what has come to be called chaos theory — the theory that explains why much of nature appears random even though it follows deterministic physical laws. It also shows how some random-seeming systems may have underlying order which makes them more predictable. He has explored the practical consequences of this, showing how the game of roulette can be beaten using physics; he has also started a company to beat the market by finding patterns in financial data.
Farmer was an Oppenheimer Fellow at the Center for Nonlinear Studies at the Los Alamos National Laboratory, and later started the complex systems group, which came to include some of the rising stars in the field, such as Chris Langton, Walter Fontana, and Steen Rasmussen. In addition to his work on chaos, he has made important theoretical contributions to other problems in complex systems, including machine learning, a model for the immune system, and the origin of life.


Excerpted from The Third Culture: Beyond the Scientific Revolution by John Brockman (Simon & Schuster, 1995). Copyright © 1995 by John Brockman. All rights reserved.

Evolution as an algorithm (Part One)








While Monod characterised evolution in terms of its most basic features, Daniel Dennett has championed a conception of evolution at the next higher level of abstraction. He proposes that Darwin’s theory of natural selection should be thought of as an algorithm (Dennett, Darwin's Dangerous Idea: Evolution and the Meanings of Life, 51).

Some features of the world can be satisfactorily described in terms of laws and equations. Newton’s inverse-square law of gravitation is a perfect example. Others require statistical descriptions. But a faithful abstraction of natural selection needs to capture its cumulative and temporal character. Algorithms do this in ways that differential equations cannot.

Unlike typical discoveries in the sciences, an algorithm, once uncovered, is no longer up for debate. The closest analogue is with mathematical theorems. Once Pythagoras had developed his theorem relating the lengths of the sides of right triangles, it could not be undeveloped (although it could be reformulated for non-Euclidean geometries, etc.). There is much to be gained from thinking of natural selection in algorithmic terms, and it is as unlikely to be refuted as Pythagoras’ theorem. This is one more reason why Dennett refers to natural selection as ‘Darwin’s Dangerous Idea.’

It is once we start thinking of life in algorithmic terms that the power of Darwin’s theory becomes shockingly clear. It is a matter of common experience that offspring inherit traits from their parents, and that no two descendants are completely alike. Darwin recognised that any offspring born with variations that were somehow more profitable than those of its peers - however slight these variations may be - would pass on these advantageous traits to more offspring than their less advantaged contemporaries. The advantageous traits would then spread and become commonplace within the population. This kind of system lends itself to algorithmic modelling. Imagine two variables representing the fitness of ‘normal’ members of a species (variable a) and of a mutant, b. The mutation is very minor, perhaps corresponding to a slight strengthening of teeth, giving b a 1% fitness advantage in cases where that strength is helpful. We are in the abstract world of mathematics and algorithms, so if b > a on average, it is inevitable that b will continue to increase and the number of b organisms will come to significantly outnumber the a organisms. (Note that at this level of description there is no competition for finite resources, and yet the mechanism of natural selection still operates.) The only question is how many generations it will take. The new fitness value for the overall population will have become normalised at 101% compared to where we started. The stage is now set for the eventual emergence of another beneficial mutation that will see the whole species renormalised to a still higher value of fitness. Of course, neutral mutations and deleterious mutations will occur as well, but at the simplistic level of description provided here these have essentially no net effect, because beneficial mutations are inherited more often (by definition) and therefore inevitably overwhelm the non-beneficial mutations.
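To make the arithmetic concrete, here is a minimal sketch of the abstract two-variant model just described, in Python. The update rule, the function name share_of_b, and the starting numbers are illustrative assumptions of this sketch, not anything specified by Dennett or Darwin:

def share_of_b(generations, advantage=0.01, initial_b=0.001):
    """Deterministic update: each generation, variant b's share of the
    population grows in proportion to its (1 + advantage) relative fitness."""
    p = initial_b  # fraction of the population carrying the b mutation
    for _ in range(generations):
        p = p * (1 + advantage) / (p * (1 + advantage) + (1 - p))
    return p

for g in (0, 500, 1000, 2000):
    print(f"after {g:4d} generations, b's share = {share_of_b(g):.3f}")

Starting from one b organism per thousand, b's share crosses one half after roughly 700 generations and is effectively the whole population by generation 2,000: the inevitability described above, made visible.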

Importantly, at this level of description there is no difference between so-called ‘micro’ and ‘macro’ evolution. While common sense allows that descendants with stronger teeth may come to outnumber those with weak teeth (micro-evolution), when viewed in abstract algorithmic terms the same mechanism accounts for any adaptation whatsoever, including macro-evolutionary changes. Darwin was quite correct to observe “I can see no limit to this power” (Charles Darwin, The Origin of Species by Means of Natural Selection: Or, the Preservation of Favoured Races in the Struggle for Life (Harmondsworth: Penguin, 1985), 443; see also 168) and to conclude that it could serve to drive the origin of species.

However loudly Darwin’s critics protest, this level of explanation of adaptation is powerful and irrefutable. Dennett is correct to claim that natural selection is about as likely to be refuted as is a return to a pre-Copernican geocentric view of the cosmos (Dennett, Darwin's Dangerous Idea: Evolution and the Meanings of Life, 20). Once understood, the idea is so obvious as to be self-evident.

Unfortunately, its immense explanatory power and irrefutable nature are also its Achilles’ heel. Expressed in the abstract terms laid out so far, it can explain any and every adaptation; we have not specified the interval between generations, so by default the value of b reaches infinity almost immediately, as does the population of b organisms. In order to serve as an explanation for adaptations in terrestrial biology, the algorithm of natural selection needs to be properly ‘parameterised.’ The same holds true for Newton’s F = ma. This formula tells us nothing useful about an actual event in the world until the parameters of force, mass, or acceleration are known.
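As an illustration of what parameterisation buys (the numbers here are assumed for illustration, not taken from the text): once a discrete generation interval is fixed, the vague claim that b > a becomes a concrete growth law. With a 1% advantage compounded once per generation, the ratio of b organisms to a organisms after n generations is

    N_b(n) / N_a(n) = (N_b(0) / N_a(0)) × (1.01)^n

so the b lineage takes roughly ln 2 / ln 1.01 ≈ 70 generations to double its relative share: finite and calculable, rather than reaching infinity ‘almost immediately.’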

In evolution, specifying parameters is no easy task. Real-world populations compete for multiple resources, and lives are lived out in specific but changing environments. One of the key parameters is the net effect of natural selection. Since it is not the only force acting on populations, depending on the parameters that are plugged into the algorithm, it is possible that other factors could overwhelm it temporarily, or even in the long run. However, if on average it has the slightest net effect, natural selection will serve as a possible explanation for any adaptation (in fact, every adaptation) that is logically possible in any given environment.
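One standard way to picture such competing factors is genetic drift in a finite population; the Wright-Fisher sketch below is a hedged illustration, not the author's model, and the population size n_pop, advantage s, and trial count are assumed parameters. A lone mutant with a 1% advantage is usually lost to chance, yet it reaches fixation about 2s ≈ 2% of the time, roughly four times more often than a neutral mutant's 1/n_pop:

import random

def fixation_rate(n_pop=200, s=0.01, trials=500):
    """Fraction of trials in which a single advantaged mutant takes over."""
    fixed = 0
    for _ in range(trials):
        count = 1  # one b mutant among n_pop individuals
        while 0 < count < n_pop:
            # selection: weight the mutant's chance of being copied by (1 + s)
            p = count * (1 + s) / (count * (1 + s) + (n_pop - count))
            # drift: the next generation is a finite random sample
            count = sum(random.random() < p for _ in range(n_pop))
        fixed += (count == n_pop)
    return fixed / trials

print(f"fixation rate = {fixation_rate():.3f} (diffusion-theory prediction: about 2s = 0.02)")

In any single run drift can and usually does overwhelm the 1% advantage, but averaged over many runs the advantaged mutant fixes far more often than chance alone would allow, which is the sense in which ‘the slightest net effect’ suffices.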

The present situation is one where the mechanism and theoretical power of natural selection are not in doubt, but its place within an account of actual terrestrial biological history depends upon its being correctly parameterised and placed within a larger model of the 3.8-billion-year history of life on Earth (see S. Conway Morris, Life's Solution: Inevitable Humans in a Lonely Universe (Cambridge: Cambridge University Press, 2003), 108).