This site may contain copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in an effort to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues. We believe this constitutes a ‘fair use’ of any such copyrighted material as provided for in Section 107 of the US Copyright Law.

In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit to those who have expressed a prior interest in receiving the included information for research and educational purposes. For more information go to: http://www.law.cornell.edu/uscode/17/107.shtml

If you wish to use copyrighted material from this site for purposes of your own that go beyond ‘fair use’, you must obtain permission from the copyright owner.

FAIR USE NOTICE: This page may contain copyrighted material the use of which has not been specifically authorized by the copyright owner. This website distributes this material without profit to those who have expressed a prior interest in receiving the included information for scientific, research and educational purposes. We believe this constitutes a fair use of any such copyrighted material as provided for in 17 U.S.C § 107.

Read more at: http://www.etupdates.com/fair-use-notice/#.UpzWQRL3l5M | ET. Updates

All Blogs licensed under Creative Commons Attribution 3.0

Thursday, June 26, 2014

Why Liberals Are More Intelligent Than Conservatives

Psychology Today: Here to Help

The Scientific Fundamentalist

A look at the hard truths about human nature.

Liberals think they’re more intelligent than conservatives because they are

Harriet Hayes (Sarah Paulson):  I don’t even know what the sides are in the culture wars.

Matt Albie:  Well, your side hates my side because you think we think you are stupid, and my side hates your side because we think you are stupid.

Studio 60 on the Sunset Strip, Nevada Day, Part I

It is difficult to define a whole school of political ideology precisely, but one may reasonably define liberalism (as opposed to conservatism) in the contemporary United States as the genuine concern for the welfare of genetically unrelated others and the willingness to contribute larger proportions of private resources for the welfare of such others.  In the modern political and economic context, this willingness usually translates into paying higher proportions of individual incomes in taxes toward the government and its social welfare programs.  Liberals usually support such social welfare programs and higher taxes to finance them, and conservatives usually oppose them.

Defined as such, liberalism is evolutionarily novel.  Humans (like other species) are evolutionarily designed to be altruistic toward their genetic kin, their friends and allies, and members of their deme (a group of intermarrying individuals) or ethnic group.  They are not designed to be altruistic toward an indefinite number of complete strangers whom they are not likely ever to meet or interact with.  This is largely because our ancestors lived in a small band of 50-150 genetically related individuals, and large cities and nations with thousands and millions of people are themselves evolutionarily novel.

The examination of the 10-volume compendium The Encyclopedia of World Cultures, which describes all human cultures known to anthropology (more than 1,500) in great detail, as well as extensive primary ethnographies of traditional societies, reveals that liberalism as defined above is absent in these traditional cultures.  While sharing of resources, especially food, is quite common and often mandatory among hunter-gatherer tribes, and while trade with neighboring tribes often takes place, there is no evidence that people in contemporary hunter-gatherer bands freely share resources with members of other tribes.

Because all members of a hunter-gatherer tribe are genetic kin or at the very least friends and allies for life, sharing resources among them does not qualify as an expression of liberalism as defined above.  Given its absence in the contemporary hunter-gatherer tribes, which are often used as modern-day analogs of our ancestral life, it may be reasonable to infer that sharing of resources with total strangers that one has never met or is not likely ever to meet – that is, liberalism – was not part of our ancestral life.  Liberalism may therefore be evolutionarily novel, and the Hypothesis would predict that more intelligent individuals are more likely than less intelligent individuals to espouse liberalism as a value.

Analyses of large representative samples, from both the United States and the United Kingdom, confirm this prediction.  In both countries, more intelligent children are more likely to grow up to be liberals than less intelligent children.  For example, among the American sample, those who identify themselves as “very liberal” in early adulthood have a mean childhood IQ of 106.4, whereas those who identify themselves as “very conservative” in early adulthood have a mean childhood IQ of 94.8.

[Figure: mean childhood IQ by adult political ideology]

Even though past studies show that women are more liberal than men, and blacks are more liberal than whites, the effect of childhood intelligence on adult political ideology is twice as large as the effect of either sex or race.  So it appears that, as the Hypothesis predicts, more intelligent individuals are more likely to espouse the value of liberalism than less intelligent individuals, possibly because liberalism is evolutionarily novel and conservatism is evolutionarily familiar.

The primary means by which citizens of capitalist democracies contribute their private resources to the welfare of genetically unrelated others is paying taxes to the government for its social welfare programs.  The fact that conservatives have been shown to give more money to charities than liberals is not inconsistent with the prediction from the Hypothesis; in fact, it supports the prediction.  Individuals can normally choose and select the beneficiaries of their charity donations.  For example, they can choose to give money to the victims of the earthquake in Haiti, because they want to help them, but not to the victims of the earthquake in Chile, because they don’t want to help them.  In contrast, citizens have no control over whom the money they pay in taxes benefits.  They cannot individually choose to pay taxes to fund Medicare, because they want to help elderly white people, but not AFDC, because they don’t want to help poor black single mothers.  This may be precisely why conservatives choose to give more money to individual charities of their choice while opposing higher taxes.

Incidentally, this finding substantiates one of the persistent complaints among conservatives.  Conservatives often complain that liberals control the media, show business, academia, and other social institutions.  The Hypothesis explains why conservatives are correct in their complaints.  Liberals do control the media, show business, and academia, among other institutions, because, apart from a few areas of life (such as business) where countervailing circumstances may prevail, liberals control all institutions.  They control the institutions because liberals are on average more intelligent than conservatives and thus more likely to attain the highest status in any area of (evolutionarily novel) modern life.

Monday, June 2, 2014

Maybe classical clockwork can explain quantum weirdness



Game of Life may be a model for deriving quantum odds from cause-and-effect laws, Nobel laureate ’t Hooft says

Second of two parts (Read part 1)
Quantum physics is like life. Not nasty, brutish and short, but rather unpredictable, occasionally interesting, and often depressing. At least it has been depressing for many scientists, like Einstein, who thought science ought to predict what happens, not just give you the odds for what might happen, like meteorologists forecasting rain.

At least with quantum physics, unlike weather forecasts, the odds are always accurate. But that doesn’t satisfy everybody who wants a truly deep understanding of nature’s laws.

So even though quantum theory’s predictions are always remarkably reliable, physicists have for decades been debating what the mathematical apparatus for making those predictions, known as quantum mechanics, really means.

Some interpretations suggest that reality is ill-defined until observations and measurements are made. It’s like turning a spinning coin into either heads or tails by catching and looking at it. Others say there are multiple parallel universes, so all spinning coins turn up as heads in one universe and tails in another. Or maybe — a minority view — quantum coins are just like real coins: whether they turn up heads or tails is completely deterministic, obeying strict laws of cause and effect. If you knew all the forces, the strength of the flip and the gravity and air resistance and everything else, you could predict a coin’s heads-or-tails outcome correctly every time. Some people hope that quantum physics will turn out to be like that, with everything foreordained by tock-after-tick deterministic clockwork, as with classical Newtonian physics.

Sadly for Newton fans, the weird outcomes of many quantum experiments have seemingly ruled out any such return to certainty. Take, for instance, the confusing phenomenon of quantum entanglement. Two particles from a common source can be separated by a vast distance, yet a measurement of one instantly determines what can be measured about the other. No signal can be sent through spacetime for ordinary cause-and-effect to explain that link. It’s quantum magic. Unless some invisible properties, or “hidden variables,” are determining the connection.

Over the years, various experiments have supposedly ruled out such hidden variables. But not necessarily the ones proposed by Nobel laureate Gerard ’t Hooft. If you trace all causes and effects back to the beginning of the universe, then maybe quantum mysteries such as entanglement can be explained in a classical cause-and-effect way, he argues.

“It may well be that, at its most basic level, there is no randomness in nature, no fundamentally statistical aspect to the laws of evolution,” he writes in a new paper. “Everything, up to the most minute detail, is controlled by invariable laws. Every significant event in our universe takes place for a reason, it was caused by the action of physical law, not just by chance.”

’t Hooft calls his view of quantum mechanics the cellular automaton interpretation. In other words, he thinks quantum physics is like Life. A cellular automaton is like a grid on which black and white squares change color on the basis of simple rules. The prototypical example is the game known as Life, invented several decades ago by the mathematician John Conway.

Think of the universe as made up of rows of pixels on a computer screen. The configuration of pixels changes from row to row by applying an algorithm, a set of rules for telling each pixel what color to become based on the current colors of its neighboring pixels. This approach is equivalent, ’t Hooft points out, to saying that the states of nature can be described by a sequence of integers that evolve over time, as determined by an algorithm that tells them how to change.
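
The pixel-row picture described above is just an elementary (one-dimensional) cellular automaton, and it is easy to sketch in code. The rule number, the grid size, and the periodic boundary below are illustrative choices, not anything taken from ’t Hooft’s paper:

```python
# A minimal 1-D cellular automaton: each row of "pixels" is a sequence of
# integers, and a fixed local rule maps every cell's 3-cell neighborhood
# to its next state. Rule 110 is used here purely for illustration.

def step(cells, rule=110):
    """Advance one row: each cell's next state depends only on itself
    and its two neighbors (periodic boundary)."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # 3-bit neighborhood, 0..7
        nxt.append((rule >> pattern) & 1)              # look up that bit of the rule
    return nxt

# Deterministic evolution: the same initial row always yields the same history.
row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Because the update is a pure function of the previous row, the entire history is fixed once the first row is chosen, which is exactly the kind of tick-by-tick determinism the paragraph above describes.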

Such an algorithm, ’t Hooft contends, can reproduce all the mysterious features of quantum physics. All observable phenomena will still obey the probabilities computed using quantum math. In this view, even though the future is determined by the past, humans could never predict the future, because they don’t know the underlying algorithm. And even if they did, they couldn’t calculate it faster than the evolution of the universe itself.

If he is right, the foundations of reality could be described by a deterministic theory. “It will be a theory that describes phenomena at a very tiny distance scale in terms of evolution laws that process bits and bytes of information,” ’t Hooft writes.

Mathematical technicalities abound in his 202-page paper. He shows with elaborate mathematics how his idea can work for “toy” models, such as a world with particles that move in only one dimension and don’t interact with each other. He suggests ways that ideas from superstring theory might help make things work in more complicated systems. He goes on to outline how further work might develop a complete theory capable of explaining the entirety of particle physics, with all its quantum features, from a nonquantum foundation.

Recapitulating ’t Hooft’s 200 pages of arguments into a few paragraphs omits a lot of nuance. But at the core of his approach is the notion that ultimate elements of reality, whatever they are, do not correspond to the templates for reality conceived by the human mind. Concepts such as particles and fields used in today’s standard physics are human inventions. Subatomic particles, atoms, molecules — the things that exhibit quantum weirdness — are templates imposed by human theory on the underlying truly real objects in nature. Quantum mysteries arise because of lack of awareness of the underlying level. An electron can be in two places at once because an electron is not a basic element of reality — it’s a template that subsumes multiple “beables,” the submicroscopic states of true reality.

“It is due to our intuitive thinking that our templates represent reality in some way, that we hit upon the apparently inevitable paradoxes” of quantum physics, ’t Hooft writes. He believes that a cellular automaton can describe the real states underlying the templates deterministically, with math that can be transformed into quantum theory’s seemingly probabilistic descriptions of reality.

“We consider cases where one has a classical, deterministic automaton on the one hand, and an apparently quantum mechanical system on the other,” he writes. “Then, the mathematical mapping is considered that shows these two systems to be equivalent in the sense that the solutions of one can be used to describe the solutions of the other.”

Yet while ’t Hooft makes a lot of progress, he acknowledges that the battle is far from won. There are difficulties to be overcome before he can reproduce all the victories achieved by the standard model of particle physics. Not to mention incorporating solutions to its remaining puzzles, such as how to formulate a quantum theory of gravity.

“It may take years, decades, perhaps centuries to arrive at a comprehensive theory of quantum gravity, combined with a theory of quantum matter that will be an elaborate extension of the standard model,” he writes. Only then will it be possible to identify the beables, the basic ingredients for deterministic theory, and figure out the relationship between the beables and the human templates such as particles and fields. “Only then can we tell whether the cellular automaton interpretation really works.”

’t Hooft has been developing these ideas for many years. So far they have not caught on among other quantum physicists. But his efforts are much more subtle and sophisticated than the many other attempts to restore cause-and-effect determinism to the universe. And even if he hasn’t yet succeeded in establishing that this approach will work, he has offered sufficient evidence that his views should be taken seriously. And that is one of his goals.

“We hope to inspire more physicists to … to consider seriously the possibility that quantum mechanics as we know it is not a fundamental, mysterious, impenetrable feature of our physical world, but rather an instrument to statistically describe a world where the physical laws, at their most basic roots, are not quantum mechanical at all. Sure, we do not know how to formulate the most basic laws at present, but we are collecting indications that a classical world underlying quantum mechanics does exist.”

Follow me on Twitter: @tom_siegfried

Nobel laureates offer new interpretations of quantum mysteries



Like the way flipping a coin represents the dual possibilities of heads or tails, the quantum mathematical expression for computing probabilities of measurement outcomes describes multiple realities existing simultaneously. Some physicists have therefore suggested that the quantum mathematical expression does not have actual physical significance.

First of two parts

Writing about the paradoxical nature of quantum mechanics poses a peculiar paradox of its own. If you explain it well enough that your readers understand it, you've somehow committed a gross error, because (as Feynman famously said) nobody really understands quantum mechanics.

But don’t worry. There is no danger that anybody reading this blog will come away understanding it. Nevertheless there are some new developments in the never-ending quest to explain quantum physics that, even if hard to understand, are worth knowing about. Especially considering where these new developments come from. Two giants of 20th century physics have recently offered 21st century views on how to interpret the quantum math that requires the subatomic world to be so weird.

The giants are Steven Weinberg (Nobel 1979) and Gerard ’t Hooft (Nobel 1999). They both played key roles in forging the modern understanding of particles and forces known as the standard model. They share a deep concern about the issues afflicting efforts to understand the foundations of quantum mechanics. But they offer very different views on what to do about it.

For decades, physicists seeking a solid foundation for quantum mechanics have been proposing new interpretations of its math. There are now more quantum interpretations than Batman, Superman, Spider-Man and X-Men movies combined. None have succeeded in ending the debate over two enduring problems: what happens when a measurement is made on a quantum system, and what the hell is going on with “spooky” quantum entanglement.

Measurement is a crucial concept in quantum mechanics, because it doesn’t work like the traditional measurements of classical physics. In the old days everybody thought objects had properties, like the way a coin can show heads or tails. You find out which property it is showing by looking at it. But in quantum mechanics, the property doesn’t exist before the measurement. Quantum particles, such as photons or electrons, are like spinning coins, neither heads nor tails until you catch one.

Quantum mechanical math is therefore probabilistic. It tells you the odds of getting heads or tails. But once you make the measurement, the result is definite. No more probabilities. So the quantum mathematical expression used for computing the probabilities, called the wave function (or state vector), apparently just “collapses.” (There is a technical distinction between wave function and state vector that will be ignored unless it really matters.)
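
The heads-or-tails arithmetic in the paragraphs above can be made concrete with a textbook sketch of the Born rule; the two-entry state vector and the variable names here are illustrative choices, not taken from either paper:

```python
import math

# A "spinning coin" qubit as a two-entry state vector of complex amplitudes.
# The Born rule: the probability of each outcome is the squared magnitude
# of its amplitude.

state = [1 / math.sqrt(2), 1 / math.sqrt(2)]  # equal superposition of heads/tails

p_heads = abs(state[0]) ** 2
p_tails = abs(state[1]) ** 2
print(p_heads, p_tails)  # each ~0.5 before the measurement

# After a measurement that finds "heads", the state vector "collapses":
state = [1.0, 0.0]  # no more probabilities; the outcome is now definite
```

Before the measurement only the odds exist; after it, the amplitudes are all concentrated on the observed outcome, which is the "collapse" the text describes.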

For the traditional “Copenhagen” interpretation, Weinberg notes, that collapse just expresses a “mysterious division between the microscopic world governed by quantum mechanics and a macroscopic world of apparatus and observers that obeys classical physics.” Hence some experts wonder whether the state vector is actually representative of reality at all. If it is, some have argued, then all the various possible outcomes must actually occur in some universe — “the endless creation of inconceivably many branches of history,” as Weinberg puts it. In other words, some important football games would have ended differently because the other team won the overtime coin toss.

Entanglement is even more mysterious, Weinberg suggests, because the state vector can change as a result of a measurement made very far away. When two particles interact, they form a composite quantum system, described by a single state vector, even when one particle flies far away from the other. The state vector tells the odds for outcomes of measurements on either of the two particles. But once you measure one of them, the odds for different outcomes for the other particle instantaneously change, no matter how far away the other particle is. Sounds like voodoo, which is maybe why Einstein called it “spooky.”
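
The instantaneous change of odds described above can be sketched numerically with a standard Bell state; the outcome labels and variable names are illustrative choices, not anything from Weinberg’s paper:

```python
import math

# A Bell state for two far-apart particles, written as amplitudes over the
# four joint outcomes (A's result, B's result).

amp = {("H", "H"): 1 / math.sqrt(2),
       ("H", "T"): 0.0,
       ("T", "H"): 0.0,
       ("T", "T"): 1 / math.sqrt(2)}

# Before any measurement, particle B is a 50/50 coin:
p_B_heads = sum(abs(a) ** 2 for (x, y), a in amp.items() if y == "H")
print(p_B_heads)  # ~0.5

# Measure particle A and find "H". Conditioning on that outcome, the odds
# for B change instantly, no matter how far away B is:
norm = sum(abs(a) ** 2 for (x, y), a in amp.items() if x == "H")
p_B_heads_given_A_H = sum(abs(a) ** 2 for (x, y), a in amp.items()
                          if x == "H" and y == "H") / norm
print(p_B_heads_given_A_H)  # ~1.0: B is now certain to show "H"
```

Nothing travels between the particles in this calculation; the single joint table of amplitudes is what makes the faraway odds shift the moment one outcome is known.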

“The susceptibility of the state vector to instantaneous change from a distance casts doubts on its physical significance,” Weinberg writes in his new paper. If a statement describing a system’s state can be changed instantly by a faraway measurement, “it seems reasonable to infer that such statements are meaningless,” he declares. “That is, it seems worth considering yet another interpretation of quantum mechanics.”

Weinberg’s title for the paper on his new interpretation is “Quantum mechanics without state vectors.” It asserts that the state vector is not, in fact, the proper representation of reality. That role should rather be assigned to something called the density matrix.

“Density” in this context refers to probability densities; a matrix is just a mathematical expression in which numbers are arranged in rows and columns. When the state vector is unknown, quantum calculations use a density matrix to compute the odds of different measurement outcomes. (The density matrix represents the information you possess about the relative likelihood of various possible state vectors describing the system you’re going to measure.)

As Weinberg points out, a given set of state vectors will tell you what the density matrix is. But a given density matrix doesn’t tell you what the state vectors are, because different sets of state vectors can give the same density matrix. It’s kind of like the answer is 42, but you don’t know whether that came from 7x6 or 2x21 or 3x14. Since 42 is the number you need, there’s no reason to care about its possible factors. So Weinberg advocates doing away with all the fuss about state vectors and concentrating on density matrices instead.
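
The "42 from 7x6 or 2x21" point can be checked directly: two different ensembles of state vectors can yield one and the same density matrix. This is a standard textbook construction (assuming NumPy is available; the helper name is mine), not anything specific to Weinberg’s paper:

```python
import numpy as np

def density_matrix(states, probs):
    """rho = sum_i p_i |psi_i><psi_i| for the given state vectors."""
    return sum(p * np.outer(psi, psi.conj()) for psi, p in zip(states, probs))

# Ensemble 1: |0> or |1>, each with probability 1/2.
zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
rho1 = density_matrix([zero, one], [0.5, 0.5])

# Ensemble 2: |+> or |->, each with probability 1/2.
plus = (zero + one) / np.sqrt(2)
minus = (zero - one) / np.sqrt(2)
rho2 = density_matrix([plus, minus], [0.5, 0.5])

print(np.allclose(rho1, rho2))  # True: both equal the maximally mixed state I/2
```

Since every measurement probability is computable from the density matrix alone, no experiment can distinguish the two ensembles, which is why Weinberg argues the density matrix, not the state vector, deserves to be called the physical state.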

“In speaking of ‘quantum mechanics without state vectors’ I mean only that a statement that a system is in any one of various state vectors with various probabilities is to be regarded as having no meaning, except for what it tells us about the density matrix,” he writes. “With this definition of a physical state, even in entangled states nothing that is done in one isolated system can instantaneously affect the physical state of a distant isolated system.”

Weinberg goes on to explore ways that new mathematical features of quantum mechanics might emerge if the density matrix is taken as the proper description of physical states. Whether deep new insights into physics will result from such explorations remains to be seen.

But even if quantum mechanics as we know it remains essentially unchanged, it’s still thinkable that something deeper will ultimately explain its weirdness. That’s the tactic of ’t Hooft, who for many years has argued that quantum probabilities mask a unique cause-and-effect “deterministic” reality, hidden from human view.

Various analyses and experiments seem to have ruled out the notion that “hidden variables” determine the fate of particle measurements for which quantum math can give only the odds. ’t Hooft does not dispute these experiments. But he suggests that quantum odds can nevertheless emerge from a deeper layer of reality in which everything is specified deterministically. He spells his views out in a new 202-page paper. Consequently it will require another 1,000-word blog post to explore his interpretation in sufficient detail to guarantee that it will be properly misunderstood.

 Follow me on Twitter: @tom_siegfried