FAIR USE NOTICE

A BEAR MARKET ECONOMICS BLOG

OCCUPY THE SCIENTIFIC METHOD


This site may contain copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in an effort to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues. We believe this constitutes a ‘fair use’ of any such copyrighted material as provided for in Section 107 of the US Copyright Law.

In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit to those who have expressed a prior interest in receiving the included information for research and educational purposes. For more information go to: http://www.law.cornell.edu/uscode/17/107.shtml

If you wish to use copyrighted material from this site for purposes of your own that go beyond ‘fair use’, you must obtain permission from the copyright owner.

FAIR USE NOTICE: This page may contain copyrighted material the use of which has not been specifically authorized by the copyright owner. This website distributes this material without profit to those who have expressed a prior interest in receiving the included information for scientific, research and educational purposes. We believe this constitutes a fair use of any such copyrighted material as provided for in 17 U.S.C. § 107.

Read more at: http://www.etupdates.com/fair-use-notice/#.UpzWQRL3l5M | ET. Updates

All Blogs licensed under Creative Commons Attribution 3.0

Wednesday, December 30, 2015

Scientists Are Beginning to Figure Out Why Conservatives Are…Conservative





Ten years ago, it was wildly controversial to talk about psychological differences between liberals and conservatives. Today, it's becoming hard not to.


Tue Jul. 15, 2014 5:00 AM EDT



Scientists are using eye-tracking devices to detect automatic response differences between liberals and conservatives.

You could be forgiven for not having browsed yet through the latest issue of the journal Behavioral and Brain Sciences. If you care about politics, though, you'll find a punchline therein that is pretty extraordinary.
Behavioral and Brain Sciences employs a distinctive practice called "Open Peer Commentary": An article of major significance is published, a large number of fellow scholars comment on it, and then the original author responds to all of them. The approach has many virtues, one of which is that it lets you see where a community of scholars and thinkers stands with respect to a controversial or provocative scientific idea. And in the latest issue of the journal, this process reveals the following conclusion: A large body of political scientists and political psychologists now concur that liberals and conservatives disagree about politics in part because they are different people at the level of personality, psychology, and even traits like physiology and genetics.
That's a big deal. It challenges everything that we thought we knew about politics—upending the idea that we get our beliefs solely from our upbringing, from our friends and families, from our personal economic interests, and calling into question the notion that in politics, we can really change (most of us, anyway).
It is a "virtually inescapable conclusion" that the "cognitive-motivational styles of leftists and rightists are quite different."
The occasion of this revelation is a paper by John Hibbing of the University of Nebraska and his colleagues, arguing that political conservatives have a "negativity bias," meaning that they are physiologically more attuned to negative (threatening, disgusting) stimuli in their environments. (The paper can be read for free here.) In the process, Hibbing et al. marshal a large body of evidence, including their own experiments using eye trackers and other devices to measure the involuntary responses of political partisans to different types of images. One finding? That conservatives respond much more rapidly to threatening and aversive stimuli (for instance, images of "a very large spider on the face of a frightened person, a dazed individual with a bloody face, and an open wound with maggots in it," as one of their papers put it).
In other words, the conservative ideology, and especially one of its major facets—centered on a strong military, tough law enforcement, resistance to immigration, widespread availability of guns—would seem well tailored for an underlying, threat-oriented biology.
The authors go on to speculate that this ultimately reflects an evolutionary imperative. "One possibility," they write, "is that a strong negativity bias was extremely useful in the Pleistocene," when it would have been super-helpful in preventing you from getting killed. (The Pleistocene epoch lasted from roughly 2.5 million years ago until 12,000 years ago.) We had John Hibbing on the Inquiring Minds podcast earlier this year, and he discussed these ideas in depth; you can listen here:
Hibbing and his colleagues make an intriguing argument in their latest paper, but what's truly fascinating is what happened next. Twenty-six different scholars or groups of scholars then got an opportunity to tee off on the paper, firing off a variety of responses. But as Hibbing and colleagues note in their final reply, out of those responses, "22 or 23 accept the general idea" of a conservative negativity bias, and simply add commentary to aid in the process of "modifying it, expanding on it, specifying where it does and does not work," and so on. Only about three scholars or groups of scholars seem to reject the idea entirely.
That's pretty extraordinary, when you think about it. After all, one of the teams of commenters includes New York University social psychologist John Jost, who drew considerable political ire in 2003 when he and his colleagues published a synthesis of existing psychological studies on ideology, suggesting that conservatives are characterized by traits such as a need for certainty and an intolerance of ambiguity. Now, writing in Behavioral and Brain Sciences in response to Hibbing roughly a decade later, Jost and fellow scholars note that
There is by now evidence from a variety of laboratories around the world using a variety of methodological techniques leading to the virtually inescapable conclusion that the cognitive-motivational styles of leftists and rightists are quite different. This research consistently finds that conservatism is positively associated with heightened epistemic concerns for order, structure, closure, certainty, consistency, simplicity, and familiarity, as well as existential concerns such as perceptions of danger, sensitivity to threat, and death anxiety. [Italics added]
Back in 2003, Jost and his team were blasted by Ann Coulter, George Will, and National Review for saying this; congressional Republicans began probing into their research grants; and they got lots of hate mail. But what's clear is that today, they've more or less triumphed. They won a field of converts to their view and sparked a wave of new research, including the work of Hibbing and his team.
"One possibility," note the authors, "is that a strong negativity bias was extremely useful in the Pleistocene," when it would have been super-helpful in preventing you from getting killed.
Granted, there are still many issues yet to be worked out in the science of ideology. Most of the commentaries on the new Hibbing paper are focused on important but not-paradigm-shifting side issues, such as the question of how conservatives can have a higher negativity bias, and yet not have neurotic personalities. (Actually, if anything, the research suggests that liberals may be the more neurotic bunch.) Indeed, conservatives tend to have a high degree of happiness and life satisfaction. But Hibbing and colleagues find no contradiction here. Instead, they paraphrase two other scholarly commentators (Matt Motyl of the University of Virginia and Ravi Iyer of the University of Southern California), who note that "successfully monitoring and attending negative features of the environment, as conservatives tend to do, may be just the sort of tractable task…that is more likely to lead to a fulfilling and happy life than is a constant search for new experience after new experience."
All of this matters, of course, because we still operate in politics and in media as if minds can be changed by the best honed arguments, the most compelling facts. And yet if our political opponents are simply perceiving the world differently, that idea starts to crumble. Out of the rubble just might arise a better way of acting in politics that leads to less dysfunction and less gridlock…thanks to science.

Sunday, December 27, 2015

WHAT WAS DARWIN'S ALGORITHM?





To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.


CONVERSATION : LIFE


The synthetic path to investigating the world is the logical space occupied by the physicist Murray Gell-Mann, the biologist Stuart Kauffman, the computer scientist Christopher G. Langton, and the physicist J. Doyne Farmer, and their colleagues in and around Los Alamos and the Santa Fe Institute.
The Santa Fe Institute was founded in 1984 by a group that included Gell-Mann, then at the California Institute of Technology, and the Los Alamos chemist George Cowan. Some say it came into being as a haven for bored physicists. Indeed, the end of the reductionist program in physics may well be an epistemological demise, in which the ultimate question is neither asked nor answered but instead the terms of the inquiry are transformed. This is what is happening in Santa Fe.
Murray Gell-Mann, widely acknowledged as one of the greatest particle physicists of the century (another being his late Caltech colleague, Richard Feynman), received a Nobel Prize for work in the 1950s and 1960s leading up to his proposal of the quark model. At a late stage in his career, he has turned to the study of complex adaptive systems.


Gell-Mann's model of the world is based on information; he connects the reductionist, fundamental laws of physics — the simple rules — with the complexity that emerges from those rules and with what he terms "frozen accidents" — that is, historical happenstance. He has given a name to this activity: "plectics," which is the study of simplicity and complexity as it is manifested not just in nature but in such phenomena as language and economics. At the institute, he provides encouragement, experience, prestige, and his vast reservoir of scientific knowledge to a younger group of colleagues, who are mostly involved in developing computational models based on simple rules that allow the emergence of complex behavior.
Stuart Kauffman is a theoretical biologist who studies the origin of life and the origins of molecular organization. Twenty-five years ago, he developed the Kauffman models, which are random networks exhibiting a kind of self-organization that he terms "order for free." Kauffman is not easy. His models are rigorous, mathematical, and, to many of his colleagues, somewhat difficult to understand. A key to his worldview is the notion that convergent rather than divergent flow plays the deciding role in the evolution of life. With his colleague Christopher G. Langton, he believes that the complex systems best able to adapt are those poised on the border between chaos and order.
Kauffman asks a question that goes beyond those asked by other evolutionary theorists: if selection is operating all the time, how do we build a theory that combines self-organization (order for free) and selection? The answer lies in a "new" biology, somewhat similar to that proposed by Brian Goodwin, in which natural selection is married to structuralism.
Christopher G. Langton has spent years studying evolution through the prism of computer programs. His work has focused on abstracting evolution from that upon which it acts. He has created "nature" in the computer, and his work has given rise to a new discipline called AL, or artificial life. This is the study of "virtual ecosystems," in which populations of simplified "animals" interact, reproduce, and evolve. Langton takes a bottom-up approach to the study of life, intelligence, and consciousness which resonates with the work of Marvin Minsky, Roger Schank, and Daniel C. Dennett. By vitalizing abstraction, Langton hopes to illuminate things about life that are not apparent in looking at life itself.
J. Doyne Farmer is one of the pioneers of what has come to be called chaos theory — the theory that explains why much of nature appears random even though it follows deterministic physical laws. It also shows how some random-seeming systems may have underlying order which makes them more predictable. He has explored the practical consequences of this, showing how the game of roulette can be beaten using physics; he has also started a company to beat the market by finding patterns in financial data.
Farmer was an Oppenheimer Fellow at the Center for Nonlinear Studies at the Los Alamos National Laboratory, and later started the complex systems group, which came to include some of the rising stars in the field, such as Chris Langton, Walter Fontana, and Steen Rasmussen. In addition to his work on chaos, he has made important theoretical contributions to other problems in complex systems, including machine learning, a model for the immune system, and the origin of life.


Excerpted from The Third Culture: Beyond the Scientific Revolution by John Brockman (Simon & Schuster, 1995). Copyright © 1995 by John Brockman. All rights reserved.

Evolution as an algorithm (Part One)






Evolution as an algorithm


While Monod characterised evolution in terms of its most basic features, Daniel Dennett has championed a conception of evolution at the next higher level of abstraction. He proposes that Darwin’s theory of natural selection should be thought of as an algorithm (Dennett, Darwin's Dangerous Idea: Evolution and the Meanings of Life, 51).

Some features of the world can be satisfactorily described in terms of laws and equations. Newton’s inverse-square law of gravitation is a perfect example. Others require statistical descriptions. But a faithful abstraction of natural selection needs to capture its cumulative and temporal character. Algorithms do this in ways that differential equations cannot.

Unlike typical discoveries in the sciences, an algorithm, once uncovered, is no longer up for debate. The closest analogue is with mathematical theorems. Once Pythagoras had developed his theorem relating the lengths of the sides of right triangles, it could not be undeveloped (although it could be reformulated for non-Euclidean geometries, etc.). There is much to be gained from thinking of natural selection in algorithmic terms, and it is as unlikely to be refuted as Pythagoras’ theorem. This is one more reason why Dennett refers to natural selection as ‘Darwin’s Dangerous Idea.’

It is once we start thinking of life in algorithmic terms that the power of Darwin’s theory becomes shockingly clear. It is a matter of common experience that offspring inherit traits from their parents, and that no two descendants are completely alike. Darwin recognised that any offspring born with variations somehow more profitable than those of its peers - however slight these variations might be - would pass on these advantageous traits to more offspring than its less advantaged contemporaries. The advantageous traits would then spread and become commonplace within the population. This kind of system lends itself to algorithmic modelling. Imagine two variables representing the fitness of ‘normal’ members of a species (variable a) and a mutant, b. The mutation is very minor, perhaps corresponding to a slight strengthening of teeth, giving b a 1% fitness advantage in cases where that strength is helpful. We are in the abstract world of mathematics and algorithms, so if b > a on average, it is inevitable that b will continue to increase and that the number of b organisms will come to significantly outnumber the a organisms. (Note that at this level of description there is no competition for finite resources, and yet the mechanism of natural selection still operates.) The only question is how many generations it will take. The new fitness value for the overall population will have become normalized at 101% compared to where we started. The stage is now set for the eventual emergence of another beneficial mutation that will see the whole species renormalized to a still higher value of fitness. Of course, neutral and deleterious mutations will occur as well, but at the simplistic level of description provided here these have essentially no net effect, because beneficial mutations are inherited more often - by definition - and therefore inevitably overwhelm the non-beneficial mutations.
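The abstract model described above can be sketched in a few lines of code. This is a minimal, deterministic illustration of the paragraph's toy scenario (the 1% advantage and the mutant's starting share are arbitrary values chosen for the example), not anything taken from Dennett's text:

```python
def generations_to_majority(advantage=0.01, start_freq=0.001):
    """Deterministic version of the abstract model: each generation the
    mutant type b is reweighted by its fitness edge over the normal type a
    (no drift, no finite resources, no competing forces).

    Returns the number of generations until b holds a majority.
    """
    p = start_freq          # current share of b organisms in the population
    generations = 0
    while p <= 0.5:
        # Replicator update: b's share grows in proportion to (1 + advantage).
        p = p * (1 + advantage) / (p * (1 + advantage) + (1 - p))
        generations += 1
    return generations
```

With a 1% advantage and a starting share of 0.1%, the mutant needs roughly 700 generations to become the majority, and doubling the advantage roughly halves that time; however slight the edge, the takeover is inevitable in this idealised setting.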

Importantly, at this level of description there is no difference between so-called ‘micro’ and ‘macro’ evolution. While common sense allows that descendants with stronger teeth may come to outnumber those with weak teeth (micro-evolution), when viewed in abstract algorithmic terms the same mechanism accounts for any adaptation whatsoever, including macro-evolutionary changes. Darwin was quite correct to observe “I can see no limit to this power” (Charles Darwin, The Origin of Species by Means of Natural Selection: Or, the Preservation of Favoured Races in the Struggle for Life (Harmondsworth: Penguin, 1985), 443; see also 168) and to conclude that it could serve to drive the origin of species.

However loudly Darwin’s critics protest, this level of explanation of adaptation is powerful and irrefutable. Dennett is correct to claim that natural selection is about as likely to be refuted as is a return to a pre-Copernican geocentric view of the cosmos (Dennett, Darwin's Dangerous Idea: Evolution and the Meanings of Life, 20). Once understood, the idea is so obvious as to be self-evident.

Unfortunately, its immense explanatory power and irrefutable nature are also its Achilles’ heel. Expressed in the abstract terms laid out so far, it can explain any and every adaptation; we have not specified the interval between generations, so by default the value of b reaches infinity almost immediately, as does the population of b organisms. In order to serve as an explanation for adaptations in terrestrial biology, the algorithm of natural selection needs to be properly ‘parameterised.’ The same holds true for Newton’s ‘F = ma.’ This formula tells us nothing useful about an actual event in the world until the parameters of force, mass, or acceleration are known.

In evolution, specifying parameters is no easy task. Real-world populations compete for multiple resources, and lives are lived out in specific but changing environments. One of the key parameters is the net effect of natural selection. Since it is not the only force acting on populations, depending on the parameters that are plugged into the algorithm, it is possible that other factors could overwhelm it temporarily, or even in the long run. However, if, on average, it has the slightest net effect, natural selection will serve as a possible explanation for any adaptation (in fact, every adaptation) that is logically possible in a given environment.
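The "net effect" point can also be made concrete in code. In the hypothetical sketch below, the mutant's advantage fluctuates from generation to generation, so other forces can and do overwhelm selection in any single generation; the specific numbers (a 1% mean edge, the noise level, the generation count) are invented for illustration:

```python
import random

def long_run_share(mean_advantage=0.01, noise=0.05, generations=5000, seed=1):
    """Let the mutant's per-generation advantage fluctuate randomly, so
    that in many individual generations the mutant actually loses ground.
    A positive *average* edge still drives its share upward in the long run.

    Returns the mutant's final share of the population.
    """
    rng = random.Random(seed)
    p = 0.01                                   # starting share of the mutant
    for _ in range(generations):
        s = rng.gauss(mean_advantage, noise)   # net selective effect this generation
        w_b = max(1e-9, 1.0 + s)               # mutant fitness, floored above zero
        p = p * w_b / (p * w_b + (1 - p))      # replicator update
    return p
```

Run with a +1% average edge, the mutant ends up dominating despite frequent bad generations; flip the sign of the average edge and it is driven toward extinction, which is exactly the "slightest net effect" claim in the paragraph above.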

The present situation is one where the mechanism and theoretical power of natural selection are not in doubt, but its place within an account of actual terrestrial biological history depends upon its being correctly parameterised and placed within a larger model of the 3.8-billion-year history of life on Earth (see S. Conway Morris, Life's Solution: Inevitable Humans in a Lonely Universe (Cambridge: Cambridge University Press, 2003), 108).

Sunday, November 29, 2015

Carl Sagan on Humility, Science as a Tool of Democracy, and the Value of Uncertainty


Brain Pickings





Carl Sagan on Humility, Science as a Tool of Democracy, and the Value of Uncertainty

“Science is a way to call the bluff of those who only pretend to knowledge… It can tell us when we’re being lied to. It provides a mid-course correction to our mistakes.”


BY MARIA POPOVA

“Without science, democracy is impossible,” Bertrand Russell wrote in his foundational 1926 treatise on education and the good life. Three generations later, Carl Sagan (November 9, 1934–December 20, 1996) — another one of our civilization’s most inspired minds and greatest champions of reason — picked up where Russell left off to make an elegant case for the humanizing power of science, its vitality to democracy, and how applying the scientific way of thinking to everyday life refines our intellectual and moral integrity.
In his 1995 masterwork The Demon-Haunted World: Science as a Candle in the Dark (public library) — the source of his indispensable Baloney Detection Kit — Sagan writes:
Avoidable human misery is more often caused not so much by stupidity as by ignorance, particularly our ignorance about ourselves… Whenever our ethnic or national prejudices are aroused, in times of scarcity, during challenges to national self-esteem or nerve, when we agonize about our diminished cosmic place and purpose, or when fanaticism is bubbling up around us — then, habits of thought familiar from ages past reach for the controls.
The true power of science, Sagan suggests, lies not in feeding into our culture’s addiction to simplistic and ready-made answers but in its methodical dedication to asking what Hannah Arendt called the “unanswerable questions” that make us human, then devising tools for testing their proposed answers:
There is much that science doesn’t understand, many mysteries still to be resolved. In a Universe tens of billions of light-years across and some ten or fifteen billion years old, this may be the case forever.
[…]
Science is far from a perfect instrument of knowledge. It’s just the best we have. In this respect, as in many others, it’s like democracy. Science by itself cannot advocate courses of human action, but it can certainly illuminate the possible consequences of alternative courses of action.
The scientific way of thinking is at once imaginative and disciplined. This is central to its success. Science invites us to let the facts in, even when they don’t conform to our preconceptions. It counsels us to carry alternative hypotheses in our heads and see which best fit the facts. It urges on us a delicate balance between no-holds-barred openness to new ideas, however heretical, and the most rigorous skeptical scrutiny of everything — new ideas and established wisdom. This kind of thinking is also an essential tool for a democracy in an age of change.

Art by Olivier Tallec from Louis I, King of the Sheep

The scientific way of thinking, Sagan asserts, counters our perilous compulsion for certainty with systematic assurance that uncertainty is the only arrow of progress and error the only catalyst of growth:
Humans may crave absolute certainty; they may aspire to it; they may pretend, as partisans of certain religions do, to have attained it. But the history of science — by far the most successful claim to knowledge accessible to humans — teaches that the most we can hope for is successive improvement in our understanding, learning from our mistakes, an asymptotic approach to the Universe, but with the proviso that absolute certainty will always elude us.
We will always be mired in error. The most each generation can hope for is to reduce the error bars a little, and to add to the body of data to which error bars apply. The error bar is a pervasive, visible self-assessment of the reliability of our knowledge.
In this continual self-assessment, Sagan argues, lies the singular potency of science as a tool for advancing society:
The reason science works so well is partly that built-in error-correcting machinery. There are no forbidden questions in science, no matters too sensitive or delicate to be probed, no sacred truths. That openness to new ideas, combined with the most rigorous, skeptical scrutiny of all ideas, sifts the wheat from the chaff. It makes no difference how smart, august, or beloved you are. You must prove your case in the face of determined, expert criticism. Diversity and debate are valued. Opinions are encouraged to contend — substantively and in depth.
[…]
Science is part and parcel humility. Scientists do not seek to impose their needs and wants on Nature, but instead humbly interrogate Nature and take seriously what they find. We are aware that revered scientists have been wrong. We understand human imperfection. We insist on independent and — to the extent possible — quantitative verification of proposed tenets of belief. We are constantly prodding, challenging, seeking contradictions or small, persistent residual errors, proposing alternative explanations, encouraging heresy. We give our highest rewards to those who convincingly disprove established beliefs.
Embracing this ethos is an exercise in willingly refining our intellectual and ideological imperfections. Sagan captures this with elegant simplicity:
Valid criticism does you a favor.
He returns to the greatest promise of science as fertilizer for intellectual and spiritual growth, a democratic tool of social change, and a framework for civilizational advancement:
Science is a way to call the bluff of those who only pretend to knowledge. It is a bulwark against mysticism, against superstition, against religion misapplied to where it has no business being. If we’re true to its values, it can tell us when we’re being lied to. It provides a mid-course correction to our mistakes.
[…]
Finding the occasional straw of truth awash in a great ocean of confusion and bamboozle requires vigilance, dedication, and courage. But if we don’t practice these tough habits of thought, we cannot hope to solve the truly serious problems that face us.
Complement the enduringly elevating The Demon-Haunted World with Sagan on science and spirituality, the vital balance between skepticism and openness, his reading list, and this wonderful animated adaptation of his famous Pale Blue Dot monologue, then revisit cosmologist Lisa Randall on the crucial difference in how art, religion, and science explain the universe and Neil deGrasse Tyson’s touching remembrance of Sagan.



Friday, October 9, 2015

Machines do not think - The Contradiction with Autonomous Systems





Machines do not think - The Contradiction with Autonomous Systems


‘Autonomy’ is currently a buzzword for unmanned systems and is wrongly used throughout the robotic community without differentiating or even providing a deeper understanding of what the term actually implies. To make matters worse, civil industry, the military, and the public each have varying perceptions of autonomy.

 JAPCC Flyer on Autonomous Systems November 2012



“In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online on August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware 2:14 AM, Eastern time, August 29th. In a panic, they try to pull the plug.”
Quote taken from the movie ‘Terminator 2 – Judgment Day’

Introduction



To overcome current limitations of Unmanned Aircraft Systems (UAS), more and more automatic functions have been and will be implemented in current and future UAS. In the civil arena, the use of highly automated robotic systems is already quite common, e.g. in the manufacturing sector. But what is commonly accepted in the civilian community may be a significant challenge when applied to military weapon systems. Calling a manufacturing robot ‘autonomous’ can be done without causing intense fear among the public. On the other hand, the public vision of an autonomous unmanned aircraft is that of a self-thinking killing machine as depicted by James Cameron in his Terminator science fiction movies. This raises the question of what an autonomous system actually is and what differentiates it from an automatic system.

 Defining Autonomous



Autonomous in philosophical terms is defined as the possession or right to self-government, self-ruling or self-determination. Other synonyms linked to autonomy are independence and sovereignty.

The word itself derives from the Greek language, meaning literally ‘having its own law’. Immanuel Kant, a German philosopher of the 18th century, defined autonomy as the capacity to deliberate and to decide based on a self-given moral law.



In technical terms, autonomy is defined quite differently from the philosophical sense of the word. The U.S. National Institute of Standards and Technology (NIST) defines a fully autonomous system as being capable of accomplishing its assigned mission, within a defined scope, without human intervention while adapting to operational and environmental conditions. Furthermore, it defines a semi-autonomous system as being capable of performing autonomous operations with various levels of human interaction.



Most people have an understanding of the term ‘autonomous’ only in the philosophical sense. A good example of the contradiction between public perception and technical definition is that of a simple car navigation system. After entering a destination address as the only human interaction, the system will determine the best path depending on the given parameters, e.g. take the shortest way or the one with the lowest fuel consumption. It will alter the route without human interaction if an obstacle (e.g. a traffic jam) makes it necessary, or if the driver turns the wrong way. Therefore, the car navigation system is technically autonomous, but no one would call it that because of the commonly perceived philosophical definition of the term.
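The route-planning behaviour described above boils down to a shortest-path search that is simply re-run when conditions change, with no human in the loop. A toy sketch of that idea using Dijkstra's algorithm (the road network, place names, and costs are invented for illustration, not taken from any real navigation product):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a dict-of-dicts road graph:
    graph[u][v] = travel cost from u to v. Returns the cheapest path."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk the predecessor links back from the goal to recover the route.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Toy map: two candidate routes from home to the office.
roads = {
    "home":     {"highway": 5, "backroad": 9},
    "highway":  {"office": 5},
    "backroad": {"office": 4},
}
print(shortest_path(roads, "home", "office"))  # -> ['home', 'highway', 'office']

# A traffic jam raises the highway cost; re-running the planner
# switches to the back road without any human interaction.
roads["home"]["highway"] = 20
print(shortest_path(roads, "home", "office"))  # -> ['home', 'backroad', 'office']
```

In the article's terms the planner is "technically autonomous": it adapts its output to changed conditions, yet it never steps outside its fixed rule set.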

 Public Acceptance



Because of this common understanding of the term ‘autonomous’, the public’s willingness to accept highly complex autonomous weapon systems will most likely be very low. Furthermore, the decision to use aggressive names for some unmanned military aircraft will undermine the possibility of acceptance. Since it is unlikely that the public’s perception of the classic definition of the term ‘autonomous’ will change, we must change the technical definition of what the so-called ‘autonomous’ systems really are. Even if a system appears to behave autonomously, it is only an automated system, because it is strictly bound to its given set of rules, however broad those rules may be and however complex the system is.


Thinking Machines?


Calling a system autonomous in the way Immanuel Kant defines autonomy would imply the system is responsible for its own decisions and actions. This thought may seem ridiculous at first glance, but based on this premise some important aspects of future UAS development should be considered very carefully. How should a highly automated system react if it is attacked? Should it use only defensive measures, or should it engage the attacker with lethal force? Who is legally responsible for combat actions performed automatically, without human interaction? International Law on Armed Conflict has no chapters concerning autonomous or automated weapon systems. Fortunately, there is no need for change as long as unmanned systems adhere to the same rules that apply to manned assets. This implies that there is always a human in the loop to make a final legal assessment and decision on if and how to engage a target. Although software may identify targets based on given, digitizable patterns and figures, it cannot cope with the legal aspects of armed combat, which require not only a deeper understanding of the Laws of Armed Conflict but also consideration of ethical and moral factors.

Conclusion



The current stage of technology is far from building autonomous systems in the literal sense of the term, and it is doubtful this level of development will be reached in the near term. Creating a technical definition separate from the classic, commonly used one will only cause confusion. Therefore, the current technical use of the term ‘autonomous’ should be changed to ‘automated’ to avoid misunderstandings and to ensure a shared set of terms as a basis for future comprehension. The definition of automated could be subdivided into several levels of automation, with fully automated as the top level; this would cover the highly complex systems that are incorrectly called ‘autonomous’ today. But even fully automated systems must have human oversight and authorization to engage with live ammunition. Due to ethical and legal principles, decision making and responsibility must not be shifted from man to machine, unless we want to risk a ‘Terminator’-like scenario.





1. Encyclopedia of Philosophy (http://www.iep.utm.edu/autonomy/).
2. ‘Critique of Practical Reason’, Immanuel Kant.
3. National Institute of Standards and Technology, U.S. Department of Commerce (http://www.nist.gov/el/isd/ks/upload/NISTSP_1011-I-2-0.pd


article by Maj André Haider

DEU Army UCAV SME, Combat Air Branch Joint Air Power Competence Centre

Maj André Haider is an Artillery officer in the German Army with over fifteen years’ experience in command & control and operational planning. His last post was Deputy Commander of the German Army’s MLRS Rocket Artillery Battalion, and he is currently assigned to the Joint Air Power Competence Centre as an Unmanned Systems Subject Matter Expert.