Artificial intelligence thinks like a corporation, and that is worrying
Artificial intelligence was born of organizational decision-making and public administration; it needs human ethics, says Jonnie Penn of the University of Cambridge.
Artificial intelligence (AI) is everywhere, yet it is rarely considered in a historical light. To understand AI's impact on our lives, it is important to appreciate the environment in which it was created. After all, statistics and state control have developed hand in hand for hundreds of years.
Consider computer science. Its origins can be traced not only to analytic philosophy, pure mathematics and Alan Turing but also, surprisingly, to the history of public administration. In “The Government Machine: A Revolutionary History of the Computer”, published in 2003, Jon Agar of University College London charts the growth of the British civil service from 16,000 employees in 1797 to 460,000 by 1999. He noted an uncanny similarity between the workings of a human bureaucracy and those of the electronic computer. (He admitted he could not say whether this observation was trivial or profound.)
Both systems process large amounts of information using a hierarchy of pre-established but adaptable rules, and one of them emerged from the other. This points to an important link between the way people organize their social structures and the digital tools designed to serve them. Mr. Agar traces the very origins of computer science to Charles Babbage's Analytical Engine, developed in Britain in the 1820s. Its development was subsidized by the government in the hope that it would serve its sponsor. Babbage's projects, says Mr. Agar, should be viewed as “the materialization of state activity.”
This relationship between computer systems and human organizational structures recurs throughout the history of AI. In the 1930s and 1940s, Herbert Simon, a political scientist at the University of Chicago who later taught at Carnegie Mellon University, set out to put the study of administration on a “scientific” footing. Simon had earlier studied under Rudolf Carnap, a member of the Vienna Circle of logical positivists, which reinforced his conviction that existing theories of administration lacked empirical rigor. His doctoral thesis became the 1947 book “Administrative Behavior”, which offered a framework by which all activity within an organization could be understood as a matrix of decisions.

Simon says
Simon made enormous contributions to many fields, not only political science and economics but also computer science and artificial intelligence. He coined the term “satisficing” (accepting an option that is good enough rather than holding out for the best) and developed the idea of “bounded rationality”, for which he received the Nobel prize in economics in 1978. But back in the 1950s Simon was a consultant at the RAND Corporation, an influential think tank funded by the United States Air Force.
At RAND, Simon and two colleagues, the young mathematician Allen Newell and the former insurance actuary J. Clifford Shaw, attempted to simulate human problem-solving in the terms a computer uses to carry out an operation. To do so, Simon borrowed elements of the framework he had developed in “Administrative Behavior”: to teach a computer to “think” like a person, Simon had it think like a group of people.
The product of the three men's work was a virtual machine called the Logic Theorist, later hailed as the first working prototype of artificial intelligence. Printouts from a working Logic Theorist, shown at the 1956 Dartmouth Summer Research Project, forced participants to take the new field seriously; that meeting gave the field its name and marked its beginning. In notes from the Dartmouth gathering, one participant wrote that the Logic Theorist helped to overcome doubts about funding this line of study. This mattered because the field's prospective funders were skeptical that such research would prove useful.
How did Simon view his achievement? A year after the Dartmouth meeting, he and Newell presented their findings in a paper entitled “Heuristic Problem Solving: The Next Advance in Operations Research”. The key phrase in the title, “operations research”, refers to a discipline that emerged in Britain during the Second World War to apply scientific and statistical principles to the optimization of military activity, and was later turned to corporate ends. Artificial intelligence, in other words, was meant for business.
In a 1957 address to operations-research practitioners in London, Simon named Frederick Taylor, the father of the scientific-management movement, and Charles Babbage as his intellectual ancestors. “Physicists and electrical engineers had nothing to do with the invention of a digital computer,” Simon said. The real inventor, in his view, was the economist Adam Smith. The connection ran as follows: the French engineer Gaspard de Prony set out to produce logarithmic tables using the division of labor described in Smith's “The Wealth of Nations”; Babbage, inspired by de Prony, applied the idea to mechanical hardware; and in the mid-1950s Simon translated it into software.
The tradition lives on. Many modern artificial-intelligence systems imitate not so much human thinking as the lesser minds of bureaucratic institutions; our machine-learning techniques are often programmed to achieve superhuman scale, speed and accuracy at the expense of human-level individuality, ambition or morals.
Capitalism in the code
These threads in the history of artificial intelligence, corporate decision-making, state power and the wartime use of statistics, have not survived in the popular understanding of AI.
Instead, news of technical breakthroughs or of experts voicing fears tends to arrive accompanied by images of a heavily armed Terminator, a disembodied brain, a robot, neon microchips or impenetrable mathematical equations. Each quietly asserts the authority of the natural sciences and computer science over what Simon called the “soft” sciences: political science, management and even economics, the field for which he traveled to Stockholm to collect his Nobel prize.
Perhaps as a result of this mistaken impression, public debate continues to this day about what value, if any, the social sciences can bring to artificial-intelligence research. In Simon's view, AI itself was born in social science.
David Runciman, a political scientist at the University of Cambridge, has argued that to understand AI we must first understand how it operates within the capitalist system in which it is embedded. “Corporations are another form of artificial thinking: they are designed to be able to make decisions on their own,” he explains.
“Many of the fears that people now have about the coming era of intelligent robots are the same ones they have had about corporations for several hundred years,” says Mr. Runciman. The worry is that we “can never learn how to control these systems”.
For example, after an oil spill in 2010 in which 11 people died and the Gulf of Mexico was devastated, no one went to jail. The threat Mr. Runciman warns of is that artificial-intelligence techniques, like tactics for evading corporate responsibility, will be used with impunity.
Today, pioneering researchers such as Julia Angwin, Virginia Eubanks and Cathy O'Neil are showing how algorithmic systems, when built irresponsibly, reinforce violence, erode human dignity and undermine basic democratic mechanisms such as accountability. The harm need not be intentional; biased data sets used to train predictive models also do damage. Given the costly labor required to identify and address these harms, something like an “ethics service” may well emerge as an industry in its own right. Ms. O'Neil, for example, now runs her own service that audits algorithms.
In the 1950s, when he coined the term “artificial intelligence” for the Dartmouth conference, John McCarthy, one of the field's first pioneers, wrote in his notes: “As soon as one epistemological system is programmed and working, no others will be taken seriously except insofar as they lead to intelligent programs.” Seen in this light, DeepMind's original slogan, “Solve intelligence. Use it to solve everything else”, looks almost imperial.
McCarthy's implication was that influence, not validity, would settle the scientific consensus in his field. DeepMind does not need to “solve” intelligence (assuming that is even possible); it only needs to beat its competitors. The company's newer slogan, “Solve intelligence. Use it to make the world a better place”, suggests that it, too, is aware of the need for diplomacy in an era of AI supremacy.
Stephen Cave, director of the Leverhulme Centre for the Future of Intelligence, has shown that the definition of intelligence has been used throughout history as a tool of domination. Aristotle appealed to the “natural law” of social hierarchy to explain why women, slaves and animals should be subject to rational men. Given this brutal legacy, the claims of corporate and computational agents to intelligence will have to grapple with difficult questions, shaped by gender, sex and colonialism, about which human traits count.
A central promise of artificial intelligence is that it enables automated categorization at scale. Machine learning can be used, for example, to distinguish a malignant mole from a benign one. This “promise” becomes a threat when it is turned on the categories of everyday life. Careless labels can oppress and do harm when they assert false authority. In protest against unjust labels used to “know” the world, many young people today proudly defy unwanted categorizations, whether of traditional gender or sexual binaries.
Machines that think, again
It may surprise many that the social, material and political origins of artificial intelligence are not better understood. A great deal has, in fact, been written about the history of AI, including by Simon in 1996 and by Newell in 2000. Most of these histories, however, keep to a narrow frame, treating the field “mainly in intellectual terms”, in the words of the historian of information technology Paul Edwards.
Each of the two near-official histories of artificial intelligence is a history of ideas: “Machines Who Think” by Pamela McCorduck, which “created the template for most subsequent histories” after its first publication in 1979, and “AI: The Tumultuous History of the Search for Artificial Intelligence” by Daniel Crevier, published in 1993. Both books relied mainly on extended interviews with key researchers.
Perhaps as a result, neither sought to understand artificial intelligence in the wider context in which it evolved after the Second World War: the growth of operations research, “big science”, actuarial science and American military funding. Stripped from these accounts, AI comes to seem divorced from its historical and political context.
Without that context, artificial intelligence can also seem detached from the web of sciences that produced it. In his 1957 address to operations-research practitioners, Simon remarked on the varied ancestry of his field. He described the contributions of the French weavers and mechanics of the Jacquard loom, as well as those of Smith, de Prony, Babbage and his colleagues in the soft sciences, as a cumulative “debt” that had yet to be repaid.
That new knowledge could arise so unexpectedly, and from so many quarters, is what excited Simon about his work, and it should prompt us to think the same way today. Modern AI could reflect not only the organizational dogmas that marked its birth, but our humanity as well.