For the first time in Russian: Nicholas Carr. Does Google make us dumber?

    I really liked this article ... here is the source: www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/6868 The material is three years old, but it is still relevant ...

    Nicholas Carr. Is Google making us stupid? (The Atlantic. July / August 2008)
    Translation: Alina Lepeshkina

    “Dave, stop. Stop, will you? Stop, Dave. Will you stop?” pleads the supercomputer HAL with the implacable astronaut Dave Bowman in the famous, unusually poignant scene near the end of Stanley Kubrick’s 2001: A Space Odyssey. Bowman, whom the malfunctioning machine has nearly condemned to death in the depths of space, calmly and coldly disconnects the memory circuits that control its artificial brain. “Dave, my mind is going,” HAL says forlornly. “I can feel it. I can feel it.”
    I can feel it too. Over the past few years I have had the uncomfortable sense that someone or something has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory.

    Not that I am losing my mind, at least it doesn't seem so to me, but it is changing. I don't think the way I used to think. I feel it most strongly when I read.
    Immersing myself in a book or a long article used to be easy. My mind would get caught up in the narrative or in the turns of the argument, and I could spend hours strolling through long stretches of prose. That is rarely the case anymore. Now my concentration starts to drift after just two or three pages. I get fidgety, lose the thread of the reasoning, begin looking for something else to do. I feel as if I am constantly dragging my wayward brain back to the text. The deep, thoughtful reading that used to come naturally has become a struggle.
    I think I know what is going on. For more than a decade now I have been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet.
    The Web has been a godsend to me as a writer. Research that once required days in the stacks or periodical rooms of libraries can now be done in minutes. A few Google searches, some quick clicks on hyperlinks, and I have the telltale fact or the pithy quote I was after. Even when I'm not working, I'm just as likely to be online.
    I read and write e-mail, scan headlines and blog posts, watch videos and listen to podcasts, or just travel from link to link. (Unlike footnotes, with which they are sometimes confused, hyperlinks don't merely point to related works; they propel you toward them.)

    For me, as for many others, the Net is becoming a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of having immediate access to such an incredibly rich store of information are many, and they have been widely described and duly applauded. “The perfect recall of silicon memory,” Clive Thompson wrote in Wired magazine, “can be an enormous boon to thinking.” But that boon comes at a price.

    As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away at my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in a sea of words. Now I zip along the surface like a guy on a motorboat.

    The Net as a way of thinking

    And I am not alone. When I mention my troubles with reading to friends and acquaintances, most of them writers, many say they are noticing something similar. The more they use the Web, the harder they have to fight to stay focused on long pieces of writing. Some of the bloggers I follow have also begun mentioning the phenomenon. Scott Karp, who writes a blog about online media, recently confessed that he has stopped reading books altogether. “I was a lit major in college, a real bookworm,” he wrote. “What happened?” He speculates on the answer: “What if I do all my reading on the Web not so much because it's more convenient for me, but because the way I think has changed?”
    Bruce Friedman, who blogs regularly about the use of computers in medicine, has also described how the Internet has altered his mental habits. “I have almost totally lost the ability to read and absorb a longish article, whether on the Web or in print,” he wrote. A pathologist who has long taught at the University of Michigan Medical School, Friedman elaborated on his comment in a telephone conversation with me. His thinking, he said, has taken on a staccato quality: he quickly scans short passages of text from many sources at once.

    “I can't read War and Peace anymore,” he admitted. “I've lost the ability to do that. Even a blog post of more than three or four paragraphs is too much to absorb. I skim it.”

    Anecdotes alone don't prove much. We are still awaiting the neurological and psychological experiments that will paint a definitive picture of how the Internet affects our cognition. But a recently published study of online research habits, conducted by scholars from University College London, suggests that we may well be in the midst of a sea change in our habits of reading and thinking. As part of a five-year research program, the scholars analyzed logs documenting the behavior of visitors to two popular search sites, one operated by the British Library and one by a U.K. educational consortium, both of which provide access to journal articles, e-books, and other sources of written information. They found that people using the sites exhibited a form of “skimming activity,” hopping from one source to another and rarely returning to any source they had already visited. They typically read no more than a page or two before bouncing out to another site. Sometimes they saved a long text, but there is no evidence that they ever went back and actually read it. The authors of the study report:
    It is clear that users are not reading online in the traditional sense; indeed, there are signs that new forms of “reading” are emerging as users skim “diagonally” through titles, contents pages, and abstracts. It almost seems that they go online to avoid reading in the traditional sense.
    Thanks to the ubiquity of text on the Internet, not to mention the popularity of text messaging, we may well be reading more today than we did in the 1970s and 1980s, when television was our medium of choice. But it is a different kind of reading, and behind it lies a different kind of thinking, perhaps even a new sense of the self.

    “We are not only what we read,” says Maryanne Wolf, a psychologist at Tufts University and the author of Proust and the Squid: The Story and Science of the Reading Brain. “We are how we read.”
    Wolf worries that the style of reading promoted by the Net, a style that puts responsiveness and immediacy above all else, may be weakening our capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made long and complex works of prose commonplace. When we read online, she says, we tend to become “mere decoders of information.” Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged.

    Reading, Wolf explains, is not an instinct. It is not etched into our genes the way speech is. We have to teach our minds to translate the symbols we see into the language we understand.
    And the media and other technologies we use in learning and practicing the craft of reading play an important part in shaping the neural circuits inside our brains. Experiments demonstrate that readers of ideographic scripts, such as the Chinese, develop a mental circuitry for reading that is very different from that of people whose written language employs an alphabet. The variations extend across many regions of the brain, including those that govern such essential cognitive functions as memory and the interpretation of visual and auditory stimuli. We can expect that the circuits woven by our use of the Net will be different from those woven by our reading of books and other printed works.

    How a typewriter influenced Nietzsche’s style

    In 1882, Friedrich Nietzsche bought a typewriter, a Malling-Hansen writing ball, to be precise. His vision was failing, and keeping his eyes focused on a page had become exhausting and painful, often bringing on crushing headaches. He had been forced to write less, and he feared that he would soon have to give up writing altogether. The machine rescued him, at least for a time. Once he had mastered touch typing, he could write even with his eyes closed, using only the tips of his fingers. Thoughts and words could once again pour onto the page.
    But the machine had an effect on his work. One of Nietzsche's friends, a composer, noticed a change in the style of his writing. His already terse prose had become even tighter, more telegraphic. “Perhaps you will even acquire a new idiom through this instrument,” the friend wrote in a letter, noting that, in his own work, his musical thoughts and language often depended on the quality of the pen and paper.

    “You are right,” Nietzsche replied, “our writing equipment takes part in the forming of our thoughts.” Under the influence of the typewriter, writes the German media scholar Friedrich Kittler, Nietzsche's prose “changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style.”

    The human brain is almost infinitely malleable. People used to believe that the mental meshwork of roughly 100 billion neurons inside our skulls was largely fixed by the time we reached adulthood. But brain researchers have discovered that this is not so. James Olds, a professor of neuroscience who directs the Krasnow Institute for Advanced Study, says that even the adult mind is very plastic. Nerve cells routinely break old connections and form new ones. “The brain,” Olds says, “has the ability to reprogram itself on the fly, altering the way it functions.”
    As we come to use what the sociologist Daniel Bell called our “intellectual technologies,” the tools that extend our mental rather than our physical capacities, we inevitably begin to take on their qualities.

    The mechanical clock, which came into common use in the 14th century, provides a compelling example. In Technics and Civilization, the historian and cultural critic Lewis Mumford described how the clock “disassociated time from human events and helped create the belief in an independent world of mathematically measurable sequences.” The abstract framework of divided time became the point of reference for both action and thought.

    The clock's methodical ticking helped bring into being the scientific mind and the scientific man. But it also took something away. As the computer scientist Joseph Weizenbaum observed in his 1976 book, Computer Power and Human Reason: From Judgment to Calculation, the conception of the world that emerged from the widespread use of timekeeping instruments “remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality.” In deciding when to eat, when to work, when to sleep, and when to rise, we stopped listening to our senses and started obeying the clock.

    The process of adapting to new intellectual technologies is reflected in the changing metaphors we use to explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their brains as operating “like clockwork.” Today, in the age of software, we have come to think of them as operating “like computers.” But the changes, neuroscience tells us, go much deeper than metaphor. Thanks to our brain's plasticity, the adaptation occurs at the biological level as well.
    The all-consuming medium

    The Internet promises to have particularly far-reaching effects on human consciousness. In a paper published in 1936, the British mathematician Alan Turing showed that a digital computer, which at the time existed only as a theoretical machine, could be programmed to perform the function of any other information-processing device. And that is what we are seeing today. The Internet, an immeasurably powerful computing system, is subsuming most of our other intellectual technologies. It is becoming our map and our clock, our printing press and our typewriter, our calculator and our telephone, our radio and TV.
    When the Web absorbs a medium, that medium is re-created in the Net's image. Its content becomes saturated with hyperlinks, blinking banners, and other digital trinkets. A new e-mail message, for instance, may announce its arrival as we are glancing over the latest headlines. The result is to scatter our attention and let our concentration evaporate.
    The Net's influence does not end at the edges of a computer screen, either. As people's minds gradually become attuned to the crazy quilt of online media, traditional media have to adapt to the audience's new expectations. Television programs add text crawls and pop-up ads, and magazines and newspapers shorten their articles, introduce capsule summaries, and crowd their pages with easy-to-browse snippets of information.
    When, in March 2008, the New York Times decided to devote the second and third pages of every issue to article abstracts, its design director, Tom Bodkin, explained that the “shortcuts” would give harried readers a quick “taste” of the day's news, sparing them the “less efficient” method of actually turning the pages and reading the articles. Old media have little choice but to play by the new media's rules.
    Never before has a communications system played so many roles in our lives, or exerted such broad influence over our thoughts, as the Internet does today. Yet, for all that has been written about the Net, very little considers how, exactly, it is reprogramming us. The Net's intellectual ethic remains obscure.

    Google and the New Taylorism
    About the time that Nietzsche started using his typewriter, a young man named Frederick Winslow Taylor carried a stopwatch into a steel plant in Philadelphia and began a series of experiments aimed at improving the efficiency of the factory's workers. With the approval of the plant's owners, he recruited a group of factory hands, set them to work on various metalworking machines, and recorded and timed their every movement as well as the operations of the machines. By breaking down every job into a sequence of small, discrete steps and then testing different ways of performing each one, Taylor created a set of precise instructions, an “algorithm,” as we would say today, for how each worker should work. Midvale's employees grumbled about the strict new regime, claiming that it reduced them to little more than machine parts.
    More than a century after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor's tight industrial choreography, his “system,” as he liked to call it, was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used his studies to organize their work processes. The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and adopt, for every job, the one best method of carrying it out, thereby effecting “the gradual substitution of science for rule of thumb throughout the mechanic arts.”
    Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.”
    Taylor's system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software programmers wield over our intellectual lives, Taylor's ideas are beginning to govern the realm of the mind as well.
    The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method,” the perfect algorithm, to carry out every mental movement of what we have come to describe as “thinking.”
    Google's headquarters in Mountain View, California, the Googleplex, is the high church of the Internet, and the religion practiced inside its walls is Taylorism. Google, says its CEO, Eric Schmidt, is a company “founded around the science of measurement,” and it strives to “systematize everything” it does. Drawing on the terabytes of behavioral data it collects through its search engine and other sites, the company carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it.
    What Taylor did for manual labor, Google does for mental work.

    The company has declared that its mission is “to organize the world's information and make it universally accessible and useful.” It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.” In Google's view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can access, and the faster we can extract their gist, the more productive we become as thinkers.
    Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people, or smarter,” Page said a few years ago. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world's information directly attached to your brain, or an artificial brain that was smarter than your brain, you'd be better off.” Page once told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”
    Such ambitions are natural, even admirable, coming as they do from a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire, in Eric Schmidt's words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn't Brin and Page want to be the ones to crack it?
    Still, their easy assumption that we would all be better off if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized.
    In Google's universe, the world we enter when we go online, there is little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.
    The idea that our minds should operate like high-speed computers is not only built into the workings of the Internet; it is the network's reigning business model as well. The faster we surf across the Web, the more links we click and the more pages we view, the more opportunities Google and other companies gain to collect information about us and to feed us advertisements.
    Most of the proprietors of online advertising have a financial stake in collecting the crumbs of data we leave behind as we flit from link to link: the more crumbs, the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It is in their economic interest to drive us to distraction.
    What is dangerous about the Internet galaxy?
    Perhaps I am worrying needlessly. Just as there is a tendency to glorify technological progress, there is a countertendency to expect the worst of every new tool or machine. In Plato's Phaedrus, Socrates bemoaned the development of writing. He feared that, as people came to rely on the written word as a substitute for the knowledge they used to carry in their heads, they would, in the words of one of the dialogue's characters, “cease to exercise their memory and become forgetful.” And because they would be able to “receive a quantity of information without proper instruction,” they would “be thought very knowledgeable when they are for the most part quite ignorant.” They would be “filled with the conceit of wisdom instead of real wisdom.” Socrates was not mistaken: the new technology often did have the effects he feared. But he was shortsighted. He could not foresee the many ways in which writing and reading would serve to spread information, spur fresh ideas, and expand human knowledge, if not wisdom.
    The arrival of Gutenberg's printing press in the 15th century set off another round of gnashing of teeth over new technology. The Italian humanist Hieronimo Squarciafico worried that the easy availability of books would lead to intellectual laziness, making men “less studious” and weakening their minds. Others argued that cheaply printed books and broadsheets would undermine religious authority, demean the work of scholars and scribes, and spread sedition and debauchery.
    As the New York University professor Clay Shirky notes, “Most of the arguments made against the printing press were correct, even prescient.” But, again, the doomsayers were unable to imagine the myriad blessings that the printed word would deliver.
    So, yes, you should be skeptical of my skepticism. Perhaps those who dismiss critics of the Internet as Luddites or nostalgists will be proved correct, and from our hyperactive, information-flooded minds will spring a golden age of intellectual discovery and universal wisdom. Then again, the Net is not the alphabet, and although it may replace the printing press, it produces something altogether different.
    The kind of deep reading that the printed page promotes is valuable not just for the knowledge we acquire from the author's words but for the resonance those words set off within our own minds.
    In the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of contemplation, for that matter, we make our own associations, draw our own inferences and analogies, and foster our own ideas. Deep reading, as Maryanne Wolf argues, is indistinguishable from deep thinking.
    If we lose those quiet spaces, or fill them up with abstract “content,” we will sacrifice something important, not only in ourselves but in our culture. In a recent essay, the playwright Richard Foreman eloquently described what is at stake:
    “I come from a tradition of Western culture, in which the ideal (my ideal) was the complex, dense, and cathedral-like structure of the highly educated and articulate personality: a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West. [But now] I see within us all (myself included) the replacement of complex inner density with a new kind of self, evolving under the pressure of information overload and the technology of the instantly available.”
    As we are drained of the “repertory of our dense cultural inheritance,” Foreman concludes, we risk turning into “pancake people,” spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.

    Kubrick's dark prophecy
    I am haunted by that scene in 2001: A Space Odyssey. What makes it so poignant, and so strange, is the computer's emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut, “I can feel it. I can feel it. I'm afraid,” and its final reversion to what can only be called a state of innocence. HAL's outpouring of feeling contrasts sharply with the emotionless human figures, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they were following the steps of an algorithm. In the world of the film, people have become so machinelike that the most human character turns out to be a machine. That is the essence of Kubrick's dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.
    Posted by: Vladimir Stepanov
