Artificial Intelligence in HBO's Westworld: Then and Now

Original author: ASH EGAN
  • Translation

Just like today, in 1973 (the year Michael Crichton's Westworld was released) everyone was fascinated by the idea of artificial intelligence. The film was a huge box-office success, even though it came out in the very year that people began to cool toward AI: funding dried up, expectations went unmet, and interest faded over the following years.

In 2016, Westworld returned to the screen, and fundamental advances in deep learning, publicly available data, and computing power are reshaping the future of AI. Computing power and the capabilities of the technology are now developed enough for AI to complement and drive the development of society, in contrast to the complete collapse of hopes in 1973.

HBO's new Westworld, created by Jonathan Nolan and Lisa Joy, has become one of the most popular series to date. Its futuristic Western setting adds fuel to the ubiquitous obsession with AI, and the show's popularity proves that people are fascinated by AI's potential. The success of Westworld reflects a robust AI ecosystem in which venture funds, corporations, and consumers actively interact.

The main goals of AI have not changed since 1973: automating tasks, removing organizational constraints, and making everyday activities easier. Back then, government, academia, and corporations drove AI progress, and consumer excitement was sky-high (as the box-office success of Westworld showed), but consumers could not get their hands on the technology or understand how to use it and what its limitations were.

Consumer interest was not enough to keep the technology funded, and just a few months after the film was released, James Lighthill published a decidedly pessimistic report on AI, triggering a lull of roughly seven years. In the report, Lighthill pointed to the widening gap between feverish expectations and reality; universities cut grants, governments and the military cut funding for AI projects, and resources flowed elsewhere.

Today, Ray Kurzweil, Jen-Hsun Huang, Andrew Ng, Yann LeCun, Yoshua Bengio, and other specialists make bold statements about the potential of AI, and corporations are rushing to master image recognition, speech, and conversational interfaces. The current AI revolution goes far beyond university and military research and flows into our daily lives. Progress is driven by six components that did not exist, or were in short supply, in the seventies: the scientific groundwork, computing power, abundant data, open and shared resources, specialists, and investment.

1. How to Recognize a Cat's Face

Although the fundamental components for building AI have existed for some 50 years, today's obsession with the idea was sparked by Andrew Ng's research at Stanford in 2012. Ng and his team made a breakthrough in unsupervised learning with neural networks, the foundation of deep learning (a family of algorithms that loosely mimic the workings of the brain). Ng, a Stanford professor (founder and lead of the Google Brain team, and later head of a roughly 1,200-person AI group at Baidu), set out to test unsupervised learning: training on a dataset without labels, through a neural network rather than a hand-built model. He and his team used YouTube as the data source and, with the help of leading AI researchers and 16,000 processor cores, checked whether his deep learning model could recognize faces. It could, including cats' faces, which is how the work became known as the "cat experiment." Such a test became possible only thanks to improvements in deep learning algorithms built on decades of research.
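To make the idea of unsupervised learning concrete, here is a minimal sketch, not Ng's actual setup: a tiny autoencoder that learns to compress and reconstruct unlabeled data in plain Python/NumPy. The toy data, layer sizes, and learning rate are arbitrary placeholders.

```python
# Minimal sketch of unsupervised learning: a tiny autoencoder that learns
# to reconstruct unlabeled "images" (random toy data here, not YouTube frames).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # 200 unlabeled samples, 64 "pixels" each

n_hidden = 16                          # compressed internal representation
W1 = rng.normal(scale=0.1, size=(64, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 64)); b2 = np.zeros(64)
lr = 0.01

for epoch in range(200):
    H = np.tanh(X @ W1 + b1)           # encode: hidden features
    X_hat = H @ W2 + b2                # decode: reconstruction of the input
    err = X_hat - X
    loss = np.mean(err ** 2)           # mean squared reconstruction error

    # Backpropagation of the reconstruction error (no labels involved).
    dX_hat = 2 * err / err.size
    dW2 = H.T @ dX_hat
    db2 = dX_hat.sum(axis=0)
    dH = dX_hat @ W2.T
    dZ1 = dH * (1 - H ** 2)            # derivative of tanh
    dW1 = X.T @ dZ1
    db1 = dZ1.sum(axis=0)

    # Plain gradient descent step.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final reconstruction error:", loss)
```

The point is only that the network discovers a compact internal representation of the data on its own; the cat experiment did the same thing at vastly larger scale.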

Until 2012, traditional machine learning meant using algorithms to derive a target variable. Ng's tests showed that deep learning, and the construction of neural networks in general, has enormous potential.
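As a rough illustration of that contrast (and not a reproduction of Ng's tests), the sketch below trains a "traditional" linear model and a small multi-layer neural network on the same synthetic data with scikit-learn; the dataset and layer sizes are invented for the example.

```python
# Compare a linear model with a small neural network on the same toy data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic classification problem standing in for a real dataset.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)
deep = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                     random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", linear.score(X_test, y_test))
print("small neural network accuracy:", deep.score(X_test, y_test))
```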

In 1973, budget funding for AI, computing power, and deep learning methods were all limited, and there was no understanding of how to process data with complex algorithms. Natural language processing was only in its infancy, and the concept of Turing completeness had emerged just a few decades earlier. Researchers in the seventies were mistaken about the pace of AI progress, as Lighthill's report discusses.

2. GPUs and computing power

Deep learning, that is, processing data through neural networks, requires tremendous computing power that simply did not exist when the original Westworld was made. Even before the deep learning itself begins, data has to be collected, synthesized, loaded, and distributed across huge databases and distributed computing systems.

Today, scientists and enthusiasts use GPUs to train on large datasets. Training a neural network can occupy processor chips for 400 hours, distilling decades of research into working algorithms. Deep learning consumes huge amounts of data, and processing it requires well-scalable performance, high memory bandwidth, low power consumption, and fast arithmetic. The Hadoop and Spark frameworks offer suitable data infrastructure, while Nvidia has a roughly 20-year head start in producing graphics chips, which turn out to be ideal for such complex algorithms and calculations. Nvidia chips, including its GPUs, sit inside most smart hardware, from self-driving cars to drones.
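A rough sketch of why GPUs matter, assuming TensorFlow 2.x is installed: the same large matrix multiplication, the core operation of deep learning, placed explicitly on the CPU and, if one is present, on a GPU. The matrix size is arbitrary.

```python
# Time one large matrix multiplication on CPU and (if available) on GPU.
import time
import tensorflow as tf

a = tf.random.normal((4096, 4096))
b = tf.random.normal((4096, 4096))

def timed_matmul(device):
    with tf.device(device):
        start = time.time()
        c = tf.matmul(a, b)
        _ = c.numpy()                  # block until the computation finishes
        return time.time() - start

print("CPU:", timed_matmul("/CPU:0"), "s")
if tf.config.list_physical_devices("GPU"):
    print("GPU:", timed_matmul("/GPU:0"), "s")
else:
    print("No GPU found; deep learning workloads would fall back to the CPU.")
```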

In 1973, computers were orders of magnitude less powerful than they are now (as Jonathan Koomey's chart illustrates).

3. Data volume versus the cost of storage

Today we generate enormous amounts of data that can be fed into training models, yielding more accurate results in image recognition, speech, and natural language processing. Data streams in from devices such as fitness trackers, the Apple Watch, and other IoT hardware, and at the corporate level the volumes are many times larger still. The growth of big data from mobile devices, the Internet, and the Internet of Things piles up mountains of data that are just right for AI.

With modern cloud technologies, large companies can store data and keep constant access to it without spending a fortune. The growing popularity of file-storage services like Box and Dropbox, together with offerings from Google, Amazon, and Microsoft, has driven down the cost of data storage.
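A minimal sketch of what that looks like in practice, assuming the boto3 library and AWS credentials are configured; the bucket name and file paths below are placeholders, not real resources.

```python
# Store a training dataset in commodity object storage and fetch it back.
import boto3

s3 = boto3.client("s3")

# Upload a local dataset to object storage...
s3.upload_file("training_data.csv", "my-ml-datasets", "raw/training_data.csv")

# ...and read it back later, when a training job needs it.
s3.download_file("my-ml-datasets", "raw/training_data.csv", "training_data.csv")
```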

In 1973 there was incomparably less data: no watch tracked sleep cycles or health indicators, and nothing like today's favorite applications existed. Companies stored all their data locally, and if you could not afford servers and their upkeep, you were simply left behind.

4. Collaboration and publicly available resources

AI specialists are drawn to the open resources that corporations make available in today's hyper-competitive market. IBM's Watson supercomputer was the first sign, and competitors are now eager to offer services of their own. Google provides infrastructure through TensorFlow, while Microsoft offers the CNTK deep learning framework. Universities share their research as well: Berkeley has opened up its Caffe framework, and the University of Montreal its Theano library for Python.
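To show what such an open framework gives a researcher out of the box, here is a tiny, illustrative TensorFlow 2.x example of automatic differentiation, the mechanism at the heart of training neural networks; the toy loss function is made up for the example.

```python
# The framework computes gradients for us instead of requiring hand-derived math.
import tensorflow as tf

w = tf.Variable(3.0)

with tf.GradientTape() as tape:
    loss = (w - 1.0) ** 2              # toy loss with its minimum at w = 1

grad = tape.gradient(loss, w)          # d(loss)/dw, computed automatically
print(float(grad))                     # 2 * (3 - 1) = 4.0
```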

While many organizations collaborate and share AI research in order to attract the best specialists, some are preparing for the potential fallout if one organization pulls far ahead. Players in this space include OpenAI, the Open Academic Society, and the Semantic Scholar library. At the Conference on Neural Information Processing Systems in December, Apple announced that it would open up its AI work and allow its full-time researchers to publish.

At the time of the original Westworld, there was practically no collaboration or sharing of research, since most work was carried out inside government agencies and defense contractors. Organizations shared programming languages such as LISP, and scientists compared their work at face-to-face meetings, but there was no Internet to communicate over, which held back both collaboration and the development of AI.

5. The AI Specialist: King and God

Students have flocked to the field of AI and are taking courses in data analysis and natural language processing, while universities, in turn, attract the resources needed to create such courses. Enrollment in computer science, mathematics, engineering, and research programs is growing, helped along by scholarship programs that provide adequate funding.

In 1973 there were few such programs, and European universities shut down many AI research projects after Lighthill's pessimistic report was published.

6. Investment insanity

Today, investment in AI is growing not by the day but by the hour. Since 2011, the compound annual growth rate (CAGR) has been 42%, according to Venture Scanner. Large venture funds and technology companies have fallen for AI and are investing in specialists, companies, and initiatives. Several acquisitions have been made that were, in essence, elegant ways to buy talented employees and build or strengthen an existing AI team.
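For a sense of what a 42% compound annual growth rate means, here is a small worked calculation; the starting value of 100 is an arbitrary index, not a real dollar figure from the report.

```python
# Compound annual growth: value_n = value_0 * (1 + rate) ** years.
base_year = 2011
rate = 0.42                        # 42% CAGR reported by Venture Scanner
index = 100.0                      # arbitrary starting index for 2011

for year in range(base_year, 2017):
    value = index * (1 + rate) ** (year - base_year)
    print(year, round(value, 1))

# By 2016 the index reaches roughly 100 * 1.42**5 ≈ 577, i.e. almost a
# six-fold increase over five years.
```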

In 1973, investment came mainly from defense and government organizations such as DARPA (the Defense Advanced Research Projects Agency). When the fever began to subside, DARPA sharply cut its budget for AI development and tooling, and leading researchers were left without funding. Venture capital was only just emerging at the time, and funds were more interested in semiconductor manufacturing than in AI.

AI is already present in our lives (Prisma, Siri, and Alexa, for example). It will seep into every part of an organization: software development and operations, security, sales, marketing, customer support, and many others. The six components listed above are strong evidence of AI's potential, and the coming wave of AI development will resemble the Internet boom of the nineties and the mobile boom of the 2000s. Many are only beginning to grasp this potential, already visible in image recognition, video, speech, and machine translation technologies.

To prepare for the coming changes, organizations need a clear understanding of where the technology applies, what its limitations are, and what its future potential is. Companies like Facebook see AI as a kind of philosophy rather than a technology, as Facebook CTO Mike Schroepfer put it at the Web Summit in November.

Pictured: Anthony Hopkins and Jeffrey Wright in Westworld (photo: HBO)
