Ubiquitous supercomputing

    The Cray-1, one of the first supercomputers, was created in the mid-1970s with a peak performance of 133 Mflops. A modern smartphone delivers about 1000 Mflops.

    The heyday of cloud computing, together with the growth of mobile artificial intelligence services, demands ever more supercomputer power. At the same time, there is a temptation to extrapolate this trend: just a little longer, and every smartphone with fast mobile Internet will host a truly omniscient and remarkably brainy assistant. Or rather, a female assistant, since people still prefer female voices when it comes to voice warning systems and artificial intelligence (in itself a rather curious psychological phenomenon).

    But how broad, even potentially, are the possibilities of such "supercomputing on demand"? Broad enough, it turns out, but not unlimited.

    I'll start with a banality: the world around us changes gradually and without pause. For example, periods of relative economic prosperity, characterized by a slow, smooth rise in prices, give way to stages of sharp and deep change during crises, when prices for almost all goods and services soar within a short period. Does anyone still remember how, in the first half of 1998, the dollar cost about 6 rubles? And what a loaf of bread cost at the time?

    As applied to computing technologies, the "normal" incremental changes are described by that same Gordon Moore's law (the old man was probably pretty fed up with such fame). But if you think about it, the pace of development implied by this law can hardly be called "gradual". Indeed, by Moore's formulation (a doubling of the transistor count every 24 months), over 10 years processor performance grows roughly 32-fold, and with the often-cited 18-month doubling period, by almost two orders of magnitude!
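
    The arithmetic above is easy to check. A minimal sketch (the function name `moores_growth` is my own, not anything from the article):

```python
# Growth factor implied by Moore's law: the transistor count doubles
# once every `period_months` months.
def moores_growth(years, period_months=24):
    """Return the growth factor after `years` at the given doubling period."""
    doublings = years * 12 / period_months
    return 2 ** doublings

# 24-month doubling over 10 years: 5 doublings, a 32-fold increase.
print(moores_growth(10))             # 32.0
# The often-cited 18-month variant gives almost two orders of magnitude.
print(round(moores_growth(10, 18)))  # 102
```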

    We perceive this process as something ordinary, even natural, as if it could not be otherwise. But consider: the Cray-1A supercomputer, released in 1976, cost $8.9 million (roughly $38 million in today's prices) and had a peak performance of 133 Mflops (160 according to other sources). These machines, at the time the most advanced examples of computer technology, were built (like all supercomputers) to order for research institutes and organizations, which practically queued up for them. Crays were used for heavy calculations, including modeling of meteorological and thermonuclear processes. And now an average smartphone costing roughly $700 delivers about 900 Mflops on its CPU alone, while mobile GPUs reach 50-70 Gflops.
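
    Putting the figures from the text side by side (the GPU number is a rough mid-range value from the 50-70 Gflops range above):

```python
# Rough comparison of the 1976 Cray-1A against a modern smartphone,
# using the peak-performance figures quoted in the text.
CRAY_1A_MFLOPS = 133        # Cray-1A peak, 1976
PHONE_CPU_MFLOPS = 900      # typical smartphone CPU
PHONE_GPU_MFLOPS = 60_000   # mid-range mobile GPU, ~60 Gflops

print(round(PHONE_CPU_MFLOPS / CRAY_1A_MFLOPS, 1))  # 6.8  (CPU alone)
print(round(PHONE_GPU_MFLOPS / CRAY_1A_MFLOPS))     # 451  (counting the GPU)
```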

    Today, most smartphones have enough computing power to be used, if desired, for quite serious tasks like pattern recognition or complex rendering, tasks that until recently were assigned to supercomputers of varying power, requiring special infrastructure, software, and trained personnel. And now anyone can download mobile applications with similar functionality that rely on cloud computing power and deliver a finished result to the user.

    To give a concrete example: the Leafsnap application (alas, iOS only), developed by specialists from Columbia University, the University of Maryland, and the Smithsonian Institution. With this program, your smartphone recognizes and identifies a tree species from a photograph of one of its leaves. And the impressive abilities of mobile voice recognition systems are by now familiar to everyone.

    Smartphone + cloud = ...

    Despite the multiple formal superiority of modern smartphones over the supercomputers of the past, their capabilities are not that great by today's standards. This is where broadband Internet and cloud technologies come to the rescue. The computing capability of such a bundle is unusually large, and it hardly matters what we use as the client device: a smartphone, a tablet, or a desktop computer. For example, every time you search for something in Google, the enormous computing resources of the search infrastructure come into play, daily crawling the Internet, assembling ordered data, and returning results for billions of queries. In fact, Google's computing infrastructure can be called one of the most powerful supercomputers in existence: in 2012 it used about 13.6 million processor cores to process information, 20 times more than the largest supercomputer of that time. Moreover, up to 600,000 cores could be combined to work on a single problem.

    All this enormous silicon power is used by Google for more than the needs of its search engine. Together with its research programs, it has made possible services that would have been considered science fiction 10 years ago: searching in images the way we search in text; finding the fastest route from one point to another given the current congestion of roads; and a number of others. Add to this such actively developing projects as the autonomous car and autonomous robots, whose creation is impossible without supercomputers.

    Artificial near-intelligence

    Another application of supercomputing is so-called cognitive computing. Artificial intelligence algorithms, speech recognition and generation, and machine learning do not stand still. In some cases, artificial intelligence systems can already create the impression of a genuinely thinking computer, although in fact this is possible primarily because of the enormous speed of the underlying computing systems. One of the first such examples was the victory of IBM's Deep Blue over Garry Kasparov in 1997. In 2011, the Watson supercomputer, also from IBM, beat the best human players at the quiz show Jeopardy! (known in Russia as "Svoya Igra").

    Of course, artificial intelligence technologies can be applied successfully well beyond games and entertainment. Active work is under way on "teaching" computers various scientific disciplines, turning a highly efficient "digital thresher" into a tool that can independently apply its acquired knowledge in practice: for example, searching for associations or generating hypotheses from available context to improve the quality of people's critical decisions. A prime example is IBM's effort to adapt Watson to medical diagnosis, personal financial advice, and better customer support in call centers.

    Computing power on demand

    Perhaps one of the most interesting applications of cloud supercomputing is performing calculations on demand. Today, a smartphone and Internet access are enough for this. Such a service lets a third-party developer build applications with previously unavailable capabilities, or run complex data analysis, without acquiring the corresponding hardware infrastructure.

    Competition in this market between Compute Engine (Google), Web Services (Amazon), and Azure (Microsoft) is already stiff, to the point of price wars for customers. Typically, such services offer a free trial period and per-minute billing for computing power, advertising to a wide range of consumers: corporations, government organizations, small startups, even individual entrepreneurs. For example, this summer a pair of researchers from the UK (those British scientists again!) created an application for "mining" virtual currency using nothing but the free trial periods of companies offering cloud computing power.

    What's next?

    Probably, to understand the possibilities of affordable supercomputing, and at the same time to invent applications for it, we will need to reach a new level of knowledge and skills. This is comparable to how the rapid spread of computers and the Internet made it necessary to raise computer literacy among the widest segments of the population: people had to learn en masse to use computers and all kinds of programs simply to stay afloat. Now the time has come to master new computer-related knowledge. To grasp the capabilities of cloud computing, the mass user needs at least the basic principles of logic and statistics: for example, a firm understanding of the difference between cause and effect, in order to split specific tasks into parallel streams, the pattern best suited to the architecture of modern supercomputers. And to simplify the formulation of tasks, familiarity with data visualization methods will not hurt. Yes, it sounds rather... improbable, but one still wants to believe in the best.
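
    A minimal sketch of what "splitting a task into parallel streams" means in practice: when chunks of work do not depend on each other, each can run on its own core, or, in the cloud, on its own machine. The task here (counting primes by trial division) and the chunk sizes are illustrative assumptions, not anything from a real cloud API:

```python
# Split an "embarrassingly parallel" task into independent chunks and
# process them concurrently with a pool of worker processes.
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately naive)."""
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return False
        return True
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # No chunk depends on another, so they can be computed in any order
    # and on any number of workers; only the final sum brings them together.
    chunks = [(1, 25_000), (25_000, 50_000), (50_000, 75_000), (75_000, 100_000)]
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)  # 9592 primes below 100,000
```

    The causal structure the text mentions is exactly what makes this safe: no chunk's answer is an input to another chunk, so the streams never have to wait for each other.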

    But most importantly, we should not forget that however far these cloud technologies develop, they are not a replacement for human abilities, only an addition to them.

    To take full advantage of cloud supercomputing, the broadband Internet access network will have to be expanded many times over. Tasks that involve large computing power require fairly intensive data exchange with the client device, and there will be tens of millions of such devices. The industry is already responding to user demands and expectations, investing hundreds of billions of dollars in upgrading its network infrastructure.

    Perhaps in the future, when artificial intelligence surpasses human intelligence (if that happens at all), we will become largely dependent on the advantages provided by supercomputer intellect. But until then, we have plenty of time and opportunity to enjoy the benefits that are already available and those to come in the next few years.
