OpenAI developers are going to train AI using Reddit

Nvidia CEO Jen-Hsun Huang shows the DGX-1 to Elon Musk, co-founder of OpenAI

OpenAI, an open-source nonprofit organization that conducts research in the field of artificial intelligence, has received a DGX-1 supercomputer from Nvidia. OpenAI's experts are currently working on a "weak" form of AI: systems capable of processing huge amounts of raw data and turning it into structured arrays of information. This requires powerful computers, and the DGX-1 is a very powerful computing system. Nvidia claims the DGX-1 is built on a new generation of graphics processors that deliver processing speeds comparable to 250 servers of the x86 architecture.
OpenAI was founded by Elon Musk and Sam Altman. The organization's main goal is to bring together scientists working in the field of AI, and the results of their research are planned to be open and accessible to everyone. According to the founders, all of this will help prevent the emergence of an "evil" artificial intelligence, a threat Musk has spoken about many times.
Nvidia positions the DGX-1 as the world's first supercomputer for deep learning, with enough computing power for AI development. With its help, researchers at OpenAI will be able to train a weak form of AI much faster than on ordinary servers. At the first stage, the AI will learn to understand people's textual communication, with Reddit serving as the pool of data. The site's messages will be fed to the AI so that it learns the relationships between individual words, groups of words, and sentences (a toy sketch of the idea follows below). The task is complicated by the fact that text from Reddit is saturated with jargon and abbreviations; these are not academic texts.
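The article does not describe which model OpenAI will actually train, but the basic idea of learning which words tend to follow which can be illustrated with a minimal bigram language model. Everything here, from the sample comments to the function name, is invented for illustration; it is a sketch of the concept, not OpenAI's method:

```python
from collections import defaultdict, Counter

# Toy corpus standing in for Reddit comments (invented examples).
comments = [
    "tbh this gpu is insane for deep learning",
    "deep learning needs tons of data tbh",
    "this gpu trains models way faster",
]

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for comment in comments:
    words = comment.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("deep"))  # -> "learning"
print(most_likely_next("this"))  # -> "gpu"
```

A real system would of course use deep neural networks over vastly more data, which is exactly why the computing power of the DGX-1 matters, but the underlying task is the same: predicting how words relate to each other in context.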
Andrej Karpathy, a researcher at OpenAI, says the self-learning AI will become smarter over time. "Deep learning is a special class of models, because the bigger the model, the better it works," he says.
Language remains an important problem for artificial intelligence developers. Although many problems have been solved, many more remain. Not long ago, Google ran an experiment in which a self-learning system was trained on dialogues from films. After some time, it turned out that the AI gave quite tolerable answers to relatively complex questions. Sometimes, though, an attempt to bring a weak form of AI together with people produces unexpected results. In March of this year, Microsoft launched the teen chatbot Tay on Twitter, and it picked up bad behavior within a day. Afterwards, Microsoft employees thought it best to delete almost all of Tay's messages and put the bot to "sleep."
OpenAI scientists want to see whether a robot can learn a language by interacting with people in the real world. The Reddit experiment is the first stage in a planned series of experiments. Thanks to the DGX-1, training will proceed much faster than planned, and the AI can be trained on larger data arrays than originally intended. In addition to text, the AI will be trained by trial and error to perform sequences of actions. This method can be used to teach artificial intelligence to play video games; perhaps it will figure out that collecting coins in a number of games increases the score and improves the character's abilities. A toy sketch of such trial-and-error learning is shown below.
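The article does not say which trial-and-error method OpenAI would use. As one common example of the approach, here is a minimal tabular Q-learning sketch on an invented one-dimensional "coin corridor" game; all names, rewards, and parameters are assumptions chosen for illustration:

```python
import random

# Invented toy game: an agent walks a corridor of 5 cells; the last
# cell holds a coin worth +1. Actions: 0 = step left, 1 = step right.
N_CELLS, N_ACTIONS = 5, 2
q_table = [[0.0] * N_ACTIONS for _ in range(N_CELLS)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Apply an action; reward +1 for reaching the coin in the last cell."""
    next_state = max(0, min(N_CELLS - 1, state + (1 if action else -1)))
    reward = 1.0 if next_state == N_CELLS - 1 else 0.0
    return next_state, reward, next_state == N_CELLS - 1

for episode in range(200):  # trial-and-error episodes
    state, done = 0, False
    while not done:
        # Mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            action = q_table[state].index(max(q_table[state]))
        next_state, reward, done = step(state, action)
        # Standard Q-learning update toward reward plus discounted future value.
        q_table[state][action] += alpha * (
            reward + gamma * max(q_table[next_state]) - q_table[state][action]
        )
        state = next_state

# After training, the agent prefers moving right, toward the coin.
print([q_table[s].index(max(q_table[s])) for s in range(N_CELLS - 1)])
```

The same principle, acting, observing the score, and reinforcing whatever raised it, is what lets an agent discover on its own that collecting coins is worthwhile.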

Why is all this necessary? In practical terms, a weak form of AI could be used in many areas: for example, to create a robot that can do housework, and such a robot would work better and better over time, since it is able to learn. A weak form of AI is also needed to improve the autonomous cars currently being developed by a number of companies, including Google. As for a strong form of AI, the fear is that it might start using people for its own purposes, something Musk has mentioned repeatedly.
A number of researchers question the feasibility of OpenAI's goal of creating a "good" artificial superintelligence. According to the organization's own representatives, the threat of creating a malicious AI is offset by the fact that the technology and the results of its development are available to everyone. At the same time, Nick Bostrom, a scientist at Oxford, believes it is precisely this openness that could lead to negative results. Sharing research results with everyone, says Bostrom, may itself become a problem. "If you have a 'make everybody bad' button, you are unlikely to want to share it with everyone," he says. But in OpenAI's case, that button would be available to all. And if the organization were to hide the results of work it considers dangerous, the existence of OpenAI would lose its meaning, because the organization would no longer be open.
Nevertheless, according to Bostrom, openness should also have positive results. The main one is a reduced likelihood that any single company would control a superintelligence if a strong form of AI is created. Monopolizing a strong form of AI would be bad, and OpenAI is likely to make such an outcome less probable.