Google is looking for ideas for building human-interpretable artificial intelligence applications

    Google has been consistently promoting the idea that the most promising way to develop AI for humans is to include a place for humans in machine algorithms by default. This is both safer and more efficient: safer, because people retain control over the processes inside the AI; more efficient, because algorithms solve a range of tasks purely mechanically, while people solve them creatively.

    In addition to these two reasons, which realists may dispute, human-machine interaction will help preserve jobs. The first two monitoring tools under the PAIR (People + AI Research) initiative were recently announced and offered for widespread use. The goal of the initiative is to bring AI into as many industries and fields of application as possible through collaboration.

    The initiative also serves Google's own interests. By publicly discussing tools for human-machine interaction, the company gains allies and future users of its solutions among everyone who will shape the future use of AI: scientists, industry experts, and end users. Google is particularly interested in solutions for medicine, agriculture, entertainment, and manufacturing. Experiments by the general public in these areas will give the corporation convenient new use cases for AI and will prepare consumers for new applications.



    PAIR is led by Fernanda Viégas and Martin Wattenberg, who specialize in visualizing the processes that occur when large amounts of data are processed. And that is the essence of machine learning. It is in the uncontrolled self-learning of machines that most futurologists see a threat: the direction of a machine's "thinking" must be spotted in time. To that end, Fernanda and Martin have developed two big-data visualization tools, Facets Overview and Facets Dive, and plan to visualize machine learning processes as well. The first gives an overview of feature statistics; the second supports detailed exploration of how each part of the data set is transformed.

    The tools are able to catch abnormal feature values, the absence of typical features or expected results, and failures in testing and tuning. Most importantly, flexible settings let the software reveal patterns and structures that are not obvious or did not exist initially. What statistical generalizations are for people, such patterns are for machines: grounds for conclusions whose validity and acceptability for humans the machines cannot assess themselves. We need to see which patterns and "conclusions" the machine has built from the data in order to correct errors in time, whether those errors are dangerous for us or not.
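    As a rough illustration of the kind of per-feature summary that Facets Overview computes (counts, missing values, min/max/mean, abnormal values), here is a minimal stdlib-only Python sketch. It is not the Facets API (the real library emits protocol buffers for a browser-based UI); the `summarize` helper and the sample data are invented for illustration:

```python
import math
from statistics import mean, stdev

def summarize(name, values):
    """Facets-Overview-style summary statistics for one numeric feature.
    (Hypothetical helper; not the real Facets API.)"""
    present = [v for v in values if v is not None and not math.isnan(v)]
    mu, sd = mean(present), stdev(present)
    return {
        "feature": name,
        "count": len(values),
        "missing": len(values) - len(present),
        "min": min(present),
        "max": max(present),
        "mean": mu,
        # Flag abnormal values: more than 2 standard deviations from the
        # mean (a crude rule; a single huge outlier inflates sd, so a
        # stricter 3-sigma cut can miss it on small samples).
        "outliers": [v for v in present if abs(v - mu) > 2 * sd],
    }

# Toy dataset: one feature with a missing entry and one extreme value.
ages = [23.0, 25.0, 22.0, None, 24.0, 26.0, 23.0, 25.0, 24.0, 999.0]
stats = summarize("age", ages)
print(stats["missing"], stats["outliers"])  # 1 [999.0]
```

    A summary like this is enough to spot the missing entry and the impossible age before the data ever reaches a model, which is exactly the kind of early inspection the PAIR tools aim to make routine.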

    Background to the PAIR Initiative


    Earlier, in the same direction of creating useful applications for people, Google had already co-founded the Partnership on Artificial Intelligence with industry colleagues, introduced a "Human-Oriented Interaction" category in its award for researchers, and published recommendations for developers of machine-learning programs.

    Google identified seven common mistakes to avoid when creating applications that end users will actually demand:

    1. Do not expect machine learning to determine which problems to solve. Look for problems yourself, and do the market research before you sit down to code.
    2. Consider whether machine learning is justified for the problem at all. Many mathematical and software tools work more simply, faster, or more accurately on narrow tasks. Heuristic analysis can be inferior to machine learning in accuracy, but requires less time and computation. Imagine how a person would solve the problem, how you could improve their results in each of the four quadrants of the error (confusion) matrix, and what expectations and stereotypes users bring to similar tasks today.
    3. Try changing the input conditions of the task and simulate how a person imitating the machine's reasoning would solve it.
    4. Evaluate the possible errors of the algorithms and how critical they are for the loyalty of future users. Errors can simultaneously increase the frequency of both false and true decisions, or, conversely, reduce the number of decisions overall. You need to understand which matters more, completeness (recall) or accuracy (precision), and find a balance.
    5. Keep in mind that users will "grow wiser" as they get used to new technologies. Some "naive" helper behaviors must be switched off in time, otherwise users will begin to be annoyed by them.
    6. Use reinforcement learning by motivating users to apply correct tags and labels.
    7. Encourage developers to imagine how users will apply and test the future application.
