Google has published 7 principles of AI ethics


    For several months, Google has been competing with Microsoft and Amazon for a multi-million dollar Pentagon contract for cloud services and artificial intelligence systems (see the leaked internal correspondence of top managers). Unfortunately for the leadership, it is Google's own employees who have been throwing a wrench into the works. In March 2018, they began collecting signatures against the development of military technology at Google, a company whose longtime motto was the principle “Don't be evil”.

    The leadership had to make partial concessions. Yesterday, CEO Sundar Pichai announced a set of principles that Google promises to adhere to going forward. Among them is a ban on using its in-house artificial intelligence developments for weapons, illegal surveillance, and technologies that cause “overall harm”. But Google will continue to work with the military in other areas, so the cloud unit still retains a chance of winning the tender.


    American military UAVs

    Sundar Pichai wrote in the official blog that Google invests heavily in AI research and development and makes these technologies widely available to everyone through tools and open source code. “We recognize that such a powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come,” the CEO wrote. “As a leader in AI, we feel a deep responsibility. That is why today we are announcing seven principles for our work. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will influence our business decisions.”

    So, here are Google's seven principles for AI, in abbreviated form.

    1. Social benefits


    Advances in AI will have a transformative impact in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. When considering the development and use of AI technologies, the company will take into account a broad range of social and economic factors and will proceed where the overall likely benefits substantially exceed the foreseeable risks and downsides.

    2. Combating discrimination


    AI algorithms and datasets can reflect, reinforce, or reduce unfair bias. Google recognizes that distinguishing fair from unfair bias is not always simple and that it differs across cultures and societies. The company promises to “seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.”
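
    To make the idea concrete, here is a minimal sketch of how one crude fairness signal, the demographic-parity gap, could be computed over a dataset. This is purely illustrative and not part of Google's announcement; the function name and the toy loan-approval records are assumptions made for the example.

```python
from collections import defaultdict

def demographic_parity_gap(records, group_key, outcome_key):
    """Return the largest difference in positive-outcome rates between groups.

    A gap near 0 means the outcome is distributed similarly across groups;
    a large gap is one (crude) signal of potentially unfair bias.
    """
    totals = defaultdict(int)     # examples seen per group
    positives = defaultdict(int)  # positive outcomes per group
    for record in records:
        totals[record[group_key]] += 1
        positives[record[group_key]] += int(record[outcome_key])
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy, fabricated loan-approval records, purely for illustration.
data = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

gap, rates = demographic_parity_gap(data, "group", "approved")
print(rates)  # approval rate per group: {'A': ~0.67, 'B': ~0.33}
print(gap)    # ~0.33 -- a disparity worth investigating before deployment
```

    A single number like this cannot establish fairness on its own; it is just one of many checks a team might run on its data and models.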

    3. Security


    Google’s AI systems will be “appropriately cautious” and developed “in accordance with best practices in AI safety research”.

    4. Accountability


    AI systems will provide appropriate opportunities for feedback, relevant explanations of their behavior, and avenues for appeal. They will remain under appropriate human direction and control.

    5. Privacy principles


    Google promises to apply its privacy principles in the development and use of AI technologies, and to ensure appropriate transparency and control over the use of data.

    6. High standards of scientific excellence


    Technological innovation springs from the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new areas of research and knowledge in critical fields such as biology, chemistry, medicine, and environmental science. Google promises to aspire to high standards of scientific excellence, to share AI knowledge responsibly, and to publish educational materials, best practices, and research that enable more people to develop useful AI applications.

    7. Providing AI technologies only for uses that accord with these principles


    Many technologies, including AI, have dual uses. Google promises to work to limit potentially harmful or abusive applications and to evaluate their likely use.

    Sundar Pichai also listed the artificial intelligence applications that are unacceptable for Google:

    1. Technologies that cause or are likely to cause overall harm.
    2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
    3. Technologies that gather or use information for surveillance in violation of internationally accepted norms.
    4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

    In some ways, these principles are reminiscent of Asimov's laws of robotics. One can only hope that other companies developing AI systems will also officially declare their adherence to such principles and comply with them.
