Smart contracts for robots and artificial intelligence

Published on February 26, 2017


Pic by Cheylin

Worldwide, dozens of articles are constantly published on the need to create legislation for robots and artificial intelligence (AI). According to Gabriel Hallevy, a professor at Ono Academic College (Israel) and one of the leading legal thinkers in this new area of law: "Today we are in a vacuum - a legal vacuum. We do not know how to relate to these creatures." And most recently, Bill Gates himself said that since robots are beginning to take people's jobs, they should pay taxes.
According to the research of Ryan Calo, in American law a robot is treated as a programmed machine that carries out the will of a human. Consequently, in all cases it is the creators who are responsible for the robot's actions. This approach causes no controversy as long as autonomous robotic systems are not widespread. But what about, for example, the Tesla plant, which employs 160 robots of all sorts? In any emergency, responsibility can be pinned on the programmer-developer, the supplier company, the shop-floor manager, and so on.

On all continents, debate rages over how to resolve this situation. Setting aside extremist calls to extend current administrative and criminal law to robots and punish them, up to and including dismantling, several approaches remain. Some suggest that guilty robots be confiscated from their owners and transferred to perform socially useful work. Others, more cautious, see a way out in the compulsory registration of robots, with subsequent insurance to cover damages.

With all the variety of approaches to law for robots and AI, the question remains: who exactly will be held responsible if individuals or corporations are harmed by a robot or AI? Three unsolved problems impede the practical development of legislation for robots when it comes to determining responsibility:

The first problem. Robotic systems controlled by AI and capable of learning are very complex autonomous devices. A large number of people and companies participate in their creation and operation. Among lawyers this is known as the long-chain problem. As a rule, the hardware and software of each robot and AI are produced by different corporations. Moreover, in complex systems the manufacturers of hardware and software are not one but several companies and individual developers. Nor should we forget the providers supplying telecommunications. Complex robotic systems are often tied into the Internet of Things. And that is not all: there are also the organizations that purchase and operate these robots. So the length of the chain reaches 12-15 parties.

The second problem. Real life differs from games (not only chess and checkers but also, for example, poker) in its non-deterministic character. In life, the context and particulars of a situation play a huge role. Depending on the situation, questions of responsibility, culpability, and so on are resolved in different ways. In law for people, this context is taken into account through the jury. It is the jury that passes the verdict, applying laws and precedents to the context of a specific situation.

The third problem. In practice, both today and in the near future, complex robotic systems will be fully autonomous only in a small number of cases. This is partly due to the position of state institutions and of public opinion. Therefore, a significant number of creators and operators of AI-controlled robotic systems rely on hybrid intelligence - the joint work of human and machine. Accordingly, the protocol of human-machine interaction needs to be written into legislation for robots. As practice shows, in many systems it is the human who is the most vulnerable link.

In addition, this problem has another side. The main concerns associated with the use of autonomous robotic systems are the intentional or unintentional harm they may do to living beings. In the case of deliberate harm, the situation is clear: we must look for cybercriminals. In the case of involuntary harm, the situation is less clear. Judging by the history of human interaction with technology, it can be said with confidence that in most future troubles with robots, the fault will lie with people who violate safety procedures and other rules.

For all the fierceness of the discussions about precedents for forming administrative and, possibly, criminal law for robots and AI, the key topic of determining responsibility is not given due attention. One can argue at length about the need to punish crimes, but until there is a method of determining responsibility for crimes and punishable actions that is clear and accepted by society, corporations, and states, the discussions will remain theoretical.

In developing proposals for legislation for robots and AI, the mainstream is the desire to apply to robots the legal solutions and rules that apply to humans. The reverse situation has developed around "smart contracts". Here, attempts are made to replace flexible, contextual law with algorithmic procedures. But rigid algorithms have little chance of replacing the flexible, contextual legislation used by individuals and companies.

In life, as in arithmetic, two minuses can make a plus. Smart contracts based on the blockchain are an ideal tool for solving the problems of identifying and allocating responsibility under law for robots and AI. Being, in essence, a cryptographically protected distributed database, the blockchain is well suited as a foundation for legislation for robots and AI.

Autonomous automated systems controlled by AI, despite their complexity and versatility, remain algorithmic devices. The interactions between the various software and hardware blocks of such a complex system are best recorded and executed through the blockchain.
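To make this idea concrete, here is a minimal sketch, in Python, of a hash-chained log of interactions between the components of a robotic system - a highly simplified stand-in for a blockchain ledger. The component names and record fields are illustrative assumptions, not an existing protocol.

```python
import hashlib
import json
import time


class EventChain:
    """A minimal hash-chained log of interactions between the
    components of a robotic system. Each record references the hash
    of the previous one, so any later tampering is detectable."""

    def __init__(self):
        self.blocks = []

    def record(self, source, target, command):
        """Append an interaction (e.g. controller -> arm: 'move')."""
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {
            "source": source,        # component issuing the command
            "target": target,        # component receiving it
            "command": command,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(block)
        return block["hash"]

    def verify(self):
        """Return True only if no recorded interaction was altered."""
        prev_hash = "0" * 64
        for block in self.blocks:
            if block["prev_hash"] != prev_hash:
                return False
            body = {k: v for k, v in block.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if block["hash"] != expected:
                return False
            prev_hash = block["hash"]
        return True
```

A real distributed ledger adds consensus and replication on top, but the core property used in the argument above - a tamper-evident record of who commanded what - is already visible in this sketch.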

With this approach, smart contracts act as a legal module in any complex robotic system, including one controlled by AI, defining the scope and limits of responsibility for everyone involved in creating and operating that system.

Smart contracts for robots can simultaneously perform at least three functions:

  • First, they allow a kind of distributed black box to be created for each product - a black box not in the cybernetic sense, but in the sense of the flight recorder used in airplanes. If something goes wrong, the sensor readings and the record of program execution make it possible to determine clearly which component failed and who or what is responsible for it. In such situations, a smart contract provides the necessary information to investigators, insurance companies, and courts.

  • Secondly, a smart contract can act as an embedded security system for an autonomous automated device. Violation of certain conditions can interrupt the execution of transactions and the operation of the device as a whole.

    Unlike the black-box function, the security function is more difficult to implement in practice. To integrate this safety circuit into working smart contracts, the speed of the transaction must be synchronized with the program commands transmitted from one component of the autonomous system to another. When this can be done, manufacturers will be able to offer consumers much safer robots.

  • Thirdly, the use of smart contracts can hypothetically make people more conscientious when working with robots. Understanding that every action will be permanently recorded in the blockchain can make a careless employee think a hundred times before violating safety precautions.
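The second function above - an embedded security system that interrupts operation when its conditions are violated - can be sketched as a contract-like guard in front of the device's command stream. All names, thresholds, and clauses here are illustrative assumptions, not a real smart-contract API:

```python
class SafetyContract:
    """A guard that forwards a command only while every safety
    condition holds; a single violation halts the device until
    the contract is explicitly reset. Clause names and thresholds
    are hypothetical examples."""

    def __init__(self, max_speed):
        self.max_speed = max_speed
        self.halted = False

    def execute(self, command, sensors):
        """Return (allowed, reason) for the requested command."""
        if self.halted:
            return (False, "device halted by contract")
        if sensors.get("speed", 0) > self.max_speed:
            self.halted = True  # interrupt all further operation
            return (False, "speed-limit clause violated")
        if sensors.get("human_in_zone", False):
            self.halted = True
            return (False, "clear-zone clause violated")
        return (True, "executed: " + command)
```

Note the design choice: the contract fails closed. Once a clause is violated, even subsequent commands with valid sensor readings are refused, mirroring the idea that violation of conditions interrupts the entire operation of the device.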

Any rules are hard-coded algorithms: they prescribe strictly defined actions in strictly defined situations. The blockchain is therefore well suited for concluding smart contracts between manufacturers of complex autonomous systems on the one hand and their users on the other. Working with autonomous systems, people should not only gain opportunities that did not exist before, but also bear responsibility for their own actions, codified in algorithmic form. In the event of an incident, the presence of a smart contract, together with the sensor readings, will make it possible to establish who exactly is to blame: the automated system or the person.
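The "strictly defined action for a strictly defined situation" idea can be illustrated with a deterministic rule table that maps the component implicated by the logged readings to the party answerable for it. The component and party names below are purely hypothetical:

```python
# Hypothetical mapping from a failed component (as identified by the
# black-box record) to the party in the long chain who answers for it.
RESPONSIBILITY_TABLE = {
    "motor_controller": "hardware manufacturer",
    "navigation_software": "software developer",
    "network_link": "telecom provider",
    "operator_console": "operating company",
}


def assign_responsibility(failed_component, operator_override=False):
    """Deterministically resolve who answers for a failure.
    An operator override recorded in the log shifts responsibility
    to the human, mirroring the 'most vulnerable link' argument."""
    if operator_override:
        return "human operator"
    return RESPONSIBILITY_TABLE.get(
        failed_component, "unresolved: needs investigation"
    )
```

Such a table cannot replace a jury's reading of context, but for the narrow, pre-agreed situations a smart contract covers, it gives exactly the clear allocation of responsibility the article argues is missing.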

The task is simple: leave people's affairs to people and machines' affairs to machines, and translate standards, technical rules, safety regulations, and the like into the language of smart contracts for robots and the people who interact with them.