SMART ROBOTS: LEGAL ASPECTS AND RESPONSIBILITY

Automation is a fundamental instrument for the digital transformation of sectors as diverse as industry, healthcare and defense. Robots have become increasingly autonomous thanks to artificial intelligence (AI) algorithms.

The AI concept is now 65 years old. It was proposed at a 1956 conference at Dartmouth College in New Hampshire by the scientists John McCarthy, Marvin Minsky and Claude Shannon, who defined it as “the science and engineering of making intelligent machines, especially intelligent computer programs”. This use of the word “intelligent” generates high expectations. In practice, we are talking about computer programs that, through learning, replicate some functions that until now were exclusive to humans.

Artificial intelligence has gone through ebbs and flows, with stages when it did not meet expectations. In recent years, however, it has had a greater impact thanks to the availability of complementary technologies: new communications networks, greater computing power and the capture of information from the environment through sensors. Today’s possibility of accessing large amounts of data to train models that refine processing algorithms is also fundamental.

Artificial intelligence has evolved from a “structured AI” based on decision trees to an “unstructured AI” in which machines are able to “learn” from experience to improve their performance. This process is called machine learning or, within this field, deep learning, in which computation is based on neural networks several layers deep that analyze information to make decisions.
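
To make the contrast concrete, here is a minimal, purely illustrative sketch (not from the article): it assumes Python with the scikit-learn library and its bundled iris dataset, and fits an explicit decision tree next to a small multi-layer neural network.

```python
# Illustrative sketch only: "structured AI" (an explicit decision tree)
# versus "unstructured AI" (a small multi-layer neural network).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Structured AI": a tree of explicit if/then splits that can be read and audited.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# "Unstructured AI": layers of neurons whose decision boundary is learned
# from the data rather than written down as rules.
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                    random_state=0).fit(X_train, y_train)

print("decision tree accuracy:", tree.score(X_test, y_test))
print("neural network accuracy:", net.score(X_test, y_test))
```

The tree’s if/then splits can be printed and inspected, whereas the network’s learned weights resist that kind of reading; this is the opacity that the liability discussion below returns to.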

From a legal point of view, AI poses exciting challenges. As with other technologies, the European Union has led the way in regulating this area, publishing a White Paper in 2020 entitled “Artificial Intelligence: A European approach to excellence and trust” and, on April 21, 2021, a proposal for a legal framework for AI which, in line with the White Paper, is based on the risk level of the tools developed.

This proposal establishes different risk categories. Two of them are “unacceptable risk”, which covers banned practices such as systems designed to manipulate human behavior, and “high risk”, which includes technologies that can, for example, endanger citizens’ lives.

In the case of Spain, in 2020 the Ministry of Economic Affairs and Digital Transformation promoted a National Artificial Intelligence Strategy structured, among other goals, around six axes similar to those established by the EU. Nevertheless, it does not enter into the discussion of the risk criteria established by the EU in the White Paper.

Regarding these risks, the application of AI has demonstrated the need to establish ethical principles to avoid biases in algorithms, such as discrimination on the grounds of race or sex. The European Commission and companies such as Telefónica have promoted reference frameworks to prevent these risks from the very development of the algorithms.
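
By way of illustration only, the sketch below shows one simple bias audit that such frameworks might call for: a demographic-parity check over a model’s decisions. The groups, decisions and data are invented for the example.

```python
# Hypothetical illustration: a minimal demographic-parity check, one simple
# way to surface the kind of bias the ethical frameworks above aim to prevent.
from collections import defaultdict

# Toy audit log of (protected group, model decision) pairs; in practice these
# would be the outputs of the algorithm under review.
decisions = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

totals = defaultdict(int)
positives = defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

# Rate of favorable outcomes per group; a large gap between groups is a
# red flag worth investigating before deployment.
rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates, "parity gap:", round(gap, 2))
```

A gap near zero does not prove fairness, but a large one is a concrete signal that the algorithm’s development needs review.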

From this legal point of view, another relevant aspect is the ownership of robots that implement AI elements. It must be taken into account that these robots are, in many cases, merely software without a mechanical component to support them. We are talking about chatbots that help clients in customer service or bots that automate processes. The world is increasingly made of software.

The discussion on robot ownership has even raised the question of whether robots could be treated as natural or legal persons. From both a technical and a legal point of view, we consider that, as of today, this approach makes no sense.

Equating robots with natural persons would require a radical amendment of the Spanish Civil Code (art. 30). Nor does it make sense to consider robots legal persons, although some argue that they should pay taxes.

Moreover, in 2017, the European Parliament proposed a new legal status for robots, an “electronic personality”, to which liability could be attributed.

Nowadays, we can consider a smart robot a product or service formed by a set of intangible assets protected by intellectual property rights. In this regard, AI-related elements are quite relevant when it comes to identifying the ownership of these assets and, therefore, of the robot. These assets include training models, algorithms and the software that implements them. Each of these elements may have a different form of protection (copyright, patent…) as well as different ownership.

As with any other asset, all these rights must be explicitly or implicitly conveyed in commercial agreements for the assignment of goods or the provision of services in order to preserve or transfer ownership.

Considering this set of intellectual property assets, robots, although they can act “autonomously”, have an economic value and may be subject to ownership (art. 333, Spanish Civil Code). A robot is therefore a lifeless object that may be owned and that has no rights or obligations. As a result, the manufacturer or the owner of the robot is liable for its malfunction or misuse, as the case may be.

There are three fundamental types of liability related to the development and use of AI in smart robots:

  1. Liability for damages caused to the client in connection with the development of AI algorithms (development liabilities).
  2. Liability for damages caused to the user or to third parties due to the use of the developed AI (use liabilities).
  3. The client’s liability for infringement of third-party intellectual property rights through the use of the AI developed by that client, and the AI developer’s liability for possible infringement of third-party intellectual property rights (liabilities related to intellectual property).

Current standards on robot liability refer, in most cases, to the person to whom the robot’s action or omission is attributed, whether the manufacturer, the operator, the owner or the user, on the basis that this person could have anticipated, foreseen and prevented the robot’s harmful behavior. From a technical point of view, this approach poses implementation challenges. Consider an autonomous vehicle: in the case of a malfunction, the manufacturer would obviously be responsible, but the owner could be equally responsible for how the vehicle was programmed, as could the operator of the infrastructure that issues the signals facilitating autonomous driving. In these cases, the ultimate source of liability is the creation of risk, so the question becomes: who creates the risk when a robot causes damage?

Many of the damages caused by robots, such as those resulting from algorithmic errors and biases or from privacy violations, among many other possibilities, are not explicitly provided for in any liability regulation to date. When we talk about a robot that has caused damage, we may be referring to one that underwent some kind of human manipulation for gain, or to one with a manufacturing defect.

One option for assessing the supplier’s liability for damages caused by the use of AI is to ask what a reasonable designer or developer would have done in the same circumstances.

This is a short-term solution, but it encounters difficulties in situations where no human operates the AI. The more unpredictable the failure, the less responsible the AI supplier or developer is (an autonomous car on a car-free highway is not the same as one in an urban environment with traffic).

Some suggest that the negligence test should be done with respect to what a “reasonable computer” would have done in the same situation. This would require an assessment of practices, standards and customs of the AI development industry.

The problem is that a reasonable person is easy to imagine; a reasonable computer is not. Humans share many similarities, so a standard of diligence can be more or less defined, but AI is heterogeneous by nature.

In the case of damage arising from development risks, some legal systems attribute the corresponding liability to the product manufacturer, while others grant an exemption from such liability. The law mentions merely a “defect”; in other words, it does not limit the type of defect to which a development risk can apply. Thus, in the case of manufacturing defects, the manufacturer is exonerated if they can prove that the state of knowledge did not allow them to detect the defect at the relevant time; in the case of design defects, if they can also prove that it was not possible for them to choose a safer design; and in the case of insufficient warnings or instructions, if they can prove that the state of knowledge did not allow them to identify the risk in question.

In practice, it should be noted that a product may involve several manufacturers. Tying liability to pinpointing which manufacturer’s component failed or caused the failure can therefore lead to a very lengthy, cross-liability process that frustrates the victim. For this reason, modern legal systems attribute liability to the seller or to whoever markets the defective product, thus facilitating compensation for the victim.

With this approach, an ideal solution in such cases would be to include the seller in the manufacturers’ liability, either jointly and severally or simultaneously.

In 2019, the European Commission promoted the preparation of a report on “Liability for artificial intelligence and other emerging digital technologies”. Among its conclusions are the following:

  • Digitization brings fundamental changes to our environments, some of which have an impact on liability law. This affects, in particular, the (a) complexity, (b) opacity, (c) openness, (d) autonomy, (e) predictability, (f) data management and (g) vulnerability of emerging digital technologies.
  • While current liability standards offer solutions to the risks posed by emerging digital technologies, the results may not always seem appropriate, given the failure to achieve (a) a fair and efficient allocation of loss, particularly because the loss cannot be attributed to those whose objectionable behavior caused the damage, to those who benefited from the activity that caused the damage, to those who controlled the risk or to those who were the cheapest cost avoiders or cheapest insurance takers; (b) a consistent and appropriate response of the legal framework to threats to the interests of individuals, particularly because victims of damage caused by the use of emerging digital technologies receive less or no compensation compared to victims in a functionally equivalent situation involving human conduct and conventional technology; and (c) effective access to justice, particularly because lawsuits become unduly burdensome or costly for victims.
  • Therefore, it is necessary to adapt and amend current liability standards, bearing in mind that, given the diversity of emerging digital technologies and their diverse range of risks, no single solution can suit the entire spectrum of risks.
  • Comparable and homogeneous risks should be addressed by similar liability standards, which should also determine which losses are recoverable and to what extent.
  • For liability purposes, autonomous systems do not need to be endowed with a separate legal personality.
  • Strict liability is an appropriate response to the risks posed by emerging digital technologies if, for example, they are used in non-private environments and may cause significant damage.

In general, the legal system must respond to the challenges posed by smart robots that use AI algorithms to operate. Law often lags behind technology, but we encourage lawyers to learn a little more about the advances of AI, both to regulate it and to use it in automating legal functions.

This new regulation of smart robots must address, at least, liability and ethical issues. Regarding the former, the focus should be on establishing a legal regime for civil liability for damages caused by smart robots, as in the case of autonomous vehicles.

Thanks to the application of artificial intelligence, smart robotics may be the most impactful general-purpose technology created since the Internet. Bear in mind that major industries, such as industrial robotics or the autonomous vehicle sector, in which Europe still has much to contribute, can be promoted or inhibited by how this field is regulated.


Luis Ignacio Vicente, PONS IP Strategic Adviser

José Carlos Erdoazin, PONS IP of Counsel

