PONS IP, a leading national consultancy firm in intellectual and industrial property, took part in the presentation, held at the Cotec headquarters and available here, of the report ‘Responsible use of generative AI: its business utility and its approach in large language models (LLMs)’.
The document, which can be downloaded here, is the result of a Cotec Working Group coordinated by the companies Repsol and Tecnatom, in which more than 40 representatives of the Foundation’s Member organisations participated. It includes a series of practical recommendations for the responsible implementation of this technology in the business sphere. The forum featured the group coordinators from Repsol, Juan José Casado, Digital Director, and Julia Díaz, Head of the Data Science Department. Cotec was represented by its General Director, Jorge Barrero, and its Director of Studies and Knowledge Management, Adelaida Sacristán. The event was brought to a close by the General Director of Digitalisation and Artificial Intelligence, Salvador Estevan.
Artificial Intelligence continues to advance, and it now falls to public administrations to act accordingly. The European Union has decided to “champion” this cause and is therefore the first to be working on the regulation of Artificial Intelligence. Its approach rests on three pillars. The first is the regulation of the use of AI itself. To determine whether a company can use this technology, the EU proposes assessing three risk scenarios: “unacceptable”, “very high” and “low”. On this basis, the EU will set standards of diligence covering aspects such as quality, transparency and human supervision, among others. It thereby avoids setting very specific guidelines that could become obsolete in a very short period of time, given that AI is advancing at great speed.
Another pillar is prevention, that is, liability. The EU is working on a proposed Directive on non-contractual liability rules specific to AI; in other words, it will regulate the degree of liability that companies using AI assume in the event that it fails. Finally, the EU is following the gradual legislative changes under way in other sectors that regulate products such as toys or medical devices, among others.
Our Director of Intellectual Property, Consultancy and Technological Innovation, Violeta Arnaiz, took part in the presentation of the report alongside the Head of AI and Data Science at Orange, Francisco Borja Escalona, and the Director of IT Solutions for Energy at Capgemini, María Luisa López, in a discussion moderated by the Director of Innovation at UNIR, María Luisa Villegas.
During the colloquium, Arnaiz commented on some of the difficulties that the EU may encounter when legislating on AI. The first of these, although it may seem obvious, is not trivial, as “addressing a change of such magnitude can be complicated for the EU because of the very nature of the Union. Different countries with different traditions and economic interests have to come to an agreement.” She also addressed the difficulty that the EU may encounter in trying to legislate in areas where it already has legislation. “Generative AI has impacted in a very obvious way, to cite an example, intellectual property law or patent law. On this point, the impact is well known by all, because until now artistic creation and the development of inventions were essentially human actions, and current laws are built on that premise. Now that the monopoly of exclusively human creation has been broken, AI is leaving us with many unknowns that have yet to be resolved,” she said.
When it comes to encouraging innovation, regulations can sometimes be seen as an obstacle. However, Violeta Arnaiz warned that this problem is not new with the advent of AI. It has happened before in areas of conflict, such as honour and freedom of expression; privacy and the right to information; and data protection and security.
Therefore, she concluded that “for this deliberation, the technique used by European legislators, in this case, has been to carry out impact assessments, through which the degree of risk or danger involved in the adoption of these technologies in certain contexts can be measured. And once this exercise has been done, a traffic-light system is established, like the one we have discussed. It is one way, among other possible ways, to address that middle ground, and that is how the EU has done it up to this point”.
Proposals in the paper include the creation of strong governance around the different phases of the life-cycle of GAI-based developments. “In each of the developments where we implement solutions of this type, we should consider the ethical and social repercussions of this technology, being aware that preserving the privacy or security of the data we use is fundamental,” say the authors.
They also stress the importance of “ensuring that the GAI systems adopted are responsible in their use, ensuring that they do not promote discrimination or inequality, nor environmental degradation, as these systems are resource-intensive”.
You can watch the meeting again here.