
Claudia Delgado, Godoy Hoyos

Colombia   

Reflections on the use of ChatGPT in Colombia following its influence in a Court decision in Cartagena  

ChatGPT is a tool based on large language models (LLMs) trained on large volumes of data. As a neural network model, it is capable of interacting in a way that comes close to a natural, human conversation. With this understanding in mind, we reflect on the use of AI in the legal landscape and, specifically, in judicial decision-making.
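By way of illustration only, and without suggesting that this is how the Cartagena court used the tool, the sketch below shows how a language model of this kind can be queried programmatically. The OpenAI Python client, the model name and the prompts are assumptions made for the example; any answer returned would still need human verification.

```python
# A minimal sketch of a conversational query to a large language model.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice for this example
    messages=[
        {"role": "system", "content": "You are a legal research assistant."},
        {"role": "user", "content": "Summarize the requirements for a tutela action in Colombia."},
    ],
)

# The model returns free-form text; a human must still verify it.
print(response.choices[0].message.content)
```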

Use of AI in Law

The application of AI in law seeks to facilitate access to legal information and to simplify legal reasoning. In practice, legal practitioners use artificial intelligence tools to facilitate their work and optimize operational tasks, including human-based systems (SBSH). SBSH are computer systems that use artificial intelligence techniques to imitate human reasoning, learning, memory and communication processes. Among them we find: (i) expert systems, (ii) case-based systems, (iii) decision support systems and (iv) machine learning.
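To make the difference between these categories concrete, here is a minimal, purely illustrative sketch contrasting a hand-written expert-system rule with a rule induced from prior cases by a learning algorithm. The feature names, the toy data and the use of scikit-learn are assumptions of the example, not a description of any system actually used by courts.

```python
# A toy contrast between (i) an expert system, whose rules are written by a
# human, and (iv) machine learning, where the rule is induced from past cases.
# Feature names and data are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

def expert_system_rule(days_overdue: int, amount: float) -> str:
    """Hand-written rule, as in a classic expert system."""
    if days_overdue > 90 and amount > 10_000:
        return "escalate"
    return "monitor"

# Machine-learning approach: induce a comparable rule from previous cases.
past_cases = [[120, 15_000], [30, 2_000], [200, 50_000], [10, 500]]
past_outcomes = ["escalate", "monitor", "escalate", "monitor"]
learned_model = DecisionTreeClassifier(random_state=0).fit(past_cases, past_outcomes)

new_case = (95, 12_000)
print(expert_system_rule(*new_case))               # rule authored by an expert
print(learned_model.predict([list(new_case)])[0])  # rule learned from the data
```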
ChatGPT is classified as a neural network model and offers significant advantages. However, as it is still at an experimental stage, it can provide imprecise or erroneous answers and may include false information. Furthermore, as a machine learning system, it still has a long way to go before it can reliably apply a set of rules developed by analyzing previous cases, and although it can provide answers, these must be scrutinized, especially where there is uncertainty.
In this sense, although artificial intelligence can be useful, it is a tool with a tendency to generate bias. Its algorithms may be flawed because their data sets may be insufficient and, consequently, its use within the framework of a judicial process can undermine judicial decisions.
Regarding the judgment handed down by the Juzgado Primero del Circuito Laboral de Cartagena, in which the judge used artificial intelligence (ChatGPT) to resolve the legal problem, it is worth noting that the questions and their associated answers contributed to the decision-making process without actually displacing legal hermeneutics.
Notwithstanding the above, Juan David Gutiérrez Rodríguez (PhD in Public Policy from the University of Oxford, professor at the Universidad de los Andes and partner at Avante Abogados) explains that there are three ethical risks arising from the implementation of AI within the framework of a judicial process.

In the first place, the use of this technology can lead judges to make biased decisions in a systematic way: the data used by the AI influences its results, and if that data encodes prejudices present in society, the algorithm can identify the pattern and base its results on it, ultimately reproducing underlying forms of discrimination.
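As a simplified illustration of this risk, with entirely fabricated data and a hypothetical "group" attribute, the sketch below trains a classifier on historical decisions that systematically disfavored one group; the model then tends to reproduce that pattern for new, otherwise identical cases.

```python
# Illustration of how a bias encoded in historical data is reproduced.
# The data is entirely fabricated: cases differ only in a "group" attribute,
# and the historical outcomes systematically disfavor group 1.
from sklearn.linear_model import LogisticRegression

# Features: [group, merit_score]; labels: 1 = favorable decision.
X_train = [[0, 0.9], [0, 0.8], [0, 0.7], [1, 0.9], [1, 0.8], [1, 0.7]]
y_train = [1, 1, 1, 0, 0, 0]  # group 0 always favored, group 1 never

model = LogisticRegression().fit(X_train, y_train)

# Two new cases with identical merit, differing only by group.
print(model.predict([[0, 0.85], [1, 0.85]]))  # expected [1 0]: the pattern persists
```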

Second, it should be noted that AI algorithms operate as a black box: we know what comes out of them, but their internal "decision-making" processes are opaque. This is because AI algorithms learn automatically and generate results through internal processes that are not traceable and are unknown even to their own programmers.
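A minimal, purely illustrative sketch of this opacity follows, using invented toy data and scikit-learn's small neural network (both assumptions of the example): once trained, the model's "knowledge" is spread across numeric weight matrices rather than stated as traceable rules.

```python
# A minimal illustration of the "black box" problem: after training, the
# model's "knowledge" is a set of numeric weight matrices, not readable reasons.
from sklearn.neural_network import MLPClassifier

# Invented toy data: four cases described by three numeric features.
X = [[0.1, 0.9, 0.3], [0.8, 0.2, 0.5], [0.4, 0.7, 0.9], [0.9, 0.1, 0.2]]
y = [0, 1, 0, 1]

model = MLPClassifier(hidden_layer_sizes=(5,), max_iter=5000, random_state=0).fit(X, y)

print(model.predict([[0.5, 0.5, 0.5]]))  # an answer comes out of the box...
for weights in model.coefs_:
    print(weights)  # ...but inside there are only arrays of numbers, no reasons
```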

Third, as a result of the above, asymmetries can arise in the information provided. In short, AI entails a series of potential risks, such as opacity in decision-making processes and bias and discrimination of all kinds, which is why it is necessary to develop a regulatory framework containing general rules and ethical and legal principles to guide the regulatory design of this technology (Artificial Intelligence and Law: Problems, Challenges and Opportunities).
It is necessary to develop AI that respects the rights of individuals and is subject to clear rules on responsibility, transparency and accountability, so as to guarantee the responsible use of this technology. All in all, it should be made clear that digital transformation is necessary, but it must take place within a regulatory framework that offers guarantees.

In short, it is clear that the judicial system must embrace new technologies; however, if these are to be used in the drafting of a judgment, they must be safe and effective and should in no way replace the judge's decision. Rather, they should serve only as a source of complementary information, so that it is the judicial officer who determines the meaning of the ruling, and not the other way around.

By Claudia Delgado, Senior Associate at Godoy Hoyos 
 
