ChatGPT, an artificial intelligence model developed by OpenAI that is capable of answering complex questions and writing coherent answers, was released to the public at the end of last year. Since then, all sorts of questions have been raised about the impact of this tool on issues as diverse as education, democracy and the spread of misinformation online.
While these discussions unfold, artificial intelligence is making its way into a new field: the administration of justice. On January 30, a judge in Cartagena resolved a tutela action - a judicial mechanism to protect fundamental rights in Colombia - based on answers provided by ChatGPT.
In the case, the judge had to decide whether a health insurance company should exempt a minor with autism from paying fees for the service and also cover the child's transportation from his home to the centers where he receives treatment, since his mother could not afford these expenses.
As in any other process, the judge evaluated the Constitutional Court's jurisprudence on similar cases and the applicable laws. Before deciding, however, the ruling takes a strange turn. Invoking Law 2213 of 2022 - which allows the use of certain information technologies to conduct hearings, file lawsuits and complete other judicial procedures - the judge states that he will extend the arguments of his decision with the help of artificial intelligence.
Among other questions, the judge asked ChatGPT whether minors with this condition should be exempt from these payments, whether the fees users must pay can be a barrier to accessing health services, and whether the Constitutional Court had issued similar favorable decisions. The tool answered all of these questions in the affirmative, but cited a specific source only for the first: the law regulating the right to health, which indeed establishes that care for subjects of special protection - including children and adolescents - will not be limited for economic reasons.
The judge also consulted ChatGPT on the merits of the case, asking whether a tutela should be granted in these cases, to which the tool responded:
If it is demonstrated that the right to health of the minor with autism is being affected by the requirement to pay moderator fees, it is likely that the tutela action will be granted and the health care provider will be ordered to comply with the exoneration. However, each case is unique and the final decision depends on the specific facts and circumstances of the case.
According to the judge, who ultimately granted the tutela in favor of the minor, turning to artificial intelligence in this way does not replace his decision but optimizes the time spent drafting rulings. Beyond his argument, using ChatGPT or other artificial intelligence models as sources of law or auxiliaries of justice may present problems that will surely be debated in the wake of this episode.
Although ChatGPT is a very advanced system capable of answering questions eloquently, its answers can be inaccurate or completely wrong. The model was trained on vast amounts of data, which it draws on to produce an answer or complete an assigned task in a matter of seconds.
According to ChatGPT itself, its purpose is to generate natural and coherent text, so it may repeat false information when it is offered as input and ultimately spread misinformation. The chatbot's responses also depend to a large extent on how questions are phrased. This point deserves particular care in the administration of justice, much as in the questioning stages of judicial proceedings, where questions are designed to avoid leading the answers of witnesses or parties.
In addition, ChatGPT is not connected to the internet, and its training data contains little information about events after 2021, so relevant laws and jurisprudence published after that date could be left out of its frame of reference and lead to incorrect answers.
But all these considerations assume the best and most transparent scenario: one in which judges disclose their use of these tools, as happened in this tutela. It is also possible - and will probably begin to happen at some point - that the use of artificial intelligence in judicial decisions goes unnoticed, leaving no opportunity to question its suitability.
You can read the full ruling below: