Initial reflections on the future implementation of artificial intelligence in the decisions of the constitutional chamber of the Judiciary in Paraguay


In times of digitalization of state services, creation of digital identities and automation of bureaucratic processes, there is a great interest in the use of Artificial Intelligence (AI) in public administration that legitimately seeks to strengthen the principles of efficiency, flexibility in public management, and institutional transparency.

In Paraguay, the Judiciary plans to implement Artificial Intelligence software in the Constitutional Chamber of the Supreme Court of Justice. Accordingly, this publication analyzes the challenges of applying international human rights mechanisms to the use of AI in the Judiciary, its implications for privacy and data protection, and its impact on rights in general.


The term Artificial Intelligence was formally coined in 1956 at the Dartmouth Conference1 and has had different interpretations and developments since then2. Artificial intelligence is supported by intelligent or learning algorithms that, among many other purposes, are used to identify economic trends or make personalized recommendations (Harari, Yuval Noah, 2016). For its part, the European Commission defines it as a collection of technologies that combine data, algorithms, and computing power (European Commission, 2020).

With the advancement of AI, some risks have been identified, especially regarding the lack of understanding of how the results are produced. In other words, algorithms are seen as a “black box”, where the input of data and their output in the form of results are known, but what exactly happens in the middle is unknown. This has generated several alerts regarding the application of Human Rights mechanisms in the use of AI.

The trend in many public administrations is to bet on the development of AI algorithms, capable of making inferences, identifying patterns and making “intelligent” automated decisions. However, the authorities have not paid attention to understanding and analyzing the consequences of this black box on the exercise of people’s rights. It will be key to make progress on this point to avoid negative impacts and consequences on people’s lives.

AI does not think, it is programmed

Technosolutionism is a quick answer that fails to comprehensively analyze the impact of technology on society. Experts suggest that the first step is to regard AI as an algorithm programmed by a human:

“To understand what a computer doesn’t do, we need to begin to understand what the computer does well and how it works.” (Meredith Broussard, 2018)

The technical phases of a technological solution are pure mathematics, but they are not few: at a minimum, one must obtain the data, clean the database, check for missing data, integrate and homogenize the data, reduce the variables, eliminate redundant dimensions in the database, merge similar variables, select an algorithm, determine whether it fits the model, and, after training on one portion of the data, test against other data that validates the model. (Cathy O’Neil, 2016)
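As a rough illustration of these steps, the sketch below (in Python, with hypothetical toy data and the simplest possible baseline model) walks through cleaning, splitting, training, and validating. It is an assumption-laden teaching example, not a representation of any real judicial system.

```python
# Minimal sketch of the pipeline O'Neil describes, on hypothetical toy data.

def clean(rows):
    """Drop records with missing values (the 'clean / handle missing data' steps)."""
    return [r for r in rows if all(v is not None for v in r.values())]

def train_majority(labels):
    """'Select the algorithm': here, the simplest baseline — predict the most common label."""
    return max(set(labels), key=labels.count)

# Obtain the data (invented records: one feature, one label)
raw = [
    {"amount": 100, "label": "grant"},
    {"amount": 200, "label": "grant"},
    {"amount": None, "label": "deny"},   # missing value — removed during cleaning
    {"amount": 300, "label": "deny"},
    {"amount": 150, "label": "grant"},
]

data = clean(raw)
split = int(len(data) * 0.75)            # hold out data for validation
train, test = data[:split], data[split:]

model = train_majority([r["label"] for r in train])

# Validate on data the model has never seen
accuracy = sum(model == r["label"] for r in test) / len(test)
print(model, accuracy)
```

Even this trivial pipeline shows where things can go wrong: the cleaning step silently discards records, and the model simply reproduces whatever pattern dominates the training data.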

In addition, there is another concern: the asymmetry of information when AI is offered by Big Tech and growing technology companies selling their products and services in the Latin American region without analyzing the contexts and structural problems of each country (Jamila Venturini, 2019). Latin American states lack institutional structures, such as regulatory frameworks or oversight bodies, with the capacity to assess which solutions are really needed, for what purposes and objectives, and to monitor implementation and provide feedback for AI improvements.

Another problem observed in the region is the data that feed these “intelligence” systems. There are issues with access to accurate and high-quality data sets, which leads to serious problems when public policies are generated without quality, accurate evidence (IDB, 2017).

Furthermore, these concerns focus not only on the shortcomings of the machine learning system and the data that feed the AI, but also on how such a system can amplify biases and discrimination that already existed in the human implementation of the process.

Finally, an underexplored topic in discussions of technology implementation is intellectual property and the privatization of public services, a central issue in any democracy. Private-sector support may be necessary to provide technology services, but in a central act such as the automated issuance of a judicial ruling it should be approached with caution, because issuing rulings is an elemental function in building trust in the judicial system. Added to this are derivative intellectual property problems, such as the ownership of corporate databases and their tension with the public interest in information that is in the public domain.

Combined with the concerns outlined above, the implementation of AI in institutional settings, rather than alleviating these structural problems, deepens and exacerbates them. AI acquisitions in public institutions therefore tend toward techno-solutionist quick fixes that fail to contemplate the elements that must be considered in order to avoid harmful unintended consequences.

Artificial Intelligence and human rights

When we talk about human rights, we include not only civil and political rights, but also the practice of social, economic and cultural rights.

The implementation of technologies such as AI is taking place without adequate regulatory frameworks that genuinely balance the interests of the different actors, or that provide an adequate structure to capture the benefits of artificial intelligence while minimizing its risks.

“AI technologies can be of great use to humanity, but which also raise fundamental ethical concerns, for example, in relation to the biases they can embody and exacerbate, potentially leading to inequality and exclusion and posing a threat to cultural, social and ecological diversity, as well as generating social or economic divisions” (UNESCO, 2020).

It will be key for new regulations in this area to be based on human rights standards rather than on ethical principles alone, as is the current trend. Ethical considerations are valuable, but they cannot be the first and only baseline for implementing this type of technology. The UN and the OAS have extensive experience through special rapporteurships that have monitored the implementation of human rights in artificial intelligence (R3D, 2018). International human rights norms are minimum structures of protection around the world and are at least actionable against states that fail to honor international human rights standards.

It is important to highlight that AI does not only have an impact on privacy and personal data protection. That is, it is not a problem that will be solved by creating or improving personal data protection laws to regulate algorithmic decisions or artificial intelligence. It is also an issue related to inclusion and social justice, such as discrimination and the infringement of the freedoms of vulnerable groups. Therefore, other regulatory approaches that provide fair institutional structures for the implementation of AI and other technologies should be analyzed.

The impact assessment should identify the risks of adverse effects on human rights, the environment, and ethical and social implications in line with the values and principles set out in human rights norms and standards.

In other words, these assessments should be based on and centered on the individual, not on public interest, national security, or economic interest. They must ensure that people who may be affected by AI decisions are given the tools needed to understand and critically analyze this technology, in order to determine whether its use contributes to or harms their life situation (TEDIC, 2019).

The following table shows how to design a governance space to address emerging problems of AI and algorithms with human rights foundations.

Accountability (a term that includes transparency, accountability and oversight)
- Shared definitions: Mobilizing thematic communities of researchers, activists, public officials, private organizations and other stakeholders to define common concepts and methods.
- Standard resources: Participation in public policy debates to incorporate accountability into upcoming regulations.
- Literature: Creating learning content and guidance on legal and non-legal ways to enforce transparency and accountability around algorithmic use and human rights standards.

Monitoring
- Shared definitions: Mapping of algorithm and AI use across government (and enforcement agencies).
- Standard resources: Training of journalists in algorithm impact monitoring; training of civic watchdog organizations on legal frameworks and best practices in human rights.
- Literature: Training of government lawyers on algorithmic risk.
* (Open Knowledge, AI and Algorithms, 2020)3

Study: Constitutional Chamber of the Judiciary in Paraguay

In the Paraguayan context, the Judiciary has been negotiating since 2019 to acquire Artificial Intelligence (AI) software and apply it in the Constitutional Chamber of the Supreme Court of Justice (Ever Benegas, 2020).4 This AI software, called Prometea and of Argentine origin, is used to automatically prepare judicial rulings through an artificial intelligence system with supervised machine learning, and aims to mitigate judicial backlog and optimize bureaucratic processes (IDB, 2020).

This tool of Argentine origin has already been applied in various international organizations such as the Inter-American Court of Human Rights (CorteIDH), the Ministry of Justice and Security of Buenos Aires, the General Prosecutor’s Office for Contentious, Administrative and Tax Matters of the City of Buenos Aires and the Constitutional Court of Colombia (Corvalán, Juan G, 2018).

“Prometea can predict the solution of a filing through AI. Its input is information from thousands of previous filings. All these documents are entered into the system. What the software does is to suggest an outcome based on what was resolved in previous analogous cases, thus replacing the task usually performed manually by a human. Its most striking role is in the prediction of solutions5. It also automates the control of procedural deadlines, i.e., it warns the prosecutor’s office when the time for the Public Prosecutor’s Office to issue a ruling is about to expire” (Tarricone, 2020).
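To make the idea of “suggesting an outcome based on what was resolved in previous analogous cases” concrete, here is a deliberately naive sketch in Python. The similarity measure, case summaries, and outcomes are all hypothetical; nothing here reflects Prometea’s actual implementation.

```python
# Naive illustration: suggest the outcome of the most textually similar
# prior case. Summaries and outcomes are invented for the example.

def similarity(a, b):
    """Jaccard overlap between the word sets of two case summaries."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def suggest(new_case, prior_cases):
    """Return the outcome of the most similar previously resolved case."""
    best = max(prior_cases, key=lambda c: similarity(new_case, c["summary"]))
    return best["outcome"]

prior = [
    {"summary": "tax execution claim unpaid municipal tax", "outcome": "admit"},
    {"summary": "constitutional challenge to electoral law", "outcome": "reject"},
]
print(suggest("claim for unpaid tax execution", prior))
```

Even in this toy version, the core risk is visible: the system can only echo past resolutions, and a superficially similar but legally distinct filing would receive the wrong suggestion with no warning.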

However, there is an important concern when a solution such as Prometea, used in an administrative instance of tax execution procedures6 (which has limited possible outcomes because the legal procedure itself is limited), is applied to a very different instance and legal process such as the constitutional chamber of a Supreme Court. There are great differences within the justice system: an administrative procedure is not the same as a criminal or civil one, in addition to the differences in the procedural instances of each system. Therefore, one must be very careful about taking a technological solution out of the nature and domain in which it was built.

“It’s like thinking about using a hair dryer to dry the floor of my house, because in both situations they dry out.”7 Micaela Mantegna, lawyer, expert researcher in AI.

In addition, the expert emphasizes that greater efficiency and the avoidance of judicial backlog do not require AI, but rather clear rule systems on how to proceed in the different scenarios; in other words, a basic, transparent and easily supervised decision tree system that avoids the use of black boxes.
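A minimal sketch of what such a transparent rule system could look like, with hypothetical rules and case fields; unlike a black-box model, every step can be read, audited, and challenged:

```python
# Hypothetical, fully inspectable rule system: an ordered list of
# (condition, decision) pairs instead of an opaque learned model.

RULES = [
    (lambda c: c["deadline_expired"], "dismiss: filing deadline expired"),
    (lambda c: not c["signature_present"], "return: missing signature"),
    (lambda c: True, "forward to judge for substantive review"),  # default
]

def decide(case):
    """Apply the rules in order and return the first matching decision."""
    for condition, decision in RULES:
        if condition(case):
            return decision

case = {"deadline_expired": False, "signature_present": True}
print(decide(case))
```

Because the rules are explicit and ordered, an affected party (or a supervising judge) can trace exactly which condition produced a given outcome, which is precisely what a black box does not allow.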

Another element to consider in this case study is the need to notify the parties that their case is being processed by AI. The technology is not infallible and can produce a range of minute errors that are not noticeable at first glance and only gain notoriety when cases of abuse become extreme.

“What’s dangerous here are those small errors, those small mistakes lost in an ocean of decisions. My biggest fear of AI is not killer robots, but AI that makes mistakes and no one notices.” Mantegna

Finally, Corvalan8 suggests a series of recommendations that should be taken into account when establishing appropriate supervision mechanisms: auditability, traceability and explainability that allow the evaluation of algorithms, data and design processes, as well as including an external review of AI systems.
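As one hedged illustration of the traceability Corvalan recommends, a system could record, for every automated suggestion, the inputs and the basis (precedent or rule) that produced it, so an external reviewer can audit it later. The field names and case identifiers below are invented for the example.

```python
import datetime

# Hypothetical audit trail: every automated suggestion is logged with the
# inputs and the basis that produced it, enabling external review.
AUDIT_LOG = []

def suggest_with_trace(case_id, inputs, suggestion, basis):
    """Return the suggestion, but only after logging how it was produced."""
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "case_id": case_id,
        "inputs": inputs,
        "suggestion": suggestion,
        "basis": basis,          # which precedent or rule drove the output
    })
    return suggestion

result = suggest_with_trace(
    "SD-2021-014",                      # hypothetical case number
    {"matter": "amparo"},
    "admit",
    basis="analogous to SD-2019-102",   # hypothetical precedent
)
print(result, len(AUDIT_LOG))
```

A log of this kind is only the starting point: auditability and explainability additionally require that the recorded basis be meaningful to a human reviewer, not just a pointer into another opaque process.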


Public administration is increasingly looking to use new forms of data analytics such as artificial intelligence in order to provide “better public services”. These reforms have consisted of digital service transformations that generally aim to “improve the citizen experience”, “make government more efficient” and “boost business and the wider economy”.

Following this trend, the Supreme Court of Justice of Paraguay, through the constitutional chamber, seeks to implement AI framed in the need for efficiency and mitigation of judicial delay. At the time of closing this article, the implementation of AI in the constitutional chamber is suspended. However, it will be of utmost importance that this time be used to analyze the risks and challenges of the application of AI in the judicial decisions of this chamber.

Most research recognizes that all technological innovations produce benefits but also carry risks and harms. Among the central concerns raised in this article is technosolutionism: the problems generated when AI is applied as a patchwork fix instead of seeking comprehensive, structural solutions. The databases that feed AI systems have serious problems of quality and accuracy, and the privatization of public decision-making processes can even raise problems of an intellectual property nature. Together, these situations can generate harmful impacts on the full exercise of people’s rights.

At the same time, great care should be taken when extrapolating the Prometea system, designed to deal with administrative cases (rulings), to a scenario different from the one for which it was created: a judicial system that must interpret language in order to issue rulings.

Finally, before the Judiciary acquires the AI system, it is recommended to adopt a prior regulatory framework that establishes the mechanisms for the acquisition and implementation of AI technology. This framework should also ensure measurable results, sanctions and means of redress in case of damages, as well as mechanisms for constant evaluation focused on human rights standards. Such evaluation should also be multidisciplinary, multi-stakeholder, multicultural, pluralistic and inclusive.



IDB. (2017). El uso de datos masivos y sus técnicas analíticas para el diseño e implementación de políticas públicas en Latinoamérica y el Caribe | Publications.

Cathy O’Neil. (2016). Weapons of Math Destruction.

Corvalán, Juan G. (2018). Presentación Prometea: inteligencia artificial al servicio de “más derechos para más gente”. Sr. Juan Gustavo Corvalán, fiscal general adjunto en lo contencioso administrativo y tributario, de la república argentina durante el consejo permanente de la organización de estados americanos. Wednesday, August 22, 2018, Washington DC. Extended text of the conference.

European Commission. (2020). White Paper. On Artificial Intelligence – A European approach to excellence and trust.

Ever Benegas. (2020, February 25). Inteligencia Artificial versus Mora Judicial. DataBootCamp.

Harari, Yuval Noah. (2016). Homo Deus. Debate.

IDB. (2020). PROMETEA: Transformando la administración de justicia con herramientas de inteligencia artificial | Publications.

Venturini, Jamila. (2019, October 10). Vigilancia, control social e inequidad: la tecnología refuerza vulnerabilidades estructurales en América Latina | Derechos Digitales.

Broussard, Meredith. (2018). Artificial Unintelligence: How Computers Misunderstand the World. The MIT Press.

R3D. (2018, October 20). La inteligencia artificial es un gran reto para los derechos humanos en el entorno digital: relator especial de la ONU. R3D: Red en Defensa de los Derechos Digitales.

Tarricone, M. (2020, September 30). ¿Hasta qué punto pueden automatizarse las decisiones judiciales? Enterate cómo funciona el software que ya se usa en la Ciudad de Buenos Aires. Chequeado.

TEDIC. (2019, May 9). Ética y protección de datos en la inteligencia artificial.

UNESCO. (2020). Anteproyecto de recomendación sobre la ética de la inteligencia artificial.

1Dartmouth Conference:

2Artificial Intelligence timeline:

3Original translation

4At the time of finishing this article, the implementation of AI in the constitutional chambers is on hold.

5It only applies to the Contentious-Administrative and Tax Court of the Public Prosecutor’s Office (Executive Branch).

6This AI initiative originates in the Deputy Attorney General’s Office for Administrative and Tax Litigation of Argentina (Executive Branch).

7Class on AI for the legal clinic at the National University of Asuncion, led by TEDIC. Held on December 8, 2020.

8Deputy Attorney General in Administrative and Tax Litigation of Argentina. Responsible for the implementation of Prometea in the Prosecutor’s Office.