AI & Big Data
2025/09/02

Artificial Intelligence Regulation: The Future in the Making

Explore artificial intelligence regulation in Brazil and globally: risks, impacts, and opportunities for innovation, finance, and governance.

Gabriel Pignat

Innovation Specialist at Evertec

Artificial Intelligence Regulation: Brazil, the EU AI Act, and the future of technological governance


Artificial intelligence (AI) is no longer a distant promise but a present reality in everyday life. From virtual assistants to recommendation systems, from medical diagnoses to automated decisions in public and private services, AI is transforming the way we live, work, and interact.

However, this technological revolution also brings ethical, social, and legal challenges that require immediate attention. Just as the internet triggered a profound reconfiguration of social and legal structures, AI demands a new regulatory framework capable of keeping pace with its speed and complexity.

Why regulate AI?

AI regulation is essential to ensure its development and use occur in an ethical, safe, and transparent way. The main reasons include:

  • Protection of fundamental rights: AI systems can directly affect people’s lives, influencing decisions on credit, healthcare, security, and employment. Regulation helps prevent abuse and discrimination.
  • Transparency and accountability: Algorithms must be understandable and auditable, allowing users and authorities to know how automated decisions are made.
  • Fostering responsible innovation: A clear regulatory environment provides legal certainty for innovative companies, encouraging investment in reliable and sustainable solutions.
  • Prevention of systemic risks: Regulation allows the identification and mitigation of risks such as algorithmic bias, security failures, data misuse, and unintended consequences.
  • Alignment with international standards: Establishing local rules aligned with frameworks such as the EU AI Act positions Brazil and Latin America as key players in global AI governance.

Actors in the AI value chain

To regulate AI, it is necessary to understand its value chain, which involves different actors with complementary responsibilities. These include:

  • Data providers: Supply the data that fuels AI systems. The quality, diversity, and legality of this data directly impact algorithm performance and fairness.
  • Algorithm developers: Design AI models and define how data will be processed. They must adopt ethical practices such as bias testing, continuous validation, and accessible technical documentation.
  • Hardware and software manufacturers: Build the infrastructures supporting AI systems. Issues such as efficiency, interoperability, and cybersecurity are especially relevant here.
  • System integrators: Adapt and implement AI solutions in specific contexts such as companies, governments, or digital platforms, ensuring compliance with legal and operational requirements.
  • End users: Interact directly with AI systems. They must be informed about the limits, objectives, and functioning of the technology, especially in sensitive applications.
  • Regulators and policymakers: Establish rules, monitor compliance, and promote AI governance. They must work in coordination with civil society, the private sector, and academia.

Each link in this chain may be subject to different legal and ethical requirements. The challenge is to regulate each of them effectively and proportionately.

Regulation in Europe: EU AI Act

The European Union has emerged as a global leader in AI regulation with the creation of the EU AI Act, approved in 2024. It is the world’s first comprehensive legislation exclusively focused on AI systems, emphasizing safety, fundamental rights, and responsible innovation.

The EU AI Act follows a risk-based approach, classifying AI systems by impact level (a minimal sketch of how such a tiering might be encoded follows the list):

  • Unacceptable risk: Systems posing a clear threat to fundamental rights, such as subliminal manipulation or social scoring, are prohibited.
  • High risk: Includes applications in sensitive areas such as healthcare, transportation, education, and public safety. These systems must comply with strict requirements on transparency, data governance, technical documentation, and human oversight.
  • Limited risk: Systems that interact with humans, such as chatbots, must clearly disclose to users that they are interacting with AI.
  • Minimal risk: Low-impact applications, such as spam filters or content recommendations, are not subject to specific requirements.
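
To make this tiering concrete, here is a minimal sketch, in Python, of how a compliance team might encode the four categories for internal triage. It is an illustration only: the tier names follow the Act, but the obligation labels and the triage helper are assumptions made for the example, not the Act's legal wording.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative encoding of the EU AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # transparency duties only
    MINIMAL = "minimal"            # no specific requirements

# Hypothetical mapping from tier to the compliance steps an internal
# review team might track; the labels are illustrative, not the Act's
# legal wording.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: [
        "technical documentation",
        "data governance review",
        "human oversight plan",
        "transparency to users",
    ],
    RiskTier.LIMITED: ["disclose AI interaction to users"],
    RiskTier.MINIMAL: [],
}

def triage(system_name: str, tier: RiskTier) -> None:
    """Print the compliance checklist for a system under this sketch."""
    steps = OBLIGATIONS[tier]
    print(f"{system_name} ({tier.value} risk): {steps or 'no specific requirements'}")

triage("customer-service chatbot", RiskTier.LIMITED)
triage("credit-scoring model", RiskTier.HIGH)
```

In practice the legal classification is more nuanced than a lookup table, but encoding the tiers explicitly helps teams keep track of which obligations attach to each system they deploy.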

This European model has become a reference for other countries, including Brazil, reinforcing the importance of regulation that fosters trust, competitiveness, and rights protection at a global scale.

Regulation in Brazil

Brazil has made significant progress in the debate on AI regulation. The main milestone is Bill No. 2,338/2023, currently under review in the Chamber of Deputies. Inspired by international models such as the EU AI Act, it proposes a risk-based approach aiming to balance technological innovation with the protection of fundamental rights.

The bill’s risk classification follows a tiered logic:

  • Excessive risk: Systems incompatible with fundamental rights and human dignity are prohibited. Examples include subliminal manipulation technologies, real-time biometric identification in public spaces without legal authorization, and social scoring systems.
  • High risk: Applications impacting sensitive areas such as healthcare, education, public safety, credit, and labor relations are allowed but subject to strict obligations. These include transparency on system functioning, detailed technical documentation, operational records, human oversight, and mechanisms for contestation.
  • Moderate or low risk: Systems with less impact on individual rights face lighter requirements, such as disclosure of AI usage, or may be exempt from specific rules.

Additionally, Bill 2,338/2023 complements the LGPD (Lei Geral de Proteção de Dados, Brazil's data protection law), strengthens safeguards for personal data in automated systems, establishes principles of algorithmic governance such as explainability and auditability, and empowers regulatory authorities to oversee and enforce compliance.
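
The requirements around operational records, human oversight, and contestation hint at what this governance looks like in practice. The sketch below shows, under purely illustrative assumptions, the kind of decision record an organization might log so that an automated decision can later be explained, audited, or challenged; the field names and values are hypothetical, not terms defined by the bill.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable record per automated decision (illustrative schema)."""
    system_id: str           # which AI system produced the decision
    subject_id: str          # pseudonymous identifier of the affected person
    decision: str            # outcome, e.g. "credit_denied"
    main_factors: list[str]  # human-readable factors supporting explainability
    model_version: str       # lets auditors reproduce the exact model used
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    contested: bool = False  # flipped when the affected person files a challenge

# Example: logging a hypothetical credit decision so it can later be
# explained, audited, or contested.
record = DecisionRecord(
    system_id="credit-scoring-v2",  # hypothetical system identifier
    subject_id="user-8f3a",         # hypothetical pseudonymous ID
    decision="credit_denied",
    main_factors=["income below threshold", "short credit history"],
    model_version="2.4.1",
)
print(record)
```

Keeping the model version and the main factors alongside each decision is what turns principles like explainability and auditability into something a regulator, or an affected person, can actually verify.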

This bill is currently the most advanced federal framework on AI in Brazil. Its outcome will be crucial in positioning the country alongside the European Union, China, and the United Kingdom as nations with modern and comprehensive AI legislation.

What to expect?

AI regulation is moving from a theoretical debate to a concrete reality across jurisdictions. The progress of the EU AI Act and the review of Bill No. 2,338/2023 in Brazil demonstrate that the world is advancing toward a new standard of technological governance—based on risk, transparency, and accountability.

