The Framework Convention on Artificial Intelligence: Embedding Human Rights in the Digital Age

Written by Jomart Joldoshev as part of the International Courts Reporter Series

The adoption of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law in May 2024 marked a milestone in the regulation of artificial intelligence (AI). For the first time, a binding international treaty directly addresses AI, situating its development and deployment within the established framework of human rights law. Rather than attempting to regulate technical design, the Convention grounds AI governance in principles of legality, necessity, proportionality, transparency, accountability, and non-discrimination. In doing so, it provides courts, policymakers, and practitioners with a structured framework for evaluating how AI can be introduced into public life, ensuring that technological innovation remains subordinate to democratic values and the rule of law.

The Convention emerged after years of consultation within the Council of Europe, involving governments, experts, civil society, and industry representatives. Negotiators recognized that AI raises legal challenges that cut across existing frameworks, including data protection, algorithmic bias, automated decision-making, and surveillance. While the European Convention on Human Rights and the modernized Convention 108+ on data protection already provided safeguards, neither was designed for machine learning and predictive systems. The new treaty fills this gap by setting binding standards that apply to all stages of the AI lifecycle, from design to oversight.

The Convention's scope is intentionally broad. It covers activities within the lifecycle of AI systems undertaken by public authorities or by private actors acting on their behalf, and it further requires states to address risks arising from other private actors, whether by applying the Convention's principles directly or through other appropriate measures. Its obligations therefore reach not only state surveillance or policing but also applications in employment, healthcare, finance, and education, where automated systems increasingly shape opportunities and rights. At its core, the Convention affirms guiding principles of legality, proportionality, accountability, and respect for human dignity, anchoring AI regulation in human rights law while leaving states flexibility in implementation.

Crucially, the Convention requires states to embed safeguards in domestic law. Parties must define the permissible scope of AI applications, establish oversight mechanisms, and provide avenues for redress. Particular emphasis is placed on sensitive data, such as biometric information, which requires heightened protection. The treaty also obliges states to conduct human rights impact assessments before deploying high-risk AI systems. By insisting on anticipatory regulation, it shifts the focus from reacting to abuses to preventing them.

The Convention does not stand alone. In parallel, the European Union has adopted the AI Act, which introduces a risk-based framework: it prohibits certain practices outright, such as real-time facial recognition in publicly accessible spaces for law enforcement, subject to narrow exceptions, and classifies other sensitive applications as high-risk, subjecting them to strict safeguards. While the AI Act binds only EU member states, the Council of Europe's treaty extends across a wider membership, including non-EU states. Taken together, the two instruments create a layered legal architecture: detailed sectoral rules within the EU and broader human rights-based standards across Europe.

Outside Europe, the United States has also begun to build its own governance framework. In 2023, President Biden issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, directing agencies to adopt safeguards for privacy, civil rights, and workplace equity. The National Institute of Standards and Technology (NIST) released its AI Risk Management Framework, now widely used in both the public and private sectors. U.S. courts are beginning to weigh in as well, signaling caution about automated decision-making in areas ranging from criminal justice to employment. While the United States relies on a combination of executive action, agency guidance, and case law rather than a binding treaty, these efforts mirror the Convention's emphasis on accountability, proportionality, and human rights protection.

The international dimension is equally important. In 2020, the United Nations High Commissioner for Human Rights warned of the risks posed by AI-driven surveillance, particularly facial recognition at protests, questioning whether such uses could ever meet the tests of necessity and proportionality. The OECD and UNESCO have likewise adopted guidelines emphasizing transparency, fairness, and safeguards against misuse. The Council of Europe treaty reflects and reinforces this convergence, providing the first binding instrument to translate such principles into enforceable legal standards.

A central feature of the Convention is its rejection of vague authorizations. Domestic laws must clearly define the purposes for which AI may be used, the categories of data that can be processed, and the safeguards available to individuals. Independent oversight by data protection authorities and, where appropriate, courts is essential. Compliance cannot rest on administrative discretion; it requires transparent rules and enforceable guarantees that limit state power.

The treaty does not impose a blanket ban on high-risk technologies such as biometric identification. Instead, it insists that their use be strictly justified, proportionate, and subject to rigorous safeguards. This balance reflects an acknowledgment that AI can bring benefits, but only when aligned with democratic values and subject to adequate oversight. For lawyers and judges, this underscores the importance of proportionality as the central analytical tool for assessing whether AI systems are compatible with human rights obligations.

The Framework Convention on Artificial Intelligence is therefore more than a symbolic gesture. By affirming that AI must operate within the boundaries of human rights, democracy, and the rule of law, it provides courts, policymakers, and practitioners with a common reference point. While it will not resolve all challenges posed by AI, it establishes a durable foundation for aligning innovation with the protection of fundamental freedoms. For the international legal community, it signifies that the rule of law must remain central to guiding technological change, ensuring that artificial intelligence is not a force outside the law but is constrained and shaped by it.