
Governing artificial intelligence for all

How will artificial intelligence (AI) change the way governments pursue the common interest and leave no one behind? At its 23rd session in April 2024, the United Nations Committee of Experts on Public Administration highlighted the vast potential for AI to accelerate progress on the Sustainable Development Goals, from healthcare to education to social welfare. While its promises are immense, so are its risks.

Traditional and generative AI are two distinct approaches in the AI landscape. While the strengths of generative AI lie in creativity, the handling of uncertainty and novel applications, traditional AI excels in efficiency, interpretability and solving well-defined tasks. Both approaches have their strengths and limitations, and both hold significant potential for transformative applications.

AI systems nevertheless carry inherent challenges that warrant attention. These include biases that can perpetuate inequalities and discrimination if fairness is not central to their development and implementation. Equally significant are data security and privacy concerns, the environmental impact of large data centres and the lack of quality data, particularly for underrepresented groups. Together, these challenges underscore the imperative of robust data governance.

The evolving AI landscape also raises challenges for the future of work, including in the public sector, such as widening skills gaps and job displacement. Addressing these challenges necessitates proactive measures such as workforce retraining to mitigate the risk of deepening digital disparities. Moreover, ensuring transparency and interpretability in public decisions relying on AI is essential for upholding public trust and fostering accountability. Overdependence on the technology and the potential loss of human agency further underscore the importance of balancing technological advancement with the preservation of essential human capabilities.

The misuse of AI poses serious ethical and societal concerns, potentially leading to human rights violations and undermining democratic principles. These complex challenges require comprehensive ethical frameworks to guide the responsible development and deployment of AI technologies across diverse sectors, in line with the guiding principles developed by the OECD and UNESCO. Additionally, the dominance of big tech companies in AI development, and its impact on geopolitics, necessitates careful consideration and regulatory oversight to ensure equitable access to and utilization of AI resources.

Governments serve dual roles as both users and regulators of AI, holding significant responsibility in establishing robust safeguards and guardrails for privacy, security, and ethical AI use. Leading by example, they not only set standards but also gain valuable insights into effective regulation through their own use of the technology. 

The integration of AI in the public sector holds multiple promises in terms of improving the efficiency of government operations, the effectiveness of public policies, the quality of public services and the integrity of public management. By leveraging AI capabilities, governments can mitigate biases and enhance efficiency in critical areas, such as education and taxation. This potential for improved decision-making underscores the importance of integrating AI technologies responsibly and ethically to ensure equitable outcomes and uphold public trust.

Furthermore, the utilization of AI in the public sector necessitates a robust framework of accountability and ethical considerations to mitigate potential biases inherent in algorithmic decision-making. Careful consideration is imperative when employing AI for surveillance, both to safeguard privacy and security and to prevent misuse, such as racial profiling. Ensuring that AI tools respect fundamental rights and ethical principles is essential to mitigate the risk of discriminatory practices and uphold principles of fairness and equity.

There is a need for transparency and oversight mechanisms to ensure that AI-driven processes align with societal values and respect fundamental rights. It is essential to facilitate dialogue and collaboration among stakeholders to exchange insights and lessons learned, thereby advancing the responsible use of AI as a tool to augment human expertise rather than replace it in policy development and service delivery. In this vein, the G7 presidency of Italy, with the support of the OECD and UNESCO, has acknowledged the importance of addressing the responsible use of AI in and by the public sector.

More broadly, the governance of AI should aim to close the gap between technological advancement and accountability, transparency, ethics and integrity. It should comprise a legal framework to ensure that AI technologies are developed and adopted in ethical and responsible ways that serve humanity.

Additionally, AI governance must prioritize achieving the Sustainable Development Goals while ensuring equitable benefits for all and avoiding the exacerbation of existing inequalities. Building the capacities of developing countries is crucial to prevent them from being left behind in the AI revolution. Multistakeholder involvement is essential for the effective shaping of AI governance. Regular dialogue and review mechanisms should be established to facilitate continuous discussion and evaluation of rapidly evolving AI governance frameworks. This iterative process is crucial to adapt to the changing technology landscape and to address emerging challenges effectively.

The forthcoming United Nations Summit of the Future and its Global Digital Compact provide a unique opportunity to get it right, building on the work of the United Nations High-Level Advisory Body on AI. The United Nations system has a pivotal role to play in advancing a human-centred, rights-based approach to AI. In March 2024, the United Nations General Assembly adopted a landmark resolution on the promotion of “safe, secure and trustworthy” AI systems that will also benefit sustainable development for all.

The year 2024 is a pivotal one for a fairer digital revolution with greater inclusion of vulnerable groups and developing countries. Despite the challenges, the potential benefits of AI far outweigh its risks. To truly unlock its transformative power, a well-designed governance framework is essential.

By Sherifa Sherif, Member of the Committee of Experts on Public Administration and Professor of Public Administration, Faculty of Economics and Political Science, Cairo University, in collaboration with members of the Committee's working group on digital government