As the world moves towards the challenges of the twenty-first century, Artificial Intelligence is frequently identified as carrying both potential threats – e.g. reproducing discrimination rooted in historical data biases – and the potential to become a relevant instrument for addressing strategic, global-scale issues such as climate change.
With AI’s increasing adoption, the intersection of these two spheres is gaining attention from a public-governance perspective.
How can the use and implementation of AI in a public-policy environment be made both efficient and ethically prepared for the challenges the EU faces as a whole today – many of them structurally inherited – as well as for the years to come?
The EU is shaping European countries’ digital future with an AI strategy in place, focusing on AI’s excellence and trustworthiness, solidified by a legal framework “to address fundamental rights and safety risks specific to the AI systems” that restricts or bans certain uses of artificial intelligence within its borders – while still failing to avert possible misuses and leaving open questions that urgently need addressing to this day.
This comes as no surprise: it is widely understood that in many cases AI algorithms are not prepared to approach, in a holistic fashion, questions that are simple – and at times intuitive – from a human perspective.
Indeed, we have to remain realistic about the words of one of computing’s founding fathers: “If a machine is expected to be infallible, it cannot also be intelligent.” – Alan Turing
In other words, a computing system built on mathematical formulations cannot, by itself, respond to every angle of the questions it is designed to address if it is also meant to answer them consistently.
This matters because the current level of Artificial Intelligence development – most often manifested as Machine Learning – amounts to “the use of data and algorithms to imitate the way that humans learn, gradually improving the accuracy” (IBM’s definition), in the humanly incomplete fashion described above.
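The iterative “learning from data, gradually improving the accuracy” loop in IBM’s definition can be sketched with a toy perceptron. This is an illustration of the general idea only, not a method from the text; the task (learning logical AND) and all parameters are invented for the example.

```python
import random

# Toy data: learn the logical AND of two binary inputs.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(w, b, x):
    # Threshold unit: fires 1 if the weighted sum exceeds zero.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def accuracy(w, b):
    return sum(predict(w, b, x) == y for x, y in DATA) / len(DATA)

def train(epochs=10, lr=0.1):
    random.seed(0)  # fixed seed so the run is reproducible
    w, b = [random.uniform(-1, 1), random.uniform(-1, 1)], 0.0
    history = []
    for _ in range(epochs):
        for x, y in DATA:
            err = y - predict(w, b, x)  # weights change only on mistakes
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
        history.append(accuracy(w, b))  # accuracy climbs across epochs
    return history

print(train())
```

The point of the sketch is the shape of the process: the system starts from arbitrary weights and improves its accuracy only by adjusting to the data it is shown – it has no holistic grasp of the question it answers.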
As is widely understood in the field, these estimations come with risks in terms of privacy, bias and discrimination, calling into question the role of human judgment and fairness, and at times carrying silent costs to the very purpose the algorithms were designed for in the first place.
Combined with epistemologically dubious model interpretability, these risks generate mistrust at face value.
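One of the simplest ways an institution can make such bias risks concrete is a disparity audit over an algorithm’s decisions – here, the demographic parity difference (the gap in approval rates between two groups). The decision records and group labels below are entirely hypothetical, invented for illustration:

```python
# Hypothetical decision log: (protected group, approved?)
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

def selection_rate(group):
    # Share of positive decisions within one group.
    outcomes = [y for g, y in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity difference: 0 means equal treatment of the groups.
disparity = selection_rate("A") - selection_rate("B")
print(f"Group A rate: {selection_rate('A'):.2f}")   # 0.75
print(f"Group B rate: {selection_rate('B'):.2f}")   # 0.25
print(f"Parity difference: {disparity:.2f}")        # 0.50
```

A gap this size would flag the system for closer human review – the kind of check the text argues public institutions must be capable of performing.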
According to the US think tank Center for Data Innovation (CDI), the European Commission’s proposed regulation would “kneecap the EU’s nascent AI industry before it can learn to walk”, suggesting that instead of “focusing on actual threats like mass surveillance, disinformation, or social control”, the Commission is “micro-managing the use of AI across a vast range of applications”. The CDI adds that small and medium-sized enterprises (SMEs) with a turnover of $12 million deploying ‘high-risk’ systems could see as much as a 40% reduction in profit as a result of the legislation. Regardless of whether the right EU regulations are in place, the development of the EU’s AI industry is already being shaped by the rigor and effectiveness of today’s AI policymaking.
The past 15 years of development have moved towards highly data-intensive algorithms capable of discovering new drugs, developing self-driving cars, recognizing speech and attempting to mimic human language – with the main focus placed on the modelling stage. Today, the future of AI finds a complementary change of course, “from bits to things” (Andrew Ng, one of the most prominent figures in AI and founder of LandingAI and DeepLearning.AI). The fallible data-centric questions – sensitive to developers’ intuitive criteria and to algorithms’ incomplete interpretation of big data – can instead acquire a new, structured framework, independent of data size and enriched with the context and domain knowledge still unused to its full potential, marking the direction for the ten years to come. Data-centric AI is envisioned to deal automatically with explanatory factors, detect data inconsistencies, remove data noise with clearer instructions and more accurate error analysis, and prepare data more autonomously, simplifying AI development.
These challenges should account not only for data cascades and human cognitive forms of bias, but also for both conscious and subconscious forms of human thinking and interpretation across contextualized fields of knowledge.
Against this reality, AI’s data-centered ethical methodologies are acquiring a new kind of relevance, accompanied by the introduction of value-sensitive AI ethics by design and increased human involvement in AI development.
Standing at the center of the data-ethics debate as both regulator and owner of data at the societal level, public institutions carry a considerable responsibility: they must be capable of identifying and addressing algorithmic decision-making biases whose socioeconomic policy implications can be dangerous for society as a whole.
In a systematic literature review on the implications of the use of AI in public governance, Zuiderwijk, Chen, & Salem (2021)¹ confirm that clearer, more collaborative, widespread and focused development is still missing to this day.
In their words, the rise of AI use in government is triggering many public governance questions for public institutions worldwide, spanning challenging economic problems; societal concerns; social and ethical dilemmas; and governance questions. In their view, future research on the topic should become more multidisciplinary and methodologically diverse, address more specific forms of AI, and develop a conscious understanding of the implications of scaling AI across public institutions.
While many remarkable achievements have been accomplished in AI over the past few years, there is still much we do not understand. As the technology’s usage increases across sectors, computational flaws present since its inception remind us very firmly that the need for ethically grounded innovation at the core has been there from the start – and today’s world needs it more than ever.
¹ “The rise of AI use in government, coupled with increased sophistication of AI applications, is triggering many public governance questions for governments worldwide. These include challenging economic problems related to labor markets and sustainable development (OECD, 2019a; World Economic Forum, 2018); societal concerns related to privacy, safety, risk, and threats (Yudkowsky, 2008); social and ethical dilemmas about fairness, bias, and inclusion (International Labour Organization, 2019); and governance questions related to transparency, regulatory frameworks, and representativeness (OECD, 2019b). For example, how does the implementation of specific AI technologies affect how an actor is accountable and responsible when government officials make decisions based on AI-technology (Wirtz et al., 2020)? And what policies and regulations can be used to govern AI use in specific government organizations? […]
Process-wise, future research on the implications of the use of AI for public governance should move towards more public-sector-focused, methodologically diverse, empirical, multidisciplinary, and explanatory research and focus more on specific forms of AI rather than AI in general. […] Furthermore, the research agenda calls for research into managing the risks of AI use in the public sector, governance modes possible for AI use in the public sector, performance and impact measurement of AI use in government, and impact evaluation of scaling-up AI usage in the public sector.”
Zuiderwijk, A., Chen, Y. C., & Salem, F. (2021). Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda. Government Information Quarterly, 38(3), 101577.