The field of AI ethics emerged from the need to address the individual and societal harms AI systems might cause.
These harms rarely arise from deliberate choices. Most AI developers do not set out to build biased or discriminatory applications, or applications that invade users’ privacy.
The main ways AI systems cause unintentional harm are:
- Misuse: systems are used for purposes other than those for which they were designed and intended.
- Questionable design: creators have not thoroughly considered technical issues related to algorithmic bias and safety risks.
- Unintended negative consequences: creators have not thoroughly considered the potential negative impacts their systems may have on the individuals and communities they affect.
The field of AI ethics in government mitigates these harms by providing project teams with the values, principles and techniques needed to produce ethical, fair and safe AI applications.
Just this year, Deloitte released its report on artificial intelligence ethics during the World Government Summit held in Dubai.
“While there is an increasing interaction between AI technologies and our socio-political and economic institutions, consequences are not well defined. The rapid development of AI has raised numerous ethical concerns, pushing us to redefine the future of work and society,” said Rashid Bashir, Partner and Public Sector Leader at Deloitte Middle East.
In this report, Deloitte outlined three key recommendations for artificial intelligence in government:
- Predicting the impact of the evolution of AI: Governments are encouraged to develop a risk-assessment framework and prediction tools that will anticipate the impact of artificial general intelligence (AGI), including psychological effects on human beings.
- Focus on transparency and trust: AI systems should be designed on principles that allow them to be assessed objectively for transparency and accountability.
- Setting up a global-standards body: In a short period, an increasing number of countries have released their own AI ethics guidelines. The current challenge is to build a code of ethics for AI that has global reach and is internationally acceptable.
Individually, though, some countries are far ahead of others in terms of 'AI readiness' and are already in the midst of developing their own guidelines for AI ethics in government.
In Australia, the CSIRO have released a discussion paper highlighting the need for the development of AI in Australia to be governed by a sufficient framework, so that nothing is deployed on citizens without appropriate ethical consideration.
They aim to discuss key issues raised by large-scale AI, seeking answers to a handful of questions that are expected to inform the government's approach to AI ethics in Australia. In order to guide the framework, the CSIRO have created eight core principles:
- Transparency & explainability. People must be informed when an algorithm that impacts them is being used, and they should be told what data the algorithm uses to make its decisions.
- Generates net-benefits. The AI system must generate benefits for people that are greater than the costs.
- Regulatory and legal compliance. The AI system must comply with all relevant international, Australian federal, state/territory and local government obligations, regulations and laws.
- Privacy protection. Any system (including AI systems) must ensure people’s private data is protected and kept confidential, and must prevent data breaches which could cause reputational, psychological, financial, professional or other types of harm.
- Accountability. People and organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of that algorithm, even if the impacts are unintended.
- Fairness. The development or use of the AI system must not result in unfair discrimination against individuals, communities or groups. This requires particular attention to ensure the “training data” is free from bias or characteristics which may cause the algorithm to behave unfairly.
- Contestability. When an algorithm impacts a person, there must be an efficient process to allow that person to challenge the use or output of the algorithm.
- Do no harm. Civilian AI systems must not be designed to harm or deceive people, and they should be implemented in ways that minimise any negative outcomes.
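A principle like fairness can be made operational with simple checks on training data before a model is built. The sketch below is purely illustrative (the function names, the toy data and the choice of a demographic-parity metric are assumptions of this example, not part of the CSIRO framework): it measures whether favourable outcomes occur at noticeably different rates across demographic groups.

```python
# Illustrative sketch only: a demographic-parity check on training data,
# one possible way to probe the "Fairness" principle. Names, data and
# threshold are hypothetical, not drawn from any official guideline.

def positive_rate(records, group):
    """Share of favourable outcomes recorded for one demographic group."""
    outcomes = [r["approved"] for r in records if r["group"] == group]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def parity_gap(records, group_a, group_b):
    """Absolute difference in favourable-outcome rates between two groups."""
    return abs(positive_rate(records, group_a) - positive_rate(records, group_b))

# Toy loan-approval records (1 = approved, 0 = declined)
data = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

gap = parity_gap(data, "A", "B")
if gap > 0.1:  # threshold chosen arbitrarily for illustration
    print(f"warning: parity gap of {gap:.2f} warrants review")
```

In practice a project team would use an established fairness toolkit and a threshold agreed with stakeholders; the point of the sketch is simply that outcome-rate gaps can be measured and flagged before an algorithm is deployed.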
Other countries that rank highly in terms of AI readiness, such as the United Kingdom, have created similar guidelines for AI ethics in government.
Moving forward, as artificial intelligence increasingly permeates every aspect of our lives, it is vital that governments across the world develop their own guidelines to ensure that AI and other emerging technologies are geared towards greater societal benefit.