B. Technology in service of democracy and fundamental rights
"Because the computer says so" can never be an acceptable explanation for a government decision that affects citizens. The application of automated decision-making calls for checks and balances in order to protect human dignity and ensure good governance. The GDPR sets legal limits for the use of algorithms in decision-making. The general rule[1] is that governments or companies cannot assign decisions to computers if such decisions could bring about significant disadvantages for citizens or consumers. In exceptional cases in which automated decision-making is allowed, the citizen or consumer has the right to obtain an explanation, to object, and to request that a new decision is taken by a person instead of a computer.
ICT systems must therefore make it possible for government professionals to overrule the algorithm on the basis of their own weighing of the data and interests involved.[2] An official must be able to say 'no' even if the algorithm says 'yes'.
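By way of illustration, such an override point could be sketched as follows. This is a minimal sketch only: the Decision record, the toy assessment rule, and all names are invented for the example and do not describe any real municipal system.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        outcome: str        # e.g. "grant" or "deny"
        reasons: list[str]  # the grounds for the outcome
        decided_by: str     # "algorithm" or an official's identifier

    def algorithmic_proposal(application: dict) -> Decision:
        """Stand-in for the automated assessment (toy rule, not real policy)."""
        income = application.get("income", 0)
        outcome = "grant" if income < 20_000 else "deny"
        return Decision(outcome, [f"reported income: {income}"], "algorithm")

    def final_decision(application: dict, override: Decision | None = None) -> Decision:
        """The official's judgment always takes precedence over the algorithm."""
        proposal = algorithmic_proposal(application)
        if override is not None:
            # The official says 'no' even though the algorithm says 'yes';
            # the override carries its own reasons and is attributable to a person.
            return override
        return proposal

    # Usage: an official overrules the algorithm's 'grant' proposal.
    decision = final_decision(
        {"income": 18_000},
        override=Decision("deny", ["documents found inconsistent on inspection"],
                          "official-42"),
    )
    print(decision.outcome, decision.reasons, decision.decided_by)

The design point is that the human override is a first-class step in the decision flow, recorded with its own reasons, rather than an afterthought bolted onto the algorithm's output.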
Governments need to demonstrate that their algorithms are fair. Automated decisions need to be well reasoned so that the citizens concerned can verify them, all the more so because the rules for automated decisions are not always a seamless translation of the underlying laws and regulations. Governments should make the algorithms they use public; explain their decision rules, assumptions, and legal and data sources; and have the algorithms tested by independent experts, including ethicists. These tests must be repeated regularly, in particular for self-learning algorithms.[3] Among other things, this involves ensuring that the algorithm does not develop a discriminatory bias with regard to certain social groups.[4]
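One elementary form such a bias test could take is a check for demographic parity: do favourable outcomes occur at roughly the same rate across social groups? The sketch below is illustrative only; the group labels, the sample data, and any acceptable tolerance are assumptions that the independent testers would have to supply.

    def demographic_parity_gap(outcomes: list[tuple[str, bool]]) -> float:
        """Difference between the highest and lowest share of favourable
        outcomes across groups; 0.0 means every group fares alike.

        `outcomes` pairs a group label with whether the decision was favourable.
        """
        counts: dict[str, tuple[int, int]] = {}
        for group, favourable in outcomes:
            total, positives = counts.get(group, (0, 0))
            counts[group] = (total + 1, positives + int(favourable))
        shares = [positives / total for total, positives in counts.values()]
        return max(shares) - min(shares)

    # Illustrative sample: favourable-outcome rates of 2/3 for group A, 1/3 for B.
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    print(f"parity gap: {demographic_parity_gap(sample):.2f}")  # 0.33

Demographic parity is only one of several fairness criteria, and the choice among them is itself a normative judgment, which is why the text calls for involving ethicists alongside technical experts.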
The cities of Helsinki and Amsterdam have jointly developed a public register of algorithms. Such a register lists the algorithms that the municipality uses and explains their workings. Citizens are invited to give feedback, with the aim of building human-centred artificial intelligence.[5]
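The text does not specify how such a register is implemented, but its entries could be modelled roughly as follows. Every field name here is an assumption, based only on what the register is said to contain: the algorithm in use, an explanation of its workings, and a channel for citizen feedback.

    from dataclasses import dataclass, field

    @dataclass
    class RegisterEntry:
        """Illustrative shape of one entry in a public algorithm register."""
        name: str                # which algorithm the municipality uses
        purpose: str             # what it is used for
        how_it_works: str        # plain-language explanation of its workings
        data_sources: list[str]  # what data it draws on
        contact: str             # where citizens can send feedback
        feedback: list[str] = field(default_factory=list)

    entry = RegisterEntry(
        name="parking-permit-triage",  # invented example, not a real register entry
        purpose="order permit applications for manual review",
        how_it_works="scores applications on completeness; takes no final decisions",
        data_sources=["application form"],
        contact="algorithms@example-city.example",
    )
    entry.feedback.append("Why does completeness affect my place in the queue?")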
Amsterdam is also developing a method to assess the algorithms used in the city – both by the municipality and by companies – for detrimental effects such as discrimination. One of the reasons for the assessment was an experiment with a self-learning algorithm that automatically handled residents' complaints about their neighbourhood. Had the algorithm been put into service, neighbourhoods with well-educated citizens who know how to complain would have been cleaned more thoroughly by the city's sanitation department than other neighbourhoods.[6]
Governments can better comply with their duty to state reasons if they treat the right to explanation as a design requirement when the algorithm's code is written. Truly smart algorithms must be able to explain in understandable language how they arrived at an outcome. This facilitates human intervention in the decision-making process.[7]
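What that design requirement could mean in practice is sketched below: a decision function that returns its outcome together with a plain-language trace of the rules it applied, so the reasons can be quoted in the decision letter and checked by an official. The rule and the figures are invented for the example and have no legal basis.

    def decide_benefit(income: int, household_size: int) -> tuple[bool, list[str]]:
        """Returns the outcome plus the reasons behind it, phrased for the citizen."""
        reasons = []
        threshold = 15_000 + 5_000 * household_size  # invented rule, not real law
        reasons.append(
            f"The income limit for a household of {household_size} is EUR {threshold:,}."
        )
        if income <= threshold:
            reasons.append(f"Your income of EUR {income:,} is at or below that limit.")
            return True, reasons
        reasons.append(f"Your income of EUR {income:,} is above that limit.")
        return False, reasons

    granted, reasons = decide_benefit(income=28_000, household_size=2)
    print("granted" if granted else "denied")
    for reason in reasons:
        print("-", reason)

Because the explanation is produced by the same code that takes the decision, it cannot drift out of step with the actual decision logic, which is precisely what makes meaningful human intervention possible.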