Focus on AI in central government

The Dutch government does not know if many of its artificial intelligence (AI) systems work as intended. Government organisations say they have not weighed up the opportunities of more than half their AI systems against the risks. A focus investigation by the Netherlands Court of Audit of 70 government organisations, furthermore, concludes that there is an incentive for the organisations to classify their systems as low risk.

Together, the 70 organisations said they were using or had used 433 AI systems. AI is not yet widely used within central government. Most systems (167) are still experimental and a clear majority of the organisations (88%) use no more than 3 systems. Only 5% of the systems have been published in the Algorithm Register. The organisations that use AI the most are the police and the Employee Insurance Agency (UWV), with 23 and 10 systems respectively. The Court of Audit’s investigation provides a first insight into the use and purpose of AI in central government.

Deployed chiefly to improve internal processes

About two-thirds of the systems are deployed chiefly to improve internal processes and do not have a direct impact on citizens and businesses. They are used to analyse and process large volumes of information in order to optimise workflows. AI that automatically converts speech to text or instantly anonymises documents can save the government a lot of time and money. Analysing internal documents also costs far less effort when performed automatically by an AI system rather than manually by a civil servant.

AI often used for inspection and enforcement, and knowledge processing

Application of AI systems: the most common applications are knowledge processing (124 times) and inspection and enforcement (82 times). Two-thirds of the systems have no direct impact on citizens and businesses.

Performance unknown

Remarkably, the organisations often do not know whether their AI systems are working correctly. For 35% of the systems in use, it is not known whether they are living up to expectations, either because no goals were set in advance or because success was simply never measured. Interestingly, the organisations are often enthusiastic about AI systems they no longer use: according to the organisations, 82 of the 141 terminated systems (58%) performed as expected or even better. Nevertheless, they were terminated owing, for instance, to a lack of capacity for further development.

Risks differ per system

The use of AI offers opportunities but is not without risk. Risks will be mitigated in part by the new EU AI regulation, under which all AI systems must be risk assessed. A system that administers benefits for vulnerable households, for instance, poses more risks than one that summarises internal documents. AI systems posing an unacceptable risk will be prohibited, and high-risk systems will have to meet additional conditions in the future. The organisations classify most of their AI systems as ‘minimal risk’. However, this does not mean they are risk free: there may still be a risk of privacy violations, weak information security or harm to citizens and businesses through unfair disadvantage.

Do you have any feedback on this investigation?

We welcome all feedback on our audits and investigations. What do you think about our report? If you have any questions or need further information, mail us at feedback@rekenkamer.nl. We read all emails carefully and treat them in confidence.