Central government often does not assess risks of AI
AI deployed chiefly to increase efficiency of internal processes
The Dutch government does not know whether many of its artificial intelligence (AI) systems work as intended. Government organisations say they have not weighed the opportunities of more than half of their AI systems against the risks. A focus investigation by the Netherlands Court of Audit of 70 government organisations furthermore concludes that the organisations have an incentive to classify their systems as low risk.
Together, the 70 organisations said they were using or had used 433 AI systems. AI is not yet widely deployed in central government. The largest group of systems (167) is still experimental, and a clear majority of the organisations (88%) use no more than 3 systems. Only 5% of the systems have been published in the Algorithm Register. The organisations that use AI most are the police and the Employee Insurance Agency (UWV), with 23 and 10 systems respectively. The Court of Audit’s investigation provides a first insight into the use and purpose of AI in central government.
Deployed chiefly to improve internal processes
About two-thirds of the systems are deployed chiefly to improve internal processes and do not have a direct impact on citizens and businesses. They are used to analyse and process large volumes of information in order to optimise workflows. AI that automatically converts speech to text or instantly anonymises documents can save the government a lot of time and money. Analysing internal documents also takes far less effort when performed automatically by an AI system than manually by a civil servant.
AI often used for inspection and enforcement, and knowledge processing
Performance unknown
Remarkably, the organisations often do not know whether their AI systems are working correctly. For 35% of the systems in use, it is not known whether they are living up to expectations. This can be because no goals were set in advance, or because performance against the goals was never measured. Interestingly, the organisations are often enthusiastic about AI systems they no longer use: according to the organisations, 82 of the 141 terminated systems (58%) performed as expected or even better. Nevertheless, they were terminated owing, for instance, to a lack of capacity for further development.
Most organisations use AI
| | Number of organisations |
|---|---|
| Use AI | 40 |
| Are experimenting with AI | 15 |
| Do not use AI | 14 |
32 of the 40 organisations use 3 AI systems at most
The results of 35% of the AI systems are unknown
| Results of AI usage in all organisations | % |
|---|---|
| Unknown | 35 |
| Known | 65 |
Risks differ per system
The use of AI offers opportunities but is not without risk. Risks will be mitigated in part by the new EU AI regulation (the AI Act), under which all AI systems must be risk assessed. A system that administers benefits for vulnerable households, for instance, poses more risks than one that summarises internal documents. AI systems that pose an unacceptable risk will be prohibited, and high-risk systems will have to meet additional conditions. The organisations classify most of their AI systems as ‘minimal risk’. This does not mean they are risk free, however: there may still be risks of privacy violations, weak information security or unfair disadvantage to citizens and businesses.
Ewout Irrgang, the Court of Audit’s vice-president, believes the use of AI in central government is generally beneficial: ‘Many systems used for operational management carry few risks and are already making the government more efficient. Use is still limited, but our impression is that government processes are faster, cheaper and more effective. But we also see risks. One problem is that the government itself must assess the risks. There’s an incentive to classify an AI system as minimal risk so that it doesn’t have to satisfy all the requirements. A form of oversight and control is needed so that the House and the government can be confident that enough safeguards are in place for AI systems with a higher risk.’