The government's report on AI doesn't recommend regulating it

The review of artificial intelligence argues a new AI council should be created but it wouldn't be in charge of regulating systems



Nine months after the government commissioned an independent review into artificial intelligence, the authors have revealed their findings. The major recommendations? AI and its applications shouldn't be subject to direct regulation, but an AI council should oversee the industry.



The report, commissioned in February, was described as a "major review" into the development of AI in the UK. After speaking to more than 100 experts, its authors, Jérôme Pesenti, from BenevolentAI, and Wendy Hall, a computer scientist at the University of Southampton, say there should be greater training and better access to data if the UK is going to compete with other countries on AI.
The proposed AI council, the report's authors argue, should "operate as a strategic oversight group" and allow for discussions around "fairness, transparency, accountability and diversity". By not recommending regulation of AI technologies, the report goes against recent calls from the likes of Elon Musk to introduce more checks and measures.
In January a report from the Alan Turing Institute called for an AI watchdog to be set up to audit and scrutinise algorithms. "They could go in and see whether the system is actually transparent and fair," the authors of the Turing Institute report said.
The government's report, titled Growing the artificial intelligence industry in the UK, says AI governance shouldn't be covered by the proposed AI council. It does, however, say guidelines proposed by the Royal Society could be used for AI applications in the UK.
Elsewhere within the new review's 18 recommendations, the authors argue a framework should be created to explain how decisions are made by AI systems. This should be developed by the data protection regulator, the Information Commissioner's Office (ICO), and should seek to outline how AI processes and services work. Academics working in machine learning have expressed concerns that such systems are "black boxes" that reach decisions that can't be explained. The EU's General Data Protection Regulation (GDPR) will also place more transparency obligations on businesses.
The report also says that data trusts should be created. These would establish a body to advise on how data used for training AI systems is handled. It could, in theory, prevent incidents such as the NHS and DeepMind's unlawful data sharing deal. "To use data for AI in a specific area, data holders and users currently come together, on a case by case basis, to agree terms that meet their mutual needs and interests," the report says. Data trusts would not replace the ICO.


Pesenti and Hall also argue that greater resources should be devoted to AI education. Their recommendations say 300 PhD places should be created at universities, the Alan Turing Institute should become the national institute for artificial intelligence, and online courses should be made publicly available.
Separately, the House of Lords is conducting a review into AI and its economic, ethical and social implications.
