“The UK has a unique opportunity to shape AI positively for the public’s benefit, and to lead the international community in AI’s ethical development, rather than passively accept its consequences,” said Lord Tim Clement-Jones, Liberal Democrat peer and chairman of the committee.
According to the report, AI in the UK: Ready, Willing and Able?, AI will have an enormous impact on the nature of work in the UK, with some jobs disappearing, some being enhanced, some yet-unknown jobs being created, and many changing. To cope with this change, the committee recommends that early education in AI, together with retraining throughout life, will be necessary to allow citizens to “flourish mentally, emotionally and economically alongside [AI].”
Among other approaches, this could involve the creation of a state-run service to provide free education throughout life, as proposed by Leader of the Opposition Jeremy Corbyn at the Labour Party conference in September 2017: a ‘National Education Service’.
The report also suggests that children should learn about ethical design and use of AI tools at school, with the subject becoming an “integral part of the curriculum”.
“The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use,” said Lord Clement-Jones.
“AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these. An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.”
The report is largely dedicated to how AI can be adopted with an ethical and controlled approach in accordance with five principles chosen by the committee: that AI should be developed for the “common good and benefit of humanity”; that it should operate with intelligibility and fairness; that it should not come at the expense of privacy; that education is required to allow citizens to live with its advances; and that AI should never have the capability to “hurt, destroy or deceive human beings”.
The committee has called for these five principles to serve as the heart of a ‘cross-sector AI code’ which could be adopted nationally, and even internationally.
The development of AI-equipped weapons has been a source of fear worldwide, with ongoing UN meetings dedicated to discussion of a worldwide ban on lethal autonomous weapons. Meanwhile, the dependence of machine learning – the computational methods serving as the basis for many AI applications – on huge amounts of data has led to concerns over the security of individuals’ personal data.
These fears have been exacerbated following reports of mass data harvesting for the purposes of developing political adverts based on personality profiles of Facebook users.
The report recommends that data-gathering practices change so that individuals have fair access to their own data and the ability to protect their privacy and agency. This will require new legislation, as well as action against the monopolisation of data by tech giants, beginning with a review of data use by the UK competition watchdog.
The peers have also warned against AI systems displaying bias due to past and present prejudices being “unwittingly” built into these systems, suggesting that datasets used in AI should be audited for bias. For instance, a recent study suggested that most commercial facial recognition tools failed to correctly identify women and people with darker skin. Recruiting and training more diverse groups of people to become AI specialists could also help prevent these biases, the report suggests.