The European Commission’s High-level Expert Group on Artificial Intelligence presented the Ethics Guidelines for trustworthy Artificial Intelligence on 8 April. EUA welcomes the guidelines, which come after a stakeholder consultation held earlier this year.
They are one of the components of the European strategy on Artificial Intelligence launched by the Commission in April 2018, aiming to increase public and private investments in artificial intelligence, make more data available, foster talent in the area and promote trust.
The purpose of the guidelines is to promote trust in artificial intelligence among citizens and build a competitive advantage for Europe in the field. They outline key European values and principles that need to be taken into consideration when developing and using artificial intelligence solutions, thus ensuring the most is made of the technology for the benefit of society.
The guidelines propose three preconditions to achieve “trustworthy artificial intelligence”: (1) it should comply with the law, (2) it should fulfil ethical principles and (3) it should be robust. The preconditions are complemented by seven key requirements that artificial intelligence applications should respect to be considered trustworthy: i) human agency and oversight; ii) robustness and safety; iii) privacy and data governance; iv) transparency; v) diversity, non-discrimination and fairness; vi) societal and environmental well-being; vii) accountability. The guidelines also include an assessment list that operationalises the key requirements and offers guidance for implementation.
EUA supports the Commission's efforts to promote favourable and safe conditions for the development and application of artificial intelligence solutions in Europe. Already today, artificial intelligence has a high impact on societies and the economy, and it is very likely that it will significantly change the way we live and work. This raises many ethical and legal questions that the EU and society at large should address to avoid the risk of misuse. Indeed, the guidelines proposed in the report are quite general, and they could be widely applied in many other industrial technology and business service sectors.
While the publication of the guidelines is an important step towards reassuring citizens on the ethical development and use of artificial intelligence, it is essential that the Commission remain vigilant towards emerging applications. The Commission should also establish a structured dialogue with universities on further developments, as they will be key in fostering the understanding of ethical issues related to artificial intelligence. With their unique profile, universities can significantly contribute to fostering ethical mindsets by ensuring adequate training for future developers, deployers and end-users, as well as by providing the proper skills and training of ethicists in this area. Universities are also an important platform for societal deliberation and for raising awareness of the impact of artificial intelligence solutions.
Following the presentation of the guidelines, the Commission will launch a pilot phase this summer to test the assessment list. All interested stakeholders, including public institutions, research institutes and companies, are welcome to participate in order to gather feedback for the improvement of the guidelines.
EUA will continue its engagement in the discussion of ethical considerations and support the implementation of the European artificial intelligence strategy.