ICT research and technology are leading to an unstoppable growth of AI, but what is the relationship between artificial intelligence and ethics? Let’s find out together in this article.
Numerous benefits are expected from the availability of new technologies with ever greater and more powerful computational capabilities, from the evolution of the Internet of Things, and from the large amount of data that these new devices are able to collect and process. Thanks to artificial intelligence, new possibilities open up for robotic systems and machines equipped not only with motor skills but also with reasoning skills. Robots are designed to support people in household chores as well as in health care. At the same time, all these innovations bring new ethical problems, towards which Europe has shown a certain sensitivity: not only with regard to new forms of human-machine interaction, but above all with reference to the identity and security of the human person, fair access to technological resources, and freedom of research.
In 2018 the European Union issued guidelines to ensure an ethical approach to artificial intelligence. The initiative was launched following the creation of a working group and a public consultation in which citizens, researchers and institutions took part. On April 8th, 2019, a list was published of the requirements necessary to place the adjective – Trustworthy – before the term Artificial Intelligence (AI).
The European Commission has the distinction of being the first institution in the world to have drawn up, in April 2019, the “Ethics Guidelines for Trustworthy Artificial Intelligence”. The purpose of the document – drafted by a group of 52 high-level experts – is to ensure that AI systems are, from a technical point of view, “respectful of the law and ethical values, taking into account the social environment”.
In a document issued by the European Commission we read: “The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust.”
Key Guidance for Ensuring Ethical Purpose
This document, the Ethics Guidelines for Trustworthy Artificial Intelligence, also points to the existing regulatory framework, highlighting the need for artificial intelligence to comply with the law. Its key guidance can be summarised as follows:
- Ensure that AI is human-centric: AI should be developed, deployed and used with an “ethical purpose”, grounded in, and reflective of, fundamental rights, societal values and the ethical principles of Beneficence (do good), Non-Maleficence (do no harm), Autonomy of humans, Justice, and Explicability. This is crucial to work towards Trustworthy AI.
- Rely on fundamental rights, ethical principles and values to prospectively evaluate possible effects of AI on human beings and the common good. Pay particular attention to situations involving more vulnerable groups such as children, persons with disabilities or minorities, or to situations with asymmetries of power or information, such as between employers and employees, or businesses and consumers.
- Acknowledge and be aware of the fact that, while bringing substantive benefits to individuals and society, AI can also have a negative impact. Remain vigilant for areas of critical concern.
AI systems do not operate in a lawless world. A number of legally binding rules at European, national and international level already apply or are relevant to the development, deployment and use of AI systems today. Legal sources include, but are not limited to: EU primary law (the Treaties of the European Union and its Charter of Fundamental Rights), EU secondary law (such as the General Data Protection Regulation, the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination Directives, consumer law and Safety and Health at Work Directives), the UN Human Rights treaties and the Council of Europe conventions (such as the European Convention on Human Rights), and numerous EU Member State laws. Besides horizontally applicable rules, various domain-specific rules exist that apply to particular AI applications (such as for instance the Medical Device Regulation in the healthcare sector).
The requirements for ethical Artificial Intelligence
In the document issued on 8th April 2019, the European Commission indicates the requirements for ethical AI, together with a roadmap that provides for the involvement of different stakeholders.
Let’s see this list of requirements.
Human agency and oversight
Artificial intelligence systems should respect fundamental rights and support social equity, without restricting human autonomy.
Technical robustness and safety
Trustworthy AI requires safe algorithms that can withstand errors and inconsistencies throughout the entire life cycle of the systems and that resist manipulation or illegal use.
Privacy and data governance
Data protection must be addressed early in the design process of AI systems, in order to give citizens full control over their data and to ensure that it will not be used against them, for example for discriminatory purposes.
Transparency
The traceability of artificial intelligence systems must be guaranteed.
Diversity, non-discrimination and fairness
Artificial intelligence systems should take into account the full range of human abilities, remain free of the unfair bias that unsuitable governance models can introduce, and ensure accessibility for all.
Environmental and societal well-being
The use of artificial intelligence systems should promote well-being, environmental sustainability and ecological responsibility, in support of sustainable development.
Accountability
Mechanisms must be in place to guarantee accountability for artificial intelligence systems and their outcomes, supported by continuous auditability of the systems.
Final considerations
European Commissioner for Digital Economy and Society Mariya Gabriel said she is “confident that these ethical guidelines will bring innovation in the field of Artificial Intelligence”.
There are also contrasting opinions, such as that of Daniel Castro, vice president of the ITIF think tank, according to whom this path will cause Europe to fall further behind China and the United States. Castro motivates his opinion by saying: “Consumers care that a product is effective. There is no evidence that they are willing to pay for a product simply because it is ethical.”
Indeed, Europe’s great attention to privacy, especially since the GDPR, limits the amount of data that companies can collect and use. This may put Europe at a disadvantage compared to countries such as the USA and China, where the protection of personal information is clearly less strict. To these objections, however, the group of experts responded that just as there are consumers willing to pay more for organic food, there will be consumers who demand trustworthy AI: a more demanding market niche, and therefore one interested in ethical products made in Europe.
Stay up-to-date by reading our Journal.