The most common artificial intelligence applications today are on display in self-driving vehicles; in speech recognition systems capable of identifying, processing and interacting with human language; in computer vision applications that analyse and understand images; and in machine learning systems able to learn a task and improve their performance based on past experience.
As an example of these last two applications, at PICVISA we have designed Ecopick, our intelligent robot for waste separation using machine vision and machine learning to streamline the sorting process in recycling plants or quality control processes. Thanks to Ecopick, waste plants can increase their productivity at different stages of the value chain, with the consequent economic savings. Smart automation of recycling plants enables enhanced occupational safety in waste management tasks.
Artificial intelligence continues to evolve and further advances and developments are expected, but the benefits are already plentiful: this tech allows us to understand X-ray results better and faster, and it lets us know our customers better and adapt our marketing accordingly.
Nobody doubts the benefits, but there is an open debate about the possible consequences of artificial intelligence, and its applications in robotics and other systems, for the economy, employment and society as a whole. The debate involves governments, universities and industry, and each institution raises ethical dilemmas that will underpin future legislation for artificial intelligence.
The ethical dilemmas of artificial intelligence systems and applications
Some of the ethical dilemmas associated with artificial intelligence are listed below. Note that these risks are not unique to AI; they also arise with other technologies and automation applications:
- Will adoption and increased penetration of intelligent systems result in mass job losses? This technology boosts the appearance of other forms of employment and new professional skills, but will these be enough to offset potential job losses?
We don’t know yet, but we believe that the challenge posed by this disruptive technology lies in the ability of companies, employees, societies and governments in general to adapt.
- Are the risks of manipulation, security and bias greater than with other computer systems? Many artificial intelligence applications operate with algorithms based on large amounts of data and on statistical models. This can lead to biased decisions, or to systematic preferences that amount to discrimination. The technology can also be manipulated to nudge it towards a certain goal: changing prices, influencing elections, etc. This article from McKinsey explores the issue in further detail and offers leaders some keys to meeting such challenges.
- Is there a chance our cognitive skills and certain human relationships may be transformed? Delegating decision making, communication, planning and even some diagnostics to applications with AI could lead to a loss of personal competences and skills, as some authors have argued (AI & Global Governance: A New Charter of Rights for the Global AI Revolution, 2018. Groth, Nitzberg and Esposito).
- Loss of power for society and, in some applications, lack of control over data and privacy. We all remember cases such as Cambridge Analytica. Although AI itself was not used, the case shows how technology and data can be manipulated to benefit certain political or economic interests.
- Difficulty in assigning responsibility in the event of errors or failures in applications featuring AI. We might believe that, if an error in an intelligent system causes damage, the people who designed the algorithm may be held responsible. However, this becomes an increasingly grey area as the autonomy and decision-making power of intelligent systems grows. Self-driving cars are the standard example. A vehicle may have to choose between hitting a pedestrian and allowing a fatal accident for its own passenger; in making that choice, the algorithm effectively weighs one human life against another. If the decision causes third-party damage, who is responsible: the vehicle owner, the smart system or the manufacturer? And just as important: how does the algorithm decide that one life is worth more than another?
Linked to this problem is the fact that explaining or tracing the decision the intelligent system makes is difficult. The more complex the algorithm, the more difficult it becomes to explain the decision-making process. This makes assigning responsibility difficult.
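The bias risk described above can be made concrete with a minimal sketch. The data and the "model" here are entirely hypothetical: a trivial classifier that predicts the majority historical outcome per group. Trained on data that reflects a past disparity between two groups, it simply reproduces that disparity, which is the core of the bias problem with statistical models.

```python
from collections import defaultdict

# Hypothetical, historically biased training data: (group, outcome).
# Group "A" was approved 80% of the time, group "B" only 20%,
# even if we assume the groups are equally qualified.
train = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

# A deliberately trivial "model": count outcomes per group...
counts = defaultdict(lambda: [0, 0])
for group, outcome in train:
    counts[group][outcome] += 1

# ...and predict whichever outcome was historically more frequent.
def predict(group):
    negatives, positives = counts[group]
    return 1 if positives > negatives else 0

print(predict("A"))  # -> 1: group A is approved
print(predict("B"))  # -> 0: group B is rejected
```

Real systems are far more complex, but the mechanism is the same: a model fitted to biased historical data will encode and perpetuate that bias unless it is explicitly detected and corrected.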
Ethical regulation and legislation are necessary to accompany AI development
Since its inception, AI has had its supporters and detractors. Famous scientists such as the physicist Stephen Hawking, and even Marvin Minsky, who co-founded the field at the 1956 Dartmouth conference, have on many occasions warned of the huge risks it poses. Hawking went as far as saying that “the development of full artificial intelligence could spell the end of the human race.” In contrast, other experts see many advantages, although they accept the need to regulate its application. For instance, Amazon CEO Jeff Bezos argues that “AI is a perfect example of something that has really positive uses, so you don’t want to put the brakes on it; but, at the same time, there’s also potential for abuses of that kind of technology, so you do want regulations.”
Many experts now agree on the need to regulate the proper use of some aspects of AI on a global scale to ensure it is a fair, safe and transparent technology. In fact, one of the most widespread fears about AI is that it may fuel an arms race. Indeed, the Campaign to Stop Killer Robots is pressing for an international agreement under which UN member states commit not to use this technology for weapons.
The questions facing automated industry with artificial intelligence
In recent years, different industrial sectors have committed to automating their processes by tapping into advances in AI and robotics. At PICVISA, with 15 years of recycling sector experience, we are well aware of this. An increasing number of industries are demanding robotic solutions involving greater automation of their recycling plants to boost efficiency.
One of the most common criticisms of automating industrial processes is that it leads to job losses. However, a report by the McKinsey & Company consultancy, while acknowledging that half of industrial activities could be automated in the future, also argues this could benefit workers by improving their health and safety conditions. Automating certain tasks also frees up time and resources, which can then be devoted to other, higher added-value tasks.
Therefore, it is a mistake to assume that increased automation means job losses, as robots will be used to perform the repetitive, lower added-value and higher-risk tasks. Humans will focus on adding value where machines cannot, and new jobs are expected to emerge that we cannot imagine at present. The solution involves creating an environment where both actors, robots and humans, can work together and complement each other.
The EU has created Ethics Guidelines for handling applications with AI
The Ethics and Artificial Intelligence report published by the CaixaBank Chair of Corporate Social Responsibility (IESE) cites several organisations and institutions that have drawn up recommendations to guarantee the ethical and fair use of artificial intelligence. Noteworthy examples include:
- The list of principles published by various EU bodies,
- The recommendations of the World Economic Forum
- And the reports published by the United Nations UNICRI Centre for Artificial Intelligence and Robotics and by UNESCO.
As seen above, the risks listed concern both public and private leaders, even if no firm legislation is in place. The EU has drafted Ethics Guidelines seeking to establish good practices for AI management. These key points stand out:
- AI applications should support fundamental human rights, and not diminish, limit or infringe on human autonomy.
- AI requires algorithms to be safe, reliable and robust enough to deal with errors or inconsistencies during all system life cycle stages.
- Citizens must have full control over their own data, and this data must not be used to harm or discriminate against them. In waste management, working with intelligent sorting systems can impinge on the right to privacy, as a lot of information about citizens’ consumption habits can be obtained.
- Ensuring AI system traceability is always important to maintain transparency.
- AI applications must consider all types of skills, characteristics and requirements for all types of people, and guarantee accessibility and non-discrimination.
- Applications with AI must be used to enhance positive social change and improve sustainability and ecological responsibility.
- It is important to establish mechanisms to ensure accountability for AI systems and their results. The design and use of intelligent systems must be preceded by a clear allocation of responsibility for any damage they may cause.