Artificial intelligence as a threat
For decades we have been warned about the dangers of technological progress, in particular the development of artificial intelligence (AI): the loss of our jobs, automated surveillance and profiling deciding our place in society, even the destruction of the species by mechanical terminators. As usual, it is the authors of sci-fi books and blockbusters who take the lead in describing these effects. But scientists recognise them too, and are trying to describe them, measure them and find ways to limit the harm.
Pros and cons
No one denies that AI affects the education system, the labour market, medicine and industry. Ultimately it can shape our quality of life or, in extreme cases, threaten it. It is supposed to improve and accelerate decision-making. The growing use of artificial intelligence brings, among others, the following effects:
- Thanks to AI, it is possible to track people suspected of serious crimes or terrorism. It is also possible to put people who hold certain beliefs under surveillance. From there, it is only a step to scoring citizens (for their ‘morality’ or ‘ethics’ in general), which can pose a threat to human rights and democracy.
- AI can be used to create audio and visual content, or to manipulate it so that it appears authentic (fake news, undermining the credibility of video footage, e.g. in courts).
- Micro-targeting, which uses profiling based on automatically collected information, allows content and forms of communication to be tailored to each individual recipient. This can lead to polarisation and division of society, and to manipulation, both individual and collective (e.g. of elections).
- Incorrectly programmed algorithms or incorrect data can cause machine-made decisions that are detrimental to those affected, e.g. a loan denied on the basis of a wrong assumption (a minimal sketch of this failure mode follows this list).
- Over-reliance on technology can be life-threatening (e.g. accidents caused by autonomous vehicles, or by autonomous systems operating nuclear power plants).
- Lethal autonomous weapons systems may be able to decide who, when and where to fight – without human intervention.
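To make the "wrong assumptions" point concrete, here is a minimal, hypothetical sketch of a credit rule with a flawed assumption baked in. All names, fields and thresholds are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    income: float          # declared monthly income
    years_employed: float  # total years in employment

def approve_credit(a: Applicant) -> bool:
    # Flawed assumption baked into the rule: "a short employment history
    # means an unreliable borrower". It silently rejects young applicants
    # with stable, well-paid jobs.
    if a.years_employed < 5:
        return False
    return a.income > 3000

# A well-paid applicant two years into their career is rejected purely
# because of the flawed assumption, not their actual ability to repay.
print(approve_credit(Applicant(income=8000, years_employed=2)))  # False
```

The same failure occurs with correct code and skewed data: a model trained on historical decisions simply learns the historical bias.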
Artificial intelligence and the labour market
The potential loss of jobs should not be forgotten either. Oxford Economics estimates that 20 million jobs will disappear worldwide by 2030 due to increasing automation[1] – and that figure applies to industry alone. McKinsey & Company went even further, indicating that over the same period up to 800 million people may lose their jobs to automation (although the report's authors lean towards a more optimistic scenario – 400 million)[2]. Millions of people will have to change jobs or upgrade their skills.
Physical tasks in predictable environments, such as operating machinery and preparing fast food, are the most susceptible to automation. Data collection and processing can also increasingly be done better and faster by machines – bad news for bank employees, lawyers and accountants. But before a new generation of Luddites rushes into battle, smashing data centres or cutting network cables, it is worth noting that such significant job losses in some sectors of the economy are likely to be offset by gains in others, including IT, construction, health care and elderly care. On top of that, professions will emerge that we don’t even know about today.
Is there hope?
Isaac Asimov formulated his Three Laws of Robotics in 1942. Their purpose was to govern relations between future thinking machines and humans. They were as follows:
- A robot must not harm a human being or, through inaction, allow a human being to be harmed.
- A robot must obey human orders unless they conflict with the First Law.
- A robot must protect itself, as long as this does not conflict with the First or Second Law.
Later, Asimov added one more law, superior to the previous ones, the so-called Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm”.
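The strict precedence Asimov describes can be illustrated in code. The following is a toy sketch of my own (not from Asimov), in which candidate actions are ranked by their highest-priority violation; the boolean flags are hypothetical placeholders for judgements no real system can actually make:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool   # violates the Zeroth Law
    harms_human: bool      # violates the First Law
    disobeys_order: bool   # violates the Second Law
    endangers_robot: bool  # violates the Third Law

def violations(a: Action) -> tuple:
    # Tuples compare lexicographically, so an earlier (higher-priority)
    # violation always outweighs any number of later ones – which is
    # exactly the "unless it conflicts with a higher law" clause.
    return (a.harms_humanity, a.harms_human, a.disobeys_order, a.endangers_robot)

def choose(candidates: list) -> Action:
    # Pick the action whose highest-priority violation is least severe.
    return min(candidates, key=violations)

# A robot ordered to stand by while a human is in danger: disobeying the
# order (Second Law) is preferable to allowing harm through inaction
# (First Law).
best = choose([
    Action("refuse the order and save the human", False, False, True, True),
    Action("obey the order, human comes to harm", False, True, False, False),
])
print(best.name)  # -> refuse the order and save the human
```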
Laws formulated in this way could only apply to machines and automatons with a very low level of autonomy. Science, however, has outpaced the writer’s vision, and we can no longer count on being able to impose our laws on something that may at any moment acquire a consciousness of its own.
Artificial intelligence in itself is not a threat to human security, freedom and privacy. In the right hands, it can help improve public services, business operations, democracy and security. Excessive public trust in it, however, can be a serious problem: AI used with the wrong intentions, or poorly designed, can have disastrous effects. As always, vigilance and common sense are needed.
The ethics of designing AI-based solutions is also important. Regulatory initiatives that set limits on the design, creation and use of modern technology seem necessary. Such an attempt was made in the European Union a few years ago[3]. In the resulting guidelines, experts argued that AI is trustworthy only if it is, at the same time:
- legal, respecting all applicable laws and regulations
- ethical, ensuring compliance with ethical principles and values
- robust, from both a technical and social perspective
Whether this will be of any use – we shall see.
The film ‘Eagle Eye’ depicts a plot devised by a machine acting in good faith, according to principles instilled in it by humans – yet it uses deadly means to achieve its goals. As a warning, it is worth quoting the makers of this blockbuster: “Sometimes the means to protect our freedom become a threat to it”.
[1] Oxford Economics, How Robots Change the World, 2019, https://resources.oxfordeconomics.com/how-robots-change-the-world?source=recent-releases
[2] McKinsey & Company, Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages, 2017, https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages
[3] European Commission, Directorate-General for Communications Networks, Content and Technology, Ethics guidelines for trustworthy AI, Publications Office, 2019, https://data.europa.eu/doi/10.2759/177365