Report on Artificial Intelligence in a Digital Age
The Special Committee on Artificial Intelligence in a Digital Age adopted the own-initiative report by Axel VOSS (EPP, DE) on artificial intelligence in a digital age.
The report noted that the world stands on the verge of the fourth industrial revolution, one which draws its energy from an abundance of data combined with powerful algorithms and computing capacity. Today's digital revolution has triggered a global competition as a result of the tremendous economic value and technological capabilities that have accumulated in economies that commit the most resources to the research, development and marketing of artificial intelligence (AI) applications. AI is expected to contribute more than EUR 11 trillion to the global economy by 2030.
On the other hand, digital tools are increasingly becoming an instrument of manipulation and abuse in the hands of some corporate actors, as well as of autocratic governments seeking to undermine democratic political systems. The report stressed that the digital transition must be shaped with full respect for fundamental rights and in such a way that digital technologies serve humanity.
Members also warned that the EU has fallen behind in digital investment. As a result, there is a risk that standards will be developed elsewhere in the future, often by non-democratic actors, while the EU needs to act as a global standard-setter in AI.
Clear regulatory framework
The report noted that a clear regulatory framework, political commitment and a more forward-leaning mindset, which are often lacking at present, are needed for European actors to be successful in the digital age and to become technology leaders in AI.
EU Roadmap up to 2030
With a view to making the EU a global leader in AI, the report presents an EU Roadmap for AI with clear policy recommendations for the coming years. These include:
Improving the regulatory environment
Members called on the Commission to only propose legislative acts in the form of regulations for new digital laws in areas such as AI, as the digital single market needs to undergo a process of genuine harmonisation. They called for consistent EU-wide coordination, implementation and enforcement of AI-related legislation.
Digital legislation should always be flexible, principle-based, technology-neutral, future-proof and proportionate, while adopting a risk-based approach where appropriate, based on respect for fundamental rights and preventing unnecessary additional administrative burden for SMEs, start-ups, academia and research.
The report highlighted that an underlying objective of the EU's digital strategy, as well as of its AI strategy, is creating a European way in a digitalised world. This approach should be human-centric, trustworthy, guided by ethical principles and based on the concept of the social market economy. The individual and the protection of their fundamental rights should always remain at the centre of all political and legislative considerations.
Members are convinced that it is not always AI as a technology that should be regulated, but that the level of regulatory intervention should be proportionate to the type of individual and/or societal risk incurred by the use of an AI system. They underlined, in this regard, the importance of distinguishing between high-risk AI use cases, which need strict additional legislative safeguards, and low-risk ones, which may, in a number of cases, require transparency requirements for end users and consumers.
For their part, Member States are asked to review their national AI strategies, as several of them remain vague and lack clear goals, including regarding digital education for society as a whole and advanced qualifications for specialists. The Commission should help Member States to set priorities and align their national AI strategies and regulatory environments as much as possible in order to ensure coherence and consistency across the EU.
Improved research
The report called for the EU to increase investment in research into AI and other key technologies, such as robotics, quantum computing, microelectronics, the Internet of Things (IoT), nanotechnology and 3D printing. It urged the expansion of the Digital Europe programme and considered that its allocated funding of EUR 7.6 billion should be increased. The structure of research funding, including grant application requirements, should also be simplified.
Ecosystem of trust
The report also identified further policy options that could unlock AI's potential in health, the environment and climate change, to help combat pandemics and global hunger, as well as to enhance people's quality of life through personalised medicine. AI, if combined with the necessary support infrastructure, education and training, can increase capital and labour productivity, innovation, sustainable growth and job creation. However, the EU and Member States should create awareness-raising campaigns to inform and empower citizens to better understand the opportunities and risks of AI and its societal, legal and ethical impact, so as to further contribute to AI's trustworthiness and democratisation.
Mass surveillance, military concerns
The report noted with concern that certain AI technologies pose crucial ethical and legal questions. They enable the automation of information processing on an unprecedented scale, which paves the way for mass surveillance and other unlawful interference and poses a threat to fundamental rights, in particular the rights to privacy and data protection.
Members called on the Commission and Member States to prioritise funding AI research that focuses on sustainable and socially responsible AI, contributing to finding solutions that safeguard and promote fundamental rights, and avoid funding programmes that pose an unacceptable risk to these rights, including funding systems of mass surveillance, social scoring and other systems that have the potential to lead to negative social impacts.
The report concluded that the EU's AI strategy must not overlook the military and security considerations and concerns that arise from the global deployment of AI technologies. Members stressed the challenge of reaching a consensus within the global community on minimum standards for the responsible use of AI and expressed concern about military research and development on autonomous lethal weapons systems.