Recently, a document leaked online that appears to be a draft whitepaper on Artificial Intelligence from the European Commission. The whitepaper summarizes the EU's current approach to keeping the bloc at the forefront of AI development while properly regulating future AI implementations. The final whitepaper is rumoured to be released on February 19th but, to give you a sneak peek, Data Insights read the leaked draft and summarized a few of the key take-away messages.
The whitepaper is exceptionally pertinent, as it covers the current state of AI in industry and includes predictions for future trends. For a consultancy firm like ours, the facts and expected trends presented there should be a valuable aid in easing companies into AI-oriented solutions. For anyone curious, here are a few excerpts.
- Currently in the EU, more than half of top manufacturers have implemented at least one AI use case in their manufacturing operations, compared with 30% in Japan and 28% in the US (Source: Capgemini 2019).
- EU funding for research and innovation in AI has risen by €1.5 billion over the past two years, an increase of 70%.
- It is expected that, over the next few years, less data will be stored in data centres and more will be stored locally at network edge locations. Currently, 80% of data is stored in data centres; by 2025, 80% is expected to be stored at edge locations (factories, hospitals, etc.). Platforms will no longer be dominant in this area (Source: IDC Data Age 2025 study).
- In December 2018 the Commission presented a Coordinated Plan – prepared with Member States – to foster the development and use of AI in Europe.
- “EU excellence” testing centres will be created that can federate European, national, and private investments. They will oversee AI investment and research.
- The Commission and the European Investment Fund will launch a pilot scheme of €100 million in Q1 2020 to provide equity financing for innovative developments in AI. From 2021, this will be scaled up by at least a factor of 10 (through InvestEU).
- To ease companies and individuals into adopting AI technologies, the Commission set out an AI strategy on 25 April 2018, addressing the socio-economic dimensions in parallel with reinforced investment in research, innovation, and AI capacities across the EU (see COM(2018) 247). These are summarized in the Guidelines on Trustworthy AI, published in April 2019.
- The guidelines have been tested by around 350 organizations.
- The whitepaper outlines the definition of ‘High-Risk AI’ as:
- systems and products that endanger human life, relate to public health, or present severe risks to public order (for example, medical equipment)
- national security applications (for example, predictive policing)
- systems involved in the provision of services within the public sector, including public procurement (for example, traffic management systems)
- systems involved in the provision of services of public interest (airports, private hospitals, and other services provided by private operators with a public impact)
- biometric identification systems
- recruitment processes
For these cases, the whitepaper lists a number of regulatory measures that are expected to be put in place but have not yet been finalized. These include training only on European data that meets quality standards (e.g. non-biased data), model traceability, transparency, and human oversight of operations.
When the official whitepaper is released, it will include contact information so that AI experts, data specialists, and public policymakers can send feedback. Feedback will be incorporated before the proposed regulations are turned into concrete law. So, if you fall into one of these three areas of expertise, be sure to weigh in with your thoughts!