Written by Cedric Villani, French mathematician and politician
In the last few years, we have seen incredible achievements that were previously assumed to be impossible, like mastering the game of Go or beating professional StarCraft players.
Those accomplishments were recently recognized with the ACM Turing Award, given to Yoshua Bengio, Geoffrey Hinton and Yann LeCun for their substantial contributions. Nobody (except those three?) saw that revolution coming, and AI is already being implemented successfully as part of our daily lives, be it in smartphones, computers, cars, health monitoring systems…
As for any other emerging technology, no one can tell how fast AI will develop and the extent to which its spectrum of applications will broaden.
The good news is that AI has the potential to be overwhelmingly beneficial, as we have already seen in many applications, such as healthcare and transportation.
AI is able to improve people’s lives, and could help us tackle the major challenges of the 21st century. However, AI, like any other technology, is subject to misuse and unethical behaviour.
This is a risk we have to acknowledge, and it is one more reason to keep doing research to better understand and address the issues that might come up in the future.
We should avoid the natural but misguided reaction to these risks: over-engineered, short-sighted decisions tailored to a particular case, which would slow down progress and probably not even prevent the risks they were meant to address in the first place.
As we stand at the very beginning of the process, we have a great opportunity and plenty of time to shape the direction of AI technology as long as we accept that this will be the result of an experimental and iterative methodology.
Here, the key to successful AI development is to do it responsibly, in a human-centric fashion and built upon trust: AI should reflect our values, benefit everyone - not just a happy few - and be designed to empower humans.
Achieving that requires, in addition to what any AI application needs - mainly data, computing resources and experts - combining skills from a wide range of fields that are not used to collaborating with each other: AI, specialist know-how for the application at hand, ethics, business and so on.
Throughout this experimental journey, we should allow ourselves to learn as we go, bearing in mind that a good framework is one that does not need to change.
It should boil down to our principles, which are not subject to the same rate of change as technology, use cases and society. Principles that we want to stick to no matter what, and that are sufficiently high level to give the ecosystem room to breathe.
Only with those principles in mind will we be able to dive into specific real-world issues and use cases, favouring pragmatism over purely philosophical discussions. Establishing and standing by strong AI principles, ramping ourselves up as policy makers, and investing massively in research and education seems like a reasonable and pragmatic approach to the future of AI.
What must be done to ensure that the potential offered by science, technology and innovation towards achieving the SDGs is ultimately realized?
In the context of the UN Commission on Science and Technology for Development, the CSTD Dialogue brings together leaders and experts to address this question and contribute to rigorous thinking on the opportunities and challenges of STI in several crucial areas including gender equality, food security and poverty reduction.
The conversation continues at the twenty-second session of the CSTD and as an online exchange by thought leaders.