The energy was high today at AI House Davos after a vibrant start yesterday, with many exciting topics still to come. Our theme on Tuesday centered on “Opportunities and Risks.” AI has huge promise, but how can we ensure that in five or ten years we look back and say that AI was indeed a force for good?
During another round of panels, roundtables and networking events, AI experts shared insights from their respective fields, from the regulatory front to the business world.
How can AI contribute to solving some of the biggest challenges humanity will face in the next century, such as climate change and the loss of biodiversity? How do we find the right level of regulation for the technology, safeguarding our value systems, preventing abuse, and creating a level playing field with clearly defined rules, while avoiding stifling innovation or erecting prohibitive barriers for smaller entities? What could global, regional or local AI governance look like, which institutions should play which roles, and how can we make use of hard law, certifications and audits?
These were the big questions of the day.
In the early morning, our “Towards International AI Regulation” panel explored how to balance innovation and compliance, and how to weigh regulation against economic interests.
Shortly afterwards, the conversation shifted to AI safety and the need for collaboration between AI industry leaders and academia to develop efficient yet sustainable risk frameworks for the rapid adoption of AI in organizations. Speakers included Yann LeCun, Vice President and Chief AI Scientist at Meta, and MIT’s Max Tegmark.
Over lunch, we discussed how to leverage open-source development to enable innovation, and how such approaches can support safe and socially beneficial technology while reducing AI risks.
In the afternoon, we continued with the panel “AI Governance and Safety: G7 Hiroshima AI Process”, where experts shared insights on the current state and challenges of AI governance, including the G7 Hiroshima AI Process, and discussed how to advance international cooperation for the responsible deployment of AI. Panelists included Anna Makanju, the Vice President of Global Affairs at OpenAI, Yoichi Iida, Assistant Vice Minister for International Affairs of Japan, and Arisa Ema, Associate Professor at Tokyo College.
The rest of the day turned to the impact of AI on humanitarian issues. In the panel “Solving for Humanity with AI,” leaders from the government, technology, and policy sectors discussed how we innovate with AI, when and where we regulate it, and how we should continue iterating on AI technologies and their implementation for the greater good.
Brad Smith, Amandeep Gill, Doreen Bogdan-Martin, and Marie-Laure Salles delved into the intricate geopolitical implications of AI, engaging in a dynamic and insightful dialogue that navigated the complex intersection of technology and policy. Their discussion underscored a crucial message: understanding where governments are heading is essential for doing business in an era increasingly shaped by AI.
During a discussion on “Modern Slavery in the Age of AI,” it was made clear that while AI can accelerate efforts to end modern slavery and improve the quality of survivor programs, it can also put more people at risk by increasing the demand for human labor to develop the technology. Former UK Prime Minister Theresa May, Princess Eugenie and John Schultz explained how they are fighting modern slavery by building a robust cross-sector community of senior global leaders and changemakers.
The enriching and thought-provoking day ended with a dinner featuring futurist and author Amy Webb.
Join us tomorrow, Wednesday 17 January, for another full day of talks, panels and roundtables on the topic of “AI Applications,” where we will dive into the use of AI across industries such as healthcare, education and art.