In Seoul summit, heads of state and companies commit to AI safety


    Government officials and AI industry executives agreed on Tuesday to apply basic safety measures in the fast-moving field and to establish an international AI safety research network.

    Nearly six months after the inaugural global summit on AI safety at Bletchley Park in England, Britain and South Korea are hosting the AI safety summit this week in Seoul. The gathering underscores the new challenges and opportunities the world faces with the advent of AI technology. 

    The British government announced on Tuesday a new agreement between 10 countries and the European Union to establish an international network of institutes similar to the U.K.’s AI Safety Institute, the world’s first publicly backed body of its kind, to accelerate the science of AI safety. The network will promote a common understanding of AI safety and align its work on research, standards, and testing. Australia, Canada, the EU, France, Germany, Italy, Japan, Singapore, South Korea, the U.K., and the U.S. have signed the agreement.

    On the first day of the AI summit in Seoul, global leaders and leading AI companies convened for a virtual meeting chaired by U.K. Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol to discuss AI safety, innovation, and inclusion. 

    During the discussions, the leaders agreed to the broader Seoul Declaration, emphasizing increased international collaboration in building AI to address major global issues, uphold human rights, and bridge digital gaps worldwide, all while prioritizing being “human-centric, trustworthy, and responsible.”

    “AI is a hugely exciting technology — and the U.K. has led global efforts to deal with its potential, hosting the world’s first AI Safety Summit last year,” Sunak said in a U.K. government statement. “But to get the upside, we must ensure it’s safe. That’s why I’m delighted we have got an agreement today for a network of AI Safety Institutes.” 

    Just last month, the U.K. and the U.S. signed a memorandum of understanding to collaborate on research, safety evaluation, and guidance on AI safety. 

    The agreement announced today follows the world’s first AI safety commitments from 16 companies involved in AI, including Amazon, Anthropic, Cohere, Google, IBM, Inflection AI, Meta, Microsoft, Mistral AI, OpenAI, Samsung Electronics, Technology Innovation Institute, xAI, and Zhipu AI. (Zhipu AI is a Chinese company backed by Alibaba, Ant, and Tencent.) 

    The AI companies, including those from the U.S., China, and the United Arab Emirates (UAE), have agreed under the safety commitments to “not develop or deploy a model or system at all if mitigations cannot keep risks below the thresholds,” according to the U.K. government statement. 

    “It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety,” Sunak said. “These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI.” 
