NEW DELHI, Feb 21 — Dozens of nations including the United States and China called for “secure, trustworthy and robust” artificial intelligence, in a declaration issued today after a major summit in New Delhi.
“Advancing secure, trustworthy and robust AI is foundational to building trust and maximising societal and economic benefits,” said the declaration signed by 86 countries and two international organisations.
It contained no concrete commitments but highlighted several voluntary, non-binding initiatives, including pooling AI research capabilities internationally.
“We believe that AI’s promise is best realised when its benefits are shared by humanity,” said the statement from the five-day event.
It called the advent of generative AI “an inflection point in the trajectory of technological evolution”.
The AI Impact Summit, attended by tens of thousands of people including top tech CEOs, was the fourth annual global meeting to discuss how to handle generative AI, and the first hosted by a developing country.
The United States, the world’s leading AI power, did not sign last year’s summit statement.
Hot topics at the Delhi gathering included the societal benefits of multilingual AI translation, the threat of job disruption and the heavy electricity consumption of data centres.
“Recognising the growing demands of AI on energy, infrastructure, and natural resources, we underscore the importance of developing energy-efficient AI systems,” the statement said.
Analysts had said ahead of the declaration that the summit’s broad focus, and vague promises made at its previous editions in France, South Korea and Britain, would make concrete commitments unlikely.
The summit declaration said that “deepening our understanding of the potential security aspects remains important.”
“We recognise the importance of security in AI systems, industry-led voluntary measures, and the adoption of technical solutions and appropriate policy frameworks that enable innovation while promoting the public interest”. — AFP