The US, UK, and 18 Countries Agree on Guidelines to Keep AI Systems Safe

IBL News | New York

The United States, Britain, and eighteen countries — including Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore — unveiled this month an international agreement offering general recommendations on how to keep AI systems safe from rogue actors, monitor them for abuse, and protect data from tampering.

The agreement, laid out in a 20-page nonbinding document, urges companies to create AI systems that are “secure by design,” keeping users safe from misuse.

The director of the US Cybersecurity and Infrastructure Security Agency (CISA), Jen Easterly, told Reuters that “this is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs.”

The agreement is the latest in a series of initiatives by nations to shape the development of AI.