How these proposed standards aim to tame our AI wild west

Technology standardization has been something of an elusive holy grail, with new tech emerging faster than standards groups can keep up. Yet, somehow, things eventually come together -- at least for mature systems -- and achieve interoperability, be it email networks or developer tools. 

Now, a new race against time has come to the fore, with efforts to tame one of the fastest-developing technologies seen to date -- artificial intelligence. Can standards groups, with their purposely slower and highly participative deliberations, stay ahead of the AI curve? And can they achieve standards across a technology as abstract and amorphous as AI, one that also shifts every few months?

There's a good case to be made for AI standards, as it is a technology riddled with traps: deepfakes, bias, misdirections, and hallucinations. And, unlike technologies that have gone before it, AI presents more than a software engineering problem -- it's a societal problem. 

A consortium of standards bodies seeks to take a new approach to AI, recognizing its wide implications. Active efforts are underway to involve non-technical professionals in formulating the standards that will define AI in the years ahead. 

That consortium -- the AI and Multimedia Authenticity Standards Collaboration (AMAS) -- seeks safer AI for a justifiably skeptical world. The initiative, aimed at addressing the misuse of AI-generated content, was announced at the recent "AI for Good" Global Summit in Geneva. The effort is spearheaded by the International Electrotechnical Commission (IEC), the International Organization for Standardization (ISO), and the International Telecommunication Union (ITU). 

The group hopes to develop standards that will help protect the integrity of information, uphold individual rights, and foster trust in the digital ecosystem. It seeks to ensure users can identify the provenance of AI-generated and altered content. Human rights, once rarely mentioned in technical standards, are top of mind for today's standards proponents. 

All good stuff, for sure. But will major enterprises and technology firms fully buy into AI standards that are handed down to them and that may hamper innovation in such a fast-moving space? 

"We're basically saying the AI space is a bit of a mess, because the technology goes in all directions," said Gilles Thonet, deputy secretary-general for IEC, in a private briefing. "You won't find the answer in the AI application. You need to define what a system is."

As AI systems involve interaction at many levels, it may be critical to define those systems. For example, Thonet continued, "consider visualization of systems for driving a car: distance keeping, rotation of the wheel, all the sensors. It's up to software developers to define what a system is. Is everything a system? What is the system within the system?"

The incentive to follow standards is market access, Thonet said. "It's basically trying to understand a chain of needs." In the process, the role of standards organizations such as IEC is evolving -- from closed efforts limited to engineers to greater activism involving a broader cross-section of society. 

"This change in mindset is important to us," Thonet continued. "Before, if I were to speak to any engineer and mention the term 'human rights,' they would respond that 'it's not our job, we just worry about standards.' One of the things we've seen happening in recent years is [that] the makeup of the technical committees or subcommittees is changing. So once upon a time it would have been mainly engineers, now we're seeing ethicists, social scientists, and legal experts joining in the standardization work." 

Categories of standards under development within AMAS include content provenance, trust and authenticity, asset identifiers, and rights declarations. Such efforts began in earnest about five years ago, with the formulation of a foundational standard for trustworthiness in artificial intelligence. The standard provided guidelines for assessing the reliability and integrity of AI systems. Earlier this year, IEC and ISO published the first part of a new JPEG Trust series of international standards for media, including video and audio -- a key weapon against the rise of deepfake videos and images.

Standards just released this year under the aegis of AMAS include the following:

  • JPEG Trust Part 1: Focuses on trust and authenticity in JPEG images through provenance, detection, and fact-checking. It provides a framework for embedding metadata directly into JPEG files in the form of trust indicators (the sketch after this list illustrates the general idea).
  • Content Credentials: Outlines methods for documenting content credentials to ensure that digital content is traceable and its authenticity can be verified. It specifies the types of metadata that should be included and the formats for storing this information. 
  • CAWG Metadata: Provides a framework for expressing metadata that captures detailed information about the content, including ownership and authorship.
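
The JPEG Trust and Content Credentials items both revolve around the same basic idea: a content hash and provenance claims that travel with the asset as verifiable metadata. The minimal Python sketch below is purely illustrative of that idea; the function and field names (build_trust_manifest, content_sha256, and so on) are invented for the example, and the real JPEG Trust and Content Credentials specifications define richer structures that are embedded in the file itself.

```python
import hashlib
import json

def build_trust_manifest(image_bytes: bytes, author: str, tool: str) -> dict:
    """Bundle simple trust indicators for an asset: a content hash plus
    provenance claims about who created it and with what tool.
    (Illustrative only -- not the actual JPEG Trust or Content
    Credentials format.)"""
    return {
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "provenance": {"author": author, "generator": tool},
        "assertions": ["captured", "no_generative_edit"],
    }

def content_hash_matches(image_bytes: bytes, manifest: dict) -> bool:
    """Re-hash the asset and compare it against the manifest's claim."""
    return hashlib.sha256(image_bytes).hexdigest() == manifest["content_sha256"]

# Hypothetical usage with an in-memory stand-in for a JPEG payload.
photo = b"\xff\xd8 example jpeg bytes"
manifest = build_trust_manifest(photo, author="Jane Doe", tool="CameraApp 2.1")
print(json.dumps(manifest, indent=2))
print("Content unchanged:", content_hash_matches(photo, manifest))
```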

There are a number of standards now in the pipeline that also seek to build trust in digital media and AI:   

  • Digital Watermarking: Overseen by IEEE, this proposed standard offers methods for evaluating the robustness of digital watermarking. It includes guidelines for creating and maintaining evaluation files, which can be used to document the evaluation of digital assets. 
  • Original Profile: Includes guidelines for creating and maintaining profiles that capture detailed information about the content's creator and creation process. 
  • Trust.txt: Outlines methods for establishing trust in digital content and includes guidelines for creating and maintaining trust.txt files, which can be used to document the trustworthiness of digital assets. 
  • Use Case Vocabulary: A standardized vocabulary of use cases that can be targeted when expressing machine-readable opt-outs related to text and data mining and AI training. Enables declaring parties to communicate restrictions or permissions regarding the use of digital assets. 
  • Framework for Authentication of Multimedia Content: Specifies a technical solution for verifying multimedia content integrity, enabling users to confirm the authenticity of the content by its creators. The solution is based on the digital signing of data streams. The content creator (encoder) uses a private key to sign the content, while the recipient (decoder) uses a corresponding public key to verify authenticity. 
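
That last item describes a conventional public-key signing flow: the encoder signs the content stream with a private key, and any decoder holding the matching public key can confirm the bytes have not been altered. As a rough illustration of that flow -- not the standard's actual stream format or key-distribution scheme -- here is a minimal sketch using the third-party Python cryptography package and Ed25519 keys:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Content creator (encoder) side: generate a key pair and sign the content.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # distributed to recipients

content = b"frame-001: multimedia payload bytes"
signature = private_key.sign(content)

# Recipient (decoder) side: verify the signature with the creator's public key.
try:
    public_key.verify(signature, content)
    print("Content verified: signature matches the creator's key.")
except InvalidSignature:
    print("Verification failed: content was altered or the key does not match.")
```

Tampering with even one byte of the content (or the signature) causes verification to raise InvalidSignature, which is what lets a viewer confirm authenticity end to end.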
