Home   >>   tech  >> AI Security Bill To Curb Safety Breaches Of AI Models

AI Security Bill To Curb Safety Breaches Of AI Models

Team Gossip  |   May 3, 4:15 AM   |   6 min read

banner image

Highlights

  • A new bill has been filed in the Senate, which aims to track security issues by mandating the creation of a database recording all breaches of AI systems.

  • The Secure Artificial Intelligence Act would bring an Artificial Intelligence Security Center to the National Security Agency.

  • The bill will also need the NIST and the Cybersecurity and Infrastructure Security Agency to develop a database of AI breaches

A new bill has been filed in the Senate, which aims to track security issues by mandating the creation of a database recording all breaches of AI systems. The Secure Artificial Intelligence Act, introduced by Sens, Mark Warner (D-VA) and Thom Tillis (R-NC), would bring an Artificial Intelligence Security Center to the National Security Agency.

 

This center will focus on researching what the bill calls “counter-AI,” or building techniques to learn how AI systems can be manipulated. This center would also create guidance for curbing counter-AI measures.

 

The bill will also need the National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency to develop a database of AI breaches, including “near-misses.”

 

Also Read: NVIDIA ChatRTX Receives Support For New AI Models In Its New Update

 

The Bill Will Focus On Countering AI Efficiently

 

Warner and Tillis’ proposed bill emphasizes techniques to counter AI and classify them as data poisoning, abuse attacks, privacy-based attacks, and others. Data poisoning is a method where code is infused in the data that has been scraped by an AI model, which eventually corrupts the output of the model. It has become one of the popular methods to prevent AI image generators from copying art on the web. 

 

AI safety was among the key areas in the Biden administration’s AI executive order, which told NIST to establish “red-teaming” guidelines and needed AI developers to submit safety reports. Red teaming refers to a process where developers try to get AI models to respond to prompts they aren’t supposed to.

 

Usually, developers of powerful AI models test the platforms for safety and put these models through extensive red teaming before making them live for the users. Big firms, including Microsoft, have developed tools that help add safety guardrails to AI projects easily. The Secure Artificial Intelligence Act will be put in front of a committee before it can be taken up by the larger Senate.

 

Also Read: Assassin’s Creed Mirage Coming To IPhone 15 Pro & Select IPads

 

Microsoft And OpenAI Dragged To Court 

 

Several news organizations such as the New York Daily News, Orlando Sentinel, Chicago Tribune, San Jose Mercury News, and four more have filed a case against ChatGPT’s parent company OpenAI and Microsoft over copyright infringement allegations.

 

All these publications are owned by the hedge fund Alden Global Capital, and they have claimed that Microsoft and OpenAI are training their AI model using their content, without giving out any compensation or consent.

 

The plaintiffs also presented evidence showing several excerpts from conversations with Copilot and ChatGPT. The evidence showed that both AI chatbots regenerated lengthy excerpts from specific articles, which suggests that their training datasets have content from those articles.

Trending tags

Author Avatar

Team Gossip

Gossip News Desk

logo

The Gossip News Desk is a common byline for news, features, and guides by contributing authors.

Comments

0 Comments

image

View More Comments

Latest