
European Union plans stricter rules to regulate advanced generative AI models

The EU is poised to become the first Western government to place mandatory rules on artificial intelligence. Under proposed legislation known as the AI Act, systems that predict crime or sort job applications would have to undergo risk assessments, among other requirements.

By Bloomberg | Oct 18, 2023 12:29:30 PM IST

The European Union is considering a three-tiered approach to regulating generative AI models and systems, according to a proposal seen by Bloomberg, part of groundbreaking efforts to rein in the rapidly advancing technology.

The three levels would establish rules for different foundation models — the AI systems that can be adapted to a range of tasks — and require additional external testing for the most powerful technology, according to the document.
The EU is poised to become the first Western government to place mandatory rules on artificial intelligence. Under proposed legislation known as the AI Act, systems that predict crime or sort job applications would have to undergo risk assessments, among other requirements. Negotiators want to hone the legislation at the next meeting, on October 25, with the goal of finalising it by the end of the year.
Under the proposed rules, the first category would include all foundation models. A second tier of “very capable” systems would be distinguished by the amount of computing power employed to train their large language models — the algorithms that use massive data sets to develop AI capabilities. Those models could “go beyond the current state of the art and may not yet be fully understood,” according to the proposal.
Finally, a third category, known as general-purpose AI systems at scale, would include the most popular AI tools and would be measured by the total number of users.
The European Commission wasn’t immediately available for comment.
Since the release of OpenAI’s ChatGPT last year, generative AI tools have exploded in popularity — and sent much of the tech industry scrambling to develop their own versions. Generative AI software can respond to simple prompts with text, pictures and video based on its large language models, often with an eerie level of skill.
The EU still faces a number of key issues in how to approach the technology, including how exactly it would regulate generative AI and whether to completely ban live facial scanning in crowds. Plans from the EU’s Parliament and Council drew criticism that the rules could hinder smaller companies’ ability to compete with major tech giants.
Representatives from the EU’s three institutions generally backed a tiered approach at a meeting earlier this month, leaving technical experts to work out a more concrete proposal. Based on the October 16 document seen by Bloomberg, the ideas are now taking shape, though they could change as negotiations unfold.
Here’s how the three tiers could be addressed by the EU:
1. All Foundation Models
AI developers would be subject to transparency requirements before placing any model on the market. They’d have to document the model and its training process, including the results of internal “red-teaming” efforts — where independent experts try to push models into bad behaviour. There also would be an evaluation based on standardised protocols.
After a model is on the market, companies would need to provide information to businesses using the technology and enable them to test the foundation models.
Companies would have to include a “sufficiently detailed” summary of the content they used to develop their models and explain how they manage copyright issues, including ensuring rights holders can opt out of having their content used to train models. Companies would also have to ensure that AI content can be distinguished from other material.
Negotiators have proposed defining a foundation model as a system that can “competently perform a wide range of distinctive tasks.”
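To make these Tier 1 obligations concrete, here is a minimal, hypothetical Python sketch of what a provider’s disclosure record might capture. The proposal does not prescribe any format, so every field name below is an invented assumption for illustration only.

```python
# Hypothetical sketch of a Tier 1 disclosure record. The proposal
# requires documenting the model, its training process, internal
# red-teaming results, a "sufficiently detailed" training-content
# summary, copyright opt-out handling and labelling of AI output,
# but it prescribes no format; all field names here are invented.
from dataclasses import dataclass

@dataclass
class FoundationModelDisclosure:
    model_name: str
    training_process: str            # description of how the model was trained
    red_team_findings: list[str]     # results of internal red-teaming efforts
    content_summary: str             # summary of content used to develop the model
    honours_copyright_opt_out: bool  # rights holders can opt out of training use
    labels_ai_output: bool           # AI content distinguishable from other material
    evaluation_protocol: str         # standardised evaluation protocol used
```

A regulator-facing check before market placement might then amount to verifying that each of these fields is populated and that the two boolean guarantees hold.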
2. Very Capable Foundation Models
Companies that produce this tier of technology would adhere to stricter rules. Before being placed on the market, these models would have to undergo regular red-teaming by external experts who would be vetted by the EU’s newly created AI Office. The results of these tests would be sent to that agency.
Companies would also have to introduce systems to help discover systemic risks. After these models are placed on the market, the EU would have independent auditors and researchers perform compliance controls, including checking if companies are following transparency rules.
Negotiators are also considering creating a forum for companies to discuss best practices and a voluntary code of conduct that would be endorsed by the European Commission.
Very capable foundation models would be classified based on the computing power needed to train them, using a measure known as FLOPs, or floating-point operations. The exact threshold would be determined by the commission at a later stage and updated as needed.
Companies could contest this assessment. Conversely, following an investigation, the commission could consider a model very capable even if it doesn’t meet the threshold. Negotiators are also considering using the “potential impact” of the model — based on the number of high-risk AI applications that are built on it — as a way to categorise the technology.
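As a rough illustration of how such a compute-based trigger could work, the Python sketch below presumes a model is very capable once its training compute crosses a cutoff, while allowing for the contest and designation routes described above. The numeric threshold is a pure placeholder, since the proposal leaves the real figure to the commission.

```python
# Hypothetical sketch of the compute-based "very capable" trigger.
# The threshold below is a placeholder only: the proposal leaves the
# actual FLOPs cutoff to the European Commission, to be updated later.

HYPOTHETICAL_FLOPS_THRESHOLD = 1e25  # placeholder, not from the proposal

def is_very_capable(training_flops: float,
                    designated_by_commission: bool = False,
                    contested_successfully: bool = False) -> bool:
    """Classify a foundation model under the second tier.

    A model crossing the compute threshold counts as very capable
    unless its provider successfully contests the assessment; the
    commission could also designate a below-threshold model as very
    capable following an investigation.
    """
    if designated_by_commission:
        return True
    if training_flops >= HYPOTHETICAL_FLOPS_THRESHOLD:
        return not contested_successfully
    return False
```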
3. General-Purpose AI Systems at Scale
These systems would also have to undergo red-teaming by external experts to identify vulnerabilities, and the results would be sent to the commission’s AI Office. Companies would also have to introduce a risk assessment and mitigation system.
The EU would consider any system with 10,000 registered business users or 45 million registered end users to be a general-purpose AI (GPAI) system at scale. The commission would later determine how to calculate user numbers.
Companies could appeal their status as a general-purpose AI system at scale; conversely, the EU could make other systems or models adhere to these additional rules if, despite not meeting the thresholds, they could “give rise to risks.”
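The scale test lends itself to a similarly hedged sketch. The 10,000 business-user and 45 million end-user figures come from the proposal as reported above, but how users would be counted is left to the commission, so the function and its inputs are illustrative assumptions only.

```python
# Illustrative sketch of the user-count test for GPAI at scale.
# The thresholds are the figures reported from the proposal; how user
# numbers would actually be measured is for the commission to decide.

BUSINESS_USER_THRESHOLD = 10_000    # registered business users
END_USER_THRESHOLD = 45_000_000     # registered end users

def is_gpai_at_scale(business_users: int,
                     end_users: int,
                     designated_by_eu: bool = False,
                     appeal_upheld: bool = False) -> bool:
    """Classify a system under the third tier.

    Crossing either user threshold triggers the status unless an
    appeal is upheld; the EU could also impose the status on systems
    below the thresholds that could "give rise to risks".
    """
    if designated_by_eu:
        return True
    if business_users >= BUSINESS_USER_THRESHOLD or end_users >= END_USER_THRESHOLD:
        return not appeal_upheld
    return False
```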
Further discussion is needed to determine guardrails ensuring that neither general-purpose AI systems at scale nor very capable foundation models generate illegal or harmful content.
The additional rules for general-purpose AI systems at scale and very capable foundation models would be overseen by the new AI Office. It could request documents, organise compliance tests, create a registry of vetted red-teamers and carry out investigations, according to the document. The agency could even suspend a model “as a last resort.”
The agency, while housed in the commission, would be “self-standing.” The EU could fund the office’s staff through a fee charged to providers of general-purpose AI systems at scale and very capable foundation models.
