As of Sunday, the European Union's regulators can ban the use of AI systems they deem to pose an "unacceptable risk."
February 2 is the first compliance deadline for the EU's AI Act, the comprehensive AI regulatory framework that the European Parliament approved last March after years of development. The Act officially entered into force on August 1; what arrives now is the first of its compliance deadlines.
The specifics are laid out in Article 5, but broadly, the Act is designed to cover a wide range of use cases in which AI might appear and interact with individuals, from consumer applications to physical environments.
Under the bloc's approach, there are four broad risk levels: (1) minimal risk (e.g., email spam filters) faces no regulatory oversight; (2) limited risk, which includes customer service chatbots, gets light-touch regulatory oversight; (3) high risk, such as AI used for healthcare recommendations, faces heavy regulatory oversight; and (4) unacceptable-risk applications, the focus of this month's compliance requirements, are prohibited entirely.
Some of the unacceptable activities are as follows:
Companies found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered: up to €35 million (~$36 million) or 7% of their annual revenue from the prior fiscal year, whichever is greater.
The fines won't kick in for some time, Rob Sumroy, head of technology at the British law firm Slaughter and May, told TechCrunch.
"Organizations are expected to be fully compliant by February 2, but … the next big deadline that companies need to be aware of is in August," Sumroy said. "By then, we'll know who the competent authorities are, and the fines and enforcement provisions will take effect."
The February 2 deadline is in some ways a formality.
Last September, more than 100 companies signed the EU AI Pact, a voluntary pledge to start applying the principles of the AI Act ahead of its entry into application. As part of the Pact, signatories, including Amazon, Google, and OpenAI, committed to identifying AI systems likely to be classified as high risk under the AI Act.
Some tech giants, notably Meta and Apple, skipped the Pact. The French AI startup Mistral, one of the harshest critics of the AI Act, also chose not to sign.
That doesn't mean that Apple, Meta, Mistral, or others that declined the Pact won't meet their obligations, including the ban on unacceptably risky systems. Sumroy points out that, given the nature of the prohibited uses, most companies aren't engaging in those practices anyway.
"For organizations, a key concern around the EU AI Act is whether clear guidelines, standards, and codes of conduct will arrive in time, and, crucially, whether they will give organizations clarity on compliance," Sumroy said. "So far, though, the working groups are meeting their deadlines on the code of conduct for … developers."
There are exceptions to several of the AI Act's prohibitions.
For example, the Act permits law enforcement to use certain systems that collect biometrics in public places when those systems help carry out a "targeted search" for, say, victims, or help prevent an imminent threat to life. This exemption requires authorization from the appropriate governing body, and the Act stresses that law enforcement cannot make a decision that "results in a harmful legal effect" on a person based solely on these systems' outputs.
The Act also carves out exceptions for systems that infer emotions in workplaces and schools where there is a "medical or safety" justification, such as systems designed for therapeutic use.
The European Commission, the EU's executive branch, said it would release additional guidelines in "early 2025," following a consultation with stakeholders in November. However, those guidelines have yet to be published.
It's also unclear, Sumroy said, how other laws already on the books will interact with the AI Act's prohibitions and related provisions. Clarity may not arrive until later in the year, as the enforcement window approaches.
"It's important for organizations to remember that AI regulation doesn't exist in isolation," Sumroy said. "Other legal frameworks, such as GDPR, NIS2, and DORA, will interact with the AI Act, creating potential challenges, particularly around overlapping incident notification requirements. Understanding how these laws fit together will be just as crucial as understanding the AI Act itself."