Inzenius’ Use of AI In Its Systems        

At Inzenius Payroll, we carefully consider the protection of data within our systems.

“The world’s first legislation designed to regulate artificial intelligence has arrived, and it comes with hefty fines” — Patrick Daly, Northeastern Global News.

When considering how we could use AI in our business software, our developers factor in the appropriate security.

Our guiding principle is that data from our software system and from individual client employees is never shared with the algorithm. We do not want employee data from our system used by any third party, and we ensure it is protected both in transit and within the system. #ThatsInzenius #Payroll

EU AI Act sets precedent with $37 million fines for non-compliance. Experts explain the impact on UK and US developers

Northeastern experts set the scene for how the EU AI Act will work and what the pros and cons are of governments stepping in to regulate the advanced technology.

by Patrick Daly

June 13, 2024

The European Union’s AI Act regulating the technology has been described as the ‘first of its kind in the world’. Philipp von Ditfurth/picture-alliance/dpa/AP Images

LONDON — The world’s first legislation designed to regulate artificial intelligence has arrived and it comes with hefty fines for developers who fail to comply.

The European Union’s AI Act has a reach that extends beyond the territorial remit of its 27 member states and carries predetermined fines of up to $37 million. For mega corporations, the fines could potentially be even higher.

Mathieu Michel, the Belgian digital minister, heralded the act as a “landmark law,” calling it the “first of its kind in the world” when each EU country’s parliament signed off on it last month.

Anton Dinev, an assistant professor in law at Northeastern University, explains that developers in the United Kingdom (which left the EU in 2020), the United States and elsewhere will have to prepare for the law’s impact as the rules designed to protect the fundamental rights of European citizens gradually come into force.

The London-based professor says anyone using an AI system that processes data from the EU or has an output that then has repercussions inside the bloc will find themselves covered by the legislation.

“The AI Act has a very broad extraterritorial reach,” Dinev says. “It applies not only to users of AI systems located within the European Union but also to providers placing into the European market or putting into service AI systems [that impact EU citizens], irrespective of whether those providers are established within the EU or a third country.”

The law, which was three years in the making, categorizes different types of AI according to risk. AI systems presenting only a limited risk would be subject to light transparency obligations, according to the EU, while “high-risk” AI systems would be authorized, but subject to a set of requirements and obligations in order to gain access to the EU’s single market.

AI systems — defined as being able to conduct autonomous machine learning and having the capacity to make “outputs” such as predictions and recommendations — deemed to hold an “unacceptable risk” to the rights of EU citizens will be outlawed entirely.

Bans will take effect next year on activities such as using the advanced technology for social scoring — a process that could deny people access to public services based on their behavior — or for predictive policing based on profiling. Punishments for failing to comply include a fine of as much as 35 million euros ($37 million) or 7% of global revenues, whichever is higher.

Dinev explains that U.S.-administered qualifications such as the Test of English as a Foreign Language (TOEFL) will find themselves covered under the AI Act.

Those undertaking TOEFL in Europe have their answers scored in the U.S. by a process that utilizes both AI and human effort. He says that means the activity falls into the EU’s “high risk” AI systems category, which covers educational institutions’ use of AI, as well as its deployment in health care and recruitment settings.

“With the TOEFL test, the provider is based in the U.S., AI is being used in the U.S. — but the AI Act will still apply to that activity,” he says.

Developers whose systems come under the “high risk” category will need to set up a code of practice by March 2025 that ensures compliance with the EU legislation. Fines for breaches can reach up to 15 million euros ($16.2 million) or 3% of a firm’s global revenue, Dinev explains.

General purpose AI models, whose definition covers large-language models like ChatGPT and Google’s Gemini software, will face some limited requirements, mostly in regard to transparency, according to Brussels. But the legislation states that those models regarded as presenting “systemic risks” will have to comply with stricter rules.